The state of AI in testing in 2020 [webinar panel recap]


Are you still confused about artificial intelligence (AI), and especially how it relates to software testing? If so, you are not alone.

As AI continues to grow in popularity, many QA professionals still don't understand the scope or impact of AI in testing. Testers everywhere have heard about AI, but many remain unsure how far its influence extends, especially in 2020.

To address this elephant in the room (albeit a more robotic-looking elephant), TestCraft hosted a panel discussion with top leaders in the software testing industry to tackle the most burning questions about AI and machine learning (ML) in test automation. Our panelists are all experts in their own right when it comes to AI-driven test automation:

  • Paul Grizzaffi – Principal Automation Architect at Magenic
  • Jennifer Bonine – CEO and co-founder of Pink Lion AI
  • Eran Kinsbruner – Chief Evangelist and Author at Perfecto
  • Dror Todress – CEO and co-founder of TestCraft

In this discussion, our panelists covered everything from predictions about AI automation in 2020 to how to select an AI-based test automation tool. Below is a brief summary of some of the questions our panelists answered; you can also listen to the full webinar recording here.

Is AI taking over testers’ jobs?

To kick off the discussion, our panelists first responded to the rumor that AI will take over testers' jobs. Everyone agreed that AI is not a threat to testers' job security; rather, it is a helpful tool and an opportunity for testers to be more productive in less time.

Paul referred to AI as a “force multiplier” in testing, replacing tasks rather than people. Jennifer echoed this sentiment, saying AI offers a more sophisticated way to handle monotonous and repetitive functions. She pointed out that there are important factors that human testers bring to the table that AI can’t replace, such as creative thinking, empathy, and problem-solving skills.

“I see AI as a force multiplier in testing. It replaces tasks rather than people.”

– Paul Grizzaffi

Dror continued by emphasizing the unique knowledge that manual testers and business testers possess, which AI can't replace. He asserted that AI is a useful tool for bringing this knowledge to the forefront and integrating testers more completely into the software development process. Eran offered a different take on this question, stressing that AI will help different members of the testing team in different ways, depending on who is actually in charge of testing. While AI can help manual testers in one way, developers will leverage AI differently.

Where can AI and machine learning help with software testing?

Each member of our panel offered different ways that AI can impact software testing, but a few common themes stood out. One major way that AI can help with software testing is by identifying web elements consistently and reliably. Especially as these web elements change constantly to improve the user experience, AI can help tremendously with reducing test maintenance overhead. Dror specified that this is how TestCraft leverages AI and that tools that use machine learning in this way can both improve test stability and create a strong foundation for scalable test automation.
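
TestCraft has not published the details of its matching algorithm, but the general idea behind ML-assisted element identification can be sketched in a few lines: store a fingerprint of each element's attributes, then score candidates on the current page against it rather than relying on a single brittle locator. The function names, attributes, and threshold below are purely illustrative assumptions, not any vendor's actual implementation:

```python
# Hypothetical sketch of attribute-based element matching, the general idea
# behind "self-healing" locators. This is NOT any specific vendor's algorithm.

def similarity(fingerprint: dict, candidate: dict) -> float:
    """Fraction of fingerprint attributes the candidate still matches."""
    matches = sum(1 for key, value in fingerprint.items()
                  if candidate.get(key) == value)
    return matches / len(fingerprint)

def find_element(fingerprint: dict, candidates: list[dict],
                 threshold: float = 0.6):
    """Return the candidate most similar to the stored fingerprint,
    or None if nothing scores above the threshold."""
    best = max(candidates, key=lambda c: similarity(fingerprint, c),
               default=None)
    if best is not None and similarity(fingerprint, best) >= threshold:
        return best
    return None

# A button whose id changed between releases can still be found by its
# remaining attributes (tag, text, class), so the test keeps running.
fingerprint = {"tag": "button", "id": "submit-v1",
               "text": "Submit", "class": "btn"}
candidates = [
    {"tag": "button", "id": "submit-v2", "text": "Submit", "class": "btn"},
    {"tag": "a", "id": "cancel", "text": "Cancel", "class": "link"},
]
print(find_element(fingerprint, candidates))  # matches the renamed button
```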

Another popular way that AI can help companies improve their software testing is by doing results analysis. Paul focused on this area in particular, noting that AI is great at going through large amounts of data and advising on future data sets to analyze. In such cases, AI is helpful in areas where you don't have to be "right," but where detecting patterns and other common trends adds value.

Eran expanded on this idea with the third major way that AI can help with software testing, which is by showing testers issues within the test scripts themselves. With machine learning, testers can gain insight into why the test script isn’t testing a process as well as it can, and how making different changes can provide value.

How can testers know if AI is actually working?

While AI is certainly surrounded by a lot of excitement and hype, many testers want to see for themselves that it's actually making a positive impact on their company. Both Paul and Jennifer posited that a major key to determining whether your tool's machine learning algorithm is working is understanding your company's data sets. The machine learns about your product based on the data sets you already have, which reveal earlier patterns you can look to for guidance. With a statistical understanding of your company's previous data, you can set the relevant KPIs and decipher whether or not AI has helped your team.

Dror added to this by suggesting that testers ask the vendors themselves about their tools’ accuracy rates and misclassification rates. In other words, you should be able to find out how often it identifies the right elements and doesn’t produce false positives or negatives. While you will ultimately try out your test automation in your environment to see if machine learning helps your specific use case, this is information that should be transparent amongst all test automation vendors.
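
As a point of reference (this is standard classifier arithmetic rather than anything vendor-specific), accuracy and misclassification rate fall out of a simple confusion-matrix calculation; the counts below are invented for illustration:

```python
# Standard confusion-matrix arithmetic for a binary classification decision,
# e.g. "did the tool identify the right element?" Counts are hypothetical.
tp, tn, fp, fn = 90, 880, 10, 20   # invented outcomes from 1,000 attempts

accuracy = (tp + tn) / (tp + tn + fp + fn)   # share of correct decisions
misclassification_rate = 1 - accuracy        # equivalently (fp + fn) / total

print(f"accuracy: {accuracy:.1%}")                              # 97.0%
print(f"misclassification rate: {misclassification_rate:.1%}")  # 3.0%
```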

[Image: How to calculate a machine learning algorithm's accuracy rate and misclassification rate]

What is the future of test automation using AI? What should we expect in 2020?

In 2019, AI proved itself as a great opportunity for organizations that either feel stuck with manual testing or feel that their existing test automation can no longer scale. Therefore, all of our panelists had high hopes that the impact of AI in testing would continue into 2020. Jennifer anticipated that even more companies will be looking at how to create an AI strategy that relates to automation, as more companies recognize that AI is a great augmentation for software testers. Eran predicted that AI would become more embedded into the software development lifecycle (SDLC) and that testing would become a more integral part of the development pipeline as a result.

Dror added that companies will start to see their AI implementations begin to show their value in 2020, and that we should see new use cases for AI in test automation come into the market. While companies may have implemented it earlier on as part of investing in their test automation tool, 2020 will be the time when they finally gain a better understanding of how AI impacted their environments. Paul echoed this sentiment, noting that in 2020 we should see an evolution of AI in test automation based on what the community has learned from the first round of tool options.

How can testers take full advantage of AI-based test automation tools?

Our panelists offered two suggestions for how testers can best leverage AI automation tools in their own companies: be specific and start small. Dror and Paul focused on the first piece of advice, stressing the importance of choosing a tool that meets your specific challenges.

Once you choose a solution, it's also crucial to understand where this AI and machine learning tool can add value and use it there. They made it clear that you should not try to force your AI automation tool into areas it wasn't designed for, since it will not provide the value you need there. To prove its value to management, Paul also advised that your AI-based testing tool should support at least one business goal. This will set a strong precedent for company leadership to invest in tools that the QA team needs to grow and thrive.

Jennifer and Eran concentrated on the latter piece of advice for taking advantage of AI and ML in test automation: start small. To run an effective proof of concept, start by choosing 5-10 existing tests in your environment that you feel can benefit quickly from AI, whether because they're flaky or because they don't use proper object locators (a hypothetical way to shortlist such tests is sketched below). Once you've certified these tests, embedded them into your DevOps lifecycle, and confirmed they deliver value, you can build on top of them with an additional layer of testing. This will also make you a more informed consumer when evaluating AI-based automation testing tools.
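
As a purely hypothetical illustration of that shortlisting step, one simple heuristic is to rank tests by how often their outcome flips between consecutive runs; the test names and run history below are invented:

```python
# Hypothetical helper: rank tests by how often their result flips between
# consecutive runs. A high flip rate suggests flakiness, making that test
# a good candidate for an AI-based tooling pilot. All data is invented.
history = {
    "test_login":    ["pass", "fail", "pass", "pass", "fail", "pass"],
    "test_checkout": ["pass", "pass", "pass", "pass", "pass", "pass"],
    "test_search":   ["fail", "pass", "fail", "pass", "fail", "fail"],
}

def flip_rate(results: list[str]) -> float:
    """Fraction of consecutive run pairs whose outcome changed."""
    flips = sum(a != b for a, b in zip(results, results[1:]))
    return flips / (len(results) - 1)

# Most flip-prone tests first: the stable checkout test lands last.
for name, results in sorted(history.items(),
                            key=lambda item: flip_rate(item[1]),
                            reverse=True):
    print(f"{name}: flip rate {flip_rate(results):.0%}")
```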

What criteria should testers use when choosing an AI-based test automation tool?

Our panelists ended the conversation by discussing the various test automation tools on the market that incorporate AI and machine learning, as well as the criteria involved in making the right tool selection for your company. Multiple tools on the market offer some type of AI capability (including TestCraft), with different levels of maturity. Throughout the panel, each speaker repeatedly stressed the importance of doing your homework to make sure that the tool you are looking at can address your specific needs.

There are many factors to consider when choosing an AI-based test automation tool, such as the skills required to use the tool, the type of testing you’re looking to do, and whether or not it can interact with the rest of your tool stack. Jennifer also pointed out that it’s important to continually reassess your tools. Advancements in AI are moving very quickly, so a tool that does not have the capability you need may offer it in another three months. By identifying these criteria ahead of time, you will set the expectations that you need to implement AI successfully into your testing operations.


Happy New Year from TestCraft!