
Functional Test Automation: Blending Functional and Machine-Learning Tests for Success

For practitioners who develop automated tests, it is no surprise that keeping false negatives to a minimum and continuously maintaining test code as the apps change are constant challenges.

The market is clearly evolving toward the adoption of machine learning (ML) and artificial intelligence (AI) testing solutions that can reduce, and in some cases eliminate, these challenges.

The questions typically raised in the context of AI and ML are:

  1. What happens to the vast number of existing code-based functional test scenarios (written in Java, JavaScript, and other languages)?
  2. Are there specific tests that best fit the ML/AI testing tools?
  3. Can traditional functional test code and ML automated tests live together in one single suite?
  4. What does the shift mean for the tester persona and role?
  5. Are there any workflow and DevOps process changes that are related to the adoption of ML/AI tools?

Before providing the perfect blend of the two test authoring and maintenance approaches (hint: blending is a valid option 😊), let’s highlight the key differences between the two.

Realizing the Material Differences Between the Two Approaches

Test Authoring Workflow

When differentiating between machine-learning testing and traditional coded continuous testing, these are the main points:

  • Traditional coding typically follows written test specs, whether BDD-based, user-story-based, or test scenarios captured in Excel or Word documents. The test automation engineer scripts the flows with Selenium, Appium, or other open-source frameworks in Java, JavaScript, Python, C#, etc. To implement the tests, they need an object spy to learn the application's object properties and interact with the app under test. The code is managed and maintained in Git or another source control management (SCM) system and is authored inside common IDEs such as IntelliJ and Eclipse (see the sketch after this list). End to end, writing a test scenario this way is a time-consuming task.
  • Machine learning, on the other hand, is the exact opposite of the above. Here, the test engineer uses a smart recorder with advanced object-learning capabilities, and the recorded test artifacts are codeless tests. Some differences stand out immediately: the authoring “canvas” is not an IDE, and the test flows are recorded from the UI perspective rather than developed in a coding language. Test management within an SCM is not really supported in this case. Unsurprisingly, end-to-end test authoring in the ML approach is much faster than in the coding approach.
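To make the first bullet concrete, here is a minimal sketch of what a traditional coded test looks like, written in Java with Selenium. The URL, locators, and credentials are hypothetical placeholders; in a real project they would come from the test specs and an object-spy session against the app under test.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginFlowTest {
    public static void main(String[] args) {
        // Assumes chromedriver is available on the PATH.
        WebDriver driver = new ChromeDriver();
        try {
            // Hypothetical app under test and locators, for illustration only.
            driver.get("https://example.com/login");
            driver.findElement(By.id("username")).sendKeys("demo-user");
            driver.findElement(By.id("password")).sendKeys("demo-pass");
            driver.findElement(By.cssSelector("button[type='submit']")).click();

            // Verify a post-login element is shown; otherwise the flow broke.
            if (!driver.findElement(By.id("dashboard")).isDisplayed()) {
                throw new AssertionError("Login flow did not reach the dashboard");
            }
        } finally {
            driver.quit();
        }
    }
}
```

In practice such a class would live in Git, run under a runner like TestNG or JUnit, and be wired into CI, which is exactly where the authoring and maintenance cost accumulates.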

Test Maintenance Process

Writing the first version of a test scenario is in many cases a fun activity, and seeing the test pass on the first and perhaps second execution is great. The challenge arises when that test flow is executed on an hourly basis within continuous integration (CI) or by another testing trigger. How do we keep the tests stable, robust, less flaky, and relevant as the app and its objects change?

  • Traditional code maintenance can be painful: in many cases there is a lot of debugging, rewriting pieces of code, dealing with object changes, and tailoring the tests to run in parallel across platforms within CI, often using test data providers like TestNG. Test maintenance consistently ranks as a heavy workload in DevOps software iterations. If the test engineer isn't proactively debugging and executing the tests to uncover regressions, it becomes a huge problem in the official functional testing cycles.
  • Machine-learning tools, especially the current and emerging ones, set as their main objective tackling the challenge of maintaining test scenarios, dealing with object changes, and stabilizing test executions. These tools refer to the process of automatically “fixing” and maintaining test scenarios as a self-healing mechanism: they dynamically scan the DOM and other objects of the app under test, and at runtime can swap the objects used within the tests to prevent test flakiness and slowed-down test cycles (a simplified sketch of the idea follows this list).
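The vendors' actual self-healing models are proprietary, but the core idea can be sketched in a few lines of Java: instead of relying on one brittle locator, keep a ranked list of known attributes for each element and fall back through them at runtime. The class below is an illustrative simplification, not any specific tool's implementation.

```java
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class HealingLocator {
    // Tries each candidate locator in order and returns the first match.
    // Real ML tools go further: they score DOM candidates against learned
    // attribute weights instead of using a fixed fallback chain.
    public static WebElement find(WebDriver driver, List<By> candidates) {
        for (By locator : candidates) {
            try {
                return driver.findElement(locator);
            } catch (NoSuchElementException e) {
                // This locator broke (e.g., an id changed in a new build);
                // fall through to the next known attribute.
            }
        }
        throw new NoSuchElementException("No candidate locator matched the element");
    }
}
```

A call such as `HealingLocator.find(driver, List.of(By.id("submit"), By.name("submit"), By.xpath("//button[text()='Submit']")))` keeps the test running when the primary id changes, which is the kind of stability the ML tools deliver automatically.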

Test Execution Process

Here I will avoid a detailed differentiation between the two methods, since both types of tools are well integrated into the CI toolchain and allow scheduling of test runs through Jenkins and other tools. There are tangible differences in triggering test automation outside of CI, such as the use of test data providers in the coding approach (TestNG, IDEs, etc.) versus the test execution manager built into the ML tools (a small TestNG example follows). Both methods also support test cloud providers and can easily leverage the power of the cloud.
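As an example of the coding-side triggering mentioned above, a TestNG data provider can fan a single scenario out across platforms. The browser/OS pairs below are illustrative and would map to a local grid or a cloud provider in practice.

```java
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class CrossPlatformLoginTest {
    // Each row is one platform combination the scenario should run on.
    @DataProvider(name = "platforms", parallel = true)
    public Object[][] platforms() {
        return new Object[][] {
            {"Chrome", "Windows 11"},
            {"Safari", "macOS"},
            {"Chrome", "Android"}
        };
    }

    @Test(dataProvider = "platforms")
    public void loginFlow(String browser, String os) {
        // A real test would request a matching RemoteWebDriver session
        // from a grid or cloud hub here; this stub only logs the combination.
        System.out.printf("Running login flow on %s / %s%n", browser, os);
    }
}
```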

Overall Maturity of Approaches

This is a broader area of comparison between the two approaches.

In this category, we should address things such as community support, documentation, reporting capabilities, integrations, support for more than functional testing types, etc.

  • Traditional coding obviously has the highest maturity from all the perspectives above. Focusing on just Selenium and Appium out of the entire plethora of frameworks, these tools enjoy an impressive number of contributors in the open-source community. In addition, they integrate easily with various CI tools, IDEs, unit testing frameworks, reporting, and other test management solutions. The expertise and know-how out there help test engineers succeed in what they are trying to accomplish.
  • Machine-learning-based tools are roughly 1-1.5 years old, and that youth carries a maturity price around integrations, documentation, best practices, and mature logs and reporting, as well as limitations in automating advanced mobile and desktop web capabilities that exist in the open-source tools, such as working with face and fingerprint sensors and more.

Target Persona and Test Authoring Usability

I believe this is one of the most important categories in the comparison between the two approaches.

The skillset required to move to the machine-learning tools is, obviously and by design, lower than that required for the code-based solutions, but is that the only consideration? I think the point is larger than that.

  • Traditional coding in most cases appeals to highly skilled test developers who feel more comfortable writing test scenarios in Java, JavaScript, or another language. The point here is that while these engineers know how to code, it is a time-consuming activity even before adding the test maintenance factor, and time is clearly becoming a major issue and consideration in DevOps shops. These test engineers must at all times maintain the high level of trust that the teams they serve place in the test execution results.
  • Machine learning is first and foremost appealing to test engineers and business testers who are challenged daily with a lot of manual, tedious testing, or who are using legacy, unstable test frameworks that are not code-based. These testers often delay test cycles, since manual testing obviously takes time, and the results are not always consistent (they depend on the tester's point of view).

Solving the Puzzle!

After walking through the processes and a few pros and cons of the two approaches, here is the bottom line: there is no right or wrong here. The objective of a DevOps team is to deliver high-value, innovative solutions to its customers with high quality, confidence, and short cycle times.

If we look at the pains in test authoring (time, reliability) and, further down the road, in maintenance, and try to provide a prescriptive approach for DevOps teams, it would look like the following visual.

[Figure: pie chart blending functional and machine-learning tests]

Traditional test code that covers the most reliable, repeatable scenarios with constant, proven value should remain in place and continue to be plugged into CI or other test-triggering engines. My feeling is that in mature organizations this bucket of tests would not surpass 60% of the functional regression suite for web/mobile apps.

The remaining 40% of the suite, in my mind, is distributed as follows:

  1. Manual test scenarios that were hard to automate for various reasons
  2. Tests written in code that produce inconsistent results, across platforms or in general

Taking up to 30% of these manual and unreliable tests and challenging the machine-learning tools with them would be a huge win-win. Manual testers join the test automation “machine,” plug into the DevOps pipeline, and at the same time gain reliable and predictable test results. Developers who are short on time can add new scenarios that were either hard to automate in code or too time-consuming as easy record-and-playback tests.

There will still be, and this is fine, a remaining 5-10% of manual UX/UI test scenarios that provide unique value to the business; these are most likely to run less than daily, and over time they can be converted into highly efficient tests.

If it isn't clear by now: there is room for all personas in DevOps testing activities, and teams that follow the above or a similar methodology can become quite efficient and successful.

Happy Dual Approach Testing to All.

 
