Test Automation Aging in the Era of Continuous Testing

Continuous testing aims to identify the business risks associated with code changes, and such testing is typically triggered through Jenkins or other CI schedulers. This is not a new statement.

In addition, and again not a new fact, continuous testing is enabled by mature, robust, and reliable test automation. The problem that arises with these practices, which are required and critical for maturing DevOps, is: “how do we ensure that these test cases are still valuable?”

Which Tests Should I Automate?

This is one of the oldest and most common questions that practitioners and test managers ask. It concerns the value of the tests as well as the ROI of developing and executing them.

Before addressing the above question, it is imperative to understand that every test scenario an engineer develops and adds to the regression suite/CI adds weight from a test maintenance perspective. Done continuously, this load grows and can introduce delays, flakiness, and demand for additional resources such as devices and desktop browsers to meet strict test execution schedules.

With the above in mind, it is important to go through the following or a similar exercise when deciding which tests a team and its individuals should automate – the time spent on such “justification” will pay for itself in the long run.

The following practice takes manual test scenarios, divides them into functional areas, and evaluates each of them against the following criteria:

  1. Gut feeling
  2. Risk
  3. Value
  4. Cost
  5. History

As the above categories state, Gut feeling stands for the initial inclination to automate or not automate a given scenario. Risk covers the probability of a failure occurring in this area, as well as its impact when it does. Value, in my mind one of the most critical aspects, addresses whether this test scenario adds new information and contributes to risk identification, or whether it merely duplicates other unit or functional test cases. Cost relates to the time it takes engineers/developers to automate and maintain the scenario, and to how feasible it is to automate as an end-to-end scenario (efficiency). Lastly, History relates to the volume of historical failures around this functional area or feature – this should be a data-driven decision backed by historical defect evidence.
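
One way to operationalize these criteria, in the spirit of the decision table (Fig 1), is a simple scorecard. The following is a minimal Python sketch; the 0–5 scales, equal weighting, and threshold value are illustrative assumptions, not a prescription:

```python
from dataclasses import dataclass

@dataclass
class ScenarioScore:
    """Scores for one manual test scenario, each on a 0-5 scale (illustrative)."""
    risk: int     # probability x impact of a failure in this area
    value: int    # new information / distinct risk coverage (not a duplicate)
    cost: int     # ease of automating and maintaining (higher = cheaper)
    history: int  # volume of historical defects in this functional area

def should_automate(score: ScenarioScore, threshold: int = 12) -> bool:
    """Sum the criteria and automate only if the total clears the threshold.

    The equal weighting and the threshold of 12 are assumptions for
    illustration -- teams should calibrate both against their own data.
    """
    total = score.risk + score.value + score.cost + score.history
    return total >= threshold

# Example: a high-risk, high-value scenario that is cheap to automate.
login_flow = ScenarioScore(risk=5, value=4, cost=3, history=4)
print(should_automate(login_flow))  # prints True (total of 16 clears 12)
```

Gut feeling is deliberately left out of the arithmetic: it is the starting hypothesis the other four criteria confirm or refute.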

Fig 1: Decision Table for Test Automation Scenario Selection. Source: Angie Jones

Test Automation Aging in Continuous Testing

While we defined earlier in this blog which tests should be automated, this is not a static definition, and cannot be taken as a “once and for all” decision. The product and the platforms on which the product runs evolve over time, and the tests should also follow this evolution.

Test maintenance practices are essential to ensure that test automation continuously adds value.

Executing test automation within CI has a purpose: to provide value and identify business and quality risks. To meet this objective, these tests must always stay updated and relevant.

It is common knowledge that when the same test code runs for too long without finding any new defects, continuing to execute it is a waste of time and resources. As a matter of fact, practitioners often add layers of exploratory testing to mitigate this reality and introduce “out of the box” testing to identify new and hidden defects. In addition, when executing many tests at scale and some percentage of them provide no value, the growing volume of test data adds time at the end of the cycle for test report analysis and debugging. This is redundant and unnecessary noise.

Test automation code should be treated and maintained as production code, and that means refactoring, updating, and yes, retiring obsolete test scenarios so that testing continuously adds value.

Best Practices in Test Automation and Continuous Testing

To address the above challenges, teams need to follow some basic practices that fall into the following categories:

  1. Test automation development – the decision-making process
  2. Test automation maintenance as an ongoing practice
  3. Test data analysis and trending via ML/AI

Let’s explore the above categories.

Test Automation Development Process

As highlighted earlier in this blog, the decision about each test automation scenario should be given priority, since every new test scenario added into the pipeline has a “price”. That price is a function of development cost, maintenance cost, and the cost of executing it across platforms per night and per code commit. This is at the core of such a decision. Teams often chase the size of the test suite as a metric of their success, when they should focus on the value such tests will provide over time.
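
That price can be made concrete with a rough back-of-the-envelope model. The sketch below is an assumption-laden illustration (a linear model with 30 execution days per month), not a standard formula:

```python
def scenario_cost(dev_hours: float,
                  maintenance_hours_per_month: float,
                  runs_per_day: int,
                  platforms: int,
                  minutes_per_run: float,
                  months: int = 12) -> float:
    """Estimate the total hours one test scenario costs over a horizon.

    Combines one-time development effort, ongoing maintenance, and
    execution time across platforms for nightly/commit-triggered runs.
    The linear model and 30-day month are illustrative assumptions.
    """
    execution_hours = (runs_per_day * platforms * minutes_per_run
                       * 30 * months / 60)
    return dev_hours + maintenance_hours_per_month * months + execution_hours

# Example: 8h to build, 2h/month to maintain, 3 runs/day on 4 platforms,
# 2 minutes per run, over one year.
print(scenario_cost(8, 2, 3, 4, 2))  # prints 176.0
```

Even this crude model makes the point: over a year, execution and maintenance dwarf the initial development effort, which is exactly why suite size is the wrong success metric.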

The Path to Continuous Testing chart

As the above visual suggests, rushing to bake in too many tests without stabilizing them and ensuring their value-add to the overall pipeline results in failure against both cost and quality objectives – hence, great disappointment.

Test Automation Maintenance

Test automation value must be measured over time, not only for the first few cycles. As mentioned above, platforms and products evolve, and that has significant implications for the value and “price tag” that was initially placed on a test scenario. That is why teams must have a structured, well-defined process to judge whether a test automation scenario is still relevant and should continue running within the CI. One way of addressing the aging of test automation code is by merging existing tests into a newer, more advanced scenario, for example one based on a newly developed exploratory test that is fresh, can find regression defects, and extends functional coverage.
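
One simple signal to feed such a review process is how long it has been since a test last found a defect. Here is a minimal sketch, assuming the team records each test's last failure date; the 180-day window is a hypothetical policy, and the function name is mine:

```python
from datetime import date
from typing import Optional

def flag_for_review(last_failure: dict[str, Optional[date]],
                    today: date,
                    max_age_days: int = 180) -> list[str]:
    """Flag tests that have not failed (i.e., found a defect) recently
    as candidates for merging, updating, or retirement.

    A test mapped to None has never failed since tracking began.
    The 180-day window is an assumed policy, not a fixed rule.
    """
    stale = []
    for test, failed_on in last_failure.items():
        if failed_on is None or (today - failed_on).days > max_age_days:
            stale.append(test)
    return stale

# Example: "checkout" last failed 181 days ago, "login" only 30 days ago.
history = {"checkout": date(2023, 1, 1), "login": date(2023, 6, 1)}
print(flag_for_review(history, today=date(2023, 7, 1)))  # prints ['checkout']
```

A flagged test is not automatically deleted; it is the input to the human judgment described above, where it may be merged into a fresher end-to-end scenario.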

Test Data Analysis and Trending via ML/AI

It’s clear that teams cannot improve what they cannot measure, or act on data they cannot visualize and understand. That’s why test data analysis that can be easily sliced and diced comes into play, and can be a game changer in determining a test scenario’s relevancy, efficiency, and more.

In addition, test data analysis can also tell teams whether a test automation scenario is reliable and stable enough, or whether it needs to be excluded and undergo updates to increase its stability and reliability.
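
For example, a test's recent pass/fail history can be reduced to a simple stability classification that drives the include/exclude decision. A sketch, where the 95% threshold and the category names are my assumptions:

```python
def classify_stability(results: list[bool],
                       flaky_threshold: float = 0.95) -> str:
    """Classify a test from its recent pass (True) / fail (False) history.

    A test whose pass rate falls below the threshold is treated as flaky
    and becomes a candidate for exclusion from CI until it is stabilized.
    The 95% threshold and labels are illustrative assumptions.
    """
    if not results:
        return "no-data"
    pass_rate = sum(results) / len(results)
    if pass_rate == 1.0:
        return "stable"
    if pass_rate >= flaky_threshold:
        return "mostly-stable"
    return "flaky"

# Example: a test that fails every other run is clearly flaky.
print(classify_stability([True, False, True, False]))  # prints flaky
```

In practice the same slicing can be applied per platform or per build trigger, so a test that is stable on desktop but flaky on a specific device does not get retired wholesale.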

Test Data Analysis Example

Continuous Testing Needs a Refresh

Test automation infrastructure in the era of continuous testing must always stand the test of time.

To make continuous testing valuable rather than a liability, teams need to refresh existing processes, bake some of the above practices into their daily routine, and, most importantly, understand the implications of test automation suite size and maintenance.

