Architecting a test strategy [webinar recap]
In what universe do test automation, “Avengers: Infinity War,” and cathedrals have anything to do with each other?
This question and many more were answered in our recent webinar, “Architecting a test strategy,” with Brendan Connolly, Staff QA Engineer at Procore Technologies and recent EuroSTAR Rising Star Award recipient.
In his presentation, Brendan shared his model of how to discuss and create a comprehensive test strategy. Looking specifically at the role of automation within a testing strategy, he gave important guidelines on how to identify how many automated tests your team will need and what types of tests to automate.
Here is a brief summary of what Brendan discussed in the webinar:
What is a test strategy and why does it matter?
To open his talk, Brendan gave a clear and insightful definition of the term "test strategy." He defined it as:
“Guiding principles that provide focus, context, and drive actions of the team to manage risk.”
He gave this definition at the beginning of his talk for two major reasons. First, he clarified the overall goal of a test strategy: to manage risk and release a high-quality application that meets the end-users’ needs successfully. Second, this definition gave more insight into answering the underlying question, “Why is it important to have a test strategy at all?”
He addressed this second point by noting that testing is not always well understood, whether by developers, product owners, or upper management. Having a strategy in place is an effective way to protect testers from the fear and blame that can arise from this lack of understanding.
A clear strategy helps organizations avoid needlessly blaming testers for issues, and instead gives teams concrete ways to improve their testing going forward.
There are various elements to consider when building a test strategy, but Brendan focused specifically on the role that test automation plays in managing risk when building applications.
The role of automation in a test strategy
While there is much discussion about the various effects that test automation has on improving software testing, Brendan highlighted three points that testers should consider when adding automation to a test strategy:
- Time. How much time should we spend on automated tests, and on testing in general in the larger context of software development?
- Quantity. How many tests should we automate?
- Choice. What types of automated tests should we run?
What test pyramid models have to say
To begin answering these questions, Brendan showed how he came up with a general heuristic for the amount of time to spend on testing by understanding various test pyramid models as actual triangles. Using geometry and trigonometry, he mapped out how each pyramid model broke down the various types of tests needed to build a quality application, e.g. unit tests, service-level tests, and UI tests (as well as component, API, and integration tests).
To add to this, Brendan noted that there are various tools on the market, such as TestCraft, that can help make sure that one type of testing doesn't overwhelm another. He also suggested that while each team ultimately knows what works best for their application, it can help to use these numbers from the test pyramid as a general guideline.
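As a rough illustration of how pyramid ratios can act as a guideline, the sketch below splits a total test budget across the classic layers. The 70/20/10 split is a common rule of thumb, not a figure Brendan gave in the talk:

```python
# Hypothetical sketch: allocate a test budget using pyramid-style ratios.
# The 70/20/10 split is an illustrative assumption, not from the webinar.
PYRAMID_RATIOS = {"unit": 0.70, "service": 0.20, "ui": 0.10}

def allocate_tests(total_tests: int) -> dict[str, int]:
    """Split a total test count across pyramid layers by ratio."""
    return {layer: round(total_tests * ratio)
            for layer, ratio in PYRAMID_RATIOS.items()}

print(allocate_tests(200))  # {'unit': 140, 'service': 40, 'ui': 20}
```

The point of such a sketch is only to make the heuristic concrete; as Brendan noted, each team should adjust the ratios to what works for their application.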
As for how much effort to put into automation and which types of tests to automate, Brendan used the Infinity Gauntlet from "Avengers: Infinity War" as a metaphor. Like the Infinity Stones that Thanos searches for in the film, tests should be meaningful, reliable, and tied either to developing a feature or to maintaining a specific area of the product.
Even with just a handful of automated tests, Brendan noted, automation can still make a tremendous impact. To determine which tests to automate, he offered two criteria: they should confirm that the application is releasable, or characterize key behaviors. Keeping these criteria in mind will help ensure that the tests your team does choose to automate will yield results.
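Those two criteria can be modeled as a simple filter over candidate tests. The test names and fields below are hypothetical, purely to illustrate the selection rule:

```python
# Illustrative sketch: select automation candidates using the two criteria
# from the talk. All names and fields here are hypothetical examples.
candidates = [
    {"name": "checkout_happy_path",  "confirms_releasable": True,  "characterizes_behavior": False},
    {"name": "legacy_report_layout", "confirms_releasable": False, "characterizes_behavior": True},
    {"name": "button_color_check",   "confirms_releasable": False, "characterizes_behavior": False},
]

# Automate a test if it meets at least one of the two criteria.
to_automate = [t["name"] for t in candidates
               if t["confirms_releasable"] or t["characterizes_behavior"]]
print(to_automate)  # ['checkout_happy_path', 'legacy_report_layout']
```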
Remodeling the test pyramid
Yet even though the test pyramid can provide insightful guidance, Brendan stressed that looking at testing as a pyramid can mistakenly present a model of isolation. These models often fail to account for the team's collective needs and expectations (which extend beyond testers alone), making success difficult to measure.
Brendan then suggested that instead of strategizing your test automation like a pyramid, it makes more sense to understand testing as a cathedral model. While still built on a foundation of unit tests (like the pyramid model), the central focus of the cathedral model is the application instead of a larger mountain of tests.
Instead of viewing other tests (service tests, UI tests, API tests, etc.) as layers, the cathedral model that Brendan suggested views them as supports. They reinforce the simpler unit tests that are part of the test automation suite, while also helping testers work towards enhancing the overall quality of the application. They all then work together to bear the weight of the rest of the “building” structure, which consists of the demands and stresses that your product faces from customers.
This cathedral model shifts the focus from making sure the team has enough time for various tests to targeting where your application needs support most. Brendan stressed that this mindset allows for building a test strategy that fulfills its original goal: allocating resources to manage risk effectively.
Keep your test strategy agile
With this cathedral model in mind, Brendan finished his talk by stressing that just like testing should be agile, test strategy should be agile as well. Heavy documentation, while done with the best of intentions, often leads to heavy bureaucracy without necessarily guaranteeing better outcomes.
Instead of documentation, Brendan proposed using test charters when “architecting” a test strategy in order to help testers stay focused and give clarity without being too restrictive. By maintaining this balance, test charters become living documents that are open to reevaluation at different stages of the testing and development process.
| Sample test charter |
| --- |
| Use (test/automation technique) |
| Led by (owner/creator) |
| To support (desired outcomes) |
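A charter following this template could even live alongside the code as a lightweight record. This dataclass sketch mirrors the template's three fields; the example values are invented, nothing here is prescribed by the talk:

```python
from dataclasses import dataclass

@dataclass
class TestCharter:
    """Lightweight record mirroring the sample charter template."""
    use: str         # test/automation technique
    led_by: str      # owner/creator
    to_support: str  # desired outcomes

# Hypothetical example charter (illustrative values only).
charter = TestCharter(
    use="API contract tests",
    led_by="QA pairing with backend devs",
    to_support="confidence that the service layer is releasable",
)
```

Because it is short and explicit, such a record stays easy to revisit and rewrite at different stages of testing and development, which is what keeps the charter "living."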
He ended with a reminder that there is no cookie-cutter approach to test strategy; automation is ultimately there to support the team's needs and empower quality. Testers should not be left behind when automating, and choosing the right tools serves as a gateway to quality test automation.
With that, Brendan answered questions from the audience. For your convenience, you can read the questions and his answers below:
What is the difference between a test plan and a test strategy?
I worry when I hear the words "test plan," because it brings me back to waterfall days, when testers worked in a silo. A plan also makes things too formal, whereas working in an agile, strategic way gives organizations the opportunity to achieve their goals. I also feel that the word "strategy" resonates more with high-level stakeholders.
How do you adjust the time one “should” spend on different types of testing when a lot of people are dealing with mass furloughs during COVID-19?
Time is a big challenge, both in the current pandemic and in other situations as well. It’s important to remember that the times I suggested are more of a heuristic guideline when thinking of testing in a pyramid structure. There is no magic number for how much time you should spend on testing, but focusing on the quality of the “handful” of tests I mentioned at the beginning will help make sure that you’re moving in a positive direction.