The 5 Do Nots of Software Testing: Best Practices & FAQ


Developing company-wide software testing best practices is a great way to make sure your team stays on track with delivering a high-quality product to your users. But what happens when “best practices” get in the way of innovation?

We recently hosted a webinar with Melissa Tondi of E*TRADE’s Quality Engineering leadership to discuss some of the major software testing “don’ts” that not only have grown stale over time, but also actively impede the success of the SDLC.

You can watch the full webinar for Melissa’s take on automated testing, the relationship between testers and developers, and encouraging team-wide collaboration. Below are some of the highlights:

Which Software Testing “Best Practices” Should Testers Avoid?

“Innovation should be at the forefront of the software testing industry.” – Melissa Tondi

While software testing best practices are great to have, Melissa centered her talk on the need to revisit those practices to make sure they’re still applicable.

She asserted that even the best-laid plans should be reevaluated periodically to make sure that a company’s testing best practices stay aligned with the industry and are agreed to collaboratively. Especially for companies looking to build mature agile teams, it’s important that every team member involved helps determine effective best practices.

With that in mind, Melissa spoke about the top 5 software testing “do nots” that testers should avoid:

  1. Do not be an enabler
  2. Do not automate everything
  3. Do not have QA-only sprints
  4. Do not own all testing
  5. Do not hide information

Melissa went through each software testing “don’t” in depth and proposed different best practices to focus on instead. Here is an overview of her points:

Do Not Be An Enabler

Melissa clarified this software testing “don’t” by explaining it another way: don’t turn QA into a “why didn’t you catch that?” team. In a typical two-week sprint, testers should not have to scramble to test eight days’ worth of software development in the last two days. This sets unrealistic expectations for testers while causing unnecessary stress and pressure.

Instead, Melissa suggested using risk-based and context-driven testing approaches. These let the team know where QA will focus its testing efforts when stories only arrive at the end of the sprint. She also suggested that QA prioritize its tests to make this process easier and avoid cramming in testing at the last minute.

Risk-based testing approach (image source: Guru99).
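To make the idea concrete, here is a minimal sketch of how a team might score and order tests under a risk-based approach. The test names, rating scales, and scores are hypothetical, not from Melissa’s talk:

```python
# Minimal sketch of risk-based test prioritization: score each test by the
# likelihood of failure and the impact of a failure, then run the riskiest
# tests first. All test names and ratings here are hypothetical.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (cosmetic) to 5 (critical)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

tests = [
    TestCase("checkout_payment_flow", likelihood=4, impact=5),
    TestCase("profile_avatar_upload", likelihood=2, impact=1),
    TestCase("login_session_expiry", likelihood=3, impact=4),
]

# With only two days left in the sprint, QA works from the top of this
# list down, so the highest-risk areas are always covered first.
for test in sorted(tests, key=lambda t: t.risk, reverse=True):
    print(f"{test.name}: risk score {test.risk}")
```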

Do Not Automate Everything

While many testers already know that there is no such thing as 100% automation, Melissa noted that management teams don’t always understand this. Not only does this misguided “best practice” remove creative license from those working in test automation, but it’s also dangerous to assume that more automation yields higher product quality.

Instead, organizations should use what Melissa refers to as the “A of A,” or “automated of automatable,” principle to ensure that automated tests provide tangible value. Rather than aiming to automate a certain percentage of tests overall, this principle ensures that the tests being automated will actually contribute to the product’s quality.

Tying automation to the criteria for when the product is shippable, or ready to send to users, creates accountability across the project team. Instead of everyone focusing on their own tasks, using “A of A” as a guideline also gives the team a more holistic understanding of the work being done and how it will benefit end users.
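As a rough illustration (my sketch, not from the webinar), the “A of A” metric can be computed by measuring automation coverage against only the tests the team agrees are automatable. The test inventory below is invented:

```python
# Sketch of an "automated of automatable" (A of A) metric: measure automation
# coverage against the set of tests worth automating, not against all tests.
# The test inventory below is hypothetical.
tests = [
    {"name": "create_order",      "automatable": True,  "automated": True},
    {"name": "refund_edge_cases", "automatable": True,  "automated": False},
    {"name": "exploratory_ux",    "automatable": False, "automated": False},
]

automatable = [t for t in tests if t["automatable"]]
automated = [t for t in automatable if t["automated"]]

# Here 1 of 2 automatable tests is automated (50% A of A), even though only
# 1 of 3 tests overall is automated. The exploratory test is left to humans.
print(f"A of A: {len(automated)}/{len(automatable)} "
      f"({100 * len(automated) / len(automatable):.0f}%)")
```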

Do Not Have QA-Only Sprints

Another software testing “best practice” Melissa advised testers to avoid is having QA work in a silo, sometimes called QA hardening. She emphasized that there should never be a time when testers are finishing up one task while the developers move on to something new.

Instead, she suggested that testers follow the ABC principle, or “always be coupled” with the development team. There should be established protocols in place to ensure that the QA team is always collaborating with their development counterparts.

Do Not Own All Testing

Melissa admitted that this software testing “don’t” may seem counterintuitive at first glance, especially since QA’s responsibility is generally to test new application features or updates before they’re released to production. Yet she reminded the audience that even if QA has a larger stake in testing, there are other project team members who should be doing testing as well.

With that in mind, Melissa advised that software testers should know what developers and other team members are doing to move the application along. The QA team should also encourage consistency by establishing a “definition of done,” so that every group knows exactly what the others do before the project moves to the next stage.

As part of this “definition of done,” Melissa recommended that each group on the team, from development to project management to testing, commit to its own checks and high-level activities. Centralizing the checking process this way removes the misconception that QA is solely in charge of testing and ensuring product quality.

Do Not Hide Information

While usually unintentional, there are times in application development when information becomes hidden, or not shared with the larger team. Melissa highlighted this as her final software testing “don’t,” encouraging teams to make this type of information explicit once it’s uncovered.

Melissa’s suggestions for making information both available and accessible included formalizing it, whether as part of your stories, acceptance criteria, or requirements. In addition, she advised that everyone involved in a project attend refinement meetings to guarantee the whole team has correct, up-to-date information.

Software Testing Best Practices FAQ

After her presentation, Melissa answered questions from the audience. See what she had to say below:

How do you reconcile the idea of not “automating everything” with achieving continuous test automation?

If humans are expected to consume our software, then there needs to be a human element of testing. We must be able to test on behalf of the user, and sometimes those tests simply cannot be automated.

When we create a prioritized list of what to automate first using the “A of A” principle, and don’t move on to automating new things until everything earlier on that list is complete and passing, we free ourselves to explore the software and be more creative when interacting with it. Using “A of A,” we can in fact reach a much higher percentage of completion, which offers far more value than simply having a certain number of test cases automated.

Should QA be taking part in refinements and in sizing the stories?

If QA is responsible for completing work on stories, then they absolutely should be part of these tasks. The usual follow-up question is whether we should size the work from a development standpoint, from a QA standpoint, or all together as a group.

To start answering these questions, first understand why QA has not been part of these refinement discussions in the first place. If QA is tasked with completing work, they should be part of those conversations.

Can you please clarify the difference between intake and smoke testing?

Intake tests usually refer to Integration and/or Unit tests, which are traditionally handled by Dev. Smoke tests, on the other hand, refer to CRUD actions (Create, Read, Update, Delete).
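For illustration only, here is a minimal sketch of what a CRUD-style smoke test might look like against a hypothetical REST endpoint. The URL, payloads, and expected status codes are assumptions, not a real API:

```python
# Minimal CRUD smoke test sketch. The base URL, endpoint, payloads, and
# expected status codes are all hypothetical, for illustration only.
import requests

BASE = "https://example.test/api/widgets"

def smoke_test_crud():
    # Create: POST a new record and capture its id.
    created = requests.post(BASE, json={"name": "demo"}, timeout=10)
    assert created.status_code == 201
    widget_id = created.json()["id"]

    # Read: the record should be retrievable.
    read = requests.get(f"{BASE}/{widget_id}", timeout=10)
    assert read.status_code == 200

    # Update: a change should be accepted.
    updated = requests.put(f"{BASE}/{widget_id}", json={"name": "demo-2"}, timeout=10)
    assert updated.status_code == 200

    # Delete: the record should be removable.
    deleted = requests.delete(f"{BASE}/{widget_id}", timeout=10)
    assert deleted.status_code == 204

if __name__ == "__main__":
    smoke_test_crud()
    print("CRUD smoke test passed")
```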

How do you recommend avoiding a testing time crunch at the end of a sprint?

Watch the sprint’s health. Sprint days 2-8 should see a consistent number of stories delivered by Dev each day; if we aren’t Dev-complete with all stories by the end of sprint day 8, we will be in a time crunch. Use the daily standups to gauge the health of the sprint, and when we anticipate many stories coming to QA in the last third of the sprint, discuss each day how the whole team, not just QA, will finish the testing.
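One simple way to operationalize this (my sketch, not Melissa’s; the story counts are hypothetical) is to compare stories delivered so far against the pace needed to be Dev-complete by day 8:

```python
# Sketch of a sprint-health check for a 10-day sprint with an 8/2 split:
# flag a looming QA time crunch if Dev is behind the pace needed to be
# dev-complete by day 8. All story counts here are hypothetical.
def qa_crunch_risk(total_stories: int, delivered_by_day: dict[int, int],
                   dev_complete_day: int = 8) -> bool:
    """Return True if Dev is behind the pace needed for day-8 completion."""
    today = max(delivered_by_day)
    delivered = sum(delivered_by_day.values())
    expected_so_far = total_stories * today / dev_complete_day
    return delivered < expected_so_far

# Day 5 of the sprint: 4 of 10 stories delivered, but ~6 were expected by
# now, so the team raises the risk at standup instead of waiting for day 9.
print(qa_crunch_risk(10, {2: 1, 3: 1, 4: 1, 5: 1}))  # True
```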

While it may be a software testing “don’t,” it’s often the case that the customer or the supplier expects us to automate everything. How do you recommend tackling this, from your experience?

In my experience, the larger the automated suite, the harder it is to maintain and the more likely those scripts will become “shelfware.” If you can keep smaller suites that run often in the pipeline and correlate to the agreed-upon critical functionality and user flows, that delivers much more value and serves as a strong quality indicator that the team keeps in a green state. The larger the automated suite, the more cumbersome it is to get the right data and provide instant feedback.

Can you please share more resources about the 8/10 split?

An 8/10 split ensures that testers are working in conjunction with the development team for the entirety of the sprint, instead of just the last two days. It breaks down the sprint as follows:

  • Days 1-2: Developers focus on coding while the QA or Quality Engineering team builds tests in conjunction with Dev
  • Days 3-8: Dev and QA work together to ensure that stories are flowing consistently
  • Days 9-10: QA either swarms the rest of the testing or spends time on bug fixes and sprint hardening with Dev

Find out how codeless test automation helps testers embrace these, and other, software testing best practices in our eBook:

Codeless test automation eBook