
Is Manual Testing Dying?

We’ve recently stepped into a heated debate on LinkedIn about the future of manual testing. The post we published was about the future of “manual testers”; the quotation marks are there on purpose, since the term stirs strong emotions in the testing community.

In fact, it was our very own Amanda Green who caused the stir by sharing a link to an article that claimed:

“…happening right now: manual testing is dying, and those who are not quickly upskilling will find themselves with no viable career prospects in the near future.”

Regardless of whether we agree with this claim, we wanted to share our views on how automation can – and should – complement testing by allowing testing teams to improve the quality of their testing as a whole.

Forget About 100% Test Automation

Let’s get one thing straight – there’s no such thing as 100% automation when it comes to testing.

We’ve heard it before – testing managers say they want to automate their entire testing operation. That kind of statement can only come from a lack of knowledge or misunderstanding of the limits of automation.

Automation is the act of taking a process, simple or complex, and making it run without human assistance. That process is made up of a series of pre-determined actions, which means an automated process repeats the same actions over and over again, on a schedule.

Manual Testing

Unlike an automated process, human thought cannot be automated. A thought isn’t predetermined; it is a momentary, impromptu cognitive process.

Why are we suddenly talking about thoughts? Because the thought process is the material humans work with. In this argument, thoughts represent human testers, as opposed to the automated process created by lines of code (or, in our case, by building a model that is later turned into code).

Another thing that separates a thought from an automated process is the outcome. In any automated process, the result is woven in: an action needs to lead to a conclusion. If the outcome is achieved, the automation has succeeded; if it isn’t, the automation has failed.

In test automation, a failure to achieve the outcome indicates that the code being tested is faulty – what is commonly known as a bug – or, in some cases, that the app was changed and the test didn’t adapt. In a vehicle production line, a failure of the automation to achieve the desired outcome can result in, say, a car with only three wheels.

Thoughts don’t have desired outcomes. That’s the whole beauty of thinking.

Now back to testing!

Machines Do Regression, Humans Do Exploratory

So – human testers think, machines don’t. Humans can also do things, but when it comes to repetitive, technical tasks, let’s face it: machines are better than us. This distinction between technical (or operational) tasks and thought-driven (or emotionally driven) tasks has taken shape in recent decades as the defining differentiator between what humans should do and what should be allocated to machines – or, in their more animated label, robots.

In testing, we see the line drawn in the sand pretty clearly: machines do regression, humans do exploratory.

If there’s a human out there who wants to do regression testing, we say go ahead, but we must ask: why? Theoretically, humans can remove the hulls from rice grains, but should they? There are machines that do it far more efficiently. The same goes for regression testing. Automation and regression are a match made in heaven. The nature of regression testing begs for automation, since there’s no critical thinking involved, but rather checking “line by line” that newly added code hasn’t broken existing functionality. If automation can be used to tackle regression, we simply cannot find a reason why it shouldn’t be.
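To make the idea concrete, here is a minimal sketch in Python of what a regression check boils down to. The `apply_discount` function and its test cases are hypothetical, invented for illustration; the point is that a regression suite is just a fixed list of pre-determined inputs and expected outputs, replayed identically on every run, with no judgment involved:

```python
# Hypothetical business function standing in for "existing code" under regression.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

# Pre-determined cases: (input price, discount percent, expected result).
# These never change between runs -- that is what makes this regression, not exploration.
REGRESSION_CASES = [
    (100.0, 10, 90.0),
    (59.99, 0, 59.99),
    (200.0, 50, 100.0),
]

def run_regression():
    """Replay every case and collect any mismatches between actual and expected."""
    failures = []
    for price, percent, expected in REGRESSION_CASES:
        actual = apply_discount(price, percent)
        if actual != expected:
            failures.append((price, percent, expected, actual))
    return failures

if __name__ == "__main__":
    # An empty list means no regressions; any entry means new code broke old behavior.
    print(run_regression())
```

If a future change to `apply_discount` altered any of those expected results, the suite would flag it automatically – exactly the kind of tireless, repeated checking machines excel at.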

Exploratory testing, on the other hand, is based on the tester’s experience, judgment, creativity, intuition, and wit. Machines don’t have wit. They might have “intelligence” (albeit an artificial one) but they definitely don’t have wit.

So we find the whole discussion of whether automation will replace human testers to be a bit misguided and, quite frankly, melodramatic. We can put this discussion away until the day machines can run a command line that says: “Give me your honest opinion – will the recent UI tweaks help less tech-savvy users without alienating millennials?”

This example might seem extreme, as not all exploratory testing is that elusive. That’s true. But even more mundane tasks, such as assessing how intuitive the sharing process is or whether the content download button is visible enough, are probably still a few decades beyond the grasp of machines.

(Or maybe not. Maybe an incredible breakthrough in AI will happen next week that allows machines to think and evaluate things on a human level. If that happens, we’ll convene again the following week to reevaluate our thinking on testing.)

Automation Is A Tester’s Best Friend

To conclude the “manual testers” mini-controversy on LinkedIn: testers can perform testing in two ways – either with automation (external tools, legacy solutions, an in-house developed framework) or manually. The kind of tests you are looking to execute will influence your choice between the two. There’s no black or white, nor right or wrong, in your decision of which testing method to use. However, your choice will determine how long the test takes, the accuracy of the outcome, and your mood at the end of the day.

Our mission at TestCraft is to empower testers by enabling them to turn their repetitive test scenarios into automated ones without writing a line of code. We think testers are the ones to do so, as they are the business experts and know the applications best, so we don’t think coders should have to. We want testers to focus on the what and why of testing, and we will take care of the how.

If we may add our take on the matter, we look at it this way: testing is a wide scope of tasks that assess the quality of a product before its release (or before each version cycle). We think automation is a great addition to testing. It’s not perfect yet, but it can relieve some of the pains of manually testing software. It cannot replace every aspect of testing; humans will always be part of the equation – the part that evaluates usability. Machines will take care of testing functionality.

Until Skynet wages war against us, let’s keep using these machines for our own needs.
