
Something’s Rotten with Test Automation

We can blame it on poor training, poor testing, or pointy-haired management. Whatever the cause, I disagree with the way many testers approach test automation. I think the prevalent approaches are wrong, and in this post I want to tell you why.

Automate the Tasks

There’s a common notion that we write test automation to simulate human input – that we write tests to walk through user workflows, verify they work, and catch errors. I don’t have hard data on this, but even from my own biased and unscientific browsing of web articles, I don’t think this is a straw man argument. When a user performs some action, the software produces some outcome. Testers say, “Let’s make sure the software works the way a user would use it” – but we have no idea how the user will use it. We have an idea of how we want them to use it, but all bets are off in the real world.

Going down the path of automating user scenarios often leads to a discussion of whether this automation will replace the need for human testers. The mere notion that automated tests could replace a tester is asinine, for one of two reasons:

  1. Your automated tests are uncreative and wrong.
  2. The tasks you’ve assigned to the tester are uncreative and wrong.

Not only do I find verification and exploration of user scenarios to be an extremely poor use of automation efforts, I find it to be one of the best uses for human testers.

It gets worse. 

I (too) frequently see testers talk about “manual” and “automated” testing as two very distinct activities, often performed by two separate teams. But good test design is impossible without considering both human and computer-assisted testing together.

An Alternative

The power of automation isn’t in automating user tasks. (I’m not discrediting all value here; I concede this sort of automation is useful for simple tests that run regularly across multiple platforms and operating systems, to ensure compatibility isn’t broken.) The real power of test automation – the power of writing code that tests code – lies in leveraging a computer to perform testing that is impractical by any other means.

Take, for example, something I use a hundred times a day: the input box in Slack.

Slack input box
How would you test this? One (unfortunately) popular approach is to design test cases, and then either automate those same test cases (ugh) or pass those test cases to an automation team to automate (double ugh). Your test cases may look like this:

  1. Type a string, press enter, and make sure it is submitted.
  2. Verify that the sent message appears on other computers / accounts.
  3. Type a very long string…
  4. Type foreign characters…
  5. Paste from clipboard…
  6. etc. (you get the idea)

Good test design must take into account behaviors of the software (does it do the things I expect, does it handle the things it doesn’t expect), as well as non-behavioral attributes like reliability, performance and security. A good test design approach looks at these things as a whole. I often start this process by brainstorming with a mind map to determine my test approach. Here’s an example of such a map:

Slack input mindmap

While creating this map, I haven’t really put any thought yet into what I’m going to automate or what I’m not going to automate. I’ve just written a lot of notes, ideas and questions about the thing I’m testing. I eventually use this map to help me organize my thoughts around my test approach, and use it to communicate my test approach with others around me.

Now I can think about automation! My mind map has a lot of ideas for strings I can send. Trying each of them manually would be mind-numbingly boring (boredom is, coincidentally, a good heuristic for deciding what to automate), so I’ll start here:


For Each string in TestStrings.txt
    SendSlackMessage(string);
Oh look, a data-driven test. Sure, I’ve left some code out of the above, but not much. And now, when I have new test ideas, I can put them into TestStrings.txt (or “steal” liberally from The Big List of Naughty Strings), and I’m off to the races.
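Fleshed out as a runnable sketch in Python – with the send function passed in as a parameter, since I’m not assuming any real Slack API (`send_slack_message` here is a hypothetical stand-in for whatever harness drives the app under test):

```python
def load_test_strings(path):
    """Read one test string per line from the data file, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f if line.strip()]

def run_data_driven_test(path, send_slack_message):
    """Send each string through the app and collect (string, result) pairs."""
    results = []
    for message in load_test_strings(path):
        results.append((message, send_slack_message(message)))
    return results
```

Adding a new test idea is now a one-line edit to the data file, with no code changes at all.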

Humans get bored by repetitive tests, whereas computers are good at exactly that. Send 10,000 messages, measuring delivery latency of each; save or graph the data, and return an error along with diagnostic information if any send falls outside of two standard deviations; send single-character messages as fast as possible and check that they arrive in the proper sequence. This kind of automation takes advantage of the power of a computer, and is typically much more trustworthy than an end-to-end UI test. This kind of automation is valuable!
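As a hedged sketch of that latency idea (the send function is again a hypothetical stand-in, not Slack’s API): time each send, then flag any latency more than two standard deviations from the mean.

```python
import statistics
import time

def measure_latencies(send, messages):
    """Time each send call and return a list of latencies in seconds."""
    latencies = []
    for message in messages:
        start = time.perf_counter()
        send(message)
        latencies.append(time.perf_counter() - start)
    return latencies

def outliers(latencies):
    """Return indices of latencies more than two standard deviations from the mean."""
    mean = statistics.mean(latencies)
    stdev = statistics.stdev(latencies)
    return [i for i, x in enumerate(latencies) if abs(x - mean) > 2 * stdev]
```

A real version would also want to persist or graph the raw data, and attach diagnostics (timestamps, message ids) to each flagged outlier.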

Similarly, the user behaviors that many of you struggle to automate without creating flaky tests are probably much better suited for a human to verify anyway.

The good news for anyone reading is that approaching test design and test automation with this mindset dissolves both the worry that automation will replace human testers and the question of how to “convert” manual tests to automation. Good test design just doesn’t work that way.

The next time you are designing tests, remember my two laws of test design:

  • You should automate 100% of the tests that should be automated.
  • You should use a human for 100% of the tests requiring human verification.

Our goal isn’t automation, and our goal isn’t even testing. Our goal is to help our team deliver high quality software as efficiently as we can! Create test automation that achieves this goal and you’ll be golden! 


You are welcome to share your thoughts in the comments below  😎

About the author

Alan Page
Alan has been improving software quality since 1993 and is currently a Senior Director of Engineering at Unity Technologies. Before joining Unity in 2017, Alan spent 22 years at Microsoft working on projects spanning the company, including a two-year position as Microsoft’s Director of Test Excellence.
Alan was the lead author of the book “How We Test Software at Microsoft”, and contributed chapters to “Beautiful Testing” and “Experiences of Test Automation: Case Studies of Software Test Automation”. His latest ebook (which may or may not see updates soon) is a collection of essays on test automation called “The A Word: Under the Covers of Test Automation”, and is available on Leanpub.

Alan also writes on his blog, podcasts, and shares shorter thoughts on Twitter.


Comments
  • Alan Page November 7, 2017, 11:22 pm

    One point I should have mentioned above: I expect the developers I work with to write all of the small (unit) tests, and most of the medium (acceptance/integration) tests. For those tests, the oracle (the part that determines pass/fail) is straightforward and reliable, and those tests should all be automated.

  • achilles November 11, 2017, 2:02 pm

    Nice article. Finding a bug using automated test cases that try to simulate a user is hard. I’ve seen organisations spend a lot of time on test automation trying to cover everything, or give up and just do manual testing to get reliable tests. This article puts those thoughts and problems into perspective. I especially like what you said about our goals. Nice work!
