The Code, the Test, and the Responsibility

Just about everyone has seen the testing pyramid. This simple model, first introduced in Mike Cohn’s book Succeeding with Agile, says that as your automated tests become less granular, you should write fewer of them. My experience with test automation has shown the best results when creating tests at the most granular level possible. If we can catch a bug or guard against regression with a simple low-level test, that’s where that test belongs. If we want to find issues that only appear at the intersection of multiple components, that’s when we write a slightly larger integration test. We write bigger end-to-end tests only for scenarios that genuinely require that size and complexity to verify.
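
To make that concrete, here’s a minimal sketch (the discount_price function and its rules are hypothetical, invented for this example) of a bug that a simple low-level test can guard against – no browser, no UI, no end-to-end scaffolding required.

```python
# Hypothetical pricing rule: a plain unit test (runnable with pytest) is the
# most granular place to pin this behavior down.

def discount_price(price: float, percent: float) -> float:
    """Apply a percentage discount, never returning a negative price."""
    return max(price * (1 - percent / 100), 0.0)


def test_discount_is_applied():
    assert discount_price(100.0, 20) == 80.0


def test_discount_never_goes_negative():
    # The kind of edge case that is cheap to catch at this level.
    assert discount_price(10.0, 150) == 0.0
```

Tests like these run in milliseconds and fail for exactly one reason – which is why they belong at the wide bottom of the pyramid, with integration and end-to-end tests layered more sparingly above them.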

To me, this model is simple – but despite that simplicity, I still feel the software industry has a bit of an unhealthy infatuation with automating end-to-end (and especially UI) scenarios – tests that should represent just a small portion of automated testing efforts.

I frequently interact with test automation engineers responsible for web testing, and I ask them why they don’t write more of their tests at a lower level. Often, the answer is that they’re testing software from another team (or company), or that the development team doesn’t make it easy to test the software at a lower level. They’re assigned to write tests, and they write the best set of tests they can in their context.

They’re doing what they can, but they’re not solving the problem.

Test Automation is Hard

I’ve written my fair share of end-to-end automated tests, and I’ve also written low-level code in high-volume operating systems. In my experience, writing reliable and trustworthy automated end-to-end scenarios is hands-down the more difficult task. Granted, the tooling for test automation has improved greatly since the days when I wrote test automation, and tools like TestProject make the task much easier.

To be clear, I’m not against writing end-to-end automation (or other large tests). I just think it’s difficult – and prone to error. In fact, I still recall attending the Google Test Automation Conference (GTAC) in 2014. After a few presentations, I noticed that every single talk made some sort of reference to “flaky tests” – which made me chuckle when a later presentation focused entirely on how one large organization dealt with theirs.

Maybe We’re Doing it Wrong?

The web is full of blogs, articles, and job postings looking for testers to do web or application UI automation. Quora, Reddit, and other discussion sites regularly have posts from testers looking to “move into” end-to-end test automation. Twitter is full of posts from testers dedicated to writing automation, celebrating their hundreds of thousands of automated tests.

Personally, I’m not sure whether I’d be proud of a massive number of tests, or completely frightened by the maintenance nightmare I know they represent 🤯 Writing complicated code in order to test somebody else’s code isn’t exactly a path I want to recommend anymore.

But many organizations have separate, distinct teams dedicated to writing test automation. In fact, as of today, a Google search for “test automation team” returns 67,000 hits. In full disclosure, I spent a big chunk of my career on test development teams – dedicated to writing automated tests for code written by a co-located but separate development team.

As the way we ship software has changed – and as software engineering has evolved – I am seeing more and more development teams write their own automated tests, and I’m seeing highly successful results from these changes. In fact, on the best development teams I’ve worked with recently, developers write all the automation: everything from the tiniest unit test to complex end-to-end scenarios. In my experience, when developers write all the tests, they end up writing only a small number of end-to-end tests – just enough to catch the bugs they can’t catch at any other level of testing.

The Research Says…

In Accelerate, Nicole Forsgren et al. highlight research showing no correlation between automated tests created by separate QA departments and product quality. They write:

“First, the code becomes more testable when developers write tests. Second, when developers are responsible for the automated tests, they care more about them and will invest more effort into maintaining and fixing them.”

This research-backed statement matches my experience and what I see in a lot of other teams in the industry.

It’s worth noting that Forsgren then says,

“None of this means that we should be getting rid of testers. Testers serve an essential role in the software delivery lifecycle…”

As you know, I could argue both sides of the “do we need dedicated testing specialists” argument, but for today, I’ll just say that my experience shows that we need testing (but may not always need testers).

But Why?

Forsgren’s research matches what I’ve seen personally while working with development teams that own their own automation, and I think it’s worth reemphasizing her points above. When flaky automated tests are owned by some other team, they remain someone else’s problem to fix. Putting ownership of the code and the tests in the same team does a lot to improve the quality of those tests. Often, as Forsgren’s first point alludes to, developers make the code more testable precisely so the tests can run more reliably.

I have seen first-hand, hundreds of times, that when developers write their own tests, they write more testable code. This holds from the tiniest test to the largest. It’s also important to note that testable code is better-designed code – so the code will be more maintainable in the long term as well.
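
As an illustration, here’s a minimal sketch (the OrderService and clock names are hypothetical, invented for this example) of the kind of change developers tend to make once they write their own tests: injecting a dependency instead of reaching for a global, which makes the code easy to test and, as a side effect, better designed.

```python
from datetime import datetime, timezone


class OrderService:
    """Hypothetical service whose behavior depends on the current time."""

    def __init__(self, clock=lambda: datetime.now(timezone.utc)):
        # The clock is injected so tests can control "now" rather than
        # depending on the real wall clock.
        self._clock = clock

    def is_open(self) -> bool:
        # Open for business between 09:00 and 17:00 UTC.
        return 9 <= self._clock().hour < 17


def test_is_open_during_business_hours():
    fixed = lambda: datetime(2024, 1, 2, 10, 0, tzinfo=timezone.utc)
    assert OrderService(clock=fixed).is_open()


def test_is_closed_at_night():
    fixed = lambda: datetime(2024, 1, 2, 23, 0, tzinfo=timezone.utc)
    assert not OrderService(clock=fixed).is_open()
```

A version that called datetime.now() directly inside is_open() would be awkward to test without patching – the injectable clock is the small design improvement that the act of writing the test nudges you toward.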

Over time, developer-owned testing has made me re-think Mike Cohn’s test automation pyramid. It’s not really a model that gives you a target for how many of each type (or size) of test you should have – instead, I’ve discovered that the pyramid is a model of how your tests will look if you have well-designed code.

What’s Next?

The takeaway from this post is not to get rid of all of your testers and make your developers immediately do all the testing. However, I am convinced that your quality efforts will be much more successful if developers own much more of the automation effort. This claim, of course, comes with two potentially enormous challenges:

  1. Developers don’t want to write test automation.
  2. Developers don’t know how to write test automation.

One way I’ve solved both challenges quickly is to have someone experienced in automation pair with a developer when writing tests. This directly addresses the second challenge above, and it helps with the first as well: I’ve found that once developers become more comfortable writing automated tests at all levels, they quickly see the benefits of the practice on their design and quality – and realize that testing makes them better developers overall. Similarly, I’ve seen pairing like this lead to career test automation developers improving their skills and discovering more direct ways to improve quality.

If you’re a tester, I challenge you to help the developers you work with write better (and more) test automation. Similarly, if you’re a developer, work with someone experienced in test automation and improve your testing (and design) skills. I think you (and your customers) will be happy with where this challenge takes you.

About the author

Alan Page
Alan has been improving software quality since 1993 and is currently a Senior Director of Engineering at Unity Technologies. Prior to joining Unity in 2017, Alan spent 22 years at Microsoft working on projects spanning the company, including a two-year position as Microsoft’s Director of Test Excellence.
Alan was the lead author of the book “How We Test Software at Microsoft” and contributed chapters to “Beautiful Testing” and “Experiences of Test Automation: Case Studies of Software Test Automation”. His latest ebook (which may or may not see updates soon) is a collection of essays on test automation called “The A Word: Under the Covers of Test Automation”, and is available on Leanpub.

Alan also writes on his blog, podcasts, and shares shorter thoughts on Twitter.
