First, let’s talk about why we need regression test automation. For many this is obvious; for others it isn’t; everybody has their own theories and experience.
[devil-advocate-mode] When features change fast, automated tests need tweaking all the time. So, for many functional tests, if we only need to execute them once per release, it might be cheaper to run them manually. So, why bother with automation at all?
There are many reasons why you actually do want to automate lots of things. “Test automation” is a very broad term that covers many different topics. I usually distinguish Sanity/Smoke automation from Regression automation by their main goal: the former means very frequent execution of a small subset of tests to ensure there are no blockers; the latter means less frequent but very large test runs to verify release criteria. For this article, let’s focus on test automation for regression coverage. Let’s discuss the problem we are trying to address.
Every new feature is a liability. That is – once it’s implemented, you have to maintain it, one way or another. There could be bug fixes, improvements, or integration and compatibility issues that need to be resolved and require code changes. Any of these code changes can break things that used to work yesterday. On top of that, for some of us there are third-party compatibility issues, where things can break even without code changes on our side.
So, in every release we should spend cycles on re-verification of previously implemented features; let’s call it the regression cycle. To be more precise, regressions can occur not just in released features but also in features that have just been developed, sometimes very late in the release cycle. This regression cycle can become a very expensive part of a release, both requiring a significant amount of resources and sometimes delaying the release itself (just imagine dozens of new features and hundreds of customer-requested bug fixes on hold for several months).
For more on this and other off-cycle challenges, see Is Agile working for you as a QA Leader?
Test automation, among other things, is supposed to address this. The timing benefit is obvious – you can run tests fast and, if needed, even faster in parallel. The cost side of the equation, in reality, depends a lot on the product and the test automation implementation: test automation can become a huge liability with very high ongoing maintenance costs, so careful planning and prioritization are crucial. Still, while it is an investment, it is a great investment!
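To make the parallelism point concrete, here is a minimal sketch (the test names and timings are hypothetical; real suites would use a runner such as pytest with a parallel plugin, but plain stdlib threads illustrate the wall-time effect):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Two hypothetical regression checks; sleep() stands in for real test work.
def test_login_flow():
    time.sleep(0.2)
    return "login: PASS"

def test_checkout_flow():
    time.sleep(0.2)
    return "checkout: PASS"

tests = [test_login_flow, test_checkout_flow]

# Sequential run: wall time is roughly the sum of the individual test times.
start = time.perf_counter()
sequential_results = [t() for t in tests]
sequential_time = time.perf_counter() - start

# Parallel run: independent tests overlap, so wall time approaches
# the duration of the slowest single test.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(tests)) as pool:
    parallel_results = list(pool.map(lambda t: t(), tests))
parallel_time = time.perf_counter() - start

print(f"sequential: {sequential_time:.2f}s, parallel: {parallel_time:.2f}s")
```

Of course this only works when tests are independent – shared state between tests is exactly the kind of hidden coupling that makes parallel regression runs flaky.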
So, we want to automate as much as practical, so we don’t re-run these tests manually over and over on every release (and we want to release often). Some time ago I was asked when we should write these tests and how that fits the Scrum model: “Should we start immediately after every release? Possible solution. Should we write and automate unit test cases with every sprint? (It’s again an overhead to create and manage.)”
Scrum says a User Story is done when the DONE criteria are met. It does not force you to have even unit tests, let alone functional or system/integration tests. While Scrum best practices often reference tests, the final decision on whether to make automated tests part of the DONE criteria is left to the team.
As a side note, many assume that after a Sprint is done there is nothing left to do for the finished User Story. As usual, make sure there is no confusion among stakeholders, regardless of what your process is.
How does a team decide? As with every decision, it all depends on priorities. If test coverage is a priority and your DONE criteria say that all tests (you might want to be very specific here) should be implemented – so be it. If the priority is to add more features now and deal with problems later, most likely there will be less time left to implement automated tests during the Sprint. As always, there is the famous triangle (or, for some, a quadrant) of scope, resources [some add time, which to me is a resource], and quality. While any priority and decision can be valid, some famous people think that quality is never to be sacrificed…
I prefer to optimize costs. Costs are a function of the resources we spend now, the resources we spend later due to bad quality (or, similarly, bad design, usability, or performance), and, on the other hand, the cost of being late to market. Some costs change with time – everybody knows the famous graph of a bug’s cost growing over time.
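One way to think about this optimization is a simple break-even sketch for a single test. All the numbers below are illustrative assumptions, not measurements – the point is only the shape of the trade-off:

```python
# Hypothetical break-even model for automating one regression test.
automation_cost = 8.0    # hours to implement the automated test once
maintenance_cost = 0.5   # hours of upkeep per release (fragile tests push this up)
manual_cost = 1.5        # hours to execute the same test manually each release

def total_cost(releases: int, automated: bool) -> float:
    """Cumulative cost of this one test over a number of releases."""
    if automated:
        return automation_cost + maintenance_cost * releases
    return manual_cost * releases

# Break-even: the first release count where automation becomes cheaper.
break_even = next(n for n in range(1, 100)
                  if total_cost(n, automated=True) < total_cost(n, automated=False))
print(f"Automation pays off after {break_even} releases")
# → Automation pays off after 9 releases
```

Note how the answer hinges on the per-release maintenance cost: if a test is fragile and maintenance approaches the manual execution cost, the break-even point moves out toward never – which is exactly why stability matters so much in the suggestions below.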
Based on this optimization goal, simple suggestions for the timing of test automation would be:
- Consider TDD or any other approach that integrates tests into the product, or treat test automation as a product in its own right. Whatever you do, make it a priority and keep it visible.
- Automate what’s going to stay static (the tests will be more stable, lowering costs). More precisely, automate cheap and important tests.
- Don’t automate what will change tomorrow (or the tests will become fragile and, therefore, more expensive).
- Keep test automation a priority for the whole team (this might require lots of explanation and pulling some strings).
- Think of stability and maintenance costs.
- Process-wise, it might make sense to: 1) integrate at least some level of test automation into the DONE criteria and 2) ensure that the remaining test automation tasks are in the backlog. The latter, if nothing else, provides visibility into the level of the team’s commitment.
Update: some more on a related topic here: Test Automation: Timing, Cost, Value
As usual, I would be interested in hearing your real-world stories, challenges and successes!