If you think there are only pass/fail test automation outcomes, you might want to think again. In this post, I will explain how introducing alternative test automation outcomes into our projects has helped us gather valuable information from our test automation reports, and how this testing approach pays off in the long run!
In our company, we apply this approach to end-to-end testing, but it is perfectly valid for system tests as well. End-to-end testing (aka ‘large tests’) checks how all the subsystems integrate during a full, real-world user scenario. These tests are typically run through the GUI, without any mocks, and are therefore quite expensive to automate and maintain. For these reasons, it’s widely accepted that such tests should be kept to a minimum, even though they are the ones that contribute the most to product quality.
So what happens when the test automation reports indicate that our end-to-end tests failed? Well, you might need to invest quite a bit of time in finding the cause. However, it’s not until you start pushing for a fix that the real challenge emerges (at the end of the day, no user cares how many unit tests you have if their end-to-end scenario fails!), especially if you’re working in a big company and have a large amount of legacy code written without testing in mind.
You’ll probably hear things like ‘Yeah, it’s a rare bug. We’ll fix it someday’ or ‘It’s definitely a bug, but the code where it breaks was written by a developer who no longer works here’. These are valid points, and fixing that bug is probably not your team’s priority at the moment, but still: every other night your test automation reports show that one of your tests is red, which is certainly not good.
In our company, we found it useful to broaden the set of outcomes in our test automation reports, so that we could avoid uninformative generic messages and misleading results in test automation.
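To make this concrete, here is a minimal sketch (in Python, purely for illustration; the outcome names are my own, not a standard) of what an extended set of outcomes might look like:

```python
from enum import Enum

class TestOutcome(Enum):
    """Extended test outcomes, beyond plain pass/fail."""
    PASSED = "passed"          # the scenario was verified successfully
    FAILED = "failed"          # a genuine product bug was detected
    BLOCKED = "blocked"        # a known issue made the run inconclusive
    INFRA_FAIL = "infra-fail"  # the test infrastructure was the culprit
```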
Blocked
We call the outcome “Blocked” when we can’t draw any conclusive information from the test run, but there is a way to determine that it’s that old legacy code causing the problem.
For example, suppose there’s a cache in your app that normally makes a certain response incredibly fast; if it’s not sufficiently ‘warmed up’ (i.e., it’s nearly empty), certain tests might fail. This might not be an issue for a user, but if your tests deploy a new version from scratch, you’re bound to stumble upon this problem from time to time. There are many ways to deal with this particular example depending on the exact situation, but if you can automatically classify such a failure during the test, you might want to simply mark the test as blocked and focus on more important things, as in the sketch below.
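Here is a hedged sketch of how that classification might look in a pytest test. The app URL, the `/internal/cache-status` endpoint, and the `is_cache_warmed_up` helper are all hypothetical, and `pytest.skip` merely approximates a dedicated ‘blocked’ outcome (a real project might use a custom marker or a reporting plugin instead):

```python
import pytest
import requests

BASE_URL = "http://localhost:8080"  # hypothetical application under test

def is_cache_warmed_up() -> bool:
    """Hypothetical check: ask the app whether its cache is populated.
    A real project might query a health endpoint or a metrics service."""
    response = requests.get(f"{BASE_URL}/internal/cache-status", timeout=5)
    return response.json().get("entries", 0) > 0

def test_search_is_fast():
    # A cold cache makes the result inconclusive rather than a bug,
    # so we report the test as blocked instead of letting it fail.
    if not is_cache_warmed_up():
        pytest.skip("BLOCKED: cache not warmed up; result would be inconclusive")

    response = requests.get(f"{BASE_URL}/search", params={"q": "example"}, timeout=5)
    assert response.elapsed.total_seconds() < 1.0
```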
Infrastructure
Everything that’s not directly related to the application under test or its test code is testing infrastructure: test servers/controllers, mainframes, virtual machines, test labs. Testing infrastructure can cause your tests to fail too; examples include connectivity issues with cloud-based resources, unstable database availability, and so on.
No one is happy to spend time on bugs that end up rejected as infrastructure problems, which is why it’s a good idea to classify those automatically as well. On our projects, we set up services that monitor resource availability and notify the responsible parties when these availability checks fail. Our tests then query these services during execution and mark the test ‘Infra-fail’ if the infrastructure is the culprit.
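Here is a sketch of what that query might look like, again assuming a hypothetical monitoring endpoint that reports per-resource availability, and again approximating the ‘Infra-fail’ outcome with a skip:

```python
import pytest
import requests

# Hypothetical monitoring service that reports per-resource availability,
# e.g. {"database": true, "cloud-storage": false, "test-lab": true}
INFRA_MONITOR_URL = "http://infra-monitor.internal/health"

@pytest.fixture(autouse=True)
def require_healthy_infrastructure():
    """Before each test, ask the monitoring service whether the test
    infrastructure is available; if not, report 'Infra-fail' instead of
    letting the failure look like a product bug."""
    try:
        status = requests.get(INFRA_MONITOR_URL, timeout=5).json()
    except requests.RequestException:
        pytest.skip("INFRA-FAIL: monitoring service unreachable")
    down = [name for name, available in status.items() if not available]
    if down:
        pytest.skip(f"INFRA-FAIL: unavailable resources: {', '.join(down)}")
    yield
```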
Your Test Automation Reports in the Long Run
On our projects, we found that ‘numbering all the exits’ decreases the cost of maintaining our end-to-end tests:
- You spend far less time analyzing test runs, which means you have fewer things to worry about on your path to continuous delivery.
- Fixing those rare and legacy bugs (which don’t really affect users) suddenly becomes less urgent, and the team can focus on delivering new features.
- As a bonus, you get an accurate picture of the actual state of your infrastructure. It might turn out that your network connection or cloud availability is not as great as your provider claims!
- You might even be tempted to cover more end-to-end user scenarios: after all, you can never be sure something works unless you verify it.
Conclusion
The key idea, for both system testing and end-to-end testing, is this: if at any point you come across a test that is red but not because of a bug, chances are you’ll benefit from this approach. If you rely on a set of criteria when analyzing a test run manually, those same criteria can and should be applied automatically.
It is also worth mentioning that you can’t solve all quality problems in a test automation project with only one type of test. However, since your users will ultimately be using the full version of your product without any mocks whatsoever, it’s worthwhile to have some end-to-end tests covering the most popular user scenarios, and those had better be good!
Now you can be more confident when drawing conclusions from your test automation reports. Have you also had a recent discovery that helped your project in some way? Please share!