Piet Van Zoen recently published a brilliant article discussing reasons not to skip tests. In this article I’m going to discuss reasons why we might need to do the opposite.
To clarify, I do not think we should be skipping tests. I strongly suggest reading Piet Van Zoen’s article as well. However, I think there are some scenarios where it can’t be avoided. There are also some cases where it might be beneficial to reduce the number of tests.
In these examples, I’m referring to regression test cases (automated or manual) that may be run before the software is released. Do we really need to run all of these before a software release?
1. The risk of releasing late is greater than the risk of releasing with bugs
As software testers, we do not have the authority to stop a release. All we can do is offer advice. We can advise that the software is not fit for release. We should back this up with information about untested areas of the software, and a list of known bugs that exist in the system. It is up to business leaders to decide if they can delay the release or not. They will base their decision on the information provided to them.
Sometimes, delaying release is not an option. There is a risk that a client will terminate the contract because of a failure to release on time. There is also a risk that a competitor will release a rival product which could hit sales. In these situations, we have no choice but to release the product with little or no testing.
2. We CAN run the tests later (but we must make sure this is done)
We may not have complete control over software releases, but we can control the testing. Software being released against our advice does not mean we have to stop testing.
Let’s say it takes 2 weeks to run through the entire regression test suite, but the software MUST be released in 1 week. We must prioritize our test cases so the most important ones are run first. We may even have to consider skipping tests or delaying them until after release.
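For an automated suite, one practical way to do this is to tag tests by priority and run the highest-priority group first. Below is a minimal sketch assuming a pytest-based suite; the `critical` marker name and the basket example are invented for illustration, and any test runner with tagging would work just as well.

```python
# pytest.ini (register the marker so pytest doesn't warn about it):
# [pytest]
# markers =
#     critical: must be run before every release

import pytest

def add_to_basket(basket, item):
    # Tiny stand-in for real application code, purely for illustration.
    return basket + [item]

@pytest.mark.critical
def test_item_is_added_to_basket():
    # Core behaviour: too important to skip before any release.
    assert add_to_basket([], "book") == ["book"]

def test_adding_preserves_existing_items():
    # Lower priority: can be deferred until after release if time runs out.
    assert add_to_basket(["pen"], "book") == ["pen", "book"]
```

With the tests tagged, `pytest -m critical` runs only the must-run cases before the deadline, and `pytest -m "not critical"` picks up the remainder once the release is out of the door.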
If a test is considered too important to skip, it must still be run, even if that happens after the release. Once the week is up, we must make sure that testing continues until all remaining test cases have been run.
All critical defects must be caught as soon as possible. Finding a bug ourselves post-release is still better than a customer finding it. Better late than never!
3. More testing has been done earlier, so less is required later
We should not be leaving all the testing to the last minute. Bugs cost more to fix the later they are found in the software development lifecycle. Therefore, testing should take place during all development phases.
Ideally, we should run all tests before release. When this is not possible, having tested early and throughout development reduces the risk of bugs going unreported into the release.
4. Increasing focus on exploratory testing
Exploratory testing is essential for any testing strategy. If we are running too many scripted tests, we may not have time for exploratory testing.
With scripted testing, we are running the same steps over and over again. Is there any benefit to this? There is a risk that we are leaving large sections of the code untested. Scripted tests should be reserved for verifying that the core features of the software work as expected.
Exploratory testing allows the tester to test and learn about the software at the same time. It can reach areas of the software that are not covered by the scripted tests. It also uncovers potential usability issues, which are harder to define in a test script.
A good testing strategy will include a mix of both scripted (automated or manual) tests and exploratory testing. Scripted tests can be used to verify the core features within the software. Exploratory testing can be used to find any unexpected, hidden defects which developers or testers never thought of but a user may still find.
5. Green isn’t always good
If we are struggling to run all tests before the release deadline, we should start considering the possibility that we have too many test cases.
I will concede that it is very satisfying to have a list of test cases (either automated or run manually) that have all passed. It is a clear indicator of the quality of the software…or is it?
Are these tests useful? Are any of these test cases testing the same thing? Do the tests include steps and validation checks that are useful? I could create several thousand useless tests that pass without much difficulty. However, all this would achieve is a very attractive-looking test report that provides very little information about the quality of the application.
It is worth taking the time to review the test cases. Check that each test case has a clear reason for being run, clearly defined steps and checks, and instructions on how to run it. Any duplicate or redundant tests should be removed so that time isn’t wasted running them.
6. No point testing something we know is broken
There may be areas of the software that are incomplete or broken. If we already know which areas are broken, is it really worth testing them? It is probably better to prioritize our testing efforts on areas whose status we don’t know.
However, I would not recommend allowing any new features or bug fixes to be released without proper testing. It is essential that these features are retested once they’ve been fixed or completed.
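For an automated suite, one way to handle this (a minimal sketch, again assuming pytest; the bug reference and feature names are invented) is to mark the known-broken or unfinished areas so they don’t eat into the remaining test time but still appear honestly in the report. A test marked `xfail` is also flagged the moment it starts passing, which is a useful prompt to retest the area properly once the fix lands.

```python
import pytest

def apply_discount(total, rate):
    # Stand-in for real code with a known bug: the discount is never applied.
    return total

@pytest.mark.xfail(reason="BUG-123: discount is never applied", strict=False)
def test_discount_is_applied_to_basket_total():
    # Reported as XFAIL while the bug is open; reported as XPASS once fixed,
    # reminding us that the area now needs proper retesting.
    assert apply_discount(100, 0.1) == 90

@pytest.mark.skip(reason="Gift wrapping not implemented yet")
def test_gift_wrapping_option():
    # No point running this until the feature exists.
    ...
```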
7. Documentation isn’t everything
The advantage of scripted test cases is that they provide us with documentation describing how the software works. However, as is common in agile environments, the requirements could change at any time. With the software constantly changing, we could end up spending more time keeping the documentation up to date and less time testing.
There is one line in the Agile Manifesto that we should remember: working software over comprehensive documentation. There should still be some documentation (note that it says ‘over’, not ‘instead of’), but we should question the level of documentation that is required.
Having fewer test cases to keep up to date makes the testing more manageable. We can also produce less formal documentation giving brief information about the exploratory testing that was carried out. All people need to know is which features were tested, not how rigorously they were tested.
Summary
As testers, we are advocates of quality. It is our duty to ensure that our software is thoroughly tested. In a perfect world, software would not be released unless all tests had been run and passed.
However, sometimes the pressure to release early is just too much. Therefore, we must optimize our testing procedure so that:
- Duplicate or redundant tests are removed.
- Testing starts early so that there is less pressure on getting the tests run before release.
- There is more time for exploratory testing.
Sometimes, the software has to be released before a satisfactory level of testing has been carried out. In this case, we should continue with the testing even if it carries on post-release. Critical bugs must be found as soon as possible.
Testing is essential for all features within the software that have been changed. Testing should be prioritized so that these areas are tested first. This ensures that, in the event that we run out of time, the most at-risk areas of the software have been thoroughly tested.
What is your opinion? I would love to discuss this in the comments below! 😉
– Louise Gibbs’ blog – https://louisegibbstest.wordpress.com/