This topic is something I find quite interesting to discuss with fellow testers or people I interview. I get all sorts of answers, such as “I will continue testing until I have completed the acceptance criteria”, “I will continue till I find all the bugs”, etc. I like to learn from everyone and see what their hard stop line is for testing a feature/task, or whether the testing never stops. We hear a lot about entry criteria and exit criteria, but is that enough?
In this blog, I will share my opinion on when I think it’s appropriate to stop testing with enough confidence.
When do we start testing? 🏁
I would like to start with a question for you all. When do we start testing?
As mentioned in my previous blogs, start testing as soon as possible and do not leave it until later. Why? Well, this reduces the cost, time, and effort of reworking and retesting features, leading to better quality and fewer bugs in the product. As explained in my previous blog about the software development life cycle, we can start as early as the requirements stage by understanding the details and what we already have in place to test, or what else we might need to complete the piece.
Once we are clear at this stage, the path towards deployment will become much smoother for testers. Furthermore, reviewing designs is also quite important and useful. It helps testers understand the requirements from the design’s perspective and vice versa, and as testers we can also make some recommendations or even praise the designer’s work.
When to Stop Testing? ❌
I have never come across a hard “stop testing” point yet in my career. However, you can say that you are confident enough in your testing of a feature. Can the testing come back if part of the feature gets a new enhancement or a bug is found? We don’t know, but it’s always good to keep an open mind about it 👐
I believe testing is a never-ending process, in a positive way. I have yet to hear someone say that the testing of a piece of software is 100% complete. If I did, I would have to wonder: is that even possible?
The following points can be some guidelines to help you stop the testing tasks:
- Decisions from your product owners or line managers.
- Deadlines that need to be met.
- Completion of the acceptance criteria and test cases.
- Good coverage of the functionality/code.
- Number of bugs found in the feature (a valid exit criterion is reaching a forecasted number of discovered bugs).
As a curious tester, I always have my antennas on alert mode. It is important to pay attention to finer details and errors in the application, with the aim of finding out why the product does not work as intended. It is not possible to find all the bugs that may exist, but team management will make a decision on when to release the product anyway. The release might go ahead with some unwanted bugs! See, we haven’t really stopped testing. The question is, can we find these bugs prior to the release, and what is their severity?
Another thing I would like to add: when certain bugs do appear, we need to make sure they are fixed once and for all. It won’t be a good quality product if the error keeps reappearing! We should estimate how many bugs there are so we can find them, in order to reach a desirable level of confidence that the product is ready to be shipped/deployed. Thereafter ship it, but understand that it still contains an unknown number of not-yet-discovered bugs 🐞
You can use this checklist to determine whether or not you have enough confidence for completing the testing:
- Stop the testing when deadlines, such as release deadlines or testing deadlines, have been reached.
- Stop the testing when the test cases have been completed with some prescribed pass percentage.
- Stop the testing when the testing budget comes to its end.
- Stop the testing when the code coverage and functionality requirements come to the desired level.
- Stop the testing when the bug rate drops below a prescribed level.
- Stop the testing when the number of high-severity open bugs is very low.
- Stop the testing when the period of beta testing/alpha testing is over.
Some useful metrics you could also use 🟩:
- Have a dashboard of the percentage of test cases that have passed.
- Have a completion percentage based on the number of test cases executed.
- Have a percentage of failed test cases.
So, how do you know the best time to stop testing? The correct answer is to combine several of the practices/metrics mentioned above and determine the definition of DONE in your test plan/strategy documentation 🥇
If you can go through the checklist and tick most of the items off with a yes, that’s when you know you can potentially stop testing. On the other hand, if you see more No’s than Yes’s, you know that something might be missing, and you can work on it and avoid bugs heading into production! 🤩
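That yes/no tally can be sketched in a few lines of code. This is purely illustrative: the criterion names are paraphrased from the checklist above, and the 75% threshold is an arbitrary value I chose; your own test plan’s definition of DONE should set the real bar:

```python
def ready_to_stop(checklist, threshold=0.75):
    """Return True when the share of 'yes' answers meets the threshold.

    checklist: dict mapping each exit criterion to True (yes) or False (no).
    threshold: fraction of criteria that must be satisfied (illustrative).
    """
    yes = sum(checklist.values())
    return yes / len(checklist) >= threshold

checklist = {
    "deadlines reached": True,
    "test cases passed at target rate": True,
    "testing budget exhausted": False,
    "coverage at desired level": True,
    "bug rate below prescribed level": True,
    "few high-severity open bugs": True,
    "beta/alpha period over": False,
}
print(ready_to_stop(checklist))  # 5 of 7 yes ≈ 71% → False at a 75% bar
```

A weighted version (where, say, “few high-severity open bugs” counts for more than “budget exhausted”) would be a natural next step, since not all criteria carry equal risk.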