
7 Reasons to Skip Tests


Piet Van Zoen recently published a brilliant article discussing reasons not to skip tests. In this article, I’m going to discuss reasons why we might sometimes need to do the opposite.

To clarify, I do not think we should be skipping tests. I strongly suggest reading Piet Van Zoen’s article as well. However, I think there are some scenarios where it can’t be avoided. There are also some cases where it might be beneficial to reduce the number of tests.

In these examples, I’m referring to regression test cases (automated or manual) that may be run before the software is released. Do we really need to run all of these before a software release?

1. The risk of releasing late is greater than the risk of releasing with bugs

As software testers, we do not have the authority to stop a release. All we can do is offer advice. We can advise that the software is not fit for release. We should back this up with information about untested areas of the software, and a list of known bugs that exist in the system. It is up to business leaders to decide if they can delay the release or not. They will base their decision on the information provided to them.

Sometimes, delaying release is not an option. There is a risk that a client will terminate the contract because of a failure to release on time. There is also a risk that a competitor will release a rival product which could hit sales. In these situations, we have no choice but to release the product with little or no testing.

2. We CAN run the tests later (but we must make sure this is done) 

We may not have complete control of the software releases, but we can control the testing. Software being released against our advice does not mean we have to stop testing.

Let’s say it takes 2 weeks to run through the entire regression test suite, but the software MUST be released in 1 week. We must prioritize our test cases so the most important ones are run first. We may even have to consider skipping tests or delaying them until after release.
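As a sketch of how this prioritization might look in practice (assuming a pytest-based suite; the `critical` marker and the stub functions here are purely illustrative, not part of any real application):

```python
import pytest

# Stub implementations standing in for the real application under test.
def authenticate(user, password):
    return password == "correct-password"

def render_profile_page(user):
    return f"Profile for {user} - Last login: yesterday"

@pytest.mark.critical  # illustrative marker; register it in pytest.ini to avoid warnings
def test_login_succeeds_with_valid_credentials():
    # Core feature: must be verified before release.
    assert authenticate("user", "correct-password") is True

def test_profile_shows_last_login():
    # Lower priority: can be run after release if time runs out.
    assert "Last login" in render_profile_page("user")
```

Running `pytest -m critical` executes only the must-run tests before the deadline, while `pytest -m "not critical"` picks up the remainder post-release.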

If a test is considered too important to skip, then it must still be run. Once the week is up, we must make sure that testing continues until all test cases have been run. 

All critical defects must be caught as soon as possible. Finding the bug post-release is better than the bug being found by a customer. Better late than never!

3. More testing has been done earlier, so less is required later

We should not be leaving all the testing to the last minute. Bugs cost more to fix the later they are found in the software development lifecycle. Therefore, testing should take place during all development phases.

We should run all tests before release. But, when this is not possible, we can reduce the risk of unreported bugs by testing early as well.

4. Increasing focus on Exploratory testing

Exploratory testing is essential for any testing strategy. If we are running too many scripted tests, we may not have time for exploratory testing.

With scripted testing, we are running the same steps over and over again. Is there any benefit to this? There is a risk that we are leaving large sections of the code untested. Scripted tests should be reserved for verifying that the core features in the software work as expected.

Exploratory testing allows the tester to simultaneously test and learn about the software. There could be areas of the software that are not covered by the scripted tests. It also covers potential usability issues which are harder to define in a test script.

A good testing strategy will include a mix of both scripted (automated or manual) tests and exploratory testing. Scripted tests can be used to verify the core features within the software. Exploratory testing can be used to find any unexpected, hidden defects which developers or testers never thought of but a user may still find.

5. Green isn’t always good

If we are struggling to run all tests before the release deadline, we should start considering the possibility that we have too many test cases.

I will concede that it is very satisfying to have a list of test cases (either automated or run manually) that have all passed. It is a clear indicator of the quality of the software…or is it?

Are these tests useful? Are any of these test cases testing the same thing? Do the tests include steps and validation checks that are useful? I could create several thousand useless tests that pass without much difficulty. However, all this would achieve is a very attractive looking test report. It would provide very little information about the quality of the application.

It is worth taking the time to review the test cases. Check that each test case has a clear reason why it has to be run, what the steps and checks are, and how to run them. Also, any duplicates or redundant tests should be removed so you aren’t wasting your time.
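One simple way to start such a review is to compare test cases by their steps: any two cases with identical steps and checks are duplicate candidates. A minimal sketch (the test-case records and IDs below are entirely hypothetical):

```python
from collections import defaultdict

# Hypothetical test-case records: an ID plus the steps/checks each one performs.
test_cases = [
    {"id": "TC-101", "steps": ("open login page", "enter valid credentials", "check dashboard loads")},
    {"id": "TC-102", "steps": ("open login page", "enter invalid credentials", "check error shown")},
    {"id": "TC-214", "steps": ("open login page", "enter valid credentials", "check dashboard loads")},
]

# Group cases by identical steps; any group with more than one ID is a duplicate candidate.
by_steps = defaultdict(list)
for case in test_cases:
    by_steps[case["steps"]].append(case["id"])

duplicates = [ids for ids in by_steps.values() if len(ids) > 1]
print(duplicates)  # [['TC-101', 'TC-214']]
```

This only catches exact duplicates, of course; overlapping tests with slightly different wording still need a human reviewer.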

6. No point testing something we know is broken

There may be areas of the software that are incomplete or broken. If we already know which areas are broken, is it really worth testing them? It is probably better to prioritize our testing efforts towards areas whose status we don’t know.

However, I would not recommend allowing any new features or bug fixes to be released without proper testing. It is essential that these features are retested once they’ve been fixed or completed.
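In a pytest suite, one way to handle this (a sketch; the feature and function names are made up for illustration) is to skip the test with a reason, so it stops burning time in the release run but remains visible until the fix lands:

```python
import pytest

# Stub standing in for a feature that is known to be unfinished.
def export_to_pdf(report):
    raise NotImplementedError("feature not yet finished")

# Skipping with an explicit reason keeps the known-broken area out of the
# release run while documenting that it must be retested after the fix.
@pytest.mark.skip(reason="PDF export is known broken - retest after fix")
def test_export_to_pdf():
    assert export_to_pdf("report") == "report.pdf"
```

The skip shows up in the test report with its reason, which makes it much harder to forget the retest than simply deleting the test would.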

7. Documentation isn’t everything

The advantage of scripted test cases is that they provide us with documentation that describes the way the software works. However, as is common in agile environments, the requirements could change at any time. With the software constantly changing, we could end up spending more time keeping the documentation up to date and less time testing.

There is one item in the agile manifesto that we should remember: Working Software over Comprehensive Documentation. There should be some documentation (note that it says ‘over’, not ‘instead of’); however, we should question the level of documentation that is required.

Having fewer test cases to keep up to date makes the testing more manageable. We can also provide some less formal documentation giving brief information about the exploratory testing that was carried out. All people need to know is which features were tested, not how rigorously they were tested.


As testers, we are advocates of quality. It is our duty to ensure that our software is thoroughly tested. In a perfect world, software would not be released unless all tests had been run and passed.

However, sometimes the pressure to release early is just too much. Therefore, we must optimize our testing procedure so that:

  • Duplicate or redundant tests are removed.
  • Testing starts early so that there is less pressure on getting the tests run before release.
  • There is more time for exploratory testing.

Sometimes, the software has to be released before a satisfactory level of testing has been carried out. In this case, we should continue with the testing even if it carries on post-release. Critical bugs must be found as soon as possible.

Testing is essential for all features within the software that have been changed. Testing should be prioritized so that these areas are tested first. This ensures that, in the event that we run out of time, the most at-risk areas of the software have been thoroughly tested.


What is your opinion? I would love to discuss this in the comments below!  😉 

– Louise Gibbs’ blog – https://louisegibbstest.wordpress.com/


About the author

Louise Gibbs

Louise is a Software Test Engineer at Malvern Panalytical, an organisation that develops scientific instruments. Her role requires her to test and assess the quality of the software used to control the instruments. She develops, maintains and runs both manual and automated test scripts.

Louise has also worked for companies that develop software for the automotive industry, testing software used to run loyalty schemes aimed at customer retention.



