
7 Reasons Not to Skip Tests

We’ve all been tempted to skip writing tests. Whether it’s time pressure, business pressure, the complexity of testing, or just the urge to get on with something else, we might be tempted to say “YOLO!” and move on. So here’s a list of 7 reasons why you might want to think twice before doing that.

Feature Security

Making changes to existing programs is risky. You and your team have spent a lot of time and hard work getting it to where it is. Are you confident that if you implement a new feature you won’t break an existing one? Tests give you this confidence.

If you think you have confidence in your code without tests, then I would ask you to prove it. If you start poking around the program, triggering various behaviors, I’m pretty sure we could sit around most of the day validating edge cases. This is not easily repeatable proof. 

However, with good test coverage, you can run one command to quickly validate that your changes have not broken any tested behavior.

They’re Not Just Tests, They’re Documentation

Well-structured tests with good descriptions provide a clear picture of expected behavior, including how the program should and shouldn’t be used. If you come across a piece of code and don’t understand its purpose, try this: make some changes, run the tests, and see what fails. The failing cases should indicate the expected behavior of that code.
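To make this concrete, here’s a minimal sketch (the cart function and test names are invented for illustration, not from any real project). Notice that you could learn how the function behaves just by reading the test names:

```python
# Illustrative only: a made-up cart helper whose tests double as documentation.
# The test names alone describe how the code should and shouldn't be used.

def add_item(cart, item, quantity):
    """Add an item to the cart dict, accumulating quantities."""
    if quantity < 1:
        raise ValueError("quantity must be at least 1")
    cart[item] = cart.get(item, 0) + quantity
    return cart

def test_adds_new_item_with_given_quantity():
    assert add_item({}, "apple", 2) == {"apple": 2}

def test_accumulates_quantity_for_existing_item():
    assert add_item({"apple": 2}, "apple", 3) == {"apple": 5}

def test_rejects_zero_or_negative_quantities():
    try:
        add_item({}, "apple", 0)
        assert False, "expected ValueError"
    except ValueError:
        pass  # rejecting quantity 0 is the documented behavior
```

If you changed `add_item` to overwrite instead of accumulate, `test_accumulates_quantity_for_existing_item` would fail, and its name would tell you exactly which expectation you broke.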

Viewing tests as documentation is also a great way to get to know how a program works. This is particularly useful if you’re new to a code base. And if you’re onboarding new people, walking through the tests is a great way to help them get familiar with a project.

Obviously, if you skip writing the tests, you lose this valuable resource.

Testing Makes You Faster

Despite common misconceptions, sticking to the tests makes you ship value faster. This works in two ways:

First of all, with a well-tested code base, you can massively reduce the amount of time you spend code spelunking. This is where you go from line to line, debugging, reading, and figuring out what some piece of code actually does. Tests give you a picture of what the program does in a fraction of the time. You still need to read the code, but the tests are an invaluable companion for understanding it.

The second way tests make you ship value faster is by vastly reducing the number of bugs you ship. Bugs are a debt against your time and the features you want to build, and they drain the value of what you’re shipping. If users want feature x, but they end up with bugs y and z too, the value of x is going to be reduced.

Tests Are Automatable, You Are Not

The time you spend running commands, or clicking around a screen to validate behavior, is lost time. Wouldn’t you rather spend that time actually writing features? That manual effort can’t be reused. Tests, on the other hand, can be configured to run as part of your continuous integration process: you create a pull request, the tests run automatically, and your whole team can see the results. Manually validating code on behalf of your whole team is laborious and not easily repeatable, so your time is wasted.
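As a sketch of what that automation can look like (the workflow name, Python version, and commands here are assumptions you’d adapt to your own stack), a minimal GitHub Actions workflow that runs the test suite on every pull request might be:

```yaml
# .github/workflows/tests.yml -- illustrative sketch, adjust to your project
name: tests
on: [pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest
      - run: pytest
```

Once something like this is in place, every pull request gets the same validation with zero manual effort, and the pass/fail result is visible to the whole team.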

That’s not to say that all manual testing is wasted time. Some of it is necessary. However, manual testing that could easily be automated is wasted time. And when you’d rather be shipping value, time is one of your most important resources.

Green Just Looks Good

Ok, this is a rather subjective point, but… green looks and feels good. Seeing that beautiful green band streak across your test runner’s screen is just lovely. The confidence, the speed, the ease. Ugh. I love it.

You Probably Won’t Go Back and Do Them Later

“How about we ship it now and write the tests later?” If you’ve ever said or heard this, I challenge you to find out whether those tests were ever written. If you’re lucky, there’s a tech-debt ticket floating around that no one really wants to work on. Once your code hits the master branch, the expectation, the motivation, and the incentive to write those tests simply disappear.

Even for someone with experience writing tests, writing them after merging is much harder than writing them while you’re actually working on the feature. When you’re working on a feature, your mind is centered on the problem you’re trying to solve. In that state, it’s much easier to think about which tests you need to write. Once you’ve moved on to another problem, remembering everything about the old one will be a struggle.

Your Future Self (or Colleague) Will Appreciate It

Imagine this.

You settle down to add a new feature. Your good self (or a colleague) has left a nicely structured test suite documenting the behavior of this program you need to modify. You add new tests, make your changes, fix some things you broke, and submit your PR. You feel confident about your newly added functionality. You feel good that you didn’t break any existing behavior. The process went quite smoothly.

It can be this blissful. Not all the time. Shit can hit the fan in a myriad of ways with software. But it can be blissful. Either way, consider the alternative.

You settle down to add a new feature. There are some tests. But it turns out tests are missing for some of the behavior. What do you do? If you implement your feature, you may unknowingly break some untested behavior. So you have to first put your new feature on hold, find out what logic is untested, then reverse-engineer tests for it. But you’re not working on your feature! You’re doing someone else’s work for them. That’s no fun.

Do your future self (or colleague) a favor and keep your program tested. And be sure to thank your past self (or colleague) for leaving a nicely tested program  😀 

Caveats and Tips

These reasons are all good, but we don’t live in a perfect code base. So here are some notes and tips on maintaining your tests while living in an imperfect world.

Good coverage

Good coverage is a life saver. Obviously, if you’re starting from an existing code base with little or no tests, you’re going to have a harder time writing tests. But I strongly encourage you and your team to set the standard that any new code should be tested. As the test coverage grows, so will your confidence and ability to ship value quickly.

Testing is a skill

Unfortunately, writing good tests is a skill learned with experience. If you’re not comfortable writing tests, or are having a hard time figuring out how to test a program, then this is the perfect time for pairing. Find that person who enjoys testing (they do exist) and ask them to help you. You may find that their enjoyment of testing rubs off on you. You never know!  😉 

What (not) to test

Not everything is worth testing, but a lot is. I don’t believe in 100% coverage. Most apps have a lot of startup logic and configuration that isn’t worth testing, because if it’s broken the app simply won’t start.

My golden rule for what to test is: ‘if it makes a decision, test it’. I guess you could also add to that: ‘unless it kills the app on start’.
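To illustrate the rule (the function, names, and threshold below are hypothetical, not from any real code base): the function makes exactly one decision, so each branch gets a test.

```python
# Hypothetical example of the 'if it makes a decision, test it' rule.
# shipping_cost makes one decision, so both outcomes are covered.

def shipping_cost(order_total, flat_rate=5.0, free_threshold=50.0):
    """Charge a flat rate, but ship free at or above the threshold."""
    if order_total >= free_threshold:
        return 0.0
    return flat_rate

def test_orders_at_or_above_threshold_ship_free():
    assert shipping_cost(60.0) == 0.0
    assert shipping_cost(50.0) == 0.0

def test_orders_below_threshold_pay_flat_rate():
    assert shipping_cost(20.0) == 5.0
```

A plain constant or a line of wiring code, by contrast, makes no decision, so under this rule it doesn’t need a dedicated test.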


Stick at it, especially if you’re adding tests to an existing code base. Coverage reports in CI builds (such as Coveralls) can be a great motivator. Make the coverage public, put it on a dashboard, and watch it go up!


Skipping tests in the name of shipping faster is deceptively harmful to your productivity. Be very wary if colleagues ask you to skip writing tests in order to save time. Ultimately, it is slower and it will hurt your team. If deadlines are an issue, find other ways to save time, such as reducing the scope of the feature. Making testing an integral part of your workflow will benefit you, your future self, and anyone else who touches that code.

Happy testing!

– Piet van Zoen: Blog & Twitter

About the author

Piet van Zoen

Web dev and pragmatic programmer at YoungCapital in the Netherlands. Enjoys TDD, all things command-line, and fries with mayo.
