
How Code Coverage Tools Can Produce Fragile Tests


Code coverage tools are helpful. This article is not about convincing anyone to avoid or ignore them. It’s about human behavior and gamification. Developers suffer when they work for the tool instead of letting it work for them.

What Are Code Coverage Tools?

Code coverage tools help developers remember to test blocks of code. They attach to the tests while they execute, and present a report about missing coverage after the suite completes. The results display any uncalled conditional blocks, functions, or lines of code.
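With Jest, for instance, generating the report is a matter of passing a flag when running the suite (this assumes Jest is already installed in the project):

```shell
# Run the test suite and print a coverage summary when it completes
npx jest --coverage
```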

Jest code coverage output

The output is meant to provide a cursory glance at a project’s state. If a project has 30% coverage, developers can expect that it’s not well-tested, and to tread carefully. However, if a project has above 70% coverage, then adding new functionality will probably have fewer regression bugs.

And in a project at 100% coverage, adding new functionality will likely break existing tests, creating extra work for developers.

How Code Coverage Tools Can Produce Fragile Tests

The purpose of automated testing is to produce confidence in the product, not to satisfy a metric. For some people, seeing something incomplete is aggravating; they see the code coverage percent as a measure of progress. For others, a managerial requirement or continuous integration (CI) pipeline enforces a minimum.
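Such a minimum is usually enforced through the coverage tool's configuration. With Jest it might look like the sketch below, using the `coverageThreshold` option; the 70% figure is illustrative:

```javascript
// jest.config.js — fail the run if global line coverage drops below 70%
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: { lines: 70 },
  },
};
```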

Unfortunately, the percentage is flawed.

Excluded Files

// add.js
module.exports = function add(a, b) {
  return a + b;
};

// subtract.js
module.exports = function subtract(a, b) {
  return a - b;
};

// add.test.js
const add = require("./add");

describe("add", () => {
  test("adds the numbers", () => {
    expect(add(1, 2)).toEqual(3);
  });
});

In this example, there are no tests for the subtract function. Someone might expect a zero value for code coverage on that file, but that’s not the case. As mentioned earlier, the coverage tools attach to the tests, so they only see what the tests import. If the tests never reference the file, it’s not included.

The subtract.js file is missing from the report.
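In Jest, this blind spot can be closed with the `collectCoverageFrom` option, which tells the tool to report on matching files even when no test imports them (a sketch; the glob assumes source files live under `src/`):

```javascript
// jest.config.js — report coverage for every source file,
// including files never referenced by any test
module.exports = {
  collectCoverage: true,
  collectCoverageFrom: ["src/**/*.js"],
};
```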

Missing Assertions

The following test produces the same result as above (ignoring the missing file).

const add = require("./add");

describe("add", () => {
  test("adds the numbers", () => {
    add(1, 2);
  });
});

A human can see that the test coverage is actually zero; there are no assertions. The tool, however, reported that all is well.
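Jest can guard against this particular trap: calling `expect.hasAssertions()` makes a test fail if it never executes an assertion. A sketch of the earlier test with the guard added:

```javascript
const add = require("./add");

describe("add", () => {
  test("adds the numbers", () => {
    // Fails this test if no assertion runs before it finishes
    expect.hasAssertions();
    expect(add(1, 2)).toEqual(3);
  });
});
```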

Code Coverage Can Encourage Useless Unit Tests

By testing only behavior, developers can refactor implementations without breaking the tests. However, when coverage tools warn about specific missed lines, they encourage peeking at internal details. Below are a few contrived examples (based on real ones) where code coverage introduced poor testing habits.

Testing Meaningless Side-effects

module.exports = function doSomething() {
  if (process.env.ENV === 'DEV') {
    console.time('doSomething');
  }

  // ... Internal details

  if (process.env.ENV === 'DEV') {
    console.timeEnd('doSomething');
  }
};
To achieve full coverage, the developer must look at the function internals and make a test that sets the ENV variable. The test is inappropriate because the timing is not the intended behavior of the function. The test will fail if the developer removes the timer code, even though the behavior remains intact.

Testing Constants

// someClass.js
class SomeClass extends SomeOtherClass {
  item() {
    return "SomeClass Item";
  }
}

module.exports = SomeClass;

// someClass.test.js
const SomeClass = require("./someClass");

test("SomeClass item returns 'SomeClass Item'", () => {
  const instance = new SomeClass();
  expect(instance.item()).toEqual("SomeClass Item");
});

There is no behavior here, and no reason to include a test except to please the coverage tool.

Testing Unrealistic Cases

// someFactory.js
const FactoryOne = require("./factoryOne");
const FactoryTwo = require("./factoryTwo");

module.exports = {
  someFactory: (type) => {
    if (type === "factoryOne") {
      return new FactoryOne();
    }

    if (type === "factoryTwo") {
      return new FactoryTwo();
    }

    throw new Error("Invalid factory type!");
  },
};

// someFactory.test.js
const { someFactory } = require("./someFactory");

describe("someFactory", () => {
  test("throws on invalid factories", () => {
    expect(() => {
      someFactory("unknown");
    }).toThrow(/Invalid factory type/);
  });
});

Assuming all of the integration tests are thorough, the coverage report might still show a gap for the error case, because the error is impossible to trigger through the full application code. The check is a case of offensive programming, designed to fail loudly during development.
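Rather than writing an unrealistic test, such an unreachable branch can be excluded from the report. Jest's coverage is powered by Istanbul, which honors ignore hints in comments (a sketch of the factory with the hint added):

```javascript
// someFactory.js — the defensive throw is excluded from coverage,
// since it cannot be reached through the public API
module.exports = {
  someFactory: (type) => {
    if (type === "factoryOne") {
      return new FactoryOne();
    }

    if (type === "factoryTwo") {
      return new FactoryTwo();
    }

    /* istanbul ignore next */
    throw new Error("Invalid factory type!");
  },
};
```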


Don’t let this article discourage you from using code coverage tools; just don’t let them dictate your development workflow. They are great for discovering major gaps in tests caused by human oversight, but the gamification can lead developers astray. A good rule of thumb is to aim for around 70% coverage rather than 100%, a level that comes naturally from thorough integration testing.


About the author

Kevin Fawcett

Programming is my passion. I continuously pursue knowledge, regularly exploring new technologies and methodologies. Over the years, I have collected experience with design patterns, best practices, and architecture that I enjoy teaching others. Mentoring reinforces my learning.
