
Be Descriptive With Your Test Failures


Descriptive test failures can save time and money. The return on investment may not be immediate, but future developers (possibly yourself) will appreciate the effort. Tests that catch legitimate failures are a blessing 😇 However, a non-descriptive error forces whoever sees it to debug before they can find the real issue.

Short-circuited assertions

A short-circuited assertion occurs when the first assertion fails and prevents subsequent ones from executing. The following example is a simplified version of a common scenario. It uses supertest with Jest to test an API, but the same idea applies elsewhere.

app.get("/user/:id", function (req, res) {
  // ...logic
  res.status(403).send("SOME_VALUABLE_ERROR_CODE");
});

test("a user can be fetched", (done) => {
  request(app)
    .get("/user/1")
    .end(function (err, res) {
      expect(res.status).toEqual(200);
      expect(res.body.name).toEqual("Slugathor");
      done();
    });
});

The test above expects the call to always succeed and may have passed for a long time. However, when a change suddenly makes it fail, it leaves a vague message that can become a headache for whoever troubleshoots 🤕

jest output
expected 200 "OK", got 403 "Forbidden"

Diagnosing in a local environment

If a developer can recreate the failure locally, debugging becomes easier to manage. The person troubleshooting can add a logging statement above the error to get the real message or place a breakpoint.

test("a user can be fetched", (done) => {
  request(app)
    .get("/user/1")
    .expect(200)
    .end(function (err, res) {
      if (err) {
        console.log(res.body);
        throw err;
      }

      done();
    });
});
jest output
{ "code": "SOME_VALUABLE_ERROR_CODE" }

💡 Logging the whole response object, while verbose, would allow inspecting other issues, like invalid headers.

After discovering the error code, the investigating developer can pinpoint the issue. Unfortunately, some people view console.log statements as pollution and remove them before committing code. Doing that recreates the problem and wastes time for the next person.
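One compromise is to fold the response body into the thrown error itself, so the detail survives in the test output without a stray console.log. The following is a minimal sketch in plain Node, independent of supertest and Jest; the `describeFailure` helper and the `res` shape are hypothetical:

```javascript
// Hypothetical helper: wrap a failed assertion error with the response body,
// so the failure message carries the server's error code.
function describeFailure(err, res) {
  const body = JSON.stringify(res.body);
  return new Error(`${err.message}; response body: ${body}`);
}

// Simulated usage with a fake error and response:
const err = new Error('expected 200 "OK", got 403 "Forbidden"');
const res = { status: 403, body: { code: "SOME_VALUABLE_ERROR_CODE" } };

console.log(describeFailure(err, res).message);
// prints: expected 200 "OK", got 403 "Forbidden"; response body: {"code":"SOME_VALUABLE_ERROR_CODE"}
```

In a supertest `.end` callback, throwing `describeFailure(err, res)` instead of the bare `err` would keep the valuable error code in the report permanently, rather than relying on a temporary log statement.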

Diagnosing in a Continuous Integration (CI) environment

When a developer cannot recreate a CI test failure locally (due to an environment error), the messaging becomes more valuable. Since the code is already bundled/containerized, adding a console.log is difficult unless the developer can somehow ssh into the CI machine (unlikely and not worth it). One option would be to add a new commit, push it to the remote, and trigger a new build. For a pull request, this might be an easy task, but what about failures on the main branch? Additionally, a CI build will run all the tests, among other steps, instead of just the one in question, which wastes time.

Rewriting the test

There are other ways to write the test more descriptively.

Separating assertions

The following example separates the assertions into separate tests, so one failing assertion does not skip the subsequent ones.

describe("/user/:id", () => {
  let response;

  beforeEach((done) => {
    request(app)
      .get("/user/1")
      .end(function (err, res) {
        if (err) {
          throw err;
        }

        response = res;

        done();
      });
  });

  test("returns a 200 status code", () => {
    expect(response.status).toEqual(200);
  });

  test("returns the expected name", () => {
    expect(response.body).toEqual({ name: "Slugathor" });
  });
});
jest output
Both tests render output, the second including the error code.

A custom, reusable assertion

A custom matcher can combine multiple assertions, allowing it to control the error message or override an existing one. Doing so can reduce duplication, but some would argue that it violates the single responsibility principle.

expect.extend({
  toHaveOkBody(response, expectedBody) {
    // this.equals performs a deep comparison; === on objects would only
    // compare references and almost always fail
    if (response.status === 200 && this.equals(response.body, expectedBody)) {
      return { pass: true };
    }

    return {
      message: () =>
        `Unexpected response: ${JSON.stringify(response, null, 4)}`,
      pass: false,
    };
  },
});

test("a user can be fetched", (done) => {
  request(app)
    .get("/user/1")
    .end(function (err, res) {
      if (err) {
        throw err;
      }

      expect(res).toHaveOkBody({ name: "Slugathor" });
      done();
    });
});
jest output
Logs the whole response on failure

Uncaught exceptions

Sometimes assertions work on the happy path but mask the error message when they fail. Regardless of the framework (Chai, Jest), expect APIs provide descriptive failures on their own. Yet those messages never appear in the following example, which fails in an unexpected way.

it("should fetch a user", (done) => {
  request(app)
    .get("/user/1")
    .send()
    .then((res) => {
      expect(res.body).to.deep.equal({ name: "Slugathor" });
      done();
    });
});
jest output
The test times out instead of failing on the assertion

What happened?

Test frameworks (Mocha in this example) wrap each test, catching exceptions thrown from expect statements. When the done callback is used, the framework waits until it is called. Since the expect statement throws inside the .then callback, the promise rejects and the done() call is never reached. The test then appears to fail from a timeout, masking the real assertion error. Unfortunately, this could trick a developer into thinking the target service went down, sending them down the wrong investigation path.
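The mechanics can be reproduced without any test framework. In this minimal plain-Node sketch, `done` is a stand-in for the framework's callback: a throw inside .then rejects the promise, and everything after it, including the done() call, is skipped:

```javascript
let doneCalled = false;
const done = () => { doneCalled = true; };

Promise.resolve({ body: { code: "SOME_VALUABLE_ERROR_CODE" } })
  .then((res) => {
    // Stand-in for a failing expect(): it throws...
    if (res.body.name !== "Slugathor") {
      throw new Error("assertion failed");
    }
    done(); // ...so this line never runs
  })
  .catch(() => {
    // Without this catch, the rejection would be silently unhandled,
    // and a framework waiting on done() would simply time out
    console.log("doneCalled:", doneCalled); // prints: doneCalled: false
  });
```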

How to avoid uncaught exceptions

Forcibly failing tests when first writing them is a good practice. For example, changing the expected name to "Slugathor2" will reveal the timeout message above immediately.

💡 Using Test-Driven Development will prevent this every time.

Other ways involve writing the test differently. There isn’t much need for the done callback with promises, but legacy code haunts many developers. Here are a few examples of a better format:

it("should fetch a user", () => {
  // Exceptions will be caught by mocha because it has a surrounding catch block
  return request(app)
    .get("/user/1")
    .send()
    .then((res) => {
      expect(res.body).to.deep.equal({ name: "Slugathor" });
    });
});

it("should fetch a user", async () => {
  // Using await causes exceptions to occur on the same code path
  const res = await request(app).get("/user/1").send();
  expect(res.body).to.deep.equal({ name: "Slugathor" });
});

it("should fetch a user", (done) => {
  // The end function in supertest handles uncaught exceptions;
  // passing err to done reports the failure, and done() signals success
  request(app)
    .get("/user/1")
    .end((err, res) => {
      if (err) {
        return done(err);
      }

      expect(res.body).to.deep.equal({ name: "Slugathor" });
      done();
    });
});
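The await variant can also be sketched in plain Node to show why the rejection surfaces on the same code path. The `fakeRequest` function below is a hypothetical stand-in for a supertest call:

```javascript
// Hypothetical stand-in for a request that resolves with a response object
const fakeRequest = () =>
  Promise.resolve({ status: 403, body: { code: "SOME_VALUABLE_ERROR_CODE" } });

async function runTest() {
  const res = await fakeRequest();
  if (res.status !== 200) {
    // With await, this throw rejects runTest()'s promise directly, so a
    // framework awaiting it reports the real error instead of a timeout
    throw new Error(`expected 200, got ${res.status}`);
  }
}

runTest().catch((err) => console.log(err.message)); // prints: expected 200, got 403
```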

Conclusion

People should not write tests merely to satisfy the acceptance criteria (AC) on user stories, pass a code review, or please the code coverage numbers. Tests are for your benefit and confidence. Do yourself and your team a favor: spend some time making sure error messages for test failures are descriptive and verbose. Doing so can save time and money, and prevent headaches 😎

About the author

Kevin Fawcett

Programming is my passion. I continuously pursue knowledge, regularly exploring new technologies and methodologies. Over the years, I have collected experience with design patterns, best practices, and architecture that I enjoy teaching others. Mentoring reinforces my learning.

