Create Better Selenium Tests with WebdriverIO

TL;DR: WebdriverIO is fantastic. If you’re using Selenium-Webdriver (the official JS library) instead, you’re almost certainly making a mistake.

A core part of testing Build Focus is ensuring it concretely works in real browsers. Automating that with Selenium keeps me honest, gives me confidence, and saves me time and effort checking builds myself.

The wider testing setup is a whole post in itself, but I do find there’s big value in having a small suite of tests that make it essentially impossible for me to accidentally release a totally broken build.

These tests are optimized for that case: smoke testing basic functionality throughout the app, to ensure it’s possible to install it, open it up and look at your city, to start focusing, to fail, and to successfully concentrate and start building a city.

I added all these early on in the project, and they haven’t changed much since; most real testing happens in unit tests and acceptance tests run at a slightly lower level, where I don’t need to start up a whole fresh browser and install the extension each time.

This is all well and good, but occasionally there’s some maintenance required here, and in this post I want to look at why & how I migrated my tests from Selenium-Webdriver to WebdriverIO recently, and the many ways it’s made everything involved drastically better.

Fighting the rot

Recently some of these tests became unstable in CI for unclear reasons, and needed revisiting to stabilize them and fix them up a bit.

Unstable tests are the devil’s work, and quickly rot any sense of confidence in your build — you need a build you can rely on if you want to quickly and safely ship things — so after briefly disabling the tests to wrap up my current feature, I set about sorting this out properly once and for all.

Many of these tests were failing, in CI only, due to the dreaded “stale element reference” error, where you attempt to access an element (to click it perhaps, or examine its properties) that is no longer part of the page.

These are difficult to reliably avoid in any Selenium tests, particularly of JavaScript-heavy pages; normally you need to repeat your initial search for the element, to find the new equivalent one that’s now present.
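To make that concrete, here’s a sketch of the usual manual workaround: a small helper (my own naming and retry count, not part of Selenium-Webdriver’s API) that catches the stale element error and repeats the search-and-action from scratch.

```javascript
// Illustrative stale-element retry, assuming a selenium-webdriver style
// `driver` with findElement(). The helper name and retry count are my own.
function withFreshElement(driver, locator, action, retries = 3) {
  return driver.findElement(locator)
    .then(action)
    .catch(function (err) {
      // The element went stale: re-run the search and retry the action
      if (err.name === "StaleElementReferenceError" && retries > 0) {
        return withFreshElement(driver, locator, action, retries - 1);
      }
      throw err;
    });
}

// Usage: withFreshElement(driver, { css: ".city > canvas" }, el => el.click());
```

Every single interaction that might race with a page update needs to be wrapped like this, which quickly adds up.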

Internally, they’re typically caused by timing issues, where you start manipulating a page too early, while it’s still being changed, and it’s easy for different environments (like CI vs locally) to have differing performance that can bring this to light.

I started on improving this, but these tests were already a bit unwieldy and painful, and in search for a nice solution I found a pull request adding a fix to handle stale element exceptions automatically, built into the Selenium API. That sounded great, until I realized it wasn’t built into the official Selenium library (Selenium-Webdriver), which I was using, but an alternative one: WebdriverIO.

Enter WebdriverIO

Fundamentally, WebdriverIO is an attempt to provide a much nicer, more powerful and more usable Selenium API for JavaScript. Their site can talk you through all the features, but for me the really key magic is:

  • Nice, usable, high-level API
  • Easily extendable
  • Automatically handles stale elements
  • Abstracts away asynchrony almost entirely

This is best illustrated by example. First, a simple example of a previous test:

it("Can open main page", function () {
  return driver.get(extensionPage("main.html")).then(function () {
    return driver.wait(sw.until.elementLocated({
      css: ".city > canvas"
    }), 1000);
  }).then(function (cityCanvas) {
    return sw.promise.delayed(200).then(function () {
      return cityCanvas;
    });
  }).then(function (cityCanvas) {
    return canvasContainsDrawnPixels(cityCanvas);
  }).then(function (canvasContainsDrawnPixels) {
    expect(canvasContainsDrawnPixels).to.equal(true,
      "City canvas should have an image drawn on it");
  });
});

This works, and is fundamentally fine, but it’s certainly not good.

Critically, most of its content is just ceremony — dealing with asynchrony and faffing around with promises — which hides the real intent of the code, and makes it much harder to glance at this and work out what’s going on.

There are also lots of standalone functions like canvasContainsDrawnPixels, which don’t fit neatly, and which expose more asynchrony that every caller then has to actively handle and think about (note the ‘return’ there; critical, but easy to miss), so they’re very easy to silently use incorrectly.
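To see why that dropped ‘return’ is so dangerous, here’s a toy model of the problem (my own sketch of how Mocha treats a promise-returning test body, not Mocha’s real internals):

```javascript
// Minimal model of a Mocha-style runner: if the test body returns a promise
// the runner waits for it; if it returns undefined, the test counts as
// passed immediately. Names here are illustrative.
function runTest(body) {
  return Promise.resolve(body()).then(() => "passed", () => "failed");
}

let assertionRan = false;
// Stand-in for an async check like canvasContainsDrawnPixels
const asyncCheck = () =>
  new Promise(resolve => setTimeout(() => { assertionRan = true; resolve(); }, 10));

const goodTest = () => asyncCheck();      // chain returned: runner waits for it
const badTest = () => { asyncCheck(); };  // 'return' dropped: runner doesn't wait

runTest(badTest).then(verdict => {
  console.log(verdict, assertionRan); // "passed false": the check never got a say
});
```

The broken version reports success before the check has even run, so a genuinely failing assertion can vanish without a trace.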

There are all sorts of other slightly different ways to manage this, for better or worse, but that’s exactly my point: there’s not one clear way to manage this, and you have to think about how you want to manage async as you go. That’s not time well spent.

More importantly though, at that moment, this test didn’t work. Try as I might, I was now consistently getting a stale element error when trying to examine the canvas directly.

I took a look at fixing it, but it requires lots of catching stale elements errors, or trying to detect those issues in advance, and explicitly repeating the failing (asynchronous) step again with the new element, and it’s messy, painful and generally deeply unpleasant in this kind of structure.

Here’s the same test, rewritten to use WebdriverIO:

it("Can open main page", () => {
  return client
    .url(extensionPage("main.html"))
    .pause(500)
    .hasDrawnPixels(".city > canvas").should.eventually.equal(true,
      "Canvas should have an image drawn on it");
});

Much better.

For starters, it now works. I didn’t actually have to change any of the core fiddly logic (like the contents of hasDrawnPixels) to make this work; the automagical stale element handling miraculously solves my problems.

Better still, WebdriverIO handles all the asynchrony magically too (building a list of operations to run, and ensuring they happen itself internally later on), lets you add first-class functions (hasDrawnPixels) to the API, and takes almost all the ceremony and faffing away immediately.
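For the curious, registering a first-class command like hasDrawnPixels uses WebdriverIO’s addCommand; the sketch below shows the general shape (the pixel-checking logic is my own illustration, not Build Focus’s exact implementation):

```javascript
// Pure pixel check: true if any RGBA channel in the image data is non-zero.
// Kept standalone so it can be reasoned about (and tested) on its own.
function anyPixelDrawn(pixelData) {
  for (let i = 0; i < pixelData.length; i++) {
    if (pixelData[i] !== 0) return true;
  }
  return false;
}

// Registering a custom command, WebdriverIO v4 style; `client` is the remote
// session created elsewhere in the test setup.
function registerHasDrawnPixels(client) {
  client.addCommand("hasDrawnPixels", function (selector) {
    // This function runs in the browser: read the canvas pixels there
    return this.execute(function (sel) {
      var canvas = document.querySelector(sel);
      var data = canvas.getContext("2d")
        .getImageData(0, 0, canvas.width, canvas.height).data;
      for (var i = 0; i < data.length; i++) {
        if (data[i] !== 0) return true;
      }
      return false;
    }, selector).then(result => result.value); // v4 wraps execute results in { value }
  });
}
```

Once registered, the command chains like any built-in one, which is exactly what makes `.hasDrawnPixels(".city > canvas")` read so naturally in the test above.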

There are other changes here too, which are worth noting: chai-as-promised lets me drop ‘.eventually’ into assertions to handle waiting for promised results, and I’ve swapped a big ol’ function () { } definition for a nice ES6 arrow function, for clarity and to match the style I’ve been using elsewhere more recently.

All of these are incremental gains though; the real magic is that WebdriverIO manages away all the other complexity for me, gives me tools to hide more myself (as with hasDrawnPixels) and, in a broken windows style result, makes it much easier and more worthwhile to spot incremental improvements like these, and follow through to genuinely better quality tests.

Overall, the signal-to-noise ratio here has skyrocketed: it’s vastly easier to see what’s going on, and the tests have become much easier to understand, manage, and write in future. A nice win, for very little effort.


Reference: https://blog.buildfocus.io/better-selenium-tests-with-webdriverio-2ed5dc04d651#.fh37owhye
