
Testing and the Path to Quality
In my last two posts here, I talked about whether developers could test, then how they should own testing (or at least, test automation). Discussions about quality and testing often intertwine, but quality is much more than testing and worth a deeper discussion. In my conference and web interactions, I often worry that people draw too strong a correlation between testing and quality. Testing is a critical component of software development, but it’s only a means to the end goal of quality.

We can debate endlessly over who should do the testing and what kinds of tests we should run. Ultimately, we need to remember that customers do not care about testing – they care about quality. There is definitely a relationship between quality and testing, but not a strict correlation, and the best testing efforts don’t always result in a quality product. Testing is crucial in mitigating risk and preventing regressions, but I’d argue that at least some portion of testing may not produce customer value. The management expert W. Edwards Deming said, “we must focus our energies exclusively on producing outcomes that the customer perceives as valuable”. Our focus needs to be on customer value.

An important thing to remember is that customers don’t want software. They don’t want software testing. They want their problems and challenges solved – easily and intuitively ✅

What is Quality?

I have had many discussions and debates throughout my career on the definition of quality. Jerry Weinberg’s definition, “Quality is value to some person,” is one of my favorites (and one of the most cited on the internet). This definition implies that we need to understand who the customer (some person) is, and what problems they want to solve (value).

In Quality Is Free, Philip Crosby said, “The definition of quality is conformance to requirements.” I read this book over fifteen years ago, and this definition never sat quite right with me. The word ‘requirements’ was my hangup. If the requirements were bad in the first place, conformance to those requirements could never result in a quality product. I recently realized that Crosby’s definition is quite similar to Weinberg’s definition if you think of “requirements” as value defined by the customer. Or, to reframe, quality is conformance to requirements, as defined implicitly by the users of your product or service.

In both definitions, the customer is at the forefront of defining quality. We don’t define quality from code coverage, analysis, or testing (although we use all the above to identify risk). Regardless of which definition of quality you prefer, my stance is that only the customer of our software can truly evaluate what quality is. If we don’t understand who our customer is and what problem they want to solve, we cannot define quality for ourselves.

Building a Quality Product

We want to identify risk and protect against regressions, but testing and analysis activities only get us to a baseline for quality. Functional correctness isn’t nearly enough to build a quality product. Quality products require that we make something that helps our customers solve their most important problems.

Testers often see themselves as the customer – or as the customer advocate. It’s certainly good to have people on the team advocating for the customer, but try as we might, we are not the customer. Part of building a quality product is learning about the customer and how they interact with the products we are building.

Building the Right Product

What can we do to add more of a quality focus to our development efforts? First off, the team should be clear on the customer problems they’re solving. To me, whole-team quality means less about the entire team doing testing, and more about getting the entire team completely aligned on the customer problem and how they hope to solve it.

Next, the team should frequently ask themselves how they will know if they’re solving the customer problem, and to what degree they are solving the problem. For example, if I’m building a video streaming platform, the problem statement could be as simple as, “people are bored”. In a non-hypothetical example, teams would do customer research, test ideas, and debate these problem statements. The solution to this problem as stated might be a video streaming platform, but could also be a traveling circus (with social distancing), a book lending service, or any other “cure” for boredom.

To continue with this hypothetical example, our team is attempting to solve this customer problem by presenting an engaging video platform that tailors content to their interests. A measure of success could be to track engagement (hours streaming), and what percentage of their total streaming time comes from suggested content. 
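To make the hypothetical measure concrete, here is a minimal sketch of how those two numbers could be computed from playback events. The event shape, field names, and sample data are all assumptions for illustration – real platforms would derive these from their own telemetry pipelines.

```python
from dataclasses import dataclass

@dataclass
class StreamEvent:
    """One playback session (hypothetical schema for this example)."""
    user_id: str
    hours: float
    from_suggestion: bool  # True if playback started from recommended content

def engagement_metrics(events: list[StreamEvent]) -> dict[str, float]:
    """Total hours streamed, and the share of hours from suggested content."""
    total = sum(e.hours for e in events)
    suggested = sum(e.hours for e in events if e.from_suggestion)
    return {
        "total_hours": total,
        "suggested_share": suggested / total if total else 0.0,
    }

# Illustrative data only.
events = [
    StreamEvent("u1", 2.0, True),
    StreamEvent("u1", 1.0, False),
    StreamEvent("u2", 3.0, True),
]
print(engagement_metrics(events))
```

The point is not the code itself, but that the success measure is expressed in customer terms (hours of engagement, value of suggestions) rather than in engineering terms like test pass rates.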

The Kano model is one way to look at the relationship between features and help determine which are needed to achieve a baseline of value, and which will help differentiate a product from its competitors. Besides highlighting “must have” features, the Kano model helps identify more-is-better product attributes (e.g. performance), and “delighters” – features that will not cause user dissatisfaction if they’re absent, but which will increase customer satisfaction when they exist. A lot of customer-perceived quality comes from these differentiating features (e.g. “watch party”). But if the video streams fail, or if basic functionality like search doesn’t work, the user experience is going to be bad no matter how well curated the content is.

Getting There

In a 2007 presentation, Ed Keyes stated that “Sufficiently Advanced Monitoring is Indistinguishable from Testing”. Some people view this approach to monitoring to mean that customers are doing the testing (and finding bugs that the team should have found). It’s important to note yet again that testing for functionality is something that the team should do while developing software. My take on this is that wherever possible, we should examine how customers are using our software so we can understand if they are being successful in solving their problems. Good monitoring and logging practices are a great way to get this necessary insight.

Let’s say I’m building an e-commerce website and I want to learn about my customers’ experience. I should have metrics or monitoring in place to know how many customers are visiting my site, where they’re coming from, what they look at, and what they search for. Measuring whether their searches result in items they add to their cart, how often they remove items from their cart, and how long they take to check out once they create a cart – along with dozens of other questions – will all help me determine if my software is helping them solve their problems.
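A couple of those questions can be answered with simple funnel metrics over the event stream. The sketch below assumes a hypothetical log of `(user_id, event_type)` pairs; the event names and data are invented for illustration, not taken from any real system.

```python
def funnel_rates(events: list[tuple[str, str]]) -> dict[str, float]:
    """Per-step conversion: searchers who add to cart, and carts that check out."""
    searchers = {u for u, kind in events if kind == "search"}
    adders = {u for u, kind in events if kind == "add_to_cart"}
    buyers = {u for u, kind in events if kind == "checkout"}
    return {
        "search_to_cart": len(adders & searchers) / len(searchers) if searchers else 0.0,
        "cart_to_checkout": len(buyers & adders) / len(adders) if adders else 0.0,
    }

# Illustrative event stream: u1 completes the funnel, u2 only searches,
# u3 adds to cart but then removes the item.
events = [
    ("u1", "visit"), ("u1", "search"), ("u1", "add_to_cart"), ("u1", "checkout"),
    ("u2", "visit"), ("u2", "search"),
    ("u3", "visit"), ("u3", "search"), ("u3", "add_to_cart"), ("u3", "remove_from_cart"),
]
print(funnel_rates(events))
```

A drop at either step is a customer-quality signal that no amount of passing tests will surface: the software may be functionally correct while still failing to help customers solve their problem.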

The Challenge

I’ll end the article by asking you how your teams measure quality. Are you measuring customer quality, or are you measuring the quality of your engineering efforts? If you’re measuring the latter as a proxy for quality, how do you know if you’re solving a customer problem? Do you care? 🤔

We all want to build quality software, and testing is a critical part of that challenge. Remember that quality is the destination, and that testing is only the vehicle we’re using to help us with that journey 🚩

About the author

Alan Page
Alan has been improving software quality since 1993 and is currently a Senior Director of Engineering at Unity Technologies. Prior to joining Unity in 2017, Alan spent 22 years at Microsoft working on projects spanning the company, including a two-year position as Microsoft’s Director of Test Excellence.
Alan was the lead author of the book “How We Test Software at Microsoft” and contributed chapters to “Beautiful Testing” and “Experiences of Test Automation: Case Studies of Software Test Automation”. His latest ebook (which may or may not see updates soon) is a collection of essays on test automation called “The A Word: Under the Covers of Test Automation”, and is available on Leanpub.

Alan also writes on his blog, podcasts, and shares shorter thoughts on Twitter.