
Act Like a Developer to Become a Ninja Tester!


We often say testers have to “think like an architect” or “think like a builder” or, perhaps even, “think like a developer.” Here’s the problem: to actually think like any one of these people, you have to try to do something they do. So, really, you have to act like a developer. Let’s talk about this and where the testing relevance comes in.

Let me repeat: to think like a developer, you have to be able to act like one. At least for a period of time. While you are developing, you are not testing. (Just as when you are testing, you are not developing.) You will find, however, that you can interleave those activities. This is something we ask developers to do all the time when we ask them to “test their code.”

Develop Something

To act like a developer, even for a short time, means you actually have to develop something.

This is important for testers.

You don’t need to be an expert developer. But you should have tried to develop something. Anything. This thing you develop will let you see those little micro-decisions in action; the ones you had to make and have now forgotten you even made. This is where you see the little bugs creep in, some of which you managed to catch, while you’re left wondering about those you didn’t.
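To make that concrete, here is a deliberately tiny, hypothetical sketch (the function name and threshold are mine, not from any real project) of the kind of micro-decision that slips by unnoticed:

```python
def free_shipping(order_total):
    """Decide whether an order qualifies for free shipping.

    Micro-decision: the spec said "orders over $50" -- does an
    order of exactly $50.00 qualify? Choosing > here (rather
    than >=) silently excludes the boundary case.
    """
    return order_total > 50.00

# The boundary value exposes the decision:
free_shipping(50.00)  # False -- was that intended?
free_shipping(50.01)  # True
```

A tester who has made this kind of choice themselves knows to probe exactly those boundary values.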

You need to get the feeling of what it’s like to actually build and construct something out of source code.

Along the way, you will learn a lot about certain categories of error.

If a tester has built something themselves, they will have a much better idea of how errors can creep in. Coupled with an understanding of the thought process that led to the error, this can be a powerful arrow in the testing quiver. This not only enhances your ability to find bugs but it enhances your ability to have conversations with developers about bugs and about design and about how good design can help mitigate the introduction of bugs.

In fact, if you read my testability series, you’ll see it’s essentially all about this very notion. I take you through developing a small application but with testability (of value and correctness) front and center.

Developing at the Intersection

I want to sidetrack for one second and say that it’s not just thinking like developers.

It’s also about the business; thus it’s about thinking like a product manager or a business analyst. This is the stage where requirements get written. It doesn’t matter whether you are writing use cases, Gherkin scenarios, or whatever else. Here is where we make most of our mistakes.

Here is where we introduce subtle inconsistencies, ambiguities, or even outright contradictions related to the business domain we are developing for. Here is where we apply confirmation bias and hindsight bias the most. Here is where we commit our narrative fallacies.
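To illustrate, with made-up requirements and names of my own rather than anything from a real system, here is how a single ambiguity can fork into two contradictory implementations:

```python
# Requirement A: "Inactive users must not receive notifications."
# Requirement B: "All users receive a weekly summary email."
# Ambiguity: is the weekly summary a "notification"? The two
# readings disagree about inactive users, and the code must
# pick one reading, usually silently.

def should_send_weekly_summary(user, summary_is_notification):
    if summary_is_notification:
        return user["active"]  # Reading 1: Requirement A wins
    return True                # Reading 2: Requirement B wins

inactive_user = {"active": False}
should_send_weekly_summary(inactive_user, True)   # False
should_send_weekly_summary(inactive_user, False)  # True
```

Whichever reading the developer picks becomes the de facto requirement, and no test of the running application alone will reveal that a choice was ever made.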

This is the intersection of where business meets technology and it’s important to act like a developer in that context as well.

Practicing What I Preach

With that said, let’s go back to developing in the sense of writing an application.

A book I really like is Faraday, Maxwell, and the Electromagnetic Field: How Two Men Revolutionized Physics by Nancy Forbes and Basil Mahon. In that book, I read the following:

“Simply hearing or reading of such things was never enough for Faraday. When assessing the work of others, he always had to repeat, and perhaps extend, their experiments. It became a lifelong habit—his way of establishing ownership over an idea.”

Now, that resonated with me and I felt a kinship with Mr. Faraday. I’m by no means comparing myself to Faraday, of course, but that exact same impulse played out in my career regarding test tooling, particularly test automation tooling. By way of example, I built all of my test tools (Specify, Tapestry, Testable, and others) by essentially taking what others had written and rewriting them from the ground up.

In terms of learning different technologies to write applications, I learned how to write an API wrapper by creating Thanos. I learned a bit of React by creating Thanos React. I learned AI tooling by writing Pacumen and Flappy Tester. I learned JSON, language syntax, and publishing to Atom with my languages series of files. I learned JavaScript, Node and publishing to npm by taking existing tools and re-crafting them as my Scribal series. To learn data science, I created a series of Jupyter notebooks.

Essentially and concisely: I had to act like a developer, which took me some way towards thinking like one.

All of this — literally all of it — was based on taking what others had done before, breaking down their projects, and then essentially rewriting them from the ground up with my own ways of doing things.

The book also says:

“Faraday felt the need to repeat the experiments and see the results himself. For him, this was the only way to understand what was really going on in the physical world.”

Exactly! This is why I did what I did in terms of my above tools and applications.

Yes, I could have just used, say, Capybara or SitePrism or one of the many other tools out there. I could have used the existing language syntax tools that others had written. I could have used OpenAI Gym rather than writing my own machine learning applications. But instead I wanted — actually, needed — to craft my own. This is how I learned to trust such tools: because I knew exactly how they worked. And why they worked. And, often, why they didn’t work.

James Bach captured a bit of what I meant in one of his tweets:

“To be genuinely responsible, any human who uses a tool must understand the range of behaviors and capabilities of the tool, and how it might be misused or misleading.”

I find, at least in the context of software, that I can’t be genuinely responsible until I’ve got my hands on actually doing the work: writing a small Rails application, creating a small consumable API, creating a responsive web front-end, etc.

Yeah, But… I’m a Tester

Should testers really have to do this? I get this question all the time.

When asked that question, I find that there’s usually some understanding that the basics of programming are good to learn so that the tester can be up-to-date and relevant around writing automation.

However, learning programming can be, and demonstrably is, good beyond automation. In fact, I would argue automation is very secondary to why testers should learn programming.

Or, rather — since obviously careers can benefit from automation knowledge — I would say it’s quite easy to learn just enough programming to write automation. It’s much harder to actually learn good programming and how the act of “programming” takes input from design and ultimately becomes development.

Which is to say, being a programmer and being a developer are not the same thing. Actually building things helps you see that. And as a tester, that should matter to you.

Developing, Fast and Slow

There’s yet another way I can make this argument.

A lot of testers eventually come across the book Thinking, Fast and Slow by Daniel Kahneman. This is a great book to read, but one thing I encourage is letting the message of this book lead you, as a tester, to realize that you have to be able to look at source code, not just at the running application that the source code becomes. And, ideally, you should be able to create some source code of your own. Why?

Because the creation of software is done by “thinking fast and slow.” This means it’s subject to the same categories of error that all humans are subject to. And a large part of testing is about understanding categories of error: how do we make mistakes? Under what conditions? What types of mistakes are there? How are those mistakes likely to manifest as observable consequences? What are the earliest possible times we can catch those mistakes? How often does what we create preclude us from finding those mistakes as easily or as early as possible?
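As one small, hypothetical illustration (the function and tag names are mine) of a “thinking fast” mistake: Python’s mutable default argument is exactly the kind of error that a plausible-looking line of code invites, and a tester who has written code will recognize the category on sight.

```python
def add_tag(tag, tags=[]):
    """Collect tags for a test run.

    "Fast" thinking: the [] default looks like a fresh list on
    every call. In reality, Python evaluates it once, at function
    definition time, so every call shares the same list.
    """
    tags.append(tag)
    return tags

add_tag("smoke")       # ["smoke"]
add_tag("regression")  # ["smoke", "regression"] -- state leaked across calls!
```

The observable consequence (tags accumulating across unrelated calls) may surface far from the line that caused it, which is precisely why knowing the category of error helps you catch it early.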

Those last questions are important. We want to catch errors (in our thinking and in the artifacts resulting from our thinking) as soon as possible after they are introduced. That is the part of testing that focuses on the cost-of-mistake curve. One of the earliest points at which we make errors is in the creation of our code.

An even earlier phase is when we give form to the requirements that the code is designed to satisfy.

I sincerely believe that when testers start to realize all of the above more intuitively, by actually having done it, they will start to truly practice the craft and discipline of testing across abstraction levels.

If you want to be even more practical here, let’s make it a little mercenary in nature: all of the above makes you more skillful. It thus makes you more relevant. And the combination makes you more rare. That rarity makes you more valuable. Ultimately, that means more dollars in your pocket 😉

About the author

Jeff Nyman

Epistemology is about the way we know things. Ontology is about what things are. Ontogeny is about the history of changes that preserve the integrity of something.

I reduce the epistemological opaqueness and ontological confusion due to cognitive biases we have when we perform ontogenetic change across the boundaries where humans and technology intersect.
