The title of this blog post comes from a question an attendee at TestCon 2020 put to the discussion panel I was part of. We sadly ran out of time and couldn’t go into much depth on the subject, but I hope to dive a little deeper in this blog post.
Despite this blog post’s title, I really don’t like referring to these different types of testing as manual and automated testing (perhaps that’s for another post), but these are labels many people in the software industry are familiar with, so they’re the ones I’ll use here. A more useful differentiator is the degree to which testing relies on tools in different contexts.
Striking a balance between manual and automated testing depends on so many things that it would be difficult to cover them all in one blog post. Instead, I’d like to discuss some of the benefits of each approach based on my experience, and dispel some common misconceptions, so that individuals and teams can decide which type of testing best suits them and their project at that point in time, based on what they are trying to achieve.
When I hear the phrase manual testing, I think of skilled testing. Unfortunately, manual testing typically gets a really bad name. When some people think about manual testing, I bet they picture somebody sitting there creating, managing and running through test scripts to cover various scenarios within an application. One problem with this approach is that the individuals running through these scenarios very rarely go off script. Apart from these scripts being a pain to update, it’s enormously demotivating to have to keep maintaining and executing them over and over again. Yes, they can be written from requirements before a line of code is written, but people very rarely test anything outside the script or probe edge cases, and by the time they’ve handcrafted or updated the scripts, they often have no time left for any other type of testing, because more often than not they just need to get the thing they’re helping develop out to their customers.
On the flip side, one of the biggest problems with relying solely on automation is that it isn’t a human running through the scenarios. I really like the way Graham Ellis frames the downsides of automated testing:
“It doesn’t think.
It doesn’t reason (beyond the algorithmic constraints you’ve told it to use, which may or may not be correct).
It doesn’t interpret.
It doesn’t make value judgments.
It doesn’t experience.
It doesn’t question.”
Automated checks don’t test all the things; more often than not, they simply do the things you tell them to.
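To make that point concrete, here’s a minimal sketch in Python (pytest-style assertions with the requests library; the URL and the check itself are hypothetical) of just how narrow an automated check is: it verifies exactly what it was told to verify and nothing else.

```python
import requests

# A deliberately narrow check: it asserts only what it was told to assert.
# The URL is hypothetical; the point is what the check *doesn't* look at.
def test_homepage_responds():
    response = requests.get("https://example.test/")
    assert response.status_code == 200
    # This passes even if the page renders as unreadable garbage, the layout
    # is broken, or the copy makes no sense. The check doesn't think,
    # interpret or question; it does the one thing we told it to do.
```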
There’s so much more to ‘manual testing’ than most people who don’t identify primarily as a tester (QA, SDET, test engineer… <insert fancy title here>) tend to think about. When I think of ways I can add value beyond writing checks, I think about what I might explore during exploratory testing sessions. These sessions are usually timeboxed and help me focus on a goal, for example exploring the date picker of a hotel booking application to learn about it. There are so many great books and blogs on the subject; if you haven’t read them already, I highly recommend Explore It! by Elisabeth Hendrickson, Exploratory Testing by Maaret Pyhäjärvi, and the TestBuddy blog, which has some great posts on exploratory testing.
Teams should be looking to strike the balance: do manual testing where possible, and write automated checks where it makes sense (weighing up investment, value and risk), depending on what the team feels is right for that project.
Some people might consider the feedback from testing that doesn’t rely on tools quite slow, often feeling it takes time away from the team that might be better spent elsewhere. What this type of testing does, though, is give a team insight into how an application does or doesn’t work, which automated checks simply cannot; it’s invaluable feedback from experience-based testing that can help you find the things checks cannot. Automated checks only tell you that something still behaves the same way. Skilled testing can simply give you better feedback about the SUT (System Under Test) than any automated check.
Something worth considering, though, is that repetition is often what I like to refer to as a test smell: a sign that this might be something you and the team could consider automating. It isn’t always the case, but it’s always worth considering. There is no silver bullet for any type of testing, however, and you as a team or individual need to decide what you believe is necessary based on your experience and knowledge at that time.
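As an example of that smell, imagine a check you find yourself repeating by hand every release: does each pricing plan page still show the right price? A repeated, deterministic check like that is a good automation candidate. Here’s a minimal sketch using pytest’s parametrisation; the URL, plan names and prices are all hypothetical.

```python
import pytest
import requests

# Hypothetical plans and prices; the shape is what matters: one repeated
# manual check becomes one parametrised automated check.
@pytest.mark.parametrize("plan, expected_price", [
    ("basic", "£9.99"),
    ("standard", "£19.99"),
    ("premium", "£29.99"),
])
def test_plan_page_shows_its_price(plan, expected_price):
    response = requests.get(f"https://example.test/plans/{plan}")
    assert response.status_code == 200
    assert expected_price in response.text
```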
Many of the problems I’ve seen within teams have come when teams, perhaps with little experience in testing or with a mindset that isn’t geared towards being test first, try to go so far to the right that they want to automate all of the things. When I refer to test first, I don’t mean TDD (Test Driven Development).
The following definition of TDD is taken from AgileAlliance.org:
“Test-driven development” refers to a style of programming in which three activities are tightly interwoven: coding, testing (in the form of writing unit tests) and design (in the form of refactoring).
It can be succinctly described by the following set of rules:
write a “single” unit test describing an aspect of the program
run the test, which should fail because the program lacks that feature
write “just enough” code, the simplest possible, to make the test pass
“refactor” the code until it conforms to the simplicity criteria
repeat, “accumulating” unit tests over time
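In code, one turn of that loop might look like the sketch below (a hypothetical total_price function, using pytest): the test is written first and fails, just enough code is written to make it pass, then you refactor and go around again.

```python
import pytest

# Red: this test is written first, and fails while total_price doesn't exist.
def test_total_price_includes_tax():
    assert total_price(100.0, tax_rate=0.2) == pytest.approx(120.0)

# Green: the simplest possible code that makes the test pass.
def total_price(net, tax_rate):
    return net * (1 + tax_rate)

# Refactor: tidy names and structure while the test stays green, then write
# the next failing test and repeat, accumulating unit tests over time.
```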
When developers practise TDD, they write tests to support their development, sometimes using them to guide how they design a system and to give themselves feedback as they go. When I talk about being geared towards test first, I mean both static and dynamic testing: testing the requirements and design documentation, the testing that happens before a line of code is written, as well as the testing that supports the code and checks what it does or doesn’t do once the code is written.
There are many people out there who identify as developers or SDETs (Software Development Engineers in Test). When thinking about testing, people in these roles can fall into a trap whereby what they really want to do is ‘save time’ and just build the thing. These teams don’t want to spend lots of time manually testing the application; they just want to write automated UI checks, because that’s enough, right? The problem is that the majority of the time this isn’t testing… it’s checking.
So what do I mean by this? Well, if you write a UI test that goes to a login page, inputs ‘user’ into the user text field, types ‘password’ into the password text field and clicks the login button, what are we actually testing? Maybe it’s that we log in as planned, maybe it’s that an expected error message is displayed, but each time we run this test, what are we actually testing? It’s the same input and the same expected output. We are not using our experience to make informed decisions, observing other things that might make the application behave unexpectedly, or trying out new test ideas. Of course, you could write automated tests for combinations of boundary values, naughty strings and so on, maybe even throw in a little pinch of AI (Artificial Intelligence), but how much effort would all of that take, and how much value is there compared to the cost of writing, maintaining and executing these types of tests?
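Written down, that login check might look something like this sketch using Playwright’s Python API (the URL and selectors are hypothetical). Notice how fixed it is: the same input and the same expected output, every single run.

```python
from playwright.sync_api import sync_playwright

# A hypothetical login check: same input, same expected output, every run.
def test_login_succeeds():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.test/login")
        page.fill("#username", "user")      # always the same user
        page.fill("#password", "password")  # always the same password
        page.click("#login")
        # The only thing this check will ever notice is whether this one
        # assertion holds; it won't spot anything we didn't encode.
        assert "/dashboard" in page.url
        browser.close()
```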
This is where individuals and teams need to use their experience to weigh up what should and should not be automated. Teams should be attempting to strike a balance based on their requirements, the investment required, and the risk posed by choosing not to implement that test or check.
Communicating with one another rather than relying on tools to do so, actually collaborating and working closely together, is in my opinion absolutely key to being able to strike the balance for all things testing.
“The whole delivery team works together to build quality in throughout the process. By ‘whole team,’ we usually mean the delivery team – the people who are responsible for understanding what to build, building it, and delivering the final product to the customer.” – Agile Testing Condensed.
Often I am asked: how can we fit both automation and manual testing into a sprint? Or: how can I start to learn automation? Or even: I’m not sure where to start, I’m being asked to automate all of our tests…
Firstly, never put pressure on yourself to learn test automation. If it isn’t something you want to do, don’t do it. Why do something that makes you feel stressed or unhappy if it isn’t something you want to do? The engineers on your team who write the front ends, back ends and so on write code for a living. I think a lot of the time people think: “Oh, we need to write automated tests, let’s get the testers to write them.” Why? I believe anybody within a team should be able to pick up, for example, a user story and write an automated check based on the scenarios specified within that story. These checks can and should be written before a single line of code is written.
When it comes to what should and should not be automated, start from the beginning: the ideas. The team should be working with the business where possible to formalize these ideas and in turn help to turn them into requirements. On teams I’ve worked on before, we looked at each story and tried to break down the requirements. We made use of regular three amigos sessions to ensure our scenarios for each story demonstrated the business intent behind them, using BRIEF (Business language, Real data, Intention revealing, Essential, Focused) to help us frame the scenarios.
Once these scenarios were written and the story was agreed as ready to be picked up, anybody within our team could pick up that ticket and write the automated checks before a single line of production code was even written.
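As an illustration, a check for one such agreed scenario might look like the sketch below (the scenario, endpoint and payload are all hypothetical). Written before any production code exists, it fails first, exactly as you’d expect.

```python
import requests

# Hypothetical agreed scenario from the ticket:
#   A guest cannot book a room with a check-out date before check-in.
def test_booking_rejects_checkout_before_checkin():
    # Given a booking where check-out precedes check-in
    payload = {"check_in": "2021-06-10", "check_out": "2021-06-08"}
    # When the booking is submitted
    response = requests.post("https://example.test/api/bookings", json=payload)
    # Then the API rejects it as invalid
    assert response.status_code == 400
```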
Whoever picked up the ticket would write automated checks only for the agreed scenarios within it. When stories were sized, we would always size as a team, considering both the testing and the development of each feature or story: not just writing the automated checks and production code, but any other type of testing we deemed applicable, including timeboxed exploratory testing sessions. If you still have questions or things to discuss on a ticket, then it simply isn’t ready; consider holding additional three amigos sessions or reaching out to stakeholders to get those questions answered. There should be no known unknowns before anybody on your team picks up that ticket!
When you shift left and start with the why, it really does help you as a team to decide on the what. What should we test? What is the value in doing so? What is the risk in not doing so? How much effort is involved?
“Whilst shifting left is incredibly valuable, it doesn’t mean that you should push everything to the left and forget the rest of the testing that occurs.” – Graham Ellis
Every requirement and every team is different. There is no hard and fast rule. There is no silver bullet. Try to weigh things up. Don’t jump in two-footed because it’s what you’ve been told, or because it’s what you think you know. Work as a team, and use your combined knowledge and experience to determine the right things to do at that point in time to help you find that balance 🙌