In the couple of years since I opened my business, I have had the opportunity to work with a variety of companies (small and large) and to advise them on building their infrastructure, which exposed me to many critical problems and mistakes along the way.
In most cases, the beginning looks very promising and easy: a tester writes a couple of lines of code on a particular framework, immediately begins to see results, and makes the managers fantasize about dozens of automated tests.
In this article, I want to share some of the common automation failures that happen before writing the code 👩💻
Common Automation Failures – Table of Contents
- Expecting a 100% Coverage of Automated Tests on the Product
- Not Setting Goals
- Bad Choices of Products/Features for Automation
- Poor Resource Planning
- Excessive Reliance on UI Testing
- Choosing an Unsuitable Test Tool
- Is it Open-Source? Great, We Love Open-Source!
- Minimal Investment in Running Reports
- Lacking Technical Knowledge in Automation Developers
- Poor Automation Infrastructure = Lots of Code Maintenance
- Poor (manual) Tests will Lead to Poor Automation
- No Parallel Testing
- Work Environment for Automation Only
- Disconnect Between Testers and Developers
Expecting a 100% Coverage of Automated Tests on the Product
In most software products, there is no such thing as 100% coverage by automated tests. Not every test can be justified by a return on investment.
It can sound really cool – “running tests without human involvement”. Automation developers try to reach full coverage with pipelines alongside, combining several automation tools and several different technologies.
But you can’t write automation for products/features whose ROI is low. And what does high ROI even mean? That leads us to the project definitions and success goals 🎯
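To make the ROI question concrete, here is a back-of-the-envelope model. All the numbers and the function itself are hypothetical, just to show the trade-off between the one-time cost of building a test, its recurring maintenance, and the manual effort it replaces:

```python
# Illustrative, hypothetical ROI model for automating a test suite.
def automation_roi(build_hours, maintain_hours_per_release,
                   manual_hours_per_release, releases):
    """Net hours saved by automation over a number of releases."""
    cost = build_hours + maintain_hours_per_release * releases
    saved = manual_hours_per_release * releases
    return saved - cost

# A stable feature, tested on every release, pays off quickly...
print(automation_roi(build_hours=40, maintain_hours_per_release=2,
                     manual_hours_per_release=6, releases=20))  # → 40

# ...while a feature in flux (high maintenance, few runs) may not.
print(automation_roi(build_hours=40, maintain_hours_per_release=8,
                     manual_hours_per_release=6, releases=5))   # → -50
```

The exact numbers matter less than the shape of the calculation: the more a feature churns, the higher its maintenance term, and the harder it is for automation to break even.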
Not Setting Goals
A project that doesn’t have automation success metrics has no success goals set – you could say it’s set up for failure before it even begins.
In fact, this can probably be said about any software project. But for some reason, in the field of testing, an automation project is considered by some managers as something “weak”, because what matters is the product that sells and brings revenue, right?
Bad Choices of Products/Features for Automation
We must understand that we can’t cover the product with 100% automation. So the question is: which product, or which features in it, will be worth spending time on? The issue here is not just writing the infrastructure for features but, more importantly, maintaining it 🔨
A new feature will change and may be reworked many times, so this is not the time to invest in its automation: its behavior is likely to change, and so is its UI, and you will have a hard time maintaining such a project.
There are also features that are still technologically difficult to automate, such as graphic elements, sounds, or dynamic features.
Poor Resource Planning
When an automation project is not managed properly, then in addition to the missing goals from the previous point, it probably won’t have the right resource planning either.
This means schedules will be set without knowing the challenges the project faces, and the resources allocated to it (the people) will usually not be enough.
Excessive Reliance on UI Testing
For some reason, a (wrong) assumption has taken hold that UI-based automated testing is the hottest thing, on the claim that this is what the customer sees, and that therefore the team’s maximum coverage and effort should go into such tests.
I am not saying that these tests should be completely ignored, but they should not make up the bulk of all tests (of course there are exceptions).
I must say that, to my delight, I have encountered this approach less and less in recent years, but I still have to explain it to those who claim otherwise. I can already recite the ice cream cone anti-pattern in my sleep 🍨
Choosing an Unsuitable Test Tool
There are lots of articles on the Internet about choosing the best automation tool, showing usage rates and over-enthusiastic Google Trends charts. But I believe there is no such thing as “the best automation tool”; there is “the best automation tool for you – for your needs, your product, and your team”.
Don’t get me wrong, I’m very much in favor of the wisdom of the people, and if a particular tool is very popular, there’s probably a good reason for it!
If most of the industry uses it, it means it has a large community, which can raise our trust in the tool (a low bug count, a high level of maintenance, frequent updates, and removing the fear that it will disappear from the field and be replaced by something else overnight).
But I will say that, despite the information available, there are still teams that choose automation tools that are not the most suitable for them, and this is one of the criteria for failure.
When I ask, “Why did you choose this tool?”, the answer is usually: “Historical reasons, from before I even started working here at the company…”
Is it Open-Source? Great, We Love Open-Source!
Directly following the previous section: people will almost blindly choose open-source products. Why? Because they’re free.
Without understanding the implications of open-source products, those same decision-makers can find themselves getting lost very quickly in a large, poorly maintained project, with a great deal of development time sunk into its open-source libraries.
Yes, I also like open-source and strongly believe in it. This is also the reason I chose to make 99% of my courses on these products. But I argue that this is not a magic word that will make your project rise.
Sometimes I advise companies to avoid open-source because of their product, the background of their testers, or the field they work in (e.g. a field that has hardly any regulation).
I try to make them realize that their automation development time may not pay off against the alternative of taking a commercial product and using its ready-made infrastructure – an infrastructure that has been written, updated, modified, and tested with hundreds of different customers over the years.
Maybe it’s worth the resources rather than trying to reinvent the wheel – what do you think? 🔎
Minimal Investment in Running Reports
Many tend to diminish the importance of test run reports, and this is a mistake: they are one of the most important features of an automation project. But why?
Because a project has quite a few failures along the way, and many of these failures are false alarms (a test that failed not because of a bug in the product, but because of environment problems, usually related to timing and waits).
It is true that part of our goal is to fight false alarms, but it all starts with an improved report system, one that will allow us to identify the problem or crash of our test case in minimal time.
A good report system will save automation developers valuable time that can be allocated to other tasks, instead of letting them chase their own tails.
I have seen cases where developers spent 90% of their time trying to figure out why tests failed. Nowadays there are smart test reports that can combine several parallel runs in an effective and pleasing dashboard 📊
Ones that store the data in a repository or database and have a smart engine for segmenting and extracting relevant information; ones with AI capabilities that predict which tests are going to break (according to statistical analysis of run history) and should be kept an eye on.
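The statistical idea behind such predictions can be surprisingly simple. Here is a minimal sketch of a “flakiness” score computed from run history; in a real report system the history would come from a database, while here it is a hard-coded dict of pass/fail booleans per (hypothetical) test name:

```python
def flakiness_score(results):
    """Fraction of runs whose outcome differs from the previous run.
    0.0 = perfectly stable; values near 1.0 = alternating pass/fail."""
    flips = sum(a != b for a, b in zip(results, results[1:]))
    return flips / max(len(results) - 1, 1)

# Hypothetical run history: True = pass, False = fail.
history = {
    "test_login":    [True] * 10,            # stable pass
    "test_checkout": [True, False] * 5,      # alternates: very flaky
    "test_search":   [True] * 7 + [False] * 3,  # broke once, stayed broken
}

# Tests worth keeping an eye on, most suspicious first.
watch_list = sorted(history, key=lambda t: flakiness_score(history[t]),
                    reverse=True)
print(watch_list)  # → ['test_checkout', 'test_search', 'test_login']
```

Real tools use richer signals (durations, environments, commit history), but even a score like this separates the genuinely flaky test from the one that found a real regression.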
Lacking Technical Knowledge in Automation Developers
There are cases where managers task the team’s senior tester with launching the automation project from scratch. In the best case, they buy them a basic automation course on Udemy.
In the worst case, they let them research on their own on the Internet, from LinkedIn posts or YouTube videos.
In some cases, testers lack a programming background and don’t have much experience in setting up automation infrastructures. Eventually, the tester ends up frustrated when things do not work.
Poor Automation Infrastructure = Lots of Code Maintenance
Directly following the previous section, a lack of knowledge in writing infrastructure will lead to the creation of low-quality infrastructure, both in terms of its design and in terms of code quality.
When the infrastructure is not good, the entire project is not good. In the later stages of the project, infrastructure repair is usually avoided because of the effect it would have on all the test cases already written.
In the opposite case – if a good, smart, and efficient automation infrastructure is written, updates to it will not be such a big headache for the developers.
Let’s look at it this way: why do we even need to write an automation infrastructure? So that it will be cheaper to maintain the project later.
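A classic example of “cheaper to maintain later” is the Page Object pattern. Below is a minimal sketch of it; `FakeDriver`, the locator strings, and the page itself are all hypothetical stand-ins (a real project would use something like a Selenium WebDriver), so the pattern is visible without a browser:

```python
class FakeDriver:
    """Hypothetical stand-in for a real WebDriver."""
    def __init__(self):
        self.fields = {}
    def type(self, locator, text):
        self.fields[locator] = text
    def click(self, locator):
        self.fields["last_click"] = locator

class LoginPage:
    # Locators live in ONE place: a UI change means a one-line fix here,
    # not a hunt through every test case that touches the login screen.
    USER_FIELD = "id=username"
    PASS_FIELD = "id=password"
    SUBMIT_BTN = "id=submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USER_FIELD, user)
        self.driver.type(self.PASS_FIELD, password)
        self.driver.click(self.SUBMIT_BTN)

# A test case now reads as intent, not as a list of selectors:
driver = FakeDriver()
LoginPage(driver).login("dana", "s3cret")
print(driver.fields["last_click"])  # → id=submit
```

With this kind of layering, an infrastructure update (a renamed field, a new submit button) touches one class instead of every test, which is exactly the maintenance saving the section is about.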
Poor (manual) Tests will Lead to Poor Automation
Well, that seems pretty trivial – what good is a great automation infrastructure if the tests themselves are not good? We do not write automation to show off – “look how beautifully we can write”.
In the end, automation has a purpose: to give us confidence that things are not broken in the current version. If the tests we write don’t meet that goal, then what did we accomplish here?
No Parallel Testing
One of the major advantages of automated testing over manual testing is the ability to run test suites in parallel to save time ⌚ Some managers skip this important advantage because of a lack of knowledge, or a complexity they do not currently have time to resolve.
Like the infrastructure itself, parallel runs are meant to save time, and there are now enough solutions to overcome the common problems with running environments.
Work Environment for Automation Only
The automation environment should be sterile. Product developers are not supposed to connect to it “just to test something in the latest fix”.
Product managers are not supposed to log in and demo the environment to other people in the company, and salespeople certainly shouldn’t present features from the testing environment.
The data, which is an important and painful issue in testing, should be predefined and entered as a prerequisite of the tests.
Most importantly – do not enter the data through the UI, but through the API or database loading. If our test involves creating new data, no problem, but such tests should not be the main channel of data entry, and certainly not through the UI…
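Here is a minimal sketch of that idea. `ApiClient` and its `create_user` method are hypothetical; in practice this would wrap your product’s real REST API (e.g. via the `requests` library), but the shape of the pattern is the same:

```python
class ApiClient:
    """Hypothetical API wrapper, standing in for real HTTP calls."""
    def __init__(self):
        self.users = {}
    def create_user(self, name, role):
        self.users[name] = {"role": role}
        return self.users[name]

def setup_test_data(api):
    """Prerequisite data entered in milliseconds via the API,
    instead of minutes of clicking through registration screens."""
    api.create_user("buyer", role="customer")
    api.create_user("admin", role="administrator")

api = ApiClient()
setup_test_data(api)
# The UI test itself can now start from a known, repeatable state:
print(sorted(api.users))  # → ['admin', 'buyer']
```

Seeding through the API (or a database load) keeps the data fast to create, identical on every run, and out of the UI tests’ critical path.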
Disconnect Between Testers and Developers
In most organizations this is not the case, certainly in R&D teams in high-tech companies (companies whose main product is a software solution), but sometimes, when I get to organizations that are not “high-tech”, the testing team is outsourced.
It is very difficult to succeed in an automation project when there is no interaction between the teams. Shared standards and cross-team work processes do not exist, which sometimes creates more work on the testers’ part, burning unnecessary time on problems that would have been easily resolved in a joint session.
There are many challenges when it comes to creating successful test automation projects, but you can see there are ways we can avoid most of these failures with the above tips 💫