Quite recently, I gave a talk at Selenium Conference 2020 Virtual on How to build an automation framework with Selenium: Patterns and practices. In this session, I talked about some of the basic building blocks of a good test automation framework implementation, along with good design patterns to follow and anti-patterns to avoid.
That talk got me thinking more deeply about the mistakes we as testers/automation engineers make in our test automation, and whether we could avoid making them in the first place 🤔.
I’ve personally been burnt by some of these mistakes in my career, so I wanted to take some time to write this post and shed more light on them and on how you can avoid them in your own test automation code.
We will discuss these in the context of UI automation, but some of them are general test automation anti-patterns to avoid.
Let’s get started, shall we?
Writing Assertions Inside The Page Objects
When we write UI automation using the page object pattern, it can be quite tempting to put assertions inside the page object classes; I’ve personally been guilty of this. However, doing so adds unnecessary complexity to the page objects: you might end up writing multiple methods to satisfy both positive and negative scenarios on a given page.
If your assertions instead live outside the page objects, mostly in your test files or test helpers, then you can treat the page object as just an abstraction for a given page or component and reuse the same method to verify different outcomes.
Writing assertions in tests also makes them naturally readable: your colleagues don’t have to dig into a long page object to figure out what the test is trying to assert. Angie Jones also alludes to this in her very insightful post on Tips for healthy page objects.
Naturally, you can adopt the same philosophy even when writing Functional API automation. Keeping assertions in test files instead of deeply nested layers of abstraction ensures you follow the KISS (Keep it simple, stupid) principle while resulting in good separation of concerns.
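As a minimal sketch of this idea (all class, method, and locator names here are hypothetical, and a tiny fake driver stands in for a real WebDriver so the example is runnable): the page object exposes actions and state, while the assertion lives in the test.

```python
class LoginPage:
    """Page object: exposes actions and state, but contains no assertions."""

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        self.driver.type("#username", username)
        self.driver.type("#password", password)
        self.driver.click("#login")

    def error_message(self):
        # Return state -- let the test decide what to assert about it.
        return self.driver.text_of("#error")


class FakeDriver:
    """Stand-in for a real WebDriver so this sketch runs without a browser."""

    def __init__(self):
        self.fields = {}

    def type(self, locator, text):
        self.fields[locator] = text

    def click(self, locator):
        # Simulate the app rejecting any password other than "s3cret".
        if self.fields.get("#password") != "s3cret":
            self.fields["#error"] = "Invalid credentials"

    def text_of(self, locator):
        return self.fields.get(locator, "")


# In the test file: the same page method serves positive and negative
# scenarios; only the assertion differs.
def test_login_rejects_bad_password():
    page = LoginPage(FakeDriver())
    page.log_in("alice", "wrong")
    assert page.error_message() == "Invalid credentials"


test_login_rejects_bad_password()
```

Because `error_message()` just returns state, the page object stays a thin abstraction and never has to know which outcome the test expects.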
Writing WebDriver API Methods in Tests/Page Objects
While creating a UI automation framework, it is quite easy to fall into the trap of writing Selenium WebDriver/Appium driver methods (driver.sendKeys(), driver.click()) directly in your page objects, or even in your tests.
The main problem with this approach is that you are mixing different responsibilities into your page objects and test classes. Your test classes should be all about performing a single atomic action and verifying its outcome using meaningful, well-written assertions.
Having WebDriver methods in tests is a really bad idea: if the Selenium WebDriver API changes between versions of the library, you have to make the corresponding changes in every one of your test files and page objects.
It’s always a good idea to wrap third-party libraries in your own abstractions, since this prevents these dependencies from leaking throughout your framework and is a way to write clean code (you can read more about clean code in my article or in this great Clean Code article by Corina Pip).
Instead, it is much better to write wrapper functions over the WebDriver methods in a BasePage class and either inherit from or compose it in all your page objects. This way, if anything changes, you make the change in a single place instead of fixing it in multiple places. This nicely adheres to the DRY (Don’t repeat yourself) principle, which is considered very good programming practice:
If you have WebDriver APIs in your test methods, You’re Doing It Wrong.
– Simon Stewart
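A rough sketch of what such a BasePage wrapper might look like (the page, method, and locator names are hypothetical, and a fake driver that records calls replaces a real WebDriver so the example is runnable):

```python
class BasePage:
    """Wraps driver calls so an API change is absorbed in one place."""

    def __init__(self, driver):
        self.driver = driver

    def click(self, locator):
        # Single place to later add waits, logging, or retries.
        self.driver.find_element(*locator).click()

    def type(self, locator, text):
        element = self.driver.find_element(*locator)
        element.clear()
        element.send_keys(text)


class SearchPage(BasePage):
    """A page object that uses only the wrappers, never the raw driver."""

    SEARCH_BOX = ("css selector", "#search")
    SEARCH_BUTTON = ("css selector", "#go")

    def search_for(self, term):
        self.type(self.SEARCH_BOX, term)
        self.click(self.SEARCH_BUTTON)


class FakeElement:
    def __init__(self, log, locator):
        self.log, self.locator = log, locator

    def click(self):
        self.log.append(("click", self.locator))

    def clear(self):
        self.log.append(("clear", self.locator))

    def send_keys(self, text):
        self.log.append(("send_keys", self.locator, text))


class FakeDriver:
    """Records every interaction so we can see what the page did."""

    def __init__(self):
        self.log = []

    def find_element(self, by, value):
        return FakeElement(self.log, (by, value))


driver = FakeDriver()
SearchPage(driver).search_for("selenium")
```

If the driver’s `find_element` signature or click semantics ever change, only `BasePage` needs updating; `SearchPage` and every other page object stay untouched.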
Writing Long E2E Tests with Multiple Actions and Assertions
When writing test automation, it is often quite convenient to model entire user journeys as long, winding E2E tests with multiple intermediate assertions. You could even argue that this means writing less code. However, in the long run these tests turn out to be a nightmare, for multiple reasons:
- They execute very slowly, since they run at the Web/Mobile UI level, which is inherently slow due to factors like browser startup, page rendering, and network latency.
- If an earlier assertion fails, none of the succeeding assertions are executed, leading to gaps in coverage.
- They are flaky and non-deterministic; when they fail, it is often for any of several different reasons, which makes them very hard to debug and maintain.
These are some of the reasons why this is a bad idea. So what can we do instead?
Prefer shorter, more targeted atomic tests that each verify a single aspect of your system under test.
The most obvious benefit of atomic tests is that they are easy to parallelize and deterministic in nature. When they fail, it is usually for a single reason, making them quite easy to debug and maintain.
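To make the contrast concrete, here is a sketch (with a hypothetical in-memory app standing in for a real UI plus API client, so it is runnable): instead of one long journey where the first failed assertion hides everything after it, each atomic test sets up its own state via a fast non-UI helper and verifies exactly one outcome.

```python
class FakeApp:
    """Hypothetical stand-in for a browser session plus an API client."""

    def __init__(self):
        self.user = None
        self.cart = []
        self.confirmed = False

    def log_in_via_api(self, name):
        # Fast setup through the API, not by clicking through the UI.
        self.user = name

    def seed_cart_via_api(self, name, items):
        self.user, self.cart = name, list(items)

    def add_to_cart(self, item):
        self.cart.append(item)

    def cart_count(self):
        return len(self.cart)

    def checkout(self):
        self.confirmed = bool(self.user and self.cart)

    def order_confirmed(self):
        return self.confirmed


# Atomic tests: independent setup, one assertion focus each, trivially
# parallelizable because they share no state.
def test_adding_item_updates_cart():
    app = FakeApp()
    app.log_in_via_api("alice")
    app.add_to_cart("book")
    assert app.cart_count() == 1


def test_checkout_confirms_order():
    app = FakeApp()
    app.seed_cart_via_api("alice", ["book"])
    app.checkout()
    assert app.order_confirmed()


test_adding_item_updates_cart()
test_checkout_confirms_order()
```

A checkout bug now fails only the checkout test, and the cart test still runs and reports its own result.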
Treating UI Automation as The One and Only Hammer for Every Nail
UI automation that works is quite wonderful to watch. One common mistake made by budding automation engineers (after learning the WebDriver API) is to try to use it for every use case they can think of.
What could have been a medium-sized test suite verifying essential functionality (that could only be verified through the UI) turns into a gigantic suite of UI automation that tests every possible piece of functionality through the UI 🤯
Such a huge suite is a debugging and maintenance nightmare and quickly reaches the point of diminishing returns, where you spend more time maintaining flaky tests than improving the quality of the product through the numerous other activities you could be doing.
I know, it’s not easy to let go of the temptation. But trust me, this is a hard choice that should be made sooner than later.
So what could we do?
Be more intentional and mindful about your automation. Treat UI automation as one part of your automation stack, not the only part. You could instead test more functionality via the API, or push some of these tests down into component-level tests instead of E2E tests.
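For instance, a business rule like a bulk-order discount does not need a browser at all; it can be verified with a direct call at the component level. This is a sketch with a hypothetical `quote` function (the rule, names, and numbers are invented for illustration):

```python
def quote(items):
    """Hypothetical pricing component: 10% off orders of 10+ units."""
    total_qty = sum(qty for _sku, qty in items)
    discount = 10 if total_qty >= 10 else 0
    return {"discount_percent": discount}


# Component-level tests: milliseconds to run, no browser, no flakiness.
def test_discount_applied_for_large_orders():
    assert quote([("book", 20)])["discount_percent"] == 10


def test_no_discount_for_small_orders():
    assert quote([("book", 2)])["discount_percent"] == 0


test_discount_applied_for_large_orders()
test_no_discount_for_small_orders()
```

A single UI test can still confirm the discount is displayed correctly, while the boundary cases live in fast tests like these.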
Believing Test Automation Can Replace Exploratory Testing
One common mistake that a lot of organizations make is to think that automated testing is the only flavor of testing needed to handle the scale of the organization.
You hear wild claims and calls to action to automate 100% of the manual test cases, as if that will magically translate into a high-quality product that is valuable to the customer.
Test automation, while a very powerful tool in the tool belt of a skilled tester, is not the only one. It is quite inefficient at creatively exploring the product and uncovering its flaws, and it is often inflexible: it cannot adapt on the fly to your product’s changing demands, whereas exploratory testing can.
Thus, instead of treating one as a replacement for the other, think of them as two amigos, two sides of the same coin: test automation takes care of the boring, repeatable steps and gives the team the fast feedback it needs, while exploratory testing uncovers areas of the system that have not yet been explored, feeding future automation efforts and finding bugs that are really hard to detect.
Are these the only mistakes an automation engineer can make? Heck no, I’m sure you have made some yourself. Why not share your experience with the community so that we can all learn from it and maybe avoid making these mistakes in the first place?
If you found this post useful, do share it with a friend or colleague. Until next time, Happy Testing/Coding ☮️