
7 Proven Methods to Increase Your Automation Code Quality

How to improve the Code Quality of your Automation Framework

I still remember the day I wrote, for the first time in my life, a few lines of code to automate launching a browser, opening a social media site and logging in to it. I felt elated and called some of my friends to tell them how cool and easy it was to just sit back, relax and watch webpages open up automatically without any intervention. This excited me so much that I went on to watch video after video on Test Automation to learn how to perform different operations with automation code. I also enrolled in some MOOCs and completed them quickly.

There was a feeling of accomplishment and a sense of having defeated the fear of coding which I had always carried. A confident voice inside me whispered – “Most things related to coding in Test Automation are easy and straightforward, and for the difficult ones there is Google. It’s fun.” Then, after completing some small individual POCs in my project, I got the opportunity to work on a Test Automation Framework Development team which was supporting multiple projects, and reality struck 🤯 I realized that, although I knew “how to write Automation Code”, I didn’t really know “how to write Good Quality Automation Code”. The main reason was that I rarely “saw” good quality code.

As a matter of fact, most of us “WRITE” bad code because most of us have “READ” bad code.

If you are in a Framework Development team consisting of good, experienced programmers, the consequence of not writing good quality code is that your Pull Requests will never get approved and merged. So I started spending more time reading their code than actually writing code myself. In the early stages, reading someone else’s code is pretty hard, until it turns into a habit. For me, reading code has been a habit ever since. I regularly read GitHub repositories and blogs of some amazing coders and learn from them.

In this article, I will go through some of the code quality principles which I learned in my journey. I always try to keep these principles in mind while coding, and over the years they have helped me produce good quality Test Automation code. Let’s get started 👉

How to Improve the Code Quality of your Automation Framework

  1. Writing Intention-Driven Code
  2. Preferring Clear Code to Clever Code
  3. Adding Comments describing the “Why” part and not the “What” part
  4. Avoiding Long Functions/Methods and using Meaningful Names at all Costs
  5. High Code Cohesion and Low Code Coupling
  6. Scheduling some time and Addressing Technical Debt
  7. Doing efficient Code Reviews
  8. Conclusion

Writing Intention-Driven Code

Most of the time, we start writing code in the IDE and only then (maybe) start to think about the “purpose” or “intent” of the code while writing it. By code-purpose or code-intent, I mean the activity that the particular piece of code was originally intended to do. The approach should be to have a clear understanding of the purpose beforehand, to think over that purpose deeply and to come up with some design/prototype/flow-chart. All of this should happen before we write the first line of code. The intention of what the code should do needs to be clearly captured in some form of design/prototype/flow-chart.

In my opinion, one of the best ways to drive this principle is to write a small “Unit Test” for that unit of automation code before writing the actual automation code. Writing a test first explicitly makes us think about the code and what it will do. This results in simple, minimalistic code with very little complexity. The higher the complexity, the lower the framework’s maintainability. We don’t want to put ourselves in an awkward situation in the future where, looking at the automation code, we say “Hey, the code is working and the tests are executing successfully, but we have no idea how it works”.
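As a minimal, hypothetical sketch of that test-first idea in Java with JUnit 5 – the class QueryParamEncoder and its encode method are assumptions invented for this example – the test is written first and pins down the intent before the utility exists:

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

import org.junit.jupiter.api.Test;

// Hypothetical example: the unit test is written FIRST and captures the intent.
class QueryParamEncoderTest {

    @Test
    void encodesSpacesInQueryValues() {
        // The expected value documents the intended behaviour before any code exists.
        assertEquals("q=test+automation", QueryParamEncoder.encode("q", "test automation"));
    }
}

// The minimal implementation is then written just to satisfy the test above.
class QueryParamEncoder {
    static String encode(String key, String value) {
        return key + "=" + URLEncoder.encode(value, StandardCharsets.UTF_8);
    }
}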

Preferring Clear Code to Clever Code

We always try hard to write clever and smart automation code, particularly when someone else is watching – most of you will agree with me on this. We want our code to be appreciated by others, and we never feel good when someone says “The code you wrote is so simple”. Even when no one is watching, it gives us immense joy to write those 12 lines of clever code to call an API or a database and perform some assertions, instead of putting in 5 lines of simple, minimalistic code. We congratulate ourselves: “We are amazing Test Automation developers because we write complex and clever automation code”.

This is one of the worst things we can do while working in a team. The main problem with being obsessed with clever and smart code is that the automation code becomes hard to read and hard to understand. None of our team members will appreciate that, even less so if they have to work on it to fix or add something. You see, the code should act as a medium of communication between developers, like a story. It should not be a “puzzle” for others to solve. A famous quote from the book “Structure and Interpretation of Computer Programs (SICP)” (by Professor Harold Abelson and Professor Gerald Jay Sussman) points exactly to this:

“Programs must be written for people to read, and only incidentally for machines to execute.”

Our automation code should be clean, clear, highly readable and easily understandable to our team members.
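As a made-up illustration (the class and method names are assumptions, not code from any real framework), the two equivalent Java snippets below show the difference: the “clever” version crams the logic into nested ternaries, while the “clear” version reads like a story.

class StatusMapper {

    // "Clever": nested ternaries that force the reader to unpick the logic.
    static String toLabelClever(int httpStatus) {
        return httpStatus >= 500 ? "SERVER_ERROR"
                : httpStatus >= 400 ? "CLIENT_ERROR"
                : httpStatus >= 300 ? "REDIRECT"
                : httpStatus >= 200 ? "SUCCESS" : "INFORMATIONAL";
    }

    // "Clear": the same mapping, written so the intent is obvious at a glance.
    static String toLabelClear(int httpStatus) {
        if (httpStatus >= 500) return "SERVER_ERROR";
        if (httpStatus >= 400) return "CLIENT_ERROR";
        if (httpStatus >= 300) return "REDIRECT";
        if (httpStatus >= 200) return "SUCCESS";
        return "INFORMATIONAL";
    }
}

Both methods behave identically; the second one simply costs the next reader far less effort.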

Adding Comments describing the “Why” part and not the “What” part

One of the many factors which has a direct impact on the clarity and understandability of our automation code is what we put in the comments. I have always enjoyed writing comments (wherever needed) in my code, but what changed over the years is the approach I take while writing them.

During my initial years of coding, I used to write comments focusing more on the “What” part of the code – what the classes/methods do, what inputs they take, what actions they perform, what other methods they call and so on. With experience, I realized that comments should describe the code’s purpose or constraints and should never be misused as a cover-up for bad, unreadable, complex code. The code itself should be self-documenting, with a clear message about what it is doing. Comments should focus more on the “Why” part.

For example – why some other automation utility method has been called, behaviors of the code that people would not expect, why it is written the way it is written although a better way seemed to exist, why multiple levels of abstraction have been used in the tests and so on. If we put ourselves in the shoes of someone who will be reading and running our automation code in the future, the task of adding good comments becomes easy.
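As a hypothetical Java sketch (the CheckoutPage class, the waitForSpinnerToDisappear helper and the ticket number are all assumptions for illustration), the first comment below merely restates the code, while the second records a constraint the reader could not guess from the code alone:

import org.openqa.selenium.WebElement;

// Hypothetical page object used only to contrast the two styles of comment.
class CheckoutPage {

    private final WebElement submitButton;

    CheckoutPage(WebElement submitButton) {
        this.submitButton = submitButton;
    }

    void submitOrder() {
        // "What" comment – it only repeats what the next line already says:
        // click the submit button
        submitButton.click();

        // "Why" comment – it records a constraint the code itself cannot express:
        // the payment gateway in our test environment sometimes renders the page
        // before the loading spinner is removed, so we wait for it explicitly
        // instead of relying on the implicit wait (hypothetical ticket FW-123).
        waitForSpinnerToDisappear();
    }

    private void waitForSpinnerToDisappear() {
        // Details omitted; assumed to poll until the spinner element disappears.
    }
}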

Avoiding Long Functions/Methods and using Meaningful Names at all Costs

A few years back, I created a desktop utility based on JavaFX which consisted of one long “void start(Stage stage)” method. The application was supposed to do many things, and I kept on writing them one after another inside that method. I compiled and ran the program and everything worked perfectly fine. Then one day I was asked to add a feature in between, and I realized what a mess I had created.

Long methods are “code smells” and, if already written, should be refactored as quickly as possible. Three design principles that long methods clearly violate are “DRY (Don’t Repeat Yourself)”, “SRP (Single Responsibility Principle)” and “SoC (Separation of Concerns)”. I will not go into the details of these principles, which I assume you are already aware of. On top of that, a long method decreases code readability, extensibility and maintainability, and refactoring and rewriting become hard. Creating Unit Tests that run through this kind of method is also difficult, and we may end up writing long Unit Tests, which defeats the purpose of writing Unit Tests in the first place.
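A small sketch of the idea, with made-up class and method names: instead of one long method that reads the data, formats it and writes the file, each responsibility gets its own small, well-named method, and the top-level method reads like a table of contents.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical example of splitting one long "do everything" method into
// small, well-named methods, each with a single responsibility.
class TestReportWriter {

    // The top-level method now reads like a table of contents.
    void generateReport(Path resultsFile, Path reportFile) throws IOException {
        String rawResults = readResults(resultsFile);
        String html = wrapInHtml(rawResults);
        writeReport(reportFile, html);
    }

    private String readResults(Path resultsFile) throws IOException {
        return Files.readString(resultsFile);
    }

    private String wrapInHtml(String rawResults) {
        return "<html><body><pre>" + rawResults + "</pre></body></html>";
    }

    private void writeReport(Path reportFile, String html) throws IOException {
        Files.writeString(reportFile, html);
    }
}

Each small method can now be understood, changed and unit-tested on its own, which is exactly what the long original method prevented.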

High Code Cohesion and Low Code Coupling

I became aware of these two factors affecting code quality at a much later stage of my coding journey. Our test automation code should always be “Highly Cohesive and Loosely Coupled”. What do these terms mean in a coding context?

“Cohesion” means how well all the elements inside a class/package/module are functionally united as a whole. Related code should be kept as close together as possible, and all of it together should provide the functionality that the class/module/package was created to deliver. Low cohesion means that the code written to perform a particular task is spread across the whole codebase, which makes it difficult to maintain and refactor and reduces developer productivity. Examples of low cohesion are:

  • An automation utility class which pulls resources from many other classes to do something.
  • Usage of a method inside a class which is not related to that class at all.

From an OOP point of view, high cohesion can be thought of as how closely an object’s parts are related to the object’s behavior.
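As a hypothetical sketch of high cohesion (all names below are assumptions for illustration), the LoginPage class keeps everything needed for one responsibility in one place: it contains only login-related elements and actions, and anything about, say, report formatting or database cleanup would belong elsewhere.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical, highly cohesive page object: every field and method here
// exists for one purpose only – interacting with the login page.
class LoginPage {

    private final WebDriver driver;
    private final By usernameField = By.id("username");
    private final By passwordField = By.id("password");
    private final By loginButton = By.id("login");

    LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    void loginAs(String username, String password) {
        driver.findElement(usernameField).sendKeys(username);
        driver.findElement(passwordField).sendKeys(password);
        driver.findElement(loginButton).click();
    }
}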

On the other hand, “Coupling” means how inter-dependent different classes/packages/modules are on each other, and is a measure of the strength of their relationships. For an automation framework to be highly maintainable and scalable, coupling should be as “low” (weak/loose) as possible. If coupling is “high” (strong/tight), then any change in a particular class/module/package, due to a bug fix or a feature addition, can have a ripple effect across the whole codebase, which directly impacts the time and effort the engineers have to spend on it. Reusability also becomes very hard, and duplication of code (a violation of the DRY principle) creeps in throughout the framework.
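And a minimal sketch of loose coupling, again with made-up names: the check depends only on a small ApiClient interface, so the concrete HTTP implementation can be swapped (for example, for a stub in unit tests) without touching the calling code.

// Hypothetical interface: tests depend on this abstraction, not on a concrete client.
interface ApiClient {
    String getOrderStatus(String orderId);
}

// One possible implementation; it could be replaced without changing the check below.
class HttpApiClient implements ApiClient {
    @Override
    public String getOrderStatus(String orderId) {
        // Assumed to perform a real HTTP call in the actual framework.
        return "SHIPPED";
    }
}

// Loosely coupled: it receives any ApiClient and never references the concrete
// class, so swapping or stubbing the client has no ripple effect here.
class OrderStatusCheck {

    private final ApiClient apiClient;

    OrderStatusCheck(ApiClient apiClient) {
        this.apiClient = apiClient;
    }

    boolean isShipped(String orderId) {
        return "SHIPPED".equals(apiClient.getOrderStatus(orderId));
    }
}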

Scheduling some time and Addressing Technical Debt

Addressing Technical Debt usually takes a back seat when Sprint planning happens and is seldom included in the Definition of Done. But before discussing why it is important to address it quickly and regularly, let’s first break it down and understand what “Technical Debt” actually means to us.

It is a metaphorical term that helps us think about how the accumulation of technical issues impacts business delivery across product releases. To be more precise, it is a concept which reflects the cost of the additional rework caused by choosing an easy solution that can be implemented quickly over a better approach (mostly due to time constraints).

The first step to addressing technical debt in your automation framework is to “identify” the problems causing it. Most developers are not aware of exactly which parts of the framework are adding to the Technical Debt. This requires taking some time out and collaborating with each other to find the root causes and list them down. Even when the issues are identified, the next challenge is scheduling some time per sprint to address them. This requires active initiative, involvement and dedication from the team members to rewrite, refactor or remove the identified underlying problems. Otherwise, the same problems will keep piling up and, over a period of time, lead to the end of the product.

Also, “paying off” Technical Debt regularly has a direct impact on how the framework evolves over time. Suppose that, long ago, a developer implemented the functionality of passing parameters from the framework to Jenkins using a long, complex, almost unreadable method that performs several convoluted parsing steps before passing those parameters. Though the code smells, since the functionality works fine the team never refactors the method to make it more readable, self-documenting, understandable, simple and well designed. Then, in a particular sprint, changes to the parsing logic need to be made. What happens? A lot of time is spent just understanding the method, checking its dependencies and changing things. This may solve the problem for the time being, but not addressing the core problem will only make work harder for the team in future sprints.
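Purely as a hypothetical sketch of paying off that kind of debt (the class name, the configuration format and the "-p key=value" argument style are all assumptions, not taken from any real framework), the convoluted parsing could be broken into a small class whose steps are individually readable and unit-testable:

import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical refactoring target: instead of one unreadable method that parses
// and concatenates everything, each step is small, named and easy to change.
class JenkinsParameterBuilder {

    // Turns raw "key=value;key2=value2" configuration into parameter arguments.
    String buildParameterArguments(String rawConfig) {
        Map<String, String> parameters = parseKeyValuePairs(rawConfig);
        return toCliArguments(parameters);
    }

    private Map<String, String> parseKeyValuePairs(String rawConfig) {
        Map<String, String> parameters = new LinkedHashMap<>();
        for (String pair : rawConfig.split(";")) {
            String[] parts = pair.split("=", 2);
            if (parts.length == 2) {
                parameters.put(parts[0].trim(), parts[1].trim());
            }
        }
        return parameters;
    }

    private String toCliArguments(Map<String, String> parameters) {
        StringBuilder arguments = new StringBuilder();
        parameters.forEach((key, value) ->
                arguments.append(" -p ").append(key).append("=").append(value));
        return arguments.toString().trim();
    }
}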

Doing efficient Code Reviews

We love our own code and we pamper it. We usually don’t feel that our own code can be improved, but when it comes to reviewing someone else’s code, we are always quick to point out issues. Experienced, good Test Automation developers try to overcome this bias. No code can be perfect, but its quality can definitely be improved by doing efficient code reviews with the other team members.

Code Review should not be seen as merely a “task” of going through each other’s code; it should also act as an opportunity for the team members to collaborate and understand the underlying problems and design decisions. In my work, I always prefer small, incremental commits and Pull Requests for review. I have noticed that if you put a large amount of code up for review, most things get overlooked by the reviewer, whereas smaller commits attract good code reviews with clearly explained review comments to act upon.

Conclusion

Having worked on multiple Test Automation frameworks myself and being in touch with many Automation framework developers across the industry, one thing I can definitely say is that most of these principles are not taken into consideration or given serious thought, maybe due to stringent project timelines or due to reluctance. Having said that, what I can say from my experience is that these are not just hypothetical concepts but proven methods to increase code quality. You can definitely take the help of Static Code Analysis tools or Linters to assist you, but at the same time you should always be looking to improve how you write your own code if you want to make Test Automation a success in your project.


About the author

Sumon Dey

Sumon is a Senior Software Engineer with expertise in Java, Python, JavaScript and DevOps. He has worked on multiple products across multiple domains, including Communication and Media Technology, Retail, Insurance and Banking. He enjoys solving problems and delivering high-quality products. With all of the work that he does, his goal is to engineer and deliver valuable software that keeps up with the latest trends in technology.

In his career, Sumon has written many tech articles for various international magazines and platforms and has a keen interest in Data Science and Machine Learning. He has great admiration for the Open Source Software community. As for his future goals, he would especially like to work on a product which utilizes Artificial Intelligence. In his spare time, he loves to write blogs and articles on various topics, ranging from programming, tools and technologies to actionable tips, techniques and best practices, on his personal website (http://www.sumondey.com) for the tech community. To him, learning new technologies, coding and writing are a passion. When not at work, he enjoys spending time with his family, reading books, cooking, running or watching soccer.

You can connect with him on Twitter (@blackrov2sum) or LinkedIn.
