
Now That You’ve Created Automated Tests, Run Them!

In my previous article, we learned how and why it’s important to write our tests in small increments. So now you’ve created your automated tests, and there are quite a lot of them. There was a good reason for creating them; therefore, there is no good reason for keeping them in ‘storage’ and never running them 🏃‍♂️

It happens a lot that once an automated test is written, it is immediately abandoned and never run. Many teams work in an Agile environment, so they allocate time each sprint to creating automated tests.

Automated tests only run on the tester’s machine

For example, one might create these tests in order to validate a feature being developed during that sprint. A lot of time is spent writing the automation, and hopefully the result is a set of reliable tests that, once run, correctly validate the feature they are dedicated to. As long as the feature exists, these tests could keep running and validating it.

Because teams work in Agile, the feature is usually finished from a development standpoint sometime during a sprint. The tests are run on a dedicated test environment during that sprint, and many times they are run from the tester’s machine, not from a CI/CD tool like Jenkins. When the tests run successfully, the feature is considered ready to be released ✅

There may be one or two other occasions when the tests are run, this time in a different, production-like environment. These runs, too, will happen from the tester’s machine.

But then, once the feature goes into production, the tests might never be run again: either the testers are reluctant to run them in production, or they are not allowed to run them in their current form, in both cases for fear of breaking something.

So, all the time spent creating these tests is wasted on a total of 2-3 test runs, and even those are done from a local machine instead of CI. What is going on here? Why are the tests not run periodically, since they are available anyway once created? Or, why was so much time spent on creating them in the first place?

The answer to this last question might be: because we have to. Management pushes us to create automated tests even for small or low-priority features, just to add some numbers to a statistic: ‘We need to automate everything’.

Or, maybe, it’s just that we are so used to automating things that we don’t even stop to wonder: is there value in automating tests for this particular feature? Sometimes the answer is no. We need to recognize that and spend our time better. If a test is not worth automating, maybe we should just write a ‘manual’ test case for it instead, and manually test the feature at the stages I mentioned earlier.

How can the tests bring more value?

Maybe the answer to the previous question is that the time was not wasted after all. Now that we have the tests, and we have decided they do bring value, we can actually include them in our CI/CD pipeline.

After all, you want to make sure the feature you just released to production keeps working in the future, even when updates are made to it, or when updates made elsewhere might affect it. Having automated tests gives us exactly this: an automated way of making sure the feature works after any commit.

Having a tool like Jenkins allows us to schedule the tests to run without us having to intervene. You simply need to decide how often and when you want them to run.

The Jenkins scheduler will then start the test run, and the only thing you need to do is analyze the results. If it’s enough to run these tests once a week, you can schedule them to run during the weekends. This way, on Monday, you can check the results 📊
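As a minimal sketch, assuming a declarative Jenkinsfile and a Maven-based test suite (the stage name and the command are just placeholders), a weekend schedule could look like this:

    pipeline {
        agent any
        triggers {
            // 'H' lets Jenkins pick the exact minute to spread the load;
            // '6' in the day-of-week field means Saturday
            cron('H 3 * * 6')
        }
        stages {
            stage('Run automated tests') {
                steps {
                    // Placeholder command: run the whole Maven test suite
                    sh 'mvn clean test'
                }
            }
        }
    }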

Or, if the tests are so important that running them after each deployment to a test environment is required, that is what you can set up. Nightly test runs can also be achieved, as sketched below.
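In Jenkins, both of these cases are just different triggers on the same job. A sketch of the triggers block, assuming a hypothetical deployment job named deploy-to-test-env (this would replace the triggers section of the pipeline above):

    triggers {
        // Nightly run: every day, at a Jenkins-chosen minute around 2 AM
        cron('H 2 * * *')
        // Run after every successful build of the (hypothetical) deployment job
        upstream(upstreamProjects: 'deploy-to-test-env',
                 threshold: hudson.model.Result.SUCCESS)
    }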

Basically, whatever schedule makes sense is a schedule you can create in Jenkins. This way, you get fast feedback when the feature stops working properly and fixes are required, instead of finding out the feature is broken right before (or after) the next release.

I keep saying that we spend a lot of time writing automated tests. And I expect this is the case, since doing it properly involves the following steps:

  • Analysis of what the test steps are and how to approach the automation
  • Writing the initial implementation (using coding best practices)
  • Running the test to make sure it is reliable
  • Making any tweaks required to make the test reliable
  • Sending the code to code review and making any updates if required
  • Rerunning the tests to validate the reliability
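To make the ‘tweaks for reliability’ step concrete: a very common fix is replacing fixed sleeps with explicit waits. Here is a minimal Java sketch using Selenium and TestNG, in the spirit of the stack mentioned in this article (the URL and the element id are hypothetical):

    import java.time.Duration;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;
    import org.testng.annotations.Test;
    import static org.testng.Assert.assertTrue;

    public class CheckoutButtonTest {
        @Test
        public void checkoutButtonIsDisplayed() {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://shop.example.com/cart"); // hypothetical URL
                // Explicit wait instead of Thread.sleep: polls until the element
                // is visible (or the timeout expires), so the test no longer
                // depends on the page always loading in a fixed amount of time
                WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
                wait.until(ExpectedConditions.visibilityOfElementLocated(
                        By.id("checkout"))); // hypothetical element id
                assertTrue(driver.findElement(By.id("checkout")).isDisplayed());
            } finally {
                driver.quit();
            }
        }
    }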

Yes, we do need to spend time writing good tests. Otherwise, unreliability will be yet another reason for the tests to be abandoned as soon as the feature goes to production. Who wants to run tests whose results are unclear? Now the same test passes, now it doesn’t, yet the code it is testing has not changed. How many times can you rerun an unreliable test before decommissioning it?
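If you do decide to tolerate a bounded number of reruns while the flakiness is being fixed, TestNG lets you cap them with a retry analyzer. A minimal sketch (the limit of 2 is an arbitrary choice, and this is a stopgap, not a substitute for fixing the test):

    import org.testng.IRetryAnalyzer;
    import org.testng.ITestResult;

    public class RetryAnalyzer implements IRetryAnalyzer {
        private static final int MAX_RETRIES = 2; // arbitrary cap
        private int attempts = 0;

        @Override
        public boolean retry(ITestResult result) {
            // Returning true tells TestNG to rerun the failed test
            return attempts++ < MAX_RETRIES;
        }
    }

    // Usage on a test method: @Test(retryAnalyzer = RetryAnalyzer.class)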

Conclusion

So, if you have automated tests, don’t hesitate to run them, and run them a lot. And if they are unreliable, fix them, then run them. Make sure you are properly validating even those features you have already released to production 💪


About the author

Corina Pip

Corina is a Test & Automation Lead, with a focus on testing by means of Java, Selenium, TestNG, Spring, Maven, and other cool frameworks and tools. Previous endeavours from her 11+ year testing career include working on navigation devices, in the online gaming industry, and in the aviation and automotive software industries.
Apart from work, Corina is a testing blogger (https://imalittletester.com/) and a GitHub contributor (https://github.com/iamalittletester).
She is the creator of a wait based library for Selenium testing (https://github.com/iamalittletester/thewaiter) and creator of “The Little Tester” comic series (https://imalittletester.com/category/comics/). She also tweets at @imalittletester.

