
Adapting Recorded Test Cases to Improve Your Test Automation

A lot of test automation tools include a record and playback feature (I wrote about the benefits of this feature in a previous article). Record and playback enables user actions to be recorded as they are carried out on an application. These recordings are played back when the test is run. The code required to run these tests is automatically generated.

It is very difficult to create reliable test cases using recordings alone. In this article, we will demonstrate a few ways to improve recorded test cases. 

Separating a single recorded test into several smaller recordings

Let’s say we want to automate a test that runs through the following steps:

  1. Open LoginPage
  2. Type ‘User1’ into LoginNameTextbox
  3. Type ‘User1Password’ into PasswordTextbox
  4. Click OkButton on LoginPage
  5. Verify login is successful

It may be tempting to have all these steps saved in a single recording. Doing so allows the instant creation of an entire test case. 

However, over time there are likely to be changes to the software, which may affect the running of the test case. For example, changing the way the login credentials are submitted should only affect step 4. But this small change would require the entire test case to be recorded again.

Separating the test case into a series of smaller recordings makes the overall test case more maintainable. Most changes should only require a small section of the test case to be recorded again. In the example above, only step 4 would have to be recorded again; the remaining steps would not require any updates.

Some test automation frameworks allow the user to separate out certain steps into alternative recordings. Therefore, recording entire test cases in one go is still an option as these steps can be separated out afterwards. During initial development, this method may be quicker than recording each individual step separately.
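As a rough sketch of this idea, each recorded step could become its own generated method, so that re-recording one step leaves the others untouched. All names below are hypothetical, and the UI actions of a real tool are simulated here with a simple log:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: one method per recording. If the submit behaviour
// changes, only ClickOkButton needs to be re-recorded.
class LoginTest
{
    static readonly List<string> Log = new List<string>();

    static void OpenLoginPage()            { Log.Add("open LoginPage"); }
    static void TypeLoginName(string name) { Log.Add("type name: " + name); }
    static void TypePassword(string pw)    { Log.Add("type password"); }
    static void ClickOkButton()            { Log.Add("click OkButton"); }
    static bool VerifyLoginSuccessful()    { return Log.Contains("click OkButton"); }

    static void Main()
    {
        OpenLoginPage();
        TypeLoginName("User1");
        TypePassword("User1Password");
        ClickOkButton();
        Console.WriteLine(VerifyLoginSuccessful() ? "PASS" : "FAIL");
    }
}
```

If the way credentials are submitted changes, only the body of ClickOkButton needs regenerating; the test case itself stays the same.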

Reusable recordings

By separating out individual recordings, we are able to reuse the recording on other tests.

In this example, we have a test that checks a user cannot login with an incorrect password. We have created this test using existing recordings; the only new recordings are steps 3 and 5.

  1. Open LoginPage
  2. Type ‘User1’ into LoginNameTextbox
  3. Type ‘IncorrectPassword’ into PasswordTextbox
  4. Click OkButton on LoginPage
  5. Verify InvalidPasswordMessage appears

From these recordings, we could create more test cases covering the same functionality. Other tests could include:

  • Submitting a blank password
  • Submitting a blank username
  • Clicking different buttons on the login page (a Cancel button, for example)
  • Attempting to login with a deleted or disabled account

All these tests share similar steps to the original test cases, requiring only a few extra recordings. All these extra test cases can be created quickly thanks to previous work done on earlier test cases.

Another advantage of this is that a change made to a recording will apply to all test cases that use that recording. This helps reduce the overall maintenance cost.

Hard-coded variables 

All the code needed to play back a recording is generated automatically once the recording has been made. In many cases, this generated code uses hard-coded variables: the input data is embedded directly into the code. As a result, if several test cases need a different value for the same variable, a separate recording has to be created for each one. By adapting the recording so that alternative values can be assigned to a variable, the same recording can be used for every test case.

For example, in the previous examples, we have 2 separate recordings for typing in a password:

Type ‘User1Password’ into PasswordTextbox
Type ‘IncorrectPassword’ into PasswordTextbox

A single recording that allows different inputs could be used instead. The code might look something like this:

string password = "User1Password";

Here, the password value has been hard-coded, making the recording difficult to reuse in other tests. The code can be modified so that alternative values can be used:

string password = newPassword;

Now we only need a single recording for entering text into the password textbox. The variable ‘newPassword’ will need to be set externally within the test case itself instead of the recording. The same recording can be used in multiple test cases. Each test case will have a different value assigned for that recording. 
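As a minimal sketch of this (TypePassword and the calling code are illustrative, not real generated code), the parameterised recording might look like this:

```csharp
using System;

// Sketch: the recorded step takes the password as a parameter,
// so one recording serves both the valid- and invalid-password tests.
class PasswordRecording
{
    static string TypePassword(string newPassword)
    {
        string password = newPassword; // was: string password = "User1Password";
        return password;               // a real recording would type this into PasswordTextbox
    }

    static void Main()
    {
        Console.WriteLine(TypePassword("User1Password"));     // successful-login test case
        Console.WriteLine(TypePassword("IncorrectPassword")); // invalid-password test case
    }
}
```

Each test case supplies its own value when it calls the recording, so the two password recordings collapse into one.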

Assigning unique or random values to variables in the code

Setting the parameter value in advance is not always the best approach. Sometimes a value must be unique. When creating a new user account, for example, the new login name must not already exist. A predefined login name, however, will already exist the second time the test is run, so predefined values cannot be used here.

To make a value unique, a date stamp can be included: the current date and time will never occur again. In the following example, concatenating a preset username with the current date and time gives a unique username:

string username = newUsername + DateTime.Now.ToString("yyyyMMddHHmmss");

Another issue with setting parameters in advance is that the test will always run with that specific value. If we run the same test with the same value assigned to a variable, we are only confirming that the test passes with that setting. Generating a random value can vary the inputs used in the test.
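Here is a small sketch combining both ideas, assuming a compact timestamp format and a four-digit random suffix (both choices are illustrative, not prescribed):

```csharp
using System;

// Sketch: a timestamp makes the username unique across runs;
// a random suffix varies the input on every run.
class VariedInputs
{
    static void Main()
    {
        string newUsername = "User";

        // Unique: a compact timestamp format avoids spaces and slashes in the name.
        string uniqueUsername = newUsername + DateTime.Now.ToString("yyyyMMddHHmmss");

        // Random: a different four-digit suffix on each run.
        var rng = new Random();
        string randomUsername = newUsername + rng.Next(1000, 10000);

        Console.WriteLine(uniqueUsername.StartsWith("User"));   // True
        Console.WriteLine(randomUsername.Length == 8);          // True
    }
}
```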

Exception Handling

It is always a nightmare when something unexpected happens while running a test. I’ve quoted Bas Dijkstra previously, and I’ll quote him again, because he said it so well:

“[When something unexpected happens, the automation] simply says ‘f*ck you’ and exits with a failure or exception, leaving you with no feedback at all about the behaviour to be verified in steps 11 through 50.”

Exception handling is one method that allows the test to fail in a more graceful way. What I mean by graceful is that instead of ‘f*ck you’ it calmly says ‘oh dear, something went wrong’, reports a failure and then moves on with the next step or test.

It can be hard to anticipate failure. After all, if we knew what was going to go wrong, we wouldn’t be running the test in the first place. However, exception handling can improve the way the test recovers from potential failure. The relevant code is wrapped in a try catch statement: the try block contains the steps to run, and the catch block tells the program what to do if something goes wrong while running them.

In the example below, we’ve wrapped the mouse click step in a try catch statement. The test will attempt to click on the login name textbox. The catch section runs if the textbox cannot be found. That step fails, but the remaining steps in the test can still be run.

try {
    Mouse.Click(LoginNameTextbox);
} catch (Exception) {
    //Report test failed because LoginNameTextbox could not be found
}

Other code adjustments

Other changes that can improve the recorded test cases include:

  • Converting variables to different data types: Generally, with automatically generated code, the string data type is used by default. A string is a sequence of characters. In some cases, converting the string to a more appropriate data type (an integer or a date, for example) makes the test case easier to work with.
  • Extra reporting: The test report will contain a list of all actions and verifications that took place during the test. Additional code can be included so more information about the state of the software is reported.
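To illustrate the first point, here is a small sketch (the recorded value and the comparison are invented for illustration) showing why a captured string may need converting to a number:

```csharp
using System;

// Sketch: recorded code often captures values as strings. Converting to
// an int makes comparisons numeric rather than character-by-character.
class TypeConversion
{
    static void Main()
    {
        string recordedQuantity = "12";             // as captured by the recorder
        int quantity = int.Parse(recordedQuantity);

        // As strings, "9" sorts after "12" (character-by-character comparison);
        // as ints, the check behaves the way a tester expects.
        Console.WriteLine(string.CompareOrdinal("9", "12") > 0); // True
        Console.WriteLine(quantity > 9);                         // True
    }
}
```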


Record and playback features can be incredibly useful in the initial development of automated test scripts. Simple test cases can be created really quickly. Complex test cases can also be created by adapting the code generated while recording the actions. 

We should not rely solely on this method. It is important to adapt the code to improve the reliability and maintainability of the tests, and allow the tests to recover gracefully from failure.


What are your thoughts on record and playback? Let me know in the comments below  🙂 

About the author

Louise Gibbs

Louise works as a Senior QA Analyst at MandM Direct, an online sportswear and fashion retailer. Her main involvement has been testing updates made to the website and checkout, and developing the automated test suite.

Before this, she worked at Malvern Panalytical, a company that developed scientific instruments for a variety of industries, most notably pharmaceuticals. She was involved in testing the software used to carry out automated particle imaging and Raman spectroscopy to identify particles in a mixed sample.

Louise graduated from Bangor University with a degree in Computer Science for Business. Her first job after university was as a software tester for Loyalty Logistix, a company that produced web, mobile and desktop applications that allowed members of the automotive industry to run loyalty schemes for customers.
