A/B testing is a user experience research methodology consisting of randomized experiments with two variants, A and B. It is a popular pattern in software development for determining the best fit of features, UX, and messaging for users. Organizations such as Google, Facebook, Instagram, WhatsApp, LinkedIn, WIX, YouTube, and many others apply A/B testing on a daily basis, rolling out experimental features based on audience segments and user personas 👨👩👧👦 The goal is to adjust the same application for multiple user personas or use cases, increase engagement, and determine the best fit for the user audience.
A variety of factors can be considered for such testing experiments: user role, geolocation, age, gender, technical skills, etc. In some cases, the same application features (text, layout, or even functional flows) will differ completely between variants. In other cases, some features won’t be available to particular user audiences.
For test engineers, features under A/B testing experiments can be extremely challenging to test, since each variant must be treated as a separate application flow, and testers need to make sure each flow behaves as expected. For instance, the login process can have different variants for different users, so testers will need to cover all possible variations of a single process: the login.
Test automation can be very useful for A/B testing and checking flow variations: by automatically executing multiple variants and flows, it helps test engineers increase the coverage of tested scenarios per user persona. That being said, there is a hidden challenge here: how can you make sure the same automation script behaves differently based on different user personas? 🤔
Another question raised here relates to ROI: since A/B testing experiments don’t reside in the application forever, is it even worth investing in automating such flows in the first place? 🙇♀️
I’ve been reflecting on these questions for a while now, and to try and address them, I decided to take up the challenge of creating a single automated test script using TestProject’s AI-powered test recorder (which does not require any coding experience). I’ll create one automated test, without a single line of code, that supports two different flows based on different user personas, perfectly simulating A/B testing scenarios as well.
For this example, I used the Orange HRM demo web application that is publicly available here. I will cover two user personas within the same test, based on the application’s functionality. After the login process, we will see that some of the application’s optional functionality differs for each persona (we’ll even see a different UI for each user type) 👀
The Test Method for A/B Testing
- Record a single test using TestProject’s AI-powered test recorder.
- Parametrize user login credentials of UserName and Password, making our test data-driven (DDT).
- Record common steps that are valid for both user personas (admin/regular user). In our case, we will navigate to the “My Info” section and edit/save a new nickname.
- For the Admin user, I’m going to add a conditional step that will run only if the Admin section appears on the page (utilizing the “Get text if visible” action), following these steps:
- Create a parameter IsAdmin with default value “false”.
- Read the admin text label with the “Get text if visible” action, only if the Admin section is visible.
- Assign the admin text to the IsAdmin parameter.
- Add a condition to the 8 admin-related steps (execute only if IsAdmin contains “Admin”).
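The recorder itself needs no code, but the conditional logic of these steps can be sketched in plain Python for clarity. This is a minimal sketch under stated assumptions: `get_text_if_visible` is a hypothetical helper mimicking TestProject’s “Get text if visible” action (returns an element’s text only when it is visible, otherwise nothing), and the page is stubbed as a simple dict rather than a real browser session.

```python
# Hypothetical stand-in for TestProject's "Get text if visible" action.
def get_text_if_visible(page, element_id):
    """Return the element's text only if it exists and is visible."""
    element = page.get(element_id)
    if element is not None and element.get("visible"):
        return element["text"]
    return None

def should_run_admin_steps(page):
    """Decide whether the 8 admin-only steps should execute for this page."""
    is_admin = "false"  # parameter IsAdmin with default value "false"
    admin_text = get_text_if_visible(page, "admin-section-label")
    if admin_text is not None:
        is_admin = admin_text  # assign the admin label text to IsAdmin
    # Each admin-related step runs only if IsAdmin contains "Admin"
    return "Admin" in is_admin

# Stubbed pages for the two personas:
admin_page = {"admin-section-label": {"visible": True, "text": "Admin"}}
standard_page = {}  # the Admin section is absent for a standard user

print(should_run_admin_steps(admin_page))     # True
print(should_run_admin_steps(standard_page))  # False
```

The same pattern generalizes to any conditional step: read a marker element if visible, store the result in a parameter, and gate later steps on that parameter’s value.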
Below we’ll take a look at the application’s UI and functionality screens, as well as what our test case looks like.
Application UI and Functionality – Admin User Persona
Application UI and Functionality – Standard User Persona
Here is what my test case looks like with the common scenario test steps, recorded using TestProject’s recorder:
Below is the same test case, but with the hardcoded values replaced by parameters:
In order to create our data-driven test, let’s generate a CSV data source file as seen below:
Now let’s add the user credentials:
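For illustration, the populated data source might look something like this. The column names match the UserName and Password parameters from the test; the credential values shown are placeholders, not real ones.

```csv
UserName,Password
Admin,<admin-password>
standard.user,<standard-user-password>
```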
Next, we simply need to upload the CSV file back to the TestProject platform:
Now, let’s Run the test with our uploaded data set:
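Conceptually, the data-driven run executes the recorded test once per CSV row, injecting each row’s values into the parameterized steps. Here is a hedged sketch of that loop in plain Python; the inline data and the `run_login_flow` stub are hypothetical placeholders for the uploaded CSV and the recorded login steps.

```python
import csv
import io

# Hypothetical inline stand-in for the uploaded CSV data source.
DATA_SOURCE = """UserName,Password
Admin,admin-password
standard.user,user-password
"""

def run_login_flow(username, password):
    """Stub for the recorded login steps; just echoes which persona ran."""
    return f"logged in as {username}"

# Data-driven execution: one test run per CSV row.
results = [run_login_flow(row["UserName"], row["Password"])
           for row in csv.DictReader(io.StringIO(DATA_SOURCE))]
print(results)  # one entry per data row
```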
The Entire A/B Testing Test Creation with HTML Test Reports
Below you can watch all of these steps in a short 12-minute video demonstrating the entire process of A/B test creation, including examining the test execution reports 💪📊
As demonstrated in the video above, it took me less than 15 minutes to create a single, robust automated test case that tests two different flows (based on the user flow and persona).
- 14 steps were executed for the standard user persona.
- 22 steps were executed for the admin user persona.
I used the “Get text if visible” action, which reads the text from an element only if it is visible (for the admin user, the Admin section was indeed visible). Then I assigned its content to a parameter I had created with the default value false, and from there on the rest was pretty easy 🚀 I recorded another 8 steps that are applicable only to admins, and added a condition to each step so it runs only if IsAdmin contains Admin.
This method can be used for A/B testing, ads, or any other conditional steps you might have. The secret here is to test the flows that are “automation ready”, and not to chase 100% automation coverage, which we all know is a myth…
Still, having some automation help in the mix is better than no automation at all, just use it wisely! 😉