In this article, we’ll look at the top reasons and solutions for test flakiness. A flaky test produces different results across runs even though neither the test nor the application code has changed. We’ll investigate the outcomes to identify why the same test Passed and then Failed, or vice versa. These inconsistent results are problematic and delay our testing process.
Table of Contents
- What Is Test Flakiness
- Main Reasons For Test Flakiness
- How To Avoid/Reduce Test Flakiness Using TestProject AI Tools
Test flakiness is a concern many of us experience when viewing the results of a test in our projects. The unreliable Pass-Fail results provoke doubt towards the test and sometimes the development code. There’s uncertainty about the test itself: its test data, test steps, and so on. There’s also caution with the development code because we’re not sure whether it was thrown over the wall 🧱
From the opposite point of view, test flakiness could emerge from a different source. For example, the server can be one of the reasons for a flaky test. When we execute a test, a request travels to the server and the response travels all the way back.
Our browser sends a request to the server asking for all of the information needed to load the website. If the server’s performance is slow, then it takes a long time to respond.
Whether test flakiness is caused by the test, development code, or an outside source, it’s time-consuming to pinpoint why a test is flaky. However, we must perform an analysis and then establish a solution for the conflicting results.
There are many reasons for test flakiness. Here’s a list of the 5 main reasons for a flaky test:
- Internet Connection Speed
- Application DOM
- Dynamic Elements
- Device Variations and Configurations
- Asynchronous Events
The internet connection speed is a reason for test flakiness because an increased traffic load on the line slows down the connection and causes page elements to load slowly. The test may Fail with a NoSuchElementException since the page elements are not yet visible. Afterwards, the same test returns a Pass because a good internet connection speed leaves the elements visible in time.
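This is exactly the situation an explicit wait is designed for: instead of failing the moment an element is missing, the test polls for it until a timeout. Here’s a minimal, framework-free sketch of that polling idea in Python (the `ElementNotVisible` exception and `wait_for` helper are hypothetical names for illustration, not part of any real library):

```python
import time

class ElementNotVisible(Exception):
    """Raised when the element does not appear within the timeout (hypothetical)."""

def wait_for(find, timeout=10.0, poll=0.5):
    """Poll `find` until it returns a non-None value or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        element = find()
        if element is not None:
            return element
        time.sleep(poll)
    raise ElementNotVisible(f"element did not appear within {timeout}s")

# Simulate an element that only becomes visible after a short delay,
# as it would on a slow connection.
appears_at = time.monotonic() + 0.2
element = wait_for(lambda: "element" if time.monotonic() >= appears_at else None,
                   timeout=2.0, poll=0.05)
```

With this pattern, a slow connection costs a few extra polls instead of an immediate Fail, which is the same trade-off real explicit waits (such as Selenium’s `WebDriverWait`) make.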
The application document object model (DOM) represents a web page. Therefore, a significant change to the DOM also displays a change in the User Interface (UI). Depending on our test script, it can become flaky when there is an update to the DOM’s style, structure, and/or content 📃
Dynamic elements are challenging to handle. They remain on the web page, but their attribute values change each time the page reloads. The test returns a Fail result if it does not account for a dynamic element.
For example, in the following screenshots, the value for id changed from description_3c621b896e37 to description_03560d123649. The 1st test Passes without a problem, but the 2nd test Fails because description_3c621b896e37 is no longer the id value.
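One common way to survive a rotating id suffix like this is to match only the stable prefix of the attribute. Here’s a small sketch of that idea (`css_prefix_selector` is a hypothetical helper written for this article, not a TestProject API):

```python
# The id's stable prefix ("description_") survives page reloads, while the
# random suffix changes. Matching on the prefix keeps the locator stable.
def css_prefix_selector(attribute, prefix):
    """Build a CSS attribute selector matching any value that starts with `prefix`."""
    return f'[{attribute}^="{prefix}"]'

selector = css_prefix_selector("id", "description_")
# This selector matches both description_3c621b896e37 and description_03560d123649.
```

The `^=` operator is standard CSS attribute-selector syntax for “starts with”, so a selector like this works with any tool that accepts CSS locators.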
For mobile applications, there are many Android and iOS devices. Therefore, test flakiness happens with the variations and configurations. Some of the testing challenges are connection types, screen sizes, browser fragmentation, and execution speed.
It’s virtually impossible to escape test flakiness. Selenium, Appium, codeless technologies, functional testing, and record-and-playback tools all struggle with erratic results. In spite of the struggle, TestProject is a free platform that reduces test flakiness and secures our test process with the following tools:
Automation Assistant is an AI tool that “Analyzes each step and detect cases where an action didn’t reach its target goal and attempts to fix it automatically”. The benefit is to provide suggestions that form a stable and useful test. Automation Assistant aims to resolve the flakiness factor of False-Positives and False-Negatives.
- A False-Positive is an action that returns a Pass result when the test should Fail (a defect slips through undetected)
- A False-Negative is an action that returns a Fail result when the test should Pass (the failure comes from flakiness, not a real defect)
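As a rough illustration of the False-Positive case, consider how the strictness of an assertion changes the outcome. This is a hypothetical sketch (the `weak_check`/`strict_check` helpers and the "Welcome!" payload are invented for this example):

```python
# A weak assertion can Pass even when the application is broken (a False-Positive),
# whereas a stricter assertion on the same response correctly Fails.
def weak_check(response):
    # Passes for ANY non-empty response, including an error page.
    return bool(response)

def strict_check(response):
    # Passes only for the content the test actually expects.
    return response == "Welcome!"

error_page = "500 Internal Server Error"
broken_app_passes = weak_check(error_page)    # True: the defect goes undetected
broken_app_fails = strict_check(error_page)   # False: the strict check catches it
```

Tools like Automation Assistant aim to flag steps like the weak check above, where the action "passed" without actually reaching its target goal.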
Adaptive Wait is an AI tool that “Ensures all environment conditions are sufficient for an automation action to succeed”. It addresses several test flakiness reasons such as internet connection speed, device performance, and asynchronous events by automatically supervising page load deviations. As a result, we can trust our test without setting up wait statements between each test step 🐾
The following are several screenshots that explain and illustrate different ways to implement an Adaptive Wait in our test.
The Self Healing Technology is an AI tool that “Automatically constructs alternative locator strategies for use when others fail”. It deals with the most difficult testing scenarios such as dynamic elements, iFrames, and pop-ups. The process involves identifying an element with multiple locator strategies. If the primary locator strategy fails to find the element, a backup locator strategy will find the element.
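To make the fallback idea concrete, here’s a hypothetical Python sketch of the pattern; a dict stands in for the real DOM, and `find_with_fallback` is an invented helper, not TestProject’s actual implementation:

```python
# Sketch of the fallback idea behind self-healing locators: try the primary
# locator first, then each backup strategy until one matches.
def find_with_fallback(dom, locators):
    """Try each (strategy, value) locator in order; return the first element
    found along with the locator that matched."""
    for locator in locators:
        element = dom.get(locator)  # a dict lookup stands in for a real DOM query
        if element is not None:
            return element, locator
    raise LookupError("no locator strategy matched the element")

# The primary id locator broke after a UI change, so the xpath backup heals the step.
dom = {("xpath", "//button[text()='Submit']"): "<button>Submit</button>"}
element, used = find_with_fallback(dom, [("css", "#submit"),
                                         ("xpath", "//button[text()='Submit']")])
```

The key design choice is that the backup locators are recorded up front, while the element is still easy to find, so the healing step at execution time is just an ordered retry.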
The Self Healing tool works when recording and executing a test. Here are a few screenshots that demonstrate where locators are added/removed, how a better locator was found during recording, and a Self Healed test step.
Test flakiness is common when testing an application. It happens when a test alternates between Passing and Failing results without a code change. Automation endeavors and functional testing efforts experience this issue within the Waterfall methodology, Agile methodology, and/or CI/CD pipelines.
The main reasons for test flakiness include internet connection speed, application DOM, dynamic elements, device variations & configurations, and asynchronous events. There’s no need to be alarmed by the flakiness because TestProject maintains 3 tools to assist with our test pains 💪 Each tool involves AI, which makes it straightforward to create, execute, and manage our tests.
These were my 5 main reasons for a flaky test and how to solve them ✅ Share with me if you have any other reasons 🙏 Happy testing!