
Top Reasons & Solutions For Test Flakiness

In this article, we’ll look at the top reasons and solutions for test flakiness. A flaky test produces different results even though nothing in the code has changed. We’ll investigate the outcomes to identify why the same test Passed and then Failed, or vice versa. These inconsistent results are problematic and delay our testing process.


What Is Test Flakiness

Test flakiness is a concern many of us experience when viewing test results in our projects. The unreliable Pass-Fail results provoke doubt towards the test, and sometimes towards the development code. There’s uncertainty regarding the test itself due to test data, test steps, etc. There’s also caution about the development code because we’re not sure if it was thrown over the wall 🧱

From the opposite point of view, test flakiness could emerge from a different source. For example, the server can be one of the reasons for a flaky test. When executing our test, the browser sends a request to the server asking for all of the information the website needs to load, and the response travels all the way back. If the server’s performance is slow, the page takes a long time to load.

Whether test flakiness is caused by the test, development code, or an outside source, it’s time-consuming to pinpoint why a test is flaky. However, we must perform an analysis and then establish a solution for the conflicting results.

Main Reasons For Test Flakiness

There are many reasons for test flakiness. Here’s a list of the 5 main reasons for a flaky test:

  1. Internet Connection Speed
  2. Application DOM
  3. Dynamic Elements
  4. Device Variations and Configurations
  5. Asynchronous Events

The internet connection speed is a reason for test flakiness because an increased traffic load on a line slows down the connection and causes page elements to load slowly. The test may Fail with a NoSuchElementException since the page elements are not yet visible. Afterwards, the same test returns a Pass because a good internet connection speed made the elements visible in time.
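One common way to soften this failure mode is to poll for the element instead of failing on the first lookup. Here’s a minimal Python sketch of the idea (not TestProject’s implementation; the `find_login_button` page simulation is hypothetical):

```python
import time

class NoSuchElementException(Exception):
    """Raised when an element is not present on the page."""

def wait_for_element(find_element, timeout=10.0, poll_interval=0.5):
    """Retry a lookup until it succeeds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            return find_element()
        except NoSuchElementException:
            if time.monotonic() >= deadline:
                raise
            time.sleep(poll_interval)

# Simulate a slow connection: the element only "appears" on the third attempt.
attempts = {"count": 0}

def find_login_button():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise NoSuchElementException("login button not visible yet")
    return "login-button"

print(wait_for_element(find_login_button, timeout=5.0, poll_interval=0.1))
```

Selenium’s `WebDriverWait` with expected conditions follows the same pattern, so the test tolerates a slow connection instead of failing on the first missing element.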

The application’s document object model (DOM) represents a web page, so a significant change to the DOM also shows up as a change in the User Interface (UI). Depending on our test script, the test can become flaky when there is an update to the DOM’s style, structure, and/or content 📃

Dynamic elements are challenging to handle. They remain on the web page, but their attribute values change each time the page reloads. The test returns a Fail result if it does not account for a dynamic element.

For example, in the following screenshots, the value for id changed from description_3c621b896e37 to description_03560d123649. The first test Passes without a problem, but the second test Fails because description_3c621b896e37 is no longer the id value.

id value
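A standard workaround is to match the stable part of the dynamic id, the way a CSS selector like `[id^="description_"]` would. The sketch below simulates two page loads as plain lists of ids (the simulated DOM is hypothetical; the id values are the ones from the screenshots above):

```python
# Simulated DOM: element ids observed on two different page loads.
first_load  = ["header", "description_3c621b896e37", "footer"]
second_load = ["header", "description_03560d123649", "footer"]

def find_by_exact_id(ids, target):
    """Exact match: breaks as soon as the dynamic suffix changes."""
    return target if target in ids else None

def find_by_id_prefix(ids, prefix):
    """Match the stable prefix, like the CSS selector [id^=prefix]."""
    for element_id in ids:
        if element_id.startswith(prefix):
            return element_id
    return None

# The exact id works on the first load but fails on the second...
print(find_by_exact_id(first_load,  "description_3c621b896e37"))  # found
print(find_by_exact_id(second_load, "description_3c621b896e37"))  # None
# ...while the prefix locator survives the reload.
print(find_by_id_prefix(second_load, "description_"))
```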

For mobile applications, there are many Android and iOS devices, so test flakiness arises from their variations and configurations. Some of the testing challenges are connection types, screen sizes, browser fragmentation, and execution speed.

Asynchronous events allow more than one thing to happen at the same time. That’s good because our browser does not need to reload the complete page when making a small change.

However, a page with excessive Asynchronous JavaScript and XML (AJAX) calls takes longer to restore the frontend. As a result, a flaky test occurs when it Passes on our local machine and then Fails on another machine.
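The fix is to wait on a condition rather than on a fixed amount of time, since the AJAX round trip varies by machine. Here’s a minimal sketch, with the AJAX call simulated by a background thread (the `page` flag and timings are hypothetical):

```python
import threading
import time

def wait_until(condition, timeout=5.0, poll_interval=0.05):
    """Poll a condition instead of assuming a fixed page-load time."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    return False

page = {"ajax_done": False}

def simulated_ajax_call():
    time.sleep(0.2)          # the server round trip varies by machine
    page["ajax_done"] = True

threading.Thread(target=simulated_ajax_call).start()

# A fixed 0.1-second sleep would be flaky here; the condition wait is not.
print(wait_until(lambda: page["ajax_done"]))
```

A fixed sleep tuned on a fast machine is exactly the kind of test that Passes locally and Fails elsewhere; a condition wait absorbs the difference.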

How To Avoid/Reduce Test Flakiness Using TestProject AI Tools

It’s virtually impossible to escape test flakiness. Selenium, Appium, codeless technologies, functional testing, and record-and-playback tools all struggle with erratic results. In spite of the struggle, TestProject is a free platform that reduces test flakiness and secures our test process with the following tools:

Automation Assistant

Automation Assistant is an AI tool that “Analyzes each step and detect cases where an action didn’t reach its target goal and attempts to fix it automatically”. The benefit is to provide suggestions that form a stable and useful test. Automation Assistant aims to resolve the flakiness factor of False-Positives and False-Negatives.

  • A False-Positive is an action that returns a Pass result when the test should Fail
  • A False-Negative is an action that returns a Fail result when the test should Pass

This tool is on by default to enhance our web testing and mobile testing. The following are screenshots that show and explain the Automation Assistant.

Automation Assistant

Adaptive Wait

Adaptive Wait is an AI tool that “Ensures all environment conditions are sufficient for an automation action to succeed”. It addresses several test flakiness reasons such as internet connection speed, device performance, and asynchronous events by automatically monitoring page load variations. As a result, we can trust our test without setting up wait statements between each test step 🐾
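Conceptually, this is like a step runner that checks the environment before every step instead of sprinkling waits by hand. The sketch below illustrates only the idea, not TestProject’s algorithm; the `ready` check and the steps are hypothetical:

```python
import time

def run_steps(steps, ready, timeout=3.0, poll=0.05):
    """Run test steps, waiting for the environment before each one."""
    results = []
    for step in steps:
        deadline = time.monotonic() + timeout
        while not ready() and time.monotonic() < deadline:
            time.sleep(poll)
        results.append(step())
    return results

# Simulated environment that alternates between busy and ready.
state = {"calls": 0}

def ready():
    state["calls"] += 1
    return state["calls"] % 2 == 0   # ready on every second check

steps = [lambda: "open page", lambda: "click login", lambda: "assert title"]
print(run_steps(steps, ready))
```

Because the wait lives in the runner, every step gets it for free, which is the “no wait statements between each test step” benefit described above.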

The following are several screenshots that explain and illustrate different ways to implement an Adaptive Wait in our test.

Adaptive Wait

Self Healing

The Self Healing Technology is an AI tool that “Automatically constructs alternative locator strategies for use when others fail”. It deals with the most difficult testing scenarios such as dynamic elements, iFrames, and pop-ups. The process involves identifying an element with multiple locator strategies. If the primary locator strategy fails to find the element, a backup locator strategy will find the element.
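The fallback idea can be sketched in a few lines: try an ordered list of locator strategies and “heal” by moving to the next one when a lookup fails. This is only an illustration of the concept, not TestProject’s implementation; the page table and locator values are hypothetical:

```python
def find_with_fallback(locators, lookup):
    """Try each locator strategy in order; fall back when one fails."""
    for strategy, value in locators:
        element = lookup(strategy, value)
        if element is not None:
            return element, (strategy, value)
    raise LookupError("all locator strategies failed")

# Simulated page where the recorded id went stale but CSS/XPath still work.
page = {
    ("css", "form > button.submit"): "submit-button",
    ("xpath", "//button[@type='submit']"): "submit-button",
}

def lookup(strategy, value):
    return page.get((strategy, value))

locators = [
    ("id", "submit_9f2c"),                  # primary locator: now stale
    ("css", "form > button.submit"),        # backup strategy 1
    ("xpath", "//button[@type='submit']"),  # backup strategy 2
]

element, used = find_with_fallback(locators, lookup)
print(element, used)
```

The primary id locator fails, so the lookup heals itself with the CSS backup and the test step still finds the element.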

The Self Healing tool works when recording and executing a test. Here are a few screenshots that demonstrate where locators are added/removed, how a better locator was found during recording, and a Self Healed test step.

Self-Healing

Summary

Test flakiness is common when testing an application. It happens when a test indicates a Passing and Failing result without a code change. Automation endeavors and functional testing efforts experience this issue within the Waterfall methodology, Agile methodology, and/or CI/CD pipelines.

The main reasons for test flakiness include internet connection speed, application DOM, dynamic elements, device variations & configurations, and asynchronous events. There’s no need to be alarmed by the flakiness because TestProject maintains 3 tools to assist with our test pains 💪 Each tool involves AI, which makes it straightforward to create, execute, and manage our tests.

These were my 5 main reasons for a flaky test and how to solve them ✅ Share with me if you have any other reasons 🙏 Happy testing!

About the author

Rex Jones II

Rex Jones II has a passion for sharing knowledge about testing software. His background is in development, but he enjoys testing applications.

Rex is an author, trainer, consultant, and former Board of Directors member for the Dallas / Fort Worth Mercury User Group (DFWMUG), and a member of the Dallas / Fort Worth Quality Assurance Association (DFWQAA). In addition, he is a Certified Software Tester Engineer (CSTE) and has a Test Management Approach (TMap) certification.

Recently, Rex created a social network that demonstrates automation videos. In addition to the social network, he has written 6 Programming / Automation books covering VBScript (the programming language for QTP/UFT), Java, Selenium WebDriver, and TestNG.

✔️ YouTube https://www.youtube.com/c/RexJonesII/videos
✔️ Facebook http://facebook.com/JonesRexII
✔️ Twitter https://twitter.com/RexJonesII
✔️ GitHub https://github.com/RexJonesII/Free-Videos
✔️ LinkedIn https://www.linkedin.com/in/rexjones34/
