
Effective performance testing management

Load Testing – On Premise vs. Cloud

What is performance testing, and why should we do it?

We hear the words "performance testing" all the time, but what is performance testing precisely? On most projects we rely primarily on functional testing and rarely get the chance to do non-functional testing. Non-functional testing is performed to verify quality factors such as reliability, scalability, etc. These quality factors are also called non-functional requirements.

With non-functional testing, we improve the user experience and cover the areas that functional testing does not. Testing how the system performs is just as important as ensuring that it functions. Performance testing is one type of non-functional testing; it is used to validate the speed, scalability, and stability of the application under test.

In the current IT market, application performance plays an important role, and the success of a business depends on mitigating risks to the availability, reliability, and stability of, for example, a web application. We aim for particular response time, throughput, and resource utilization targets for our web application, and performance testing is crucial to ensuring we meet them. Further, there are multiple types of performance testing that should be addressed, including load, stress, endurance, spike, volume, and capacity testing, all of which can uncover potential performance problems in our web application.

We often ask ourselves why we are doing performance testing and whether it is essential. When developing a web application, we want a good product, and we want to be able to address bottlenecks and find the causes of performance issues. Performance testing gives us the information we need and lets us assure the quality of our web application before releasing it into production.

How to start understanding and managing performance tests

Performance testing isn’t as simple as picking a tool and generating some results. If we want performance tests for our application, we need to know how to manage them so they work efficiently. What does that mean? Here is a generic process for performance testing of on-prem solutions.

Identify the test environment

In some situations, a staging environment can be used for performance testing. Many companies require such a staging environment to be identical to production, but maintaining it then drives up costs. Mainly to reduce costs, we usually choose a staging environment with fewer resources. The performance testing results are still considered valid as long as the staging environment has more than 70% of the resources of the production environment.
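To make that guideline concrete, here is a minimal Python sketch that checks whether a staging environment offers at least 70% of production's resources. The resource names and figures are illustrative assumptions, not values from any real environment.

# Sketch: compare staging resources against production using the 70% guideline.
PRODUCTION = {"cpu_cores": 32, "memory_gb": 128, "app_nodes": 4}
STAGING = {"cpu_cores": 24, "memory_gb": 96, "app_nodes": 3}

THRESHOLD = 0.70  # staging should provide at least 70% of each production resource


def staging_is_representative(prod: dict, staging: dict, threshold: float = THRESHOLD) -> bool:
    """Return True if every staging resource is at least `threshold` of production."""
    return all(staging[name] >= threshold * amount for name, amount in prod.items())


if __name__ == "__main__":
    for name, amount in PRODUCTION.items():
        ratio = STAGING[name] / amount
        print(f"{name}: staging has {ratio:.0%} of production")
    print("Representative enough:", staging_is_representative(PRODUCTION, STAGING))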

Performance acceptance criteria

What are the acceptance criteria? The product owner or customer, with the help of QA, should define the criteria that determine when the application is ready to be accepted; together they define the performance criteria and goals. If we are not limited by time, comparing our application with something similar is a great starting point. We need to define plans and constraints, and this is also the point at which we define resource allocation. Beyond these goals and constraints, we define the project success criteria. Once they are defined, we start measuring parameters and estimating results, comparing actual and expected values in order to establish a baseline for the tests.

With a defined baseline, we can track the progress of the project. Using these metrics, QA will be able to spot issues, and over time we can estimate the impact of code changes.
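As an illustration, acceptance criteria like these can be written down as simple thresholds and checked automatically against a test run. The following Python sketch uses made-up metric names, limits, and measured values; it is not a specific tool's format.

# Sketch: encode acceptance criteria and compare a test run against them.
ACCEPTANCE_CRITERIA = {
    "avg_response_time_ms": 500,   # average response time must stay below 500 ms
    "p95_response_time_ms": 1200,  # 95th percentile must stay below 1.2 s
    "error_rate_pct": 1.0,         # no more than 1% failed requests
}

# Results measured during a test run (illustrative values)
measured = {
    "avg_response_time_ms": 430,
    "p95_response_time_ms": 1350,
    "error_rate_pct": 0.4,
}

failures = {
    name: (value, ACCEPTANCE_CRITERIA[name])
    for name, value in measured.items()
    if value > ACCEPTANCE_CRITERIA[name]
}

if failures:
    for name, (value, limit) in failures.items():
        print(f"FAIL {name}: measured {value}, limit {limit}")
else:
    print("All acceptance criteria met")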

Plan and design performance tests

Planning for performance testing includes identifying key scenarios that cover the likely use cases. We need to simulate a realistic number of end users, plan the performance test data, and determine what metrics will be gathered. For that, we must understand the application, the customer's needs, and the goals of the tests.

We will have different expectations for an application that has been in production for many years than for a new one. If we want relevant performance tests, we need to understand our application, its functionalities, and how it is used. This will help us write realistic performance scripts and find possible issues.

We need to determine customers’ needs in order to understand the application’s expected usage. It is essential to know how many times per day the application is used, how many users are authenticated and how many are not, and what the expectations are for the responsiveness of our application.
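One common way to translate such usage numbers into a load model is Little's law: concurrent users equal the arrival rate multiplied by the average session duration. The Python sketch below uses assumed, illustrative figures, not numbers from the article.

# Sketch: estimate how many concurrent users to simulate from expected usage.
DAILY_VISITS = 50_000          # expected visits per day (assumed)
PEAK_HOUR_SHARE = 0.20         # share of daily traffic in the busiest hour (assumed)
AVG_SESSION_SECONDS = 180      # average time a user spends in the application (assumed)

peak_visits_per_hour = DAILY_VISITS * PEAK_HOUR_SHARE
arrival_rate_per_second = peak_visits_per_hour / 3600
concurrent_users = arrival_rate_per_second * AVG_SESSION_SECONDS  # Little's law

print(f"Peak arrivals: {arrival_rate_per_second:.2f} users/s")
print(f"Concurrent users to simulate: {concurrent_users:.0f}")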

Simply having a task for performance testing isn’t good enough. We need to understand the goal of testing: whether the application will handle the expected load, what the application’s maximum throughput is, how quickly the application can respond to requests under the expected load, and how quickly it can respond to key requests.
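To make this concrete, here is a minimal sketch of a test script for two key scenarios. It uses the open-source Locust tool purely as an illustration (the article does not prescribe a particular tool), and the host, endpoints, task weights, and think times are all assumptions.

from locust import HttpUser, task, between


class WebShopUser(HttpUser):
    host = "https://staging.example.com"  # assumed staging URL
    wait_time = between(1, 5)             # think time between actions, in seconds

    @task(3)  # browsing is assumed to be three times more frequent than checkout
    def browse_products(self):
        self.client.get("/products")

    @task(1)
    def checkout(self):
        self.client.post("/cart/checkout", json={"items": [{"sku": "demo", "qty": 1}]})

The number of simulated users and the ramp-up speed can then be chosen at run time (for example with Locust's --users and --spawn-rate options) to match the load model defined earlier.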

Configuring environment and implementing tests

Before executing our tests, we must prepare the environment and arrange tools and other resources. In the first performance testing phase, we gathered all available details about the production environment, server machines, and load balancing; in this phase, we need to prepare something similar. All of these steps should be documented. We also need to ensure that our environment is isolated: if there are active users in the environment, it is impossible to discover the bottlenecks.

Network bandwidth is also essential for realistic performance test results. If bandwidth is low, user requests begin to produce timeout errors, which is why we need to isolate the network from other users. Likewise, if there is a proxy server between the client and the web server, the client will be served cached data and stop sending requests to the web server, giving us lower response times and unrealistic results.

One of QA’s responsibilities in this phase is to make sure that the test environment and its database hold the same number of records as the production system. If the database is small, we need to generate the necessary test data for better accuracy.
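If test data does need to be generated, even a small script can pad the database to a production-like volume. The Python sketch below assumes a local SQLite database and a hypothetical customers table purely for illustration.

# Sketch: top up a small test database with synthetic records.
import random
import sqlite3
import string


def random_name(length: int = 8) -> str:
    return "".join(random.choices(string.ascii_lowercase, k=length))


conn = sqlite3.connect("testdata.db")
conn.execute("CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

TARGET_ROWS = 100_000  # assumed production-like volume
existing = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]

rows = [
    (random_name(), f"{random_name()}@example.com")
    for _ in range(max(0, TARGET_ROWS - existing))
]
conn.executemany("INSERT INTO customers (name, email) VALUES (?, ?)", rows)
conn.commit()
conn.close()
print(f"Inserted {len(rows)} synthetic customers")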

After configuring the environment, it is time to start implementing the tests using the test design that we previously made.

Execution of the tests and analyzing the results

After we finish developing our tests, we can begin running and monitoring them. We then analyze the tests and share the execution results. The next step is to improve performance by fine-tuning and retesting to check whether there are improvements. The metrics we most often gather are: processor usage, memory use, bandwidth, private bytes (the number of bytes a process has allocated that cannot be shared with other processes, used to measure memory leaks and usage), amount of virtual memory used, CPU interrupts per second, response time, throughput, maximum active sessions, hits per second, top waits, thread counts, and garbage collection. We can finish performance testing once all metric values fall within acceptable limits according to the baseline.
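A simple way to decide whether the values are within acceptable limits is to compare each run against the baseline with an agreed tolerance. The following Python sketch uses assumed metric names, baseline values, and a 10% tolerance; a real project would plug in its own baseline.

# Sketch: flag metrics that regressed beyond a tolerance relative to the baseline.
TOLERANCE = 0.10  # allow metrics to be up to 10% worse than the baseline

baseline = {"response_time_ms": 420, "cpu_pct": 55, "memory_mb": 1800, "throughput_rps": 120}
current = {"response_time_ms": 470, "cpu_pct": 52, "memory_mb": 1750, "throughput_rps": 118}

HIGHER_IS_BETTER = {"throughput_rps"}  # for throughput, a drop is the regression

for name, base in baseline.items():
    value = current[name]
    if name in HIGHER_IS_BETTER:
        regressed = value < base * (1 - TOLERANCE)
    else:
        regressed = value > base * (1 + TOLERANCE)
    status = "REGRESSION" if regressed else "ok"
    print(f"{name}: baseline={base}, current={value} -> {status}")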

Conclusions

First of all, non-functional testing is just as important as functional testing. If we want a complete picture of our overall product quality, we need to include performance testing in our testing process. Managing performance testing properly, by following all of the phases above, is just as important as writing the performance tests themselves. We must plan performance testing very carefully. If we do, the result will be reliable tests, bottlenecks identified and remediated in testing rather than in production, and a higher-quality product.

You can learn more about performance testing with Tricentis NeoLoad.

About the author

Dusanka Lecic

Dusanka Lecic is Test Lead and Department Manager at Levi9 IT Services. Over the past six years, she has been actively involved in several different projects, using different technologies and tools every day. She likes to share knowledge and supports various initiatives, leading an internal expert community that gathers the best testing experts and enthusiasts at Levi9. She is also dedicated to her academic career as a Doctor of Science and often writes papers for conferences, where she points out current trends and the importance of testing in software development.
