On Web Load Testing

How to analyze a load test report? Part 1: Basics

After you run a test with a load testing tool such as WAPT, you normally get a long report with many tables and probably some charts of performance parameters. At first glance all these numbers do not help you answer the simple question you actually have: was the test successful or not?

However, if you know where to look, in most cases you can get this answer in just a few minutes, if not seconds.

The information in the test report can be divided into several categories.

Note that most tables in the report contain multiple columns. Each column corresponds to a certain period of the test. Usually the last column shows the averaged or summarized values for the whole test. Let’s take a look at the following example.

This table contains information about the number of successful and failed sessions during the test. The total test duration was 3 hours. The very last column contains the total number of sessions executed during the test for each virtual user profile. For example, the number of successful sessions for Profile2 is 10,255. The total number of failed sessions in the test (all profiles combined) is 6,941.

Note that the table also includes 6 columns, each of which contains data for the corresponding 30-minute period of the test. For example, the number of successful sessions for Profile3 during the first 30 minutes of the test is 53.
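To make the column layout concrete, here is a small sketch of how the per-interval columns relate to the last (total) column. The per-interval counts below are hypothetical, except that Profile3's first interval matches the 53 from the example above, and Profile2's intervals are chosen to sum to the stated total of 10,255:

```python
# Successful sessions per profile, one entry per 30-minute interval
# of a 3-hour test (6 intervals). Counts are illustrative.
successful = {
    "Profile1": [120, 340, 510, 620, 700, 730],
    "Profile2": [900, 1500, 1800, 1950, 2005, 2100],
    "Profile3": [53, 110, 180, 210, 240, 260],
}

# The last report column is simply the sum over all intervals.
totals = {profile: sum(counts) for profile, counts in successful.items()}

print(totals["Profile2"])  # → 10255, the total from the example table
```

The same idea applies to averaged columns (e.g. response times): the last column aggregates the per-interval values rather than listing them.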

Now that we know how to read each table, let’s think about the order in which we should check them. First of all, it is worth taking a look at the tables that show the parameters of the generated load: the number of virtual users and the numbers of successful and failed sessions.

I very much hope that everyone trying to read a report already knows the meaning of these terms, but let me recap. WAPT creates virtual users to emulate multiple real users visiting your web site. The number of virtual users can be fixed for the whole test, or it can grow over time (ramp-up load).

Each virtual user executes a profile that you initially create with the help of the WAPT recorder. A single execution of a profile constitutes one user session. As soon as a virtual user finishes a session, it starts the next one, and so on. During the test each virtual user can execute several sessions, thus emulating several successive real users.
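The virtual user lifecycle described above can be sketched as a simple loop. This is a minimal illustration, not WAPT's actual implementation; `run_session` is a hypothetical stand-in for replaying a profile's recorded requests:

```python
import random
import time

def run_session(profile):
    # Hypothetical stub for executing one pass through a profile's
    # recorded requests; a real tool would issue HTTP requests here.
    # Simulates a ~5% failure rate.
    return random.random() > 0.05

def virtual_user(profile, deadline):
    # One virtual user: as soon as a session ends, the next one begins,
    # so a single virtual user emulates several successive real users.
    successful = failed = 0
    while time.monotonic() < deadline:
        if run_session(profile):
            successful += 1
        else:
            # A failed session is dropped; the user restarts
            # from the beginning of the profile.
            failed += 1
    return successful, failed
```

A load test then amounts to running many such loops concurrently and tallying the successful and failed session counts per profile, which is exactly what the report tables summarize.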

When you read a report, first of all you should take a look at the generated load, simply to check that the load specification was set correctly and that WAPT generated the load you wanted. The number of successful sessions in each test phase provides valuable information only when it deviates significantly from your expectations. For example, if you see that for some profile the number of completed sessions is 0, this means you need to increase the test duration, because the typical user session is too long to complete within the specified time. You may also want to adjust the user “think times” inside the profile to make sessions run faster.

What is most important is the number of failed sessions. A session is considered failed if any page request in it completes with any type of error (HTTP, network, timeout, or validation). In that case the virtual user executing the session drops it and starts a new session from the beginning of its profile. Incidentally, this is why failed sessions take less time to execute than successful ones.

If you see that for some profile the number of failed sessions exceeds 10% of all executed sessions, you should look more closely at this problem before doing anything else. It is important to understand that in such a case it is completely useless to analyze any other performance parameters, because there is a high probability that your test was not designed correctly. Only after you eliminate errors and reach an acceptable error rate (usually below 1% of sessions) should you look at other data.
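The triage rule above reduces to a simple error-rate check. A minimal sketch, with the 10% and 1% thresholds taken from the text (function names are my own):

```python
def session_error_rate(successful, failed):
    # Failed sessions as a fraction of all executed sessions.
    total = successful + failed
    return failed / total if total else 0.0

def triage(successful, failed):
    # >10%: the test itself is suspect; fix errors before reading
    # any other performance parameters.
    # 1-10%: still too many errors to trust the other data.
    # <1%: acceptable; proceed to the rest of the report.
    rate = session_error_rate(successful, failed)
    if rate > 0.10:
        return "investigate errors first"
    if rate >= 0.01:
        return "reduce errors below 1%"
    return "ok to analyze performance data"
```

For instance, with 9,000 successful and 2,000 failed sessions the error rate is about 18%, so the verdict is to investigate errors before anything else.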

I am going to continue this topic in the next post, which will be fully dedicated to handling errors.