
Efficiency and capacity of load testing tools

If you are looking for a load testing solution to check the performance of your web site, or to start using one permanently in your web development process, you may need to analyze various characteristics of many tools before making your choice. One such characteristic is efficiency.

If you want to test a web application used internally by your company staff, you may need to check that it supports, say, 100 concurrent user sessions. This is not a very big load for any tool. You will hardly need more than one generating system to run such a test.

The situation is completely different if you need to stress test a popular internet portal. A load of 100,000 concurrent users can be a good check for it.

In both cases you may have typical user sessions of similar complexity, so it will take the same time to design the tests. However, the tests are very different in capacity. That is why in the second case you need to pay very close attention to the ability of your testing tool and hardware to generate a big load volume.

Note that many vendors charge for their tools depending on the number of virtual users. Others have different versions of their products with different capabilities. In any case, if you plan to create a big load, you should choose an extendable solution: your load testing tool should be able to use several systems for load generation.

At the same time you should check (or ask the vendor about) the efficiency of each load generating unit. In other words, you should estimate how many virtual users can be produced by a single system in your testing environment. Surprisingly, this number ranges from 200 to 10,000 (and even more) for different solutions.

The main reason for such a significant difference is that “slow” tools use a browser component to run each virtual user. This consumes almost as many system resources as launching a real browser for each user session. Imagine a computer with 200 open Internet Explorer windows and someone clicking new links in each window every 5-10 seconds. Will it have resources left to run more users?

In fact, such an approach is good for functional testing, because it provides better control over each user session. You can watch it step by step in the browser window and easily understand (or check automatically) if something goes wrong. Some testing tools initially targeted at functional testing have been extended by their vendors with load testing functionality. As a result, they can now run many concurrent sessions, but they do this relatively slowly.

The “fast” tools record and execute user sessions as sequences of HTTP requests. Of course, these sequences can be dynamically modified for each instance, so different virtual users can send user-specific data to the server. However, the main approach is that each user session is a sequence of HTTP requests. Such tools do not need any browser to run the test; they perform all the emulation themselves. Instead of spending significant system resources on rendering HTML and running JavaScript code on the received pages, they provide the means to find and extract significant data from server responses and use it in subsequent requests.
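
To illustrate the “fast” approach, here is a minimal sketch of a virtual user implemented as a plain sequence of HTTP requests. The example is in Python; the URLs and form field names are hypothetical and not taken from any particular tool.

    # A virtual user as a plain HTTP request sequence: extract session-specific
    # values from responses and reuse them in later requests. No HTML rendering,
    # no JavaScript execution.
    import re
    import requests

    def run_virtual_user(base_url, username, password):
        session = requests.Session()

        # Step 1: load the login page and pull a CSRF token out of the HTML.
        page = session.get(f"{base_url}/login")
        match = re.search(r'name="csrf_token" value="([^"]+)"', page.text)
        token = match.group(1) if match else ""

        # Step 2: submit the form with user-specific data plus the extracted token.
        session.post(f"{base_url}/login",
                     data={"user": username, "pass": password, "csrf_token": token})

        # Step 3: request a protected page using the same session.
        session.get(f"{base_url}/account")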

Of course, if you need to create a really high load, a tool that uses the “fast” approach is much preferable.

Another thing that should be noted regarding efficiency is that it can be measured in different ways. Usually people talk about the number of concurrent virtual users that can be generated by a testing tool. However, such an estimation does not take into account the behavior of these users. The most obvious question to ask is: how often will each emulated user send requests to the tested web site?

A human user does not click links every second. People need some time to perceive the content of a page in order to understand what to do next. Virtual users simulated by load testing tools also make pauses between successive actions. This is done to make them more similar to real users. However, if you reduce all these pauses, say, 10 times, the same user session will be executed much faster (not 10 times faster, because some delays are also caused by the server response time, which will remain the same). Obviously, if you check how many such “fast” concurrent virtual users can be created in the same testing environment, you will see that the number is much smaller than before.
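
As a simple illustration, the session sketch below separates the two kinds of delay: the think time we are free to shrink and the server response time that stays the same. The URLs and timings are assumptions made for the example.

    import time
    import requests

    def run_session(urls, think_time_sec):
        """Run one user session and return its duration in seconds."""
        start = time.monotonic()
        with requests.Session() as s:
            for url in urls:
                s.get(url)                  # server response time: not under our control
                time.sleep(think_time_sec)  # think time: the pause we can reduce
        return time.monotonic() - start

    # Hypothetical comparison: a "normal" user vs. the same session with pauses cut 10 times.
    # normal = run_session(["http://example.com/"] * 5, think_time_sec=5.0)
    # fast   = run_session(["http://example.com/"] * 5, think_time_sec=0.5)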

The above example shows that talking about the number of virtual users alone is not always correct. This concerns both the efficiency of a load testing tool and the specification of the test load for the target web site. You should also take into account the “weight” of each user, which (as a first approximation) is determined by the number of requests it produces per second.
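
A back-of-the-envelope calculation shows what this “weight” means in practice. The numbers below are illustrative assumptions, not measurements.

    # One user sends a request, waits for the response, then "reads" the page.
    think_time = 9.0       # seconds of think time between clicks (assumed)
    response_time = 1.0    # seconds the server takes to answer (assumed)

    requests_per_user = 1.0 / (think_time + response_time)   # 0.1 req/s per user
    users = 10_000
    total_load = users * requests_per_user                   # 1000 req/s in total

    print(f"{requests_per_user:.1f} req/s per user, {total_load:.0f} req/s in total")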

Actually, this is not the only “weight” parameter, because different requests require significantly different system resources to produce (on the client system) and to serve (on the target web site). When a user submits a form or requests some information from a database, the corresponding HTTP request requires much more server resources than a request for a static image or text.

The client also spends resources on processing the web site responses. This is often required to extract important session-specific values that will be reused in subsequent requests. Searching for a keyword inside a 100 KB HTML page will not take significant CPU time if this is done once. But what if we need to search for 10 different keywords and do this for each of 10,000 concurrent users?
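
If you want to get a feeling for this client-side cost, you can time the keyword search itself and then scale it mentally by the number of keywords and users. The fragment below is a rough, self-contained illustration with a synthetic page.

    import re
    import time

    # A synthetic ~100 KB "page" with one hidden value somewhere inside it.
    page = "<html>" + "x" * 100_000 + '<input name="session_id" value="abc123">' + "</html>"
    keywords = [f'name="field_{i}" value="' for i in range(10)]

    start = time.perf_counter()
    for pattern in keywords:
        re.search(re.escape(pattern) + r'([^"]*)"', page)
    elapsed_ms = (time.perf_counter() - start) * 1000

    # Multiply this by 10,000 concurrent users to see where the CPU time goes.
    print(f"10 keyword searches over a 100 KB page took {elapsed_ms:.2f} ms")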

So, when you check the efficiency of a load testing tool, you should test how fast it can process long server responses.

The above observations suggest a useful trick that can help you evaluate a load testing tool. Demo versions are usually restricted to a limited number of virtual users. This is normal, because vendors do not want to let you perform real tests with the demo. At the same time, such a limitation does not prevent you from evaluating all the product features.

At first glance this does not let you check the efficiency of the tool. However, there is a way to do it. Just remove all the delays between requests from your test. As a result, your virtual users will start working very fast. Even if you are limited to 10 or 20 virtual users, you will be able to create a load that is normally created by hundreds, if not thousands. Now check how many requests per second the tool is able to produce. This will be a good efficiency estimation. Of course, you should perform such tests against a very fast web site, because otherwise server-side delays will spoil the whole plan.
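
The same kind of measurement can be approximated outside any particular tool. The sketch below runs a demo-sized number of virtual users with zero think time against a fast test server and reports requests per second; the target URL, user count and duration are assumptions.

    import threading
    import time
    import requests

    TARGET = "http://localhost:8080/"   # hypothetical fast test server
    USERS = 10                          # demo-style user limit
    DURATION = 30                       # seconds
    counter_lock = threading.Lock()
    requests_sent = 0

    def worker():
        global requests_sent
        end = time.monotonic() + DURATION
        with requests.Session() as s:
            while time.monotonic() < end:
                s.get(TARGET)           # no think time at all
                with counter_lock:
                    requests_sent += 1

    threads = [threading.Thread(target=worker) for _ in range(USERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(f"{requests_sent / DURATION:.0f} requests per second with {USERS} users")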

By the way, you can apply the same trick to overcome the license limitation of your tool. As I mentioned above, some vendors charge for load testing solutions or services depending on the number of virtual users in your test. However, if you need to test a web site with 2000 virtual users, but your license only allows you to run 1000, you can simply reduce all the pauses between requests in your test and run it with the allowed 1000 users. The resulting load will be approximately the same as the load from 2000 normal users with the initially set delays.
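
A quick calculation shows how to pick the reduced pause length so that the allowed 1000 users produce roughly the load of 2000 normal ones. All the timings here are illustrative assumptions.

    target_users = 2000
    normal_think = 9.0     # seconds of think time for a "normal" user (assumed)
    response_time = 1.0    # seconds per server response (assumed)

    # Request rate the full test is supposed to generate:
    target_rps = target_users / (normal_think + response_time)    # 200 req/s

    licensed_users = 1000
    # Think time that makes 1000 users generate the same request rate:
    reduced_think = licensed_users / target_rps - response_time   # 4.0 seconds

    print(f"Target load: {target_rps:.0f} req/s; "
          f"run {licensed_users} users with ~{reduced_think:.1f} s pauses")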

I would not say that the above practice is highly recommended, because such an approximation does not take into account many other things. For example, it may turn out that the web site serves 1000 concurrent connections very well even at high speed, but breaks on 2000 connections even with very slow users. So, if you want to get fully correct results, you should make your virtual users as close to real ones as possible and create the required number of them. However, as a “rough” test the above scenario will work.

The ability to change the user speed and its influence on the resulting load is something you should know about. Like any other good trick, it should be applied reasonably.

4 Comments

  1. Hello Ivan,

    If we have to test a website with 100,000 users, can that be achieved with WAPT’s basic version, which is on the market for about $450, or do we need load agents?

    Kind regards,
    Anshul

    • Hello Anshul,

      For 100,000 users you will definitely need the Pro version of the product and 3 or more x64 Load Engines. Such a load cannot be created with only a single system, so I would recommend using at least 3 x 64-bit servers with at least 16 GB RAM on each of them. Depending on your test parameters this may or may not be sufficient for your specific test. In such cases I usually recommend trying a single engine with the specific test scenario and seeing how many virtual users it is capable of creating at peak performance. After that you will know how many additional systems you need to add to your testing environment to generate the required load.

      A very rough estimation for an average test is: 3 engines for 100,000 users.

      Regards,
      Ivan

  2. Hi

    For 1000 users, is the basic version sufficient?
    Also, can a small load engine (an 8 GB machine) do the work?

    Anand

    • WAPT does not have any predefined limit on the number of virtual users that it can generate. This number is different for different tests and depends on several parameters, including the hardware of your system, network capacity, the tested server itself and, most importantly, the behavior of each virtual user.

      A very rough estimation is about 2000 virtual users per load agent (or per regular version of WAPT). So, most probably the regular version will be sufficient for your test. Note that the Pro version also has additional advantages. They are listed here: http://www.loadtestingtool.com/pro.shtml

      The x64 Load Engine is useful if you need to create 5000 or more virtual users. We recommend using at least 16 GB systems in such cases; however, this also depends on the exact test parameters. With 16 GB the engine can create 10,000 or more virtual users.
