When you plan a load test, one of the first things you need to know about the backend configuration is whether it includes a load balancer. This is important because most load balancers distribute new user sessions by client IP address. Some of them allow changing the distribution method, but this may require a significant configuration change, and you will hardly persuade the site admins to make such changes in a production environment.
What complicates matters further is that even if you select the “distribute to the least loaded” option in the LB settings, this rule may apply only to new IPs. After the initial connection, each IP is remembered for a certain time (from one hour to a few days), and all new connections from the same IP will be directed to the same web server.
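To make the sticky behavior concrete, here is a toy model of such a balancer (not WAPT Pro or any real LB code; all names are illustrative). It picks the least loaded server only for IPs it has not seen before and pins every later connection from a known IP:

```python
class StickyBalancer:
    """Toy model of an LB with IP-based session persistence."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.load = {s: 0 for s in self.servers}   # connections per server
        self.pinned = {}                           # client IP -> server

    def route(self, client_ip):
        if client_ip not in self.pinned:
            # A new IP is sent to the least loaded server...
            self.pinned[client_ip] = min(self.servers, key=self.load.get)
        # ...but every later connection from that IP sticks to it.
        server = self.pinned[client_ip]
        self.load[server] += 1
        return server
```

In this model a thousand connections from one IP all land on a single server, which is exactly the problem described below.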
This creates a serious problem for load testing. If you use WAPT Pro, or any other efficient load testing tool, by default all your virtual users will share the same IP address and will therefore be directed to the same web server behind the LB. As a result, you will not be able to test the whole system correctly.
There are several possible solutions. The first one is to set up several IP addresses on the system running the load generation unit and use the IP spoofing feature available in WAPT Pro and similar tools. Another solution is to use several systems for load generation, each with its own IP. In both cases all sessions will come from N different IP addresses.
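As a sketch of the multiple-source-IP idea (the addresses and helper names below are made up for illustration; real tools expose this through their own settings), a client can bind its outgoing socket to a chosen local alias before connecting, and spread virtual users across the configured IPs round-robin:

```python
import socket

# Hypothetical IP aliases configured on the load generation machine.
SOURCE_IPS = ["192.168.1.101", "192.168.1.102", "192.168.1.103"]

def assign_ips(num_users, source_ips=SOURCE_IPS):
    """Spread virtual users across the available source IPs round-robin."""
    return [source_ips[i % len(source_ips)] for i in range(num_users)]

def connect_from(source_ip, host, port=80):
    """Open a TCP connection whose source address is source_ip."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((source_ip, 0))   # port 0: let the OS pick an ephemeral port
    sock.connect((host, port))
    return sock
```

Each bound address must actually be configured on a local interface, otherwise `bind` fails.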
The most common mistake in this situation is to think that if you need to test, say, 5 web servers behind a load balancer, you can just run the test from 5 different load agents. Unfortunately, this will most likely not work, because nothing guarantees that two agents will not be directed to the same server. As a result, you will probably not load all 5 servers this way. Moreover, after the first test run your LB will most likely remember the distribution of IP addresses, so next time it will direct each agent to the same server as before. Even if you can change all the IPs and try again, this will hardly help.
People often assume that when 5 agents are distributed randomly among 5 servers, there is a good chance that each agent lands on a different server, yielding a one-to-one distribution. Let’s apply some probability theory. The first agent is distributed to some server; this is always a success. The second agent must land on one of the 4 remaining servers for the whole thing to work, which happens with probability 4/5. For the third agent the probability is 3/5, because by then 2 of the 5 servers are already occupied. Continuing for the remaining load agents, we get the following total probability of success:
1 * 4/5 * 3/5 * 2/5 * 1/5 = 24/625 = 0.0384
Unfortunately, this means you have less than a 1 in 25 chance of getting a one-to-one allocation.
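The same arithmetic works for any number n of agents and servers: the chance of a one-to-one allocation is n!/nⁿ, which shrinks fast as n grows. A few lines of Python (purely illustrative) confirm the 5-server figure:

```python
from fractions import Fraction
from math import factorial

def one_to_one_probability(n):
    """Chance that n agents randomly land on n distinct servers: n!/n^n."""
    return Fraction(factorial(n), n ** n)

print(one_to_one_probability(5))          # 24/625, i.e. 0.0384
print(float(one_to_one_probability(10)))  # already below 0.0004
```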
The right solution is to “unrandomize” this process. By default the load balancer directs each new IP to the least loaded server, but the load produced by a single user session will hardly register. We can achieve our goal by creating a significant number of sessions from the first IP address, then starting to add sessions from the second IP, and so on.
WAPT Pro offers two ways to achieve this.
- You can assign each virtual user profile to a specific load agent and specify a certain delay for each profile, so that the profiles start one by one rather than all at the same time.
- You can start the test with a single load agent and add the others at intervals during the test. This can be done from the “Current Load” page that appears in the left view after you run the test.
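Either way, the effect is a staggered ramp-up. A minimal sketch of such a schedule (the interval and IP list are assumptions for illustration, not WAPT Pro defaults):

```python
RAMP_INTERVAL = 120  # seconds between successive agents (assumed value)

def ramp_schedule(agent_ips, interval=RAMP_INTERVAL):
    """Return (start_offset_seconds, ip) pairs: one agent at a time."""
    return [(i * interval, ip) for i, ip in enumerate(agent_ips)]

schedule = ramp_schedule(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
# Each IP has time to build noticeable load before the next one starts,
# so the least-loaded rule sends every new IP to a different server.
```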
If you perform the above procedure accurately, the LB will remember all the IPs, so you will not need to repeat it when you restart the test. The one-to-one allocation will be preserved, and you will be able to test the LB and the site behind it.