Question

I have created a simple test (it just downloads a file from a well-known site such as Flickr or Google). When I run the test locally (either from JMeter directly or against a locally running jmeter-server), the average time is 250 ms and the throughput is 29.4/s. When I then remote-start the same test on a host (which has a much better internet connection), the resulting average time is 225 ms but the throughput is extremely low -- around 2/s or even below 1/s. The average time looks reasonable; the throughput number is totally useless. It appears that JMeter is somehow counting the time between the local JMeter client and the JMeter server, rather than aggregating the throughput experienced by each JMeter server. How can we get correct throughput numbers in remote/distributed tests?
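For reference, a distributed run is usually started from the controller in non-GUI mode along these lines; the test plan name and host name below are just placeholders, not the actual setup:

# Run the plan on a single, explicitly named remote server
jmeter -n -t download-test.jmx -R remote-host-1 -l results.jtl

# Or start every server listed under remote_hosts in jmeter.properties
jmeter -n -t download-test.jmx -r -l results.jtl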


Solution 2

Figured it out. The reason is that when you have multiple remote JMeter servers configured but start only one of them, JMeter is not smart enough to notice. It keeps waiting for the servers that never started to reply, which makes the throughput statistics plummet. The workaround is to ensure that every configured JMeter server is started and working.
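A minimal sketch of that configuration, with example slave addresses as placeholders: either trim remote_hosts in jmeter.properties on the master so it lists only the servers that are actually running, or override the list at start-up with -R.

# jmeter.properties on the master: list only the servers that are really up
remote_hosts=192.168.0.10:1099,192.168.0.11:1099

# Or bypass the property and name the live servers explicitly on the command line
jmeter -n -t download-test.jmx -R 192.168.0.10,192.168.0.11 -l results.jtl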

OTHER TIPS

One more addition (after removing the inactive slaves from jmeter.properties):

Time must be synchronized between all the machines: the master and all the slaves. If the clocks are not in sync, the throughput will plummet. As Hacking Bear said, JMeter is not smart enough to aggregate results on the slave machines and then sum them up on the master. Instead, each slave sends all of its start and finish timestamps to the master, and the master does the aggregation. So if time is not synchronized between all the machines, we won't get the proper throughput.

If you want to set the date and time of one machine (machine-A) on all the others, then run

sudo ntpdate <machine-A-ip-address>

on every machine where you are running a JMeter slave, and also on the master machine.
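If you want to check the clock offsets before forcing a sync, something like the loop below can help; the slave host names are placeholders, and ntpdate -q only queries the offset without changing anything.

# Query each slave's offset against machine-A without adjusting its clock
for host in slave-1 slave-2 slave-3; do
    ssh "$host" "ntpdate -q <machine-A-ip-address>"
done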

Licensed under: CC-BY-SA with attribution