Question

I just started using JMeter. For the time being, I am only interested in HTTP request sampler. There are a few terms that I think I understand, but I can't find official documentation that clarifies them. So, I would like to share my understandings and know if they are correct.

Let's take one single HTTP request case for example. Suppose at time t0, the request is sent from JMeter. At t1, JMeter starts to receive the response stream. At t2, JMeter receives the entire response stream. So, below are my understandings of the terms I find in the Graph Results output.

timestamp -> t0  
elapsed   -> t2 - t0  
latency   -> t1 - t0  

So, are my understandings correct? If not, what should they be?

Thank you very much.


The solution

Does this help?

Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.

It's from the JMeter glossary: http://jmeter.apache.org/usermanual/glossary.html
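In other words, your understanding matches the glossary: latency runs from just before the request is sent (t0) to just after the first part of the response arrives (t1), while elapsed runs until the full response has been received (t2). A minimal sketch of those relationships (the `HttpSample` class and its field names are hypothetical, not part of JMeter's API; times are in milliseconds, as in JMeter's result logs):

```python
from dataclasses import dataclass


@dataclass
class HttpSample:
    """Hypothetical model of one HTTP sample's timing points."""
    t0: int  # request sent (ms)
    t1: int  # first chunk of the response received (ms)
    t2: int  # entire response received (ms)

    @property
    def timestamp(self) -> int:
        # JMeter's timeStamp column: when the sample started
        return self.t0

    @property
    def latency(self) -> int:
        # time to first response, including request assembly
        return self.t1 - self.t0

    @property
    def elapsed(self) -> int:
        # total time until the last byte of the response
        return self.t2 - self.t0


sample = HttpSample(t0=1000, t1=1200, t2=1500)
print(sample.timestamp)  # 1000
print(sample.latency)    # 200
print(sample.elapsed)    # 500
```

Since t1 is never later than t2, latency is always less than or equal to elapsed for a given sample.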

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow