Question

I wrote a simple load-testing tool for measuring the performance of Java modules. One problem I faced is the algorithm for measuring throughput. Tests are executed in several threads (the client configures how many times the test should be repeated), and each execution time is logged. So, when the tests are finished we have the following history:

4 test executions
2 threads
36ms overall time

- idle
* test execution
       5ms    9ms     4ms      13ms
T1  |-*****-*********-****-*************-|
      3ms  6ms     7ms      11ms
T2  |-***-******-*******-***********-----|
    <-----------------36ms--------------->

For the moment I calculate throughput (per second) in the following way: 1000 / overallTime * threadCount.
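In code that is roughly the following (a sketch only; overallTimeMs and threadCount are placeholder names, not from any particular tool):

    // Hypothetical numbers from the first timeline above.
    long overallTimeMs = 36;  // wall-clock time of the whole run
    int  threadCount   = 2;

    // Current approach: scale one second by the wall-clock time,
    // then multiply by the number of threads.
    double throughputPerSecond = 1000.0 / overallTimeMs * threadCount;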

But there is a problem. What if one thread completes its own tests more quickly (for whatever reason):

      3ms 3ms 3ms 3ms
T1  |-***-***-***-***----------------|
      3ms  6ms     7ms      11ms
T2  |-***-******-*******-***********-|
    <--------------32ms-------------->

In this case the actual throughput is better than the measured one, because the measured throughput is bounded by the slowest thread. So, my question is: how should I measure the throughput of code execution in a multithreaded environment?


Solution

How about throughput[t] = numberOfTests[t] / overallTime[t] separately for each thread t, and then calculating the mean of all the per-thread throughputs?
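For illustration, here is a minimal Java sketch of that idea, using rough numbers from the second timeline above (the array names are placeholders, not part of any real tool):

    // Per-thread results from the second timeline above:
    // T1 finished its 4 tests after ~16 ms, T2 after ~32 ms.
    int[]  numberOfTests = { 4, 4 };
    long[] overallTimeMs = { 16, 32 };

    double[] throughput = new double[numberOfTests.length];
    for (int t = 0; t < throughput.length; t++) {
        // tests per millisecond, scaled up to tests per second
        throughput[t] = numberOfTests[t] * 1000.0 / overallTimeMs[t];
    }

    double sum = 0.0;
    for (double tp : throughput) {
        sum += tp;
    }
    double meanThroughput = sum / throughput.length;  // 187.5 tests/s here

With these numbers the slow thread no longer caps the result: T1 contributes 250 tests/s and T2 contributes 125 tests/s, and the mean reflects both.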

Then you can also calculate things like the range and the standard deviation to get a better picture. Personally I'm very fond of box plots. But even just the numbers themselves would be interesting.
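Continuing the sketch above, the range and the sample standard deviation of the per-thread throughputs could be computed like this:

    // Spread statistics over the per-thread throughputs from above.
    double min = throughput[0];
    double max = throughput[0];
    for (double tp : throughput) {
        min = Math.min(min, tp);
        max = Math.max(max, tp);
    }
    double range = max - min;  // 125 tests/s with the numbers above

    double squaredDiffs = 0.0;
    for (double tp : throughput) {
        squaredDiffs += (tp - meanThroughput) * (tp - meanThroughput);
    }
    // sample standard deviation (n - 1 in the denominator)
    double stdDev = Math.sqrt(squaredDiffs / (throughput.length - 1));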

OTHER TIPS

I know it's a bit late, but I have two blog posts related to your question. The first describes how to measure throughput (and response time). The second describes a way to graph throughput.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow