Question

I am testing 4 different algorithms in Ruby and I'm having trouble interpreting the results returned by Ruby's Benchmark utility.

I ran each set of algorithms twice, once using Benchmark.bm and then again using Benchmark.bmbm. Here are the results:

Benchmark.bm:

     real   stime   total   utime
1  214.91    3.44  154.93  151.48
2  208.85    3.03  161.37  158.34
3  224.40    3.23  161.63  158.41
4  234.02    3.34  163.49  160.16

Benchmark.bmbm:

     real   stime   total   utime
1  252.61    3.50  163.89  160.39
2  278.56    3.65  164.61  160.96
3  241.89    3.37  162.73  159.36
4  256.12    3.56  163.91  160.35

Which algorithm performs the best (1, 2, 3, or 4) according to these results? And what are the practical differences between using Benchmark.bm and Benchmark.bmbm?

Apologies if this is answered elsewhere, but I couldn't find a direct answer.


Solution

There don't seem to be significant differences between the algorithms. The most relevant number is usually 'total', which is the CPU time (user plus system) spent running your code, and those values are all very close to each other here.
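For reference, here is a minimal sketch of how those columns relate, using Benchmark.measure; the trivial loop is just a placeholder workload, not your algorithms:

require 'benchmark'

# Benchmark.measure returns a Benchmark::Tms object whose fields match the
# columns in the reports above.
tms = Benchmark.measure { 100_000.times { |i| i * i } }  # placeholder workload

puts tms.utime  # user CPU time
puts tms.stime  # system CPU time
puts tms.total  # utime + stime (plus any child-process CPU time) -- the number to compare
puts tms.real   # wall-clock time, which also includes time spent off-CPU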

The difference between Benchmark.bm and Benchmark.bmbm is that the latter runs the whole benchmark once as a rehearsal, throws that result away, then runs it again and reports only the second run. The point is to avoid unfairly penalizing some of the subjects: there may be shared resources (caches, lazily initialized objects, garbage-collector state) that only the first subject has to pay for, and with bmbm everything has a better chance of being in a 'warm' state, so the comparison is fairer.
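As a rough sketch of how the two are used (the lambda bodies below are stand-ins, not your actual four algorithms):

require 'benchmark'

# Placeholder algorithms standing in for the four being compared.
algorithms = {
  "algo 1" => -> { 500_000.times { |i| Math.sqrt(i) } },
  "algo 2" => -> { 500_000.times { |i| i ** 0.5 } }
}

# Benchmark.bm runs each block once and prints user/system/total/real.
Benchmark.bm(8) do |x|
  algorithms.each do |name, algo|
    x.report(name) { algo.call }
  end
end

# Benchmark.bmbm first does a "Rehearsal" pass whose results are discarded,
# then the real pass, so shared warm-up costs don't penalize whichever
# block happens to run first.
Benchmark.bmbm(8) do |x|
  algorithms.each do |name, algo|
    x.report(name) { algo.call }
  end
end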

Licensed under: CC-BY-SA with attribution