Question

I'm looking for a way to benchmark method calls in C#.

I have coded a data structure for a university assignment, and I've just come up with a way to optimize it a bit - one that would add a little overhead in all situations, while turning an O(n) call into O(1) in some.

Now I want to run both versions against the test data to see whether the optimization is worth implementing. I know that in Ruby you can wrap the code in a Benchmark block and have it output the time needed to execute the block to the console - is there something like that available for C#?

Solution

You could use the built-in Stopwatch class, which, per the documentation, "provides a set of methods and properties that you can use to accurately measure elapsed time", if you are looking for a manual way to do it. I'm not sure about an automated one, though.
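
A minimal sketch of that manual approach, written as a C# 9 top-level program (MyMethod is just a placeholder for the call you want to measure):

using System;
using System.Diagnostics;

Stopwatch sw = Stopwatch.StartNew(); // starts timing immediately
MyMethod();                          // placeholder: the code under test
sw.Stop();
Console.WriteLine("Elapsed: {0} ms", sw.ElapsedMilliseconds);

static void MyMethod() { /* ... whatever you want to time ... */ }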

OTHER TIPS

Stolen (and modified) from Yuriy's answer:

private static void Benchmark(Action act, int iterations)
{
    GC.Collect(); // start from a clean heap so a pending collection doesn't skew the timing
    act.Invoke(); // run once outside of loop to avoid initialization costs
    Stopwatch sw = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
    {
        act.Invoke();
    }
    sw.Stop();
    // cast to double first; integer division would truncate the per-iteration average
    Console.WriteLine(((double)sw.ElapsedMilliseconds / iterations).ToString());
}

Often a particular method has to initialize some things, and you don't always want to include those initialization costs in your overall benchmark. Also, you want to divide the total execution time by the number of iterations, so that your estimate is more or less independent of how many you run.
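
Usage is then just a matter of wrapping each version of the code under test in a delegate. A hedged example (myStructure, LookupOriginal, LookupOptimized and key are hypothetical names standing in for the two implementations being compared):

Benchmark(() => myStructure.LookupOriginal(key), 100000);  // the O(n) version
Benchmark(() => myStructure.LookupOptimized(key), 100000); // the O(1) version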

I stole most of the following from Jon Skeet's method for benchmarking:

private static void Benchmark(Action act, int iterations)
{
    GC.Collect(); // start from a clean heap
    Stopwatch sw = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
    {
        act.Invoke();
    }
    sw.Stop();
    Console.WriteLine(sw.ElapsedMilliseconds); // total time for all iterations, in ms
}

Here are some things I've found by trial and error.

  1. Discard the first batch of iterations (thousands of them). They will most likely be affected by the JITter.
  2. Running the benchmark on a separate Thread object can give better and more stable results. I don't know why.
  3. I've seen some people use Thread.Sleep for whatever reason before executing the benchmark. This will only make things worse; I don't know why. Possibly due to the JITter.
  4. Never run the benchmark with debugging enabled. The code will most likely run orders of magnitude slower.
  5. Compile your application with all optimizations enabled. Some code can be drastically affected by optimization, while other code will not be, so compiling without optimization will affect the reliability of your benchmark.
  6. When compiling with optimizations enabled, it is sometimes necessary to somehow consume the output of the benchmark (e.g. print a value). Otherwise the compiler may 'figure out' that some computations are useless and simply not perform them (the sketch after this list illustrates this, along with points 1 and 7).
  7. Invocation of delegates can have noticeable overhead when performing certain benchmarks. It is better to put more than one iteration inside the delegate, so that the overhead has little effect on the result of the benchmark.
  8. Profilers can have their own overhead. They're good at telling you which parts of your code are bottlenecks, but they're not good at actually benchmarking two different things reliably.
  9. In general, fancy benchmarking solutions can have noticeable overhead. For example, if you want to benchmark many objects using one interface, it may be tempting to wrap every object in a class. However, remember that the class constructor also has overhead that must be taken into account. It is better to keep everything as simple and direct as possible.
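
Putting several of those points together (1, 6 and 7 in particular), here is a rough harness sketch; the Func<int> signature, the warm-up count and the sink variable are my own assumptions, not a canonical recipe:

using System;
using System.Diagnostics;

static class Harness
{
    // act should return a value (point 6) and should itself loop many times
    // internally, so that delegate-invocation overhead is amortized (point 7)
    public static void Run(Func<int> act, int warmup, int iterations)
    {
        int sink = 0;
        for (int i = 0; i < warmup; i++)
            sink += act(); // discarded warm-up batch lets the JITter settle (point 1)

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            sink += act();
        sw.Stop();

        // printing sink keeps the optimizer from eliminating the work as dead code
        Console.WriteLine("{0} ms (sink={1})", sw.ElapsedMilliseconds, sink);
    }
}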

Sounds like you want a profiler. I would strongly recommend the EQATEC profiler myself; it is the best free one I've tried. The nice thing about a profiler, compared with the simple Stopwatch approach, is that it also gives you a breakdown of performance per method or block.

Profilers give the best overall picture, since they instrument all of your code; however, they also slow it down a lot. Profilers are for finding bottlenecks, not for timing two alternatives precisely.

For optimizing an algorithm, once you know where the bottlenecks are, use a dictionary of name --> Stopwatch to keep track of the performance-critical sections at run time, as sketched below.
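
A minimal sketch of that idea, assuming you pick your own section names (SectionTimes and the labels here are made up for illustration):

using System;
using System.Collections.Generic;
using System.Diagnostics;

static class SectionTimes
{
    static readonly Dictionary<string, Stopwatch> Timers = new Dictionary<string, Stopwatch>();

    public static void Start(string name)
    {
        if (!Timers.ContainsKey(name))
            Timers[name] = new Stopwatch();
        Timers[name].Start(); // Start() resumes without resetting, so times accumulate
    }

    public static void Stop(string name) => Timers[name].Stop();

    public static void Report()
    {
        foreach (var pair in Timers)
            Console.WriteLine("{0}: {1} ms", pair.Key, pair.Value.ElapsedMilliseconds);
    }
}

Wrap each critical section in SectionTimes.Start("insert") / SectionTimes.Stop("insert") and call SectionTimes.Report() at the end of the run.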

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow