Question

I am measuring the time of sorting algorithms like Bubble, Insertion, Selection and Quick sort. I am using this for my purpose:

    long int before = GetTickCount();
    QuickSort(pole, 0, dlzka - 1);
    long int after = GetTickCount();
    double dif = double(after - before);
    cout << "Quick Sort with time " << dif << " ms" << endl;

I am sorting an array of 30,000 integers, and this works fine for the other sorts, but Quick sort is probably so fast that it sorts the 30k integers in less than 1 ms, and then my timer says 0 ms, which looks like a mistake. I want it to print, for example, 0.01 ms just so it looks like it ran correctly. Thank you.


Solution

When you benchmark, you never benchmark just one run. Your timer is not precise/accurate enough to give meaningful results across that tiny amount of time.

For example, the documentation for GetTickCount says:

The resolution of the GetTickCount function is limited to the resolution of the system timer, which is typically in the range of 10 milliseconds to 16 milliseconds.

So, it is plainly obvious that obtaining a value of 0.01 ms this way is folly.

Instead, benchmark many runs, then divide by the number of times you ran it.
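
For example, here is a minimal sketch with GetTickCount, assuming the QuickSort, pole and dlzka from the question (the run count of 1000 is arbitrary; raise it until the total is a reasonable number of milliseconds):

    #include <windows.h>
    #include <iostream>

    // Sketch: time many repetitions, then report the average per run.
    // QuickSort, pole and dlzka are assumed from the question.
    const int runs = 1000;
    DWORD before = GetTickCount();
    for (int i = 0; i < runs; ++i)
        QuickSort(pole, 0, dlzka - 1);  // caveat: after the first pass the data is already sorted
    DWORD after = GetTickCount();
    std::cout << "Quick Sort average: "
              << double(after - before) / runs << " ms" << std::endl;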

OTHER TIPS

Put your code into a loop that you run 1000 times, with the clock started and stopped outside of that loop. Then divide the result by 1000. Or, if you like, skip the division and read the total as-is: a result of N ms over 1000 runs is N µs per run.

If your loop is very fast, you may need more than 1000 repetitions to get a meaningful measurement. You could run it 10,000, 100,000, etc. times until you get a "reasonable number of milliseconds".

When the piece of code you are testing is very fast, the overhead of the loop may become significant; in that case, you might run an "empty loop" and subtract the two results to give you the "net" timing of the inner part of the loop only.
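
A minimal sketch of that subtraction, using a hypothetical fastOperation() as the code under test:

    #include <windows.h>
    #include <iostream>

    // Sketch: measure an empty loop and a full loop, then subtract.
    // fastOperation() is a hypothetical placeholder for the code under test.
    const int runs = 1000000;
    volatile int sink = 0;  // discourages the compiler from deleting the empty loop

    DWORD t0 = GetTickCount();
    for (int i = 0; i < runs; ++i)
        sink = i;                        // loop overhead only
    DWORD emptyMs = GetTickCount() - t0;

    DWORD t1 = GetTickCount();
    for (int i = 0; i < runs; ++i) {
        sink = i;
        fastOperation();                 // loop overhead plus the work we care about
    }
    DWORD fullMs = GetTickCount() - t1;

    std::cout << "Net: " << double(fullMs - emptyMs) / runs << " ms per call" << std::endl;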

It is rare, however, that this is something you need to do - most often you are trying to compare different algorithms, and as long as the overhead of the loop is the same it doesn't matter that it exists - the faster algorithm will still be faster.

One more thought - and this is pretty important: if you sort things in the first pass through the loop, and your algorithm speed depends on whether the data is sorted or not, you will get a different answer for multiple passes than you get for a single pass. Thus you need to make sure that you are using the same inputs for every pass through the algorithm. This might mean that you cannot use in-place sorting, or that you copy the unsorted data back into the "to be sorted" array at every pass of the algorithm.
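
For instance, here is a sketch of the copy-back approach, again assuming the question's QuickSort, pole and dlzka:

    #include <algorithm>
    #include <vector>

    // Sketch: restore the same unsorted input before every pass, so each
    // pass sorts identical data. pole and dlzka are assumed from the question.
    const int runs = 1000;
    std::vector<int> pristine(pole, pole + dlzka);  // untouched copy of the input

    for (int i = 0; i < runs; ++i) {
        std::copy(pristine.begin(), pristine.end(), pole);  // reset to unsorted state
        QuickSort(pole, 0, dlzka - 1);
    }

The copy runs inside the timed loop, but as noted above, the same overhead applies to every algorithm you compare, so the comparison stays fair.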

Another option: there is a good article on high-precision timing that explains the use of the clock_gettime() function, with its various options and flavors. On some systems this will allow you to use higher-resolution measurements. It is still always a good idea to do multiple runs, or even multiple runs of multiple runs, so you can compute statistics and come up with a confidence interval.
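
On POSIX systems, a minimal sketch of clock_gettime with CLOCK_MONOTONIC looks like this (doWork() is a placeholder for the code under test):

    #include <time.h>
    #include <cstdio>

    // Sketch (POSIX only): nanosecond-resolution timing via clock_gettime.
    // CLOCK_MONOTONIC is not affected by system clock adjustments.
    // doWork() is a placeholder for the code under test.
    timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    doWork();
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1000.0
              + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("Elapsed: %.6f ms\n", ms);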

If you are using C++11:

std::chrono::high_resolution_clock represents the clock with the smallest tick period provided by the implementation.

If your compiler supports C++11 with std::chrono, this is the best way to measure time with high accuracy; it is cross-platform and part of the standard library. Note that the example below uses steady_clock rather than high_resolution_clock: steady_clock is guaranteed to be monotonic, which makes it the safer choice for benchmarking.

    #include <chrono>
    #include <iostream>
    #include <iomanip>

    // steady_clock is monotonic, which is what you want for benchmarking.
    std::chrono::steady_clock::time_point startTime = std::chrono::steady_clock::now();
    doWork();
    std::chrono::steady_clock::duration elapsedTime = std::chrono::steady_clock::now() - startTime;

    // Convert the native tick count to a double in seconds, then print as ms.
    double seconds = std::chrono::duration_cast<std::chrono::duration<double>>(elapsedTime).count();
    std::cout << std::fixed << std::setprecision(9)
              << "Milliseconds: " << seconds * 1000 << std::endl;

To use C++11 in GCC, you run g++ -std=c++11 -o app main.cpp. For the Visual Studio compiler, you need 2012 or higher to use chrono.
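
Putting the pieces together, a hedged sketch of a complete chrono-based benchmark for the question's setup (QuickSort, pole and dlzka assumed; the run count is arbitrary):

    #include <algorithm>
    #include <chrono>
    #include <iostream>
    #include <vector>

    // Sketch: average many timed passes, resetting the input each time
    // so every pass sorts the same unsorted data. QuickSort, pole and
    // dlzka are assumed from the question.
    const int runs = 1000;
    std::vector<int> pristine(pole, pole + dlzka);

    auto total = std::chrono::steady_clock::duration::zero();
    for (int i = 0; i < runs; ++i) {
        std::copy(pristine.begin(), pristine.end(), pole);  // untimed reset
        auto start = std::chrono::steady_clock::now();
        QuickSort(pole, 0, dlzka - 1);
        total += std::chrono::steady_clock::now() - start;
    }

    double ms = std::chrono::duration_cast<std::chrono::duration<double, std::milli>>(total).count();
    std::cout << "Quick Sort average: " << ms / runs << " ms" << std::endl;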

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow