Question

I'm using time.h in C++ to measure the timing of a function.

clock_t t = clock();
someFunction();
printf("\nTime taken: %.4fs\n", (float)(clock() - t)/CLOCKS_PER_SEC);

However, I'm always getting the time taken as 0.0000. When printed separately, clock() and t have the same value. I would like to know if there is a way to measure time precisely (maybe on the order of nanoseconds) in C++. I'm using VS2010.


Solution 2

I usually use the QueryPerformanceCounter function.

example:

#include <windows.h>

LARGE_INTEGER frequency;        // ticks per second
LARGE_INTEGER t1, t2;           // ticks
double elapsedTime;

// get ticks per second
QueryPerformanceFrequency(&frequency);

// start timer
QueryPerformanceCounter(&t1);

// do something
...

// stop timer
QueryPerformanceCounter(&t2);

// compute and print the elapsed time in millisec
elapsedTime = (t2.QuadPart - t1.QuadPart) * 1000.0 / frequency.QuadPart;
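For reference, here is a minimal self-contained sketch of the same approach (my own addition, not from the original answer); Sleep(50) is only a placeholder workload standing in for the code you actually want to measure.

#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER frequency, t1, t2;

    QueryPerformanceFrequency(&frequency);   // ticks per second
    QueryPerformanceCounter(&t1);            // start timer

    Sleep(50);   // placeholder workload -- replace with the function you want to time

    QueryPerformanceCounter(&t2);            // stop timer

    // compute and print the elapsed time in milliseconds
    double elapsedTime = (t2.QuadPart - t1.QuadPart) * 1000.0 / frequency.QuadPart;
    printf("Elapsed: %.4f ms\n", elapsedTime);
    return 0;
}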

OTHER TIPS

C++11 introduced the chrono API, which you can use to get nanosecond resolution:

#include <chrono>
#include <iostream>

auto begin = std::chrono::high_resolution_clock::now();

// code to benchmark

auto end = std::chrono::high_resolution_clock::now();
std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(end-begin).count() << "ns" << std::endl;
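One aside (not part of the original answer): on some standard library implementations high_resolution_clock is an alias for system_clock, which can jump when the wall clock is adjusted, so std::chrono::steady_clock is often the safer choice for benchmarking. A minimal sketch of the same pattern:

#include <chrono>
#include <iostream>

int main()
{
    auto begin = std::chrono::steady_clock::now();   // monotonic clock, unaffected by wall-clock changes

    // code to benchmark

    auto end = std::chrono::steady_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin).count() << "ns" << std::endl;
    return 0;
}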

For a more representative value, it is good to run the function several times and compute the average:

auto begin = std::chrono::high_resolution_clock::now();
uint32_t iterations = 10000;
for(uint32_t i = 0; i < iterations; ++i)
{
    // code to benchmark
}
auto end = std::chrono::high_resolution_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::nanoseconds>(end-begin).count();
std::cout << duration << "ns total, average : " << duration / iterations << "ns." << std::endl;

But remember that the for loop itself and the assignments to the begin and end variables use some CPU time too.
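One rough way to account for that overhead (my own addition, not part of the original answer) is to time an empty loop of the same length and subtract it; the volatile sink below keeps the compiler from removing the empty loop entirely.

#include <chrono>
#include <cstdint>
#include <iostream>

int main()
{
    const uint32_t iterations = 10000;
    volatile uint32_t sink = 0;   // prevents the loops from being optimized away

    // measure the loop/bookkeeping overhead with a trivial body
    auto begin = std::chrono::high_resolution_clock::now();
    for (uint32_t i = 0; i < iterations; ++i)
        sink = i;
    auto end = std::chrono::high_resolution_clock::now();
    auto overhead = std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin).count();

    // measure the real workload
    begin = std::chrono::high_resolution_clock::now();
    for (uint32_t i = 0; i < iterations; ++i)
    {
        sink = i;   // replace with the code to benchmark
    }
    end = std::chrono::high_resolution_clock::now();
    auto total = std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin).count();

    // the difference can be slightly noisy, but it removes most of the loop cost
    std::cout << "average per iteration, overhead subtracted: "
              << (total - overhead) / iterations << "ns." << std::endl;
    return 0;
}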

The following text, which I completely agree with, is quoted from Optimizing software in C++ (good reading for any C++ programmer):

The time measurements may require a very high resolution if time intervals are short. In Windows, you can use the GetTickCount or QueryPerformanceCounter functions for millisecond resolution. A much higher resolution can be obtained with the time stamp counter in the CPU, which counts at the CPU clock frequency.

There is a problem that "the clock frequency may vary dynamically and that measurements are unstable due to interrupts and task switches."

In C or C++ I usually do it like below (note that gettimeofday is POSIX, so it is not available out of the box with VS2010). If that still isn't precise enough, you may consider using rdtsc; see the sketch after the snippet.

#include <sys/time.h>

struct timeval time;
gettimeofday(&time, NULL);  // start time

// milliseconds since the epoch; use long long so tv_sec * 1000 cannot overflow
long long totalTime = (time.tv_sec * 1000LL) + (time.tv_usec / 1000);

// ........ call your functions here

gettimeofday(&time, NULL);  // end time

// elapsed time in milliseconds
totalTime = ((time.tv_sec * 1000LL) + (time.tv_usec / 1000)) - totalTime;
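For completeness, here is a minimal sketch of the rdtsc approach mentioned above, using the __rdtsc() intrinsic from <intrin.h> (available in MSVC, including VS2010). The result is in CPU cycles, not seconds, and is subject to the frequency-variation caveats quoted earlier.

#include <intrin.h>    // MSVC intrinsics header providing __rdtsc(); GCC/Clang offer it via <x86intrin.h>
#include <iostream>

int main()
{
    unsigned long long start = __rdtsc();   // read the CPU time stamp counter

    // ........ call your functions here

    unsigned long long end = __rdtsc();

    // elapsed CPU cycles; convert to time only if you know the TSC frequency
    std::cout << "Elapsed cycles: " << (end - start) << std::endl;
    return 0;
}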
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow