Question

As has been known for a while (see, e.g., this old question, and the bug reports that pop up when you google this), clock_gettime() doesn't appear to report time back monotonically. To rule out any silly error I might have overlooked, here is the relevant code (an excerpt from a larger program):

#include <time.h>

long nano_1, nano_2;
double delta;
struct timespec tspec, *tspec_ptr = &tspec;

clock_gettime(CLOCK_MONOTONIC_RAW, tspec_ptr);
nano_1 = tspec.tv_nsec;
sort_selection(sorted_ptr, n);
clock_gettime(CLOCK_MONOTONIC_RAW, tspec_ptr);
nano_2 = tspec.tv_nsec;  
delta = (nano_2 - nano_1)/1000000.0;
printf("\nSelection sort took %g milliseconds.\n", delta);

Sorting small arrays (about 1,000 elements) reports plausible times. When I sort larger ones (10,000+ elements) with three sorting algorithms, one or two of the three report a negative sort time. I tried all the clock types mentioned in the man page, not only CLOCK_MONOTONIC_RAW - no change.

(1) Anything I overlooked in my code?
(2) Is there an alternative to clock_gettime() that measures time in increments finer than seconds? I don't need nanoseconds, but seconds are too coarse to really help.

System:
- Ubuntu 12.04
- kernel 3.2.0-30
- gcc 4.6.3
- libc version 2.15
- compiled with -lrt

Solution

This has nothing to do with the mythology of clock_gettime's monotonic clock not actually being monotonic (which probably has some basis in reality, but which was never well documented and was probably fixed long ago). It's just a bug in your program. tv_nsec is the nanoseconds portion of a time value that's stored as two fields:

  • tv_sec - whole seconds
  • tv_nsec - nanoseconds in the range 0 to 999999999

Of course tv_nsec is going to jump backwards from 999999999 to 0 whenever tv_sec increments. To compute the difference of two timespec structs, you need to multiply the difference in seconds by 1000000000 and add that to the difference in nanoseconds. This can quickly overflow a 32-bit type, so convert to a 64-bit type first.
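A minimal sketch of that computation (the helper name timespec_diff_ns is my own, not a standard API; the body to be timed is a placeholder):

#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Difference end - start in nanoseconds, widened to 64 bits so the
   multiplication by 1000000000 cannot overflow a 32-bit long. */
static int64_t timespec_diff_ns(struct timespec start, struct timespec end)
{
    return (int64_t)(end.tv_sec - start.tv_sec) * 1000000000LL
         + (end.tv_nsec - start.tv_nsec);
}

int main(void)
{
    struct timespec t1, t2;

    clock_gettime(CLOCK_MONOTONIC_RAW, &t1);
    /* ... code to be timed goes here ... */
    clock_gettime(CLOCK_MONOTONIC_RAW, &t2);

    printf("Took %g microseconds.\n", timespec_diff_ns(t1, t2) / 1000.0);
    return 0;
}

Because both tv_sec and tv_nsec enter the difference, the result stays correct across a seconds boundary, which is exactly where the original code went negative.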

Other Tips

Based on a bit of reading around (including the link I provided above, and How to measure the ACTUAL execution time of a C program under Linux?), it seems that getrusage() or clock() should both provide you with a "working" timer that measures only the time spent by your calculation. It does puzzle me that your approach doesn't always give a >= 0 interval, I must say.

For usage of getrusage, see http://linux.die.net/man/2/getrusage
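A rough sketch of both timers (the interval arithmetic for getrusage mirrors the timespec fix above, since ru_utime is a struct timeval with tv_sec/tv_usec fields; the bodies to be timed are placeholders):

#include <stdio.h>
#include <sys/resource.h>
#include <time.h>

int main(void)
{
    /* clock() counts CPU time in ticks, CLOCKS_PER_SEC per second. */
    clock_t c0 = clock();
    /* ... code to be timed goes here ... */
    clock_t c1 = clock();
    printf("clock(): %g ms of CPU time\n",
           (c1 - c0) * 1000.0 / CLOCKS_PER_SEC);

    /* getrusage() reports user CPU time with microsecond resolution. */
    struct rusage r0, r1;
    getrusage(RUSAGE_SELF, &r0);
    /* ... code to be timed goes here ... */
    getrusage(RUSAGE_SELF, &r1);
    long usec = (r1.ru_utime.tv_sec - r0.ru_utime.tv_sec) * 1000000L
              + (r1.ru_utime.tv_usec - r0.ru_utime.tv_usec);
    printf("getrusage(): %ld microseconds of user CPU time\n", usec);
    return 0;
}

Both measure CPU time rather than wall-clock time, so they won't go backwards, but they also won't count time spent sleeping or blocked.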

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow