Question

I was running some C++ code, and I noticed that on Windows 7, CLOCKS_PER_SEC gives 1000, while on Linux Fedora 16 it gives 1000000. Can anyone explain this behaviour?


Solution

What's to justify? CLOCKS_PER_SEC is implementation defined, and can be anything. All it indicates is the units of the value returned by the function clock(). It doesn't even indicate the resolution of clock(): POSIX requires it to be 1000000, regardless of the actual resolution. If Windows is returning 1000, that's probably not the actual resolution either. (I find that my Linux box has a resolution of 10 ms, and my Windows box 15 ms.)

Other tips

Basically, the implementation of the clock() function has some leeway across operating systems. On Linux Fedora, the clock ticks faster: one million times a second.

This clock tick is distinct from the clock rate of your CPU; it lives at a different layer of abstraction. Windows tries to make the number of clock ticks equal the number of milliseconds.

This macro expands to an expression representing the number of clock ticks in a second, as returned by the function clock.

Dividing a count of clock ticks by this expression yields the number of seconds.

CLK_TCK is an obsolete alias of this macro.

Reference: http://www.cplusplus.com/reference/clibrary/ctime/CLOCKS_PER_SEC/

You should also know that the Windows implementation is not suited to true real-time applications. Its 1000-tick clock is derived by dividing a hardware clock by a power of 2, which actually yields a 1024-tick clock. To present it as a 1000-tick clock, Windows skips certain ticks, meaning some ticks are longer than others!

A separate hardware clock (not the CPU clock) is normally used for timing. Reference: http://en.wikipedia.org/wiki/Real-time_clock

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow