Question

I'm wondering what the precision of the Timer class is in System.Timers, because it's a double (which would seem to indicate that you can have fractions of milliseconds). What is it?


Solution

Windows desktop OSes really aren't accurate below about 40 ms. The OS simply isn't real-time and therefore presents significant non-deterministic jitter. That means that while it may report values down to the millisecond or even smaller, you can't count on those values to be really meaningful. So even if the Timer interval gets set to some sub-millisecond value, you can't rely on the time between setting the timer and it firing to actually be what you asked for.
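
A quick way to see that jitter for yourself is a minimal sketch like the one below (not part of the original answer; the loop count is arbitrary): ask the scheduler for a 1 ms sleep and measure what you actually get. On a default Windows timer resolution the measured times are usually well above 1 ms and vary from run to run.

    using System;
    using System.Diagnostics;
    using System.Threading;

    class SleepJitterDemo
    {
        static void Main()
        {
            // Request a 1 ms sleep repeatedly and measure the actual delay.
            // On a stock Windows configuration the results are usually well
            // above 1 ms and vary from run to run, which is the jitter
            // described above.
            var sw = new Stopwatch();
            for (int i = 0; i < 10; i++)
            {
                sw.Restart();
                Thread.Sleep(1);
                sw.Stop();
                Console.WriteLine($"Requested 1 ms, got {sw.Elapsed.TotalMilliseconds:F3} ms");
            }
        }
    }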

Add to this the fact that the entire framework you're running under is non-deterministic (the GC could suspend you and do a collection during the time when the Timer should fire) and you end up with loads and loads of risk trying to do anything that is time-critical.

OTHER TIPS

System.Timers.Timer is weird. It uses a double as the interval, but in fact calls Math.Ceiling on it and casts the result to an int for use with an underlying System.Threading.Timer. Its theoretical precision is therefore 1 ms, and you can't specify an interval that exceeds 2,147,483,647 ms. Given this, I really don't know why a double is used for the interval parameter.
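
To see the effect of that rounding in isolation, here is a minimal sketch that reproduces the Math.Ceiling/int cast by hand (the sample values are arbitrary):

    using System;

    class IntervalRoundingDemo
    {
        static void Main()
        {
            // Reproduce the rounding described above: the double interval is
            // rounded up to a whole number of milliseconds before it reaches
            // the underlying System.Threading.Timer.
            foreach (double requested in new[] { 0.1, 0.9765625, 1.5, 100.25 })
            {
                int effective = (int)Math.Ceiling(requested);
                Console.WriteLine($"Requested {requested} ms -> effective {effective} ms");
            }
        }
    }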

A couple of years ago I found it to be accurate to about 16ms... but I unfortunately don't remember the details.

You can find this out easily for yourself by running a loop that constantly samples the current time and checking the granularity with which it steps.
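
A minimal sketch of such a loop, assuming Environment.TickCount as the clock being sampled (any other time source works the same way); the gaps between successive values show the step size on your machine:

    using System;
    using System.Collections.Generic;

    class ClockGranularityDemo
    {
        static void Main()
        {
            // Spin and record every change in Environment.TickCount; the size
            // of the jumps between successive values is the clock granularity
            // (often around 10-16 ms on a default Windows setup).
            var steps = new List<int>();
            int last = Environment.TickCount;
            while (steps.Count < 10)
            {
                int now = Environment.TickCount;
                if (now != last)
                {
                    steps.Add(now - last);
                    last = now;
                }
            }
            Console.WriteLine("Observed steps (ms): " + string.Join(", ", steps));
        }
    }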

I compared System.Timers.Timer and System.Threading.Timer: both give systematic errors of around 8-16 ms, especially on small intervals (1000-6000 ms). Each subsequent call of the timer routine occurred at an increasing offset from the first call. For example, a timer with a 2000 ms interval fires at 2000, 4012, 6024, 8036, 10048 milliseconds and so on (timestamps obtained from Environment.TickCount and Stopwatch.ElapsedTicks; both give the same results).
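
A sketch of that kind of measurement, assuming a 2000 ms System.Timers.Timer and a Stopwatch for the timestamps (the interval and run length are arbitrary choices):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class TimerDriftDemo
    {
        static void Main()
        {
            // Log the Stopwatch time at every Elapsed event of a 2000 ms timer.
            // If each callback adds a systematic error, the logged values drift
            // away from the ideal 2000, 4000, 6000, ... sequence.
            var sw = Stopwatch.StartNew();
            using var timer = new System.Timers.Timer(2000);
            timer.Elapsed += (s, e) =>
                Console.WriteLine($"Fired at {sw.ElapsedMilliseconds} ms");
            timer.AutoReset = true;
            timer.Start();
            Thread.Sleep(11000); // long enough to observe roughly five firings
        }
    }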

Timer resolution goes down into the 1 ms regime without much implementation effort. I have just described the reason for the double precision in this answer. When configuring a system to run at 1024 interrupts per second (using the multimedia timer API), the time between interrupts is 0.9765625 ms. The standard resolution for timing matters is 100 ns. Those values are stored as integers. The value 0.9765625 cannot be stored without losing accuracy in an integer at 100 ns resolution: the last digit (5) represents 500 ps. Thus the resolution has to be three orders of magnitude higher. Storing these time values in integers with 100 ps resolution is hopeless, since an 8-byte integer at 100 ps resolution would wrap after just about 21350 days, or about 58 years. This time span is too short to be accepted by anyone (remember the Y2K scenario!).
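
The arithmetic in that paragraph can be checked directly; here is a small sketch (the numbers come from the text above, the variable names are mine):

    using System;

    class ResolutionMathDemo
    {
        static void Main()
        {
            double periodMs = 1000.0 / 1024.0;          // 0.9765625 ms per interrupt
            double ticks100ns = periodMs * 10_000;      // 9765.625 -> not an integer, so
                                                        // 100 ns units lose the last 500 ps
            double ticks100ps = periodMs * 10_000_000;  // 9765625  -> exact

            Console.WriteLine($"Period: {periodMs} ms");
            Console.WriteLine($"In 100 ns units: {ticks100ns}");
            Console.WriteLine($"In 100 ps units: {ticks100ps}");

            // How long an unsigned 8-byte counter of 100 ps units lasts before it wraps:
            double secondsToWrap = ulong.MaxValue * 100e-12;
            Console.WriteLine($"Wraps after ~{secondsToWrap / 86400:F0} days " +
                              $"(~{secondsToWrap / (86400 * 365.25):F0} years)");
        }
    }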

See the linked answer to find out more about the details.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow