Question

There is timer coalescing support in Windows 7 and Windows 8; see for example this: Timer coalescing in .net
Windows 7 has a function SetWaitableTimerEx, which is claimed (here and here) to support coalescing.
Windows 8 additionally has a function SetCoalescableTimer, which supports coalescing according to MSDN.


So there is lots of talk about timer coalescing in Windows 7 and Windows 8. But it seems it may have been implemented even earlier. Is that so?


First, is it correct that SetThreadpoolTimer, available since Vista, provides timer coalescing under Vista? Or does it only offer the interface and actually implement coalescing only since Windows 7?
From "Thread Pool Timers and I/O" I can read that

"This is actually a feature that affects energy efficiency and helps reduce overall power consumption. It’s based on a technique called timer coalescing."

Is that sentence correct for all Windows versions that support SetThreadpoolTimer function?


Secondly, now that I have started wondering: I can see that timeSetEvent, available since XP, has a parameter called uResolution. Does this parameter just change the global timer resolution, like timeBeginPeriod does, for the duration of the timer event wait, or does it affect only this particular timer, thus also providing timer coalescing?


Finally, are there any additional or alternative functions that provide timer coalescing under Windows XP or Vista?


Solution

A few words in general:

Timer coalescing provides a way to reduce the number of interrupts. Applications are allowed to specify a tolerance for their timing demands. This allows the operating system to "batch" interrupts with a couple of consequences:

  • the number of interrupts may be reduced. (+)
  • the number of context switches may be lower. (+)
  • the power consumption may be reduced. (+)
  • a bulk of operations may have to be done at those batched interrupts (-)
  • the scheduler may have to schedule a large number of processes at this time (-)
  • the resolution in time is worse (-)

Windows, like other interrupt-based operating systems, has always "batched" timed events. Anything set up to occur at a specific time relies on a due time expiring with an interrupt. Consequently, the events are coalesced with the interrupt. The granularity of this scheme is determined by the interrupt frequency. A must-read for those interested in timer coalescing: MSDN: Windows Timer Coalescing.

For performance reasons every effort should be made to reduce the number of interrupts as much as possible. Unfortunately, many software packages set the system timer resolution very high, e.g. by means of the multimedia timer interface timeBeginPeriod / timeEndPeriod or the underlying API NtSetTimerResolution. As Hans mentioned: "Chrome" is a good example of how the use of these functions can be badly exaggerated.


Secondly, now that I started wondering... timeSetEvent is one of the multimedia timer functions. It uses timeBeginPeriod under the hood.

And it uses it badly: it sets the system timer resolution to match uResolution as closely as the timer resolutions available on the executing platform allow. For large values of uDelay it could wait at low resolution until it gets close to the expiry of the delay and only then raise the system timer resolution, but instead it keeps the resolution at the specified uResolution for the entire wait period. That is painful, knowing that the high resolution then applies to long delays as well. Admittedly, the multimedia timer functions are not intended for use with large delays, but setting the resolution over and over again isn't good either (see the notes below).

Summary on timeSetEvent: this function does not do anything like coalescing at all; what it does is the opposite: it optionally increases the number of interrupts. In this sense it spreads events over more interrupts; it "de-batches" them.
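
For reference, a minimal one-shot sketch of the multimedia timer API (Windows only, link with winmm.lib). The uResolution argument of 1 ms is what drags the *global* timer resolution up for the whole wait:

```c
#include <windows.h>
#include <mmsystem.h>   /* timeSetEvent; link with winmm.lib */
#include <stdio.h>

/* The callback runs on a winmm worker thread. */
static void CALLBACK on_timer(UINT id, UINT msg, DWORD_PTR user,
                              DWORD_PTR r1, DWORD_PTR r2)
{
    printf("timer %u fired\n", id);
}

int main(void)
{
    /* 10 ms delay, 1 ms requested resolution: winmm raises the global
     * system timer resolution to ~1 ms for the lifetime of the event,
     * i.e. for the entire wait - the opposite of coalescing. */
    MMRESULT id = timeSetEvent(10, 1, on_timer, 0, TIME_ONESHOT);
    if (id == 0)
        return 1;
    Sleep(50);          /* give the one-shot a chance to fire */
    return 0;
}
```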

SetThreadpoolTimer introduced the idea of "batching" events for the first time. This was primarily forced by increasing complaints about battery lifetime on Windows notebooks. SetWaitableTimerEx pushed that strategy further, and SetCoalescableTimer is the most recent API to access coalescing timers. The latter introduces TIMERV_DEFAULT_COALESCING and TIMERV_NO_COALESCING, which are worth thinking about, since they let the caller either leave the tolerance entirely to the system or opt out of coalescing altogether.
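
A hedged sketch of the Windows 7 variant (Windows only; requires _WIN32_WINNT >= 0x0601). The last argument of SetWaitableTimerEx is the tolerable delay in milliseconds, which is exactly the coalescing knob discussed above:

```c
#define _WIN32_WINNT 0x0601   /* SetWaitableTimerEx needs Windows 7+ */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);
    if (!timer)
        return 1;

    LARGE_INTEGER due;
    due.QuadPart = -500LL * 10000;   /* relative 500 ms in 100 ns units */

    /* 50 ms tolerance: the kernel may align this expiry with other
     * pending timers instead of programming a dedicated interrupt. */
    if (!SetWaitableTimerEx(timer, &due, 0, NULL, NULL, NULL, 50))
        return 1;

    WaitForSingleObject(timer, INFINITE);
    puts("expired somewhere within [due, due + tolerance]");
    CloseHandle(timer);
    return 0;
}
```

The other APIs expose the same knob under different names: SetThreadpoolTimer takes a msWindowLength parameter, and SetCoalescableTimer takes uToleranceDelay (where TIMERV_DEFAULT_COALESCING and TIMERV_NO_COALESCING can stand in for an explicit value).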


Taking the opportunity for some notes on system timer resolutions:

Changing the system timer resolution has more consequences than just an increased interrupt frequency. Some effects that come along with the use of timeBeginPeriod / NtSetTimerResolution:

  1. Interrupt frequency changes
  2. Thread quantum changes (a thread's time slice) (!)
  3. Hiccups of the system time (MSDN: "...frequent calls can significantly affect the system clock")
  4. Hiccups when a system time adjustment is active (SetSystemTimeAdjustment)

Point 3 was partly taken care of in Windows 7, and point 4 was only addressed in Windows 8.1. Hiccups of the system time can be as big as the minimum supported timer resolution (15.625 ms on typical systems), and they accumulate when timeBeginPeriod / NtSetTimerResolution are called frequently. This may result in a considerable jump when trying to adjust the system time to match an NTP reference. NTP clients need to operate at high timer resolution to obtain reasonable accuracy when running on Windows versions < Windows 8.

Finally: Windows itself changes the system timer resolution whenever it sees an advantage in doing so. The number of supported timer resolutions depends on the underlying hardware and the Windows version. A list of available resolutions may be obtained by scanning through them: calling timeBeginPeriod with increasing periods, each followed by a call to NtQueryTimerResolution. Some of the supported resolutions may be "disliked" by Windows on specific platforms and modified to better suit Windows' needs. Example: XP may change a "user set" resolution of ~4 ms to 1 ms after a short period of time on certain platforms. Windows versions < 8.1 in particular change the timer resolution at unpredictable times.
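
The scan just described can be sketched like this (Windows only; NtQueryTimerResolution is an undocumented ntdll export, so its prototype is declared by hand, and the resulting list depends on hardware and Windows version):

```c
#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "winmm.lib")   /* timeBeginPeriod / timeEndPeriod */

/* NtQueryTimerResolution reports resolutions in 100 ns units. */
typedef LONG (__stdcall *QueryRes_t)(PULONG min, PULONG max, PULONG cur);

int main(void)
{
    QueryRes_t NtQueryTimerResolution = (QueryRes_t)GetProcAddress(
        GetModuleHandleA("ntdll.dll"), "NtQueryTimerResolution");
    if (!NtQueryTimerResolution)
        return 1;

    ULONG min, max, cur, last = 0;
    NtQueryTimerResolution(&min, &max, &cur);
    printf("range: %.3f ms .. %.3f ms\n", max / 10000.0, min / 10000.0);

    /* Request each period (ms) in turn and record which actual
     * resolutions the platform grants. */
    ULONG coarsest_ms = min / 10000;
    for (UINT period = 1; period <= coarsest_ms; ++period) {
        if (timeBeginPeriod(period) != TIMERR_NOERROR)
            continue;
        NtQueryTimerResolution(&min, &max, &cur);
        if (cur != last) {              /* a new actual resolution */
            printf("timeBeginPeriod(%u) -> actual %.3f ms\n",
                   period, cur / 10000.0);
            last = cur;
        }
        timeEndPeriod(period);          /* undo before the next probe */
    }
    return 0;
}
```

Note that the granted resolution ("actual") can differ from the requested period, which is exactly the "disliked and modified" behavior described above.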

If an application needs to be completely independent of these artefacts, it has to acquire the highest available resolution on its own. This way the application dominates the system-wide resolution and does not have to bother about other applications or the OS changing timer resolutions. More modern platforms support a timer resolution of 0.5 ms. timeBeginPeriod cannot acquire this resolution, but NtSetTimerResolution can. Here I've described how to use NtSetTimerResolution to obtain 0.5 ms resolution.

Power consumption is likely to rise under such conditions, but that's the fee to pay for reliable resolution: the energy cost of a context switch is typically 0.05 mJ to 0.2 mJ on modern hardware (has anyone estimated the worldwide total number of context switches per year?). Windows cuts the thread quantum (time slice) to approx. 2/3 when the timer resolution is set to maximum. Consequently, the power consumption rises by approx. 30%!

License: CC-BY-SA with attribution
Not affiliated with StackOverflow