Question

How can you keep track of time in a simple embedded system, given that you need a fixed-point representation of the time in seconds, and that the time between ticks is not precisely expressible in that fixed-point format? How do you avoid cumulative error in those circumstances?

This question is a reaction to this article on Slashdot.

0.1 seconds cannot be neatly expressed as a binary fixed-point number, just as 1/3 cannot be neatly expressed as a decimal fixed-point number. Any binary fixed-point representation has a small error. For example, if there are 8 binary bits after the point (i.e. using an integer value scaled by 256), 0.1 times 256 is 25.6, which will be rounded to either 25 or 26, resulting in an error on the order of -2.3% or +1.6% respectively. Adding more binary bits after the point reduces the scale of this error, but cannot eliminate it.

With repeated addition, the error gradually accumulates. For example, adding the rounded-down value 25 once per tick means that after one hour (36,000 ticks) the clock reads 36000 × 25 / 256 = 3515.625 seconds instead of 3600 - more than 84 seconds slow.

How can this be avoided?

Solution

One approach is not to try to compute the time by repeated addition of this 0.1 seconds constant, but to keep a simple integer clock-tick count. This tick count can be converted to a fixed-point time in seconds as needed, usually using a multiplication followed by a division. Given sufficient bits in the intermediate representations, this approach allows for any rational scaling, and doesn't accumulate errors.

For example, if the current tick count is 1024, we can get the current time (in fixed point with 8 bits after the point) by multiplying that by 256, then dividing by 10 - or equivalently, by multiplying by 128 then dividing by 5. Either way, there is an error (the remainder in the division), but the error is bounded since the remainder is always less than 5. There is no cumulative error.
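
As a minimal C sketch of this conversion (assuming a 32-bit tick counter, a 10 Hz tick, and a result with 8 bits after the point; the function name is illustrative):

    #include <stdint.h>

    /* Convert a 10 Hz tick count to fixed-point seconds with 8 bits
     * after the point: seconds = ticks * 256 / 10. The multiplication
     * is done in a 64-bit intermediate so it cannot overflow. The only
     * error is the final truncation, which is bounded by the divisor
     * and does not accumulate. (Widen the return type if the clock
     * must run longer than about 194 days.) */
    static uint32_t ticks_to_q8_seconds(uint32_t ticks)
    {
        return (uint32_t)(((uint64_t)ticks * 256u) / 10u);
    }

For the tick count 1024 above, this returns 26214, i.e. 102.3984375 seconds against a true elapsed time of 102.4 seconds.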

OTHER TIPS

Another approach might be useful in contexts where integer multiplication and division are considered too costly (which should be getting pretty rare these days). It borrows an idea from Bresenham's line-drawing algorithm. You keep the current time in fixed point (rather than a tick count), but you also keep an error term. When the error term grows too large, you apply a correction to the time value, thus preventing the error from accumulating.

In the 8-bits-after-the-point example, the representation of 0.1 seconds is 25 (256/10) with an error term (remainder) of 6. At each step, we add 6 to our error accumulator. Based on this so far, the first two steps are...

Clock  Seconds  Error
-----  -------  -----
 25    0.0977    6
 50    0.1953   12

At the second step, the error value has overflowed - it has reached or exceeded 10. Therefore, we increment the clock and subtract 10 from the error. This happens every time the error value reaches 10 or higher.

Therefore, the actual sequence is...

Clock  Seconds  Error  Overflowed?
-----  -------  -----  -----------
 25    0.0977    6
 51    0.1992    2      Yes
 76    0.2969    8
102    0.3984    4      Yes

There is almost always an error (the clock is precisely correct only when the error value is zero), but the error is bounded by a small constant. There is no cumulative error in the clock value.
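
A minimal C sketch of this scheme, assuming a 10 Hz tick interrupt and the 8-bits-after-the-point format from the example (names are illustrative):

    #include <stdint.h>

    #define STEP      25u  /* floor(256 / 10): 0.1 s rounded down */
    #define STEP_ERR   6u  /* 256 % 10: error added per tick      */
    #define ERR_WRAP  10u  /* the divisor: correction threshold   */

    static uint32_t clock_q8;   /* time in seconds, 8 fractional bits */
    static uint32_t clock_err;  /* error accumulator, always < 10     */

    /* Called once per 0.1 s hardware tick. */
    void clock_tick(void)
    {
        clock_q8  += STEP;
        clock_err += STEP_ERR;
        if (clock_err >= ERR_WRAP) {
            clock_err -= ERR_WRAP;
            clock_q8  += 1;  /* correction: yields 25, 51, 76, 102, ... */
        }
    }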

A hardware-only solution is to arrange for the hardware clock ticks to run very slightly fast - precisely fast enough to compensate for the cumulative loss caused by the rounding-down of the repeatedly added tick-duration value. That is, adjust the hardware clock tick speed so that the fixed-point tick-duration value is precisely correct. In the 8-bit example, each tick would last exactly 25/256 s (0.09765625 s), meaning the tick clock runs at 10.24 Hz rather than 10 Hz.

This only works if there is only one fixed-point format used for the clock.

Why not keep a 0.1-second counter, and every ten ticks increment your seconds counter and wrap the 0.1-second counter back to 0?
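
A minimal sketch of that nested-counter idea in C (assuming a 0.1 s tick interrupt; the names are illustrative):

    #include <stdint.h>

    static uint8_t  tenths;   /* 0..9, wraps every second  */
    static uint32_t seconds;  /* whole seconds since start */

    /* Called once per 0.1 s hardware tick. */
    void tick_isr(void)
    {
        if (++tenths >= 10) {
            tenths = 0;
            seconds++;
        }
    }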

In this particular instance, I would have simply kept the time count in tenths of a second (or milliseconds, or whatever time scale is appropriate for the application). I do this all the time in small systems or control systems.

So a time value of 100 hours would be stored as 3_600_000 ticks - zero error (other than error that might be introduced by hardware).

The problems that are introduced by this simple technique are:

  • you need to account for the larger numbers. For example, you may have to use a 64-bit counter rather than a 32-bit counter
  • all your calculations need to be aware of the units used - this is the area most likely to cause problems. I try to mitigate this by using time counters with a uniform unit. For example, this particular counter needs only 10 ticks per second, but another counter might need millisecond precision. In that case, I'd consider making both counters millisecond precision so they use the same units, even though one doesn't really need that precision.

I've also had to play some other tricks with timers that aren't 'regular'. For example, I worked on a device that required a data acquisition to occur 300 times a second. The hardware timer fired once a millisecond. There's no way to scale the millisecond timer to get exactly 1/300th-of-a-second units. So we had logic that performed the data acquisition on a repeating pattern of 3, 3, and 4 ticks - 3 + 3 + 4 = 10 ms for every three acquisitions, which is exactly 300 per second - to keep the acquisition from drifting.
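
A sketch of that 3-3-4 scheduling in C (the pattern is from the description above; the function and hook names are an illustrative reconstruction, not the original code):

    #include <stdint.h>

    extern void acquire_sample(void);  /* hypothetical acquisition hook */

    static const uint8_t intervals[3] = { 3, 3, 4 };  /* sums to 10 ms */
    static uint8_t phase;          /* index into intervals[]           */
    static uint8_t countdown = 3;  /* ms remaining until next sample   */

    /* Called once per 1 ms hardware timer tick. */
    void millisecond_isr(void)
    {
        if (--countdown == 0) {
            acquire_sample();  /* fires 3 times per 10 ms = 300 Hz */
            phase = (uint8_t)((phase + 1) % 3);
            countdown = intervals[phase];
        }
    }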

If you need to deal with hardware time error, then you need more than one time source, and you must use them together to keep the overall time in sync. Depending on your needs, this can be simple or pretty complex.

Something I've seen implemented in the past: the increment value can't be expressed precisely in the fixed-point format, but it can be expressed as a fraction. (This is similar to the "keep track of an error value" solution.)

Actually, in this case the problem was slightly different, but conceptually similar - the problem wasn't a fixed-point representation as such, but deriving a timer from a clock source that wasn't a perfect multiple. We had a hardware clock that ticks at 32,768 Hz (common for a watch-crystal-based low-power timer). We wanted a millisecond timer from it.

The millisecond timer should increment every 32.768 hardware ticks. The first approximation is to increment every 33 hardware ticks, for a nominal 0.7% error. But, noting that 0.768 is 768/1000, or 96/125, you can do this:

  • Keep a variable for the "fractional" value. Start it at 0.
  • Wait for the hardware timer to count 32.
  • While true:
    • Increment the millisecond timer.
    • Add 96 to the "fractional" value.
    • If the "fractional" value is >= 125, subtract 125 from it and wait for the hardware timer to count 33.
    • Otherwise (the "fractional" value is < 125), wait for the hardware timer to count 32.

There will be some short-term "jitter" on the millisecond counter (32 vs. 33 hardware ticks), but the long-term average will be exactly 32.768 hardware ticks per millisecond.
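
A minimal C sketch of the loop above (wait_ticks() is a hypothetical helper that blocks for the given number of 32,768 Hz timer ticks, e.g. via a compare-match register):

    #include <stdint.h>

    extern void wait_ticks(uint8_t n);  /* hypothetical hardware wait */

    static uint32_t milliseconds;

    /* 1 ms = 32.768 ticks = 32 + 96/125 ticks, so alternate waits of
     * 32 and 33 ticks, steered by a fractional accumulator. Over any
     * 125 iterations there are 96 long waits and 29 short ones:
     * 96*33 + 29*32 = 4096 ticks = exactly 125 ms. */
    void ms_timer_task(void)
    {
        uint8_t frac = 0;  /* fractional accumulator, 0..124 */
        wait_ticks(32);
        for (;;) {
            milliseconds++;
            frac += 96;
            if (frac >= 125) {
                frac -= 125;
                wait_ticks(33);  /* long interval absorbs the 0.768 */
            } else {
                wait_ticks(32);  /* short interval */
            }
        }
    }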

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow