Question

Some people say that machine epsilon for double-precision floating-point numbers is 2^-53, and others (more commonly) say it's 2^-52. I have messed around estimating machine precision using integers besides 1 and approaching from above and below (in MATLAB), and have gotten both values as results. Why is it that both values can be observed in practice? I thought it should always produce an epsilon around 2^-52.


Solution

There's an inherent ambiguity about the term "machine epsilon", so to fix this, it is commonly defined to be the difference between 1 and the next bigger representable number. (This number is actually (and not by accident) obtained by literally incrementing the binary representation by one.)

The IEEE 754 64-bit float has 52 explicit mantissa bits, so 53 including the implicit leading 1. So the two consecutive numbers are:

1.0000  .....  0000
1.0000  .....  0001
  \-- 52 digits --/

So the difference between the two is 2^-52.
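This gap can be checked directly. A minimal Python sketch (assuming Python 3.9+, which provides math.nextafter):

```python
import math

# The next representable double above 1.0 differs from it by exactly
# one unit in the last place, i.e. 2**-52.
next_up = math.nextafter(1.0, 2.0)   # smallest double strictly above 1.0
print(next_up - 1.0 == 2.0**-52)     # True
```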

OTHER TIPS

It depends on which way you round.

1 + 2^-53 is exactly half way between 1 and 1 + 2^-52, which are consecutive in double-precision floating point. So if you round it up, it is different from 1; if you round it down, it is equal to 1.
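The tie-breaking is easy to observe. A Python sketch (the IEEE 754 default rounding mode, round-to-nearest-even, is assumed):

```python
half_ulp = 2.0**-53                   # exactly halfway between 1.0 and 1.0 + 2**-52

# Round-to-nearest-even resolves the tie toward 1.0, whose last
# mantissa bit is 0 (even), so the sum collapses back to 1.0:
print(1.0 + half_ulp == 1.0)          # True

# Anything even slightly above half an ulp rounds up instead:
print(1.0 + half_ulp * (1.0 + 2.0**-40) > 1.0)  # True
```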

There are actually two definitions of "machine precision" which sound quite identical on first sight, but aren't, as they yield different values for the "machine epsilon":

  1. The machine epsilon is the smallest floating-point number eps1 such that 1.0 + eps1 > 1.0.
  2. The machine epsilon is the difference eps2 = x - 1.0, where x is the smallest representable floating-point number with x > 1.0.

Strictly mathematically speaking, the definitions are equivalent, i.e. eps1 == eps2, but we're not talking about real numbers here, but about floating-point numbers. And that means implicit rounding and cancellation, which is why, approximately, eps2 == 2 * eps1 (at least on the most common architectures using IEEE 754 floats).

In more detail, if we let some x grow from 0.0 to the point where 1.0 + x > 1.0, this point is reached at x == eps1 (by definition 1). However, because of rounding up, the result of 1.0 + eps1 is not the exact sum 1.0 + eps1, but the next representable floating-point value larger than 1.0 -- that is, 1.0 + eps2 (by definition 2). So, in essence,

eps2 == (1.0 + eps1) - 1.0

(Mathematicians will cringe at this.) And due to the rounding behaviour, this means that

eps2 == eps1 * 2 (approximately)
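Both relations can be made concrete. A Python sketch (assuming Python 3.9+ for math.nextafter); under round-to-nearest-even, the smallest x with 1.0 + x > 1.0 is the first double just above 2**-53:

```python
import math

eps2 = math.nextafter(1.0, 2.0) - 1.0    # definition 2: the gap above 1.0
eps1 = math.nextafter(2.0**-53, 1.0)     # definition 1: just above half that gap

print(1.0 + 2.0**-53 == 1.0)             # True: exactly half an ulp still rounds down
print(1.0 + eps1 > 1.0)                  # True: the tiniest bit more rounds up
print(eps2 == (1.0 + eps1) - 1.0)        # True: the identity above holds exactly
print(abs(eps2 / eps1 - 2.0) < 1e-15)    # True: eps2 is (almost exactly) 2 * eps1
```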

And that is why there are two definitions for "machine epsilon", both legitimate and correct.

Personally speaking, I find eps2 the more "robust" definition, as it does not depend on the actual rounding behaviour, only on the representation, but I wouldn't say it is more correct than the other. As so often, it all depends on the context. Just be clear about which definition you use when talking about "machine epsilon" to prevent confusion and bugs.
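This is also why experiments like the one in the question can report either value. A Python sketch of the classic halving probe, in two common variants (restricted to powers of two):

```python
# Variant A: halve eps while half of it still registers; the loop keeps
# the last value whose half still changed 1.0, ending at 2**-52.
eps_a = 1.0
while 1.0 + eps_a / 2.0 > 1.0:
    eps_a /= 2.0

# Variant B: halve eps while it still registers; the loop ends on the
# first value that no longer changes 1.0, i.e. 2**-53.
eps_b = 1.0
while 1.0 + eps_b > 1.0:
    eps_b /= 2.0

print(eps_a == 2.0**-52)  # True
print(eps_b == 2.0**-53)  # True
```

The same experiment yields both answers, depending purely on whether you report the last value that registered or the first one that didn't.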

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow