Question

I have been trying to figure this floating-point problem out in javascript. This is an example of what I want to do:

var x1 = 0;
for (var i = 0; i < 10; i++)
{
    x1 += 0.2;
}

However, in this form I get a rounding error: 0.2 -> 0.4 -> 0.600...001, and so on.

I have tried parseFloat, toFixed and Math.round as suggested in other threads, but none of them have worked for me. Is there anyone who could make this work? I feel that I have run out of options.


Solution

You can almost always ignore the floating point "errors" while you're performing calculations - they won't make any difference to the end result unless you really care about the 17th significant digit or so.

You normally only need to worry about rounding when you display those values, for which .toFixed(1) would do perfectly well.
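As a quick illustration (using the loop from the question): the accumulated sum is not exactly 2, but rounding only at display time gives the expected string:

```javascript
// Accumulate 0.2 ten times; the stored value drifts slightly from 2.
var x1 = 0;
for (var i = 0; i < 10; i++) {
    x1 += 0.2;
}

console.log(x1);             // 1.9999999999999998 - not exactly 2
console.log(x1.toFixed(1));  // "2.0" - rounding only when displaying
```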

Whatever happens, you simply cannot coerce the number 0.6 into exactly that value. The closest IEEE 754 double precision value is exactly 0.59999999999999997779553950749686919152736663818359375, which within typical precision limits in JS is displayed as 0.5999999999999999778.

Indeed, JS can't even tell that 0.5999999999999999778 !== (e.g.) 0.5999999999999999300, since both have the same binary representation.
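You can verify this directly in a JS console (a minimal sketch; the literals below are simply close enough to 0.6 to parse to the same double):

```javascript
// Both decimal literals round to the very same IEEE 754 double...
console.log(0.5999999999999999778 === 0.59999999999999993);  // true
// ...which is also the double that the literal 0.6 maps to.
console.log(0.6 === 0.5999999999999999778);                  // true
```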

OTHER TIPS

To better understand how the rounding errors accumulate, and to get more insight into what is happening at a lower level, here is a small explanation:
I will assume that the IEEE 754 double precision standard is used by the underlying software/hardware, with the default rounding mode (round to nearest, ties to even).

1/5 can be written in base 2 with an infinitely repeating pattern:

  0.00110011001100110011001100110011001100110011001100110011...

But in floating point, the significand - starting at the most significant 1 bit - has to be rounded to a finite number of bits (53):

So there is a small rounding error when representing 0.2 in binary:

  0.0011001100110011001100110011001100110011001100110011010

Back in decimal representation, this rounding error corresponds to a small excess of 0.000000000000000011102230246251565404236316680908203125 above 1/5.

The first operation is then exact, because 0.2+0.2 is like 2*0.2 and thus does not introduce any additional error; it's like shifting the fraction point:

  0.0011001100110011001100110011001100110011001100110011010
+ 0.0011001100110011001100110011001100110011001100110011010
  ---------------------------------------------------------
  0.0110011001100110011001100110011001100110011001100110100

But of course, the excess above 2/5 is doubled: 0.00000000000000002220446049250313080847263336181640625
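This first exact step is easy to confirm from JS (nothing assumed beyond plain number semantics):

```javascript
// Doubling a double is exact: only the exponent changes, not the bit pattern.
console.log(0.2 + 0.2 === 0.4);  // true - no new rounding error yet
```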

The third operation, 0.2+0.2+0.2, results in this binary number:

  0.011001100110011001100110011001100110011001100110011010
+ 0.0011001100110011001100110011001100110011001100110011010
  ---------------------------------------------------------
  0.1001100110011001100110011001100110011001100110011001110

But unfortunately, it requires 54 bits of significand (the span between the leading 1 and the trailing 1), so another rounding error is needed to represent the result as a double:

  0.10011001100110011001100110011001100110011001100110100

Notice that the number was rounded up, because by default floats are rounded to the nearest even in case of a perfect tie. We already had an error by excess, so, bad luck, the successive errors accumulated rather than cancelled each other out...

So the excess above 3/5 is now 0.000000000000000088817841970012523233890533447265625
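That accumulated excess is large enough to be visible in an ordinary comparison (a minimal check in plain JS):

```javascript
console.log(0.2 + 0.2 + 0.2 === 0.6);  // false - the third addition rounded up
console.log(0.2 + 0.2 + 0.2);          // 0.6000000000000001
```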

You could reduce this accumulation of errors a bit by using

x1 = i / 5.0

Since 5 is represented exactly as a float (101.0 in binary; 3 significand bits are enough), and since that is also the case for i (up to 2^53), there is a single rounding error when performing the division, and IEEE 754 then guarantees that you get the nearest possible representation.
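Rewriting the original loop along these lines (a sketch; the variable names follow the question) computes each value with a single division instead of an ever-growing chain of additions:

```javascript
// Same sequence of values, but each one computed from scratch:
var x1 = 0;
for (var i = 1; i <= 10; i++) {
    x1 = i / 5;  // one rounding error per value, never accumulated
}
console.log(x1);  // 2 - exact, since 10/5 has an exact double representation
```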

For example 3/5.0 is represented as:

  0.10011001100110011001100110011001100110011001100110011

Back in decimal, the value falls 0.00000000000000002220446049250313080847263336181640625 below 3/5.

Note that both errors are very tiny, but in the second case (3/5.0) the error is four times smaller in magnitude than for 0.2+0.2+0.2.

Depending on what you're doing, you may want to do fixed-point arithmetic instead of floating point. For example, if you are doing financial calculations in dollars with amounts that are always multiples of $0.01, you can switch to using cents internally, and then convert to (and from) dollars only when displaying values to the user (or reading input from the user). For more complicated scenarios, you can use a fixed-point arithmetic library.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow