Question

In my program I keep a running total of a particular value, declared as a float at file scope (before main) so it is globally accessible, and on every iteration I add and subtract floats from it.

These floats are always numbers between 0 and 10, to one decimal place. However, the total occasionally deviates from this 1 d.p. accuracy by 0.01 (e.g. I add 2.4 to 15.9 and get 18.31). It happens very infrequently, but I'm dealing with billions of iterations.

This minor deviation can lead to the program crashing, so is there any way to alleviate it?


Solution

If your values always have one decimal place, multiply all your numbers by 10 and use integer arithmetic. Binary floating point cannot, in general, represent decimal fractional values exactly, and repeated computation magnifies those small errors. Computations with integers are exact.

Other tips

0.1 is a repeating fraction in binary, so it can't be represented exactly as a float. It is best to use integers, scaling your values by 10, for these calculations.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow