In my program I keep a running total of a particular number, declared as a float at file scope (before main) so that it is globally accessible, and on every iteration I add floats to it and subtract floats from it.

These floats are always numbers between 0 and 10, with one decimal place. However, the total occasionally deviates from this one-decimal-place accuracy by 0.01 (e.g. I add 2.4 to 15.9 and get 18.31). It happens very infrequently, but I'm dealing with billions of iterations.

This minor deviation can lead to the program crashing, so is there any way to alleviate it?


Solution

If you always have one decimal place, multiply all your numbers by 10 and use integer arithmetic! Binary floating point cannot, in general, represent decimal fractional values exactly, and repeated computation with binary floating-point numbers lets those small representation errors accumulate. The same computations done with integers are exact.
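A minimal sketch of this fixed-point idea in C, assuming the values really are limited to one decimal place; the names here (total_tenths, to_tenths) are illustrative, not from the asker's program:

    #include <math.h>    /* lroundf */
    #include <stdio.h>
    #include <stdlib.h>  /* labs */

    /* Running total kept in tenths: 159 represents 15.9.
       Integer addition and subtraction are exact, so the
       one-decimal-place values never drift. */
    long total_tenths = 0;

    /* If the inputs still arrive as floats, round (don't
       truncate) when converting, because 2.4f is stored as a
       value slightly off from 2.4. */
    long to_tenths(float x)
    {
        return lroundf(x * 10.0f);
    }

    int main(void)
    {
        total_tenths += to_tenths(15.9f);  /* +15.9 */
        total_tenths += to_tenths(2.4f);   /* + 2.4 */
        total_tenths -= to_tenths(3.1f);   /* - 3.1 */

        /* Convert back to decimal form only for display. */
        printf("total = %s%ld.%ld\n",
               total_tenths < 0 ? "-" : "",
               labs(total_tenths) / 10,
               labs(total_tenths) % 10);   /* total = 15.2 */
        return 0;
    }

Only the output step ever deals in decimal form; everything the loop does billions of times is exact integer arithmetic.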

Other tips

0.1 is a repeating fraction in binary, so it can't be represented exactly. It's best to use integers and multiples of 10 for your calculations.
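A small C demo makes this visible; the output shown in the comments is what a typical IEEE 754 system prints:

    #include <stdio.h>

    int main(void)
    {
        /* The double nearest to 0.1 is slightly larger than 0.1. */
        printf("%.20f\n", 0.1);       /* 0.10000000000000000555 */

        /* Adding it ten times therefore misses 1.0. */
        double sum = 0.0;
        for (int i = 0; i < 10; i++)
            sum += 0.1;
        printf("%.20f\n", sum);       /* 0.99999999999999988898 */
        printf("%d\n", sum == 1.0);   /* prints 0: not equal */
        return 0;
    }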
