I have run into an issue when adding floats in C#. The following is taken from the immediate window in Visual Studio. I have declared f as a float, and now do a simple calculation in two different ways:

f = 5.75f + 0.075f*37
8.525001
f = 0.075f*37
2.775
f = f + 5.75f
8.525

As you can see, the results differ between the two ways of doing the same calculation. How can that be? As I see it, the numbers are nowhere near the precision limit of a float, so what is going on?
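
For reference, here is a small console program (a sketch I put together, not part of the immediate window session above) that runs the same two calculations; on a typical x64 .NET runtime the printed values should show the same discrepancy, though the exact digits can vary with compiler and runtime:

using System;

class FloatAdditionRepro
{
    static void Main()
    {
        // Whole calculation in a single expression.
        float oneGo = 5.75f + 0.075f * 37;

        // Same calculation built up in two steps.
        float stepwise = 0.075f * 37;
        stepwise = stepwise + 5.75f;

        // "G9" prints enough significant digits to round-trip a float exactly.
        Console.WriteLine(oneGo.ToString("G9"));    // showed as 8.525001 in the immediate window
        Console.WriteLine(stepwise.ToString("G9")); // showed as 8.525 in the immediate window
    }
}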

Solution

As I see it, the numbers are nowhere near the precision limit of a float

They really are - or rather, the difference is.

The exact values involved are 8.52500057220458984375 (rounded up to 8.525001 for display) and 8.5249996185302734375 (rounded up to 8.525 for display). The difference between those two values is 0.0000009536743164062. Given that float only has 7 decimal digits of precision (as per the documentation), that's a pretty reasonable inaccuracy, in my view.
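
Another way to see how tight that gap is: the two results are adjacent representable floats, exactly one ULP apart. The sketch below is my own illustration, assuming BitConverter.SingleToInt32Bits is available (.NET Core 2.0 or later) and that the two calculations produce the values shown in the question:

using System;

class OneUlpApart
{
    static void Main()
    {
        float oneGo = 5.75f + 0.075f * 37;   // 8.525001 in the question
        float stepwise = 0.075f * 37;
        stepwise = stepwise + 5.75f;         // 8.525 in the question

        // For finite floats of the same sign, adjacent values have raw bit
        // patterns that differ by exactly 1.
        int bitsOneGo = BitConverter.SingleToInt32Bits(oneGo);
        int bitsStepwise = BitConverter.SingleToInt32Bits(stepwise);
        Console.WriteLine(bitsOneGo - bitsStepwise);  // expected: 1
    }
}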

It's not actually the ordering of the operations that matters here. You get exactly the same results if you switch them round:

float f = 0.075f * 37 + 5.75f;   // whole expression evaluated in one go
Console.WriteLine(DoubleConverter.ToExactString(f));
f = 5.75f;
Console.WriteLine(DoubleConverter.ToExactString(f));
f = f + 0.075f * 37;             // built up step by step
Console.WriteLine(DoubleConverter.ToExactString(f));

(Using my DoubleConverter class.)

The difference is that in the first version, all the information is available in one go - and actually at compile time. The compiler does the arithmetic, and I suspect it performs it at a higher precision, only reducing the overall result to 32 bits at the end. In the step-by-step version, the intermediate result of 0.075f * 37 is rounded to a float before the addition, which is where the two results diverge.
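
One way to test that suspicion (my own sketch, not from the original answer) is to hide one operand behind a runtime value so the compiler cannot fold the expression; on a typical x64 runtime the non-folded version should then match the step-by-step result:

using System;
using System.Globalization;

class ConstantFoldingCheck
{
    static void Main()
    {
        // All operands are constants, so the compiler evaluates the whole
        // expression itself (possibly at higher precision).
        float folded = 5.75f + 0.075f * 37;

        // Parsing at runtime prevents constant folding: the JIT has to do the
        // multiplication and addition as real float operations, rounding the
        // intermediate product to 32 bits first.
        float multiplier = float.Parse("0.075", CultureInfo.InvariantCulture);
        float atRuntime = 5.75f + multiplier * 37;

        Console.WriteLine(folded.ToString("G9"));
        Console.WriteLine(atRuntime.ToString("G9"));
    }
}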
