As I see it, the numbers are nowhere near the precision limit of the float.
They really are - or rather, the difference is.
The exact values involved are 8.52500057220458984375 (rounded up to 8.525001 for display) and 8.5249996185302734375 (rounded up to 8.525 for display). The difference between those two values is 0.00000095367431640625 - exactly one ulp (unit in the last place) at that magnitude. Given that float only has about 7 decimal digits of precision (as per the documentation), that's a pretty reasonable inaccuracy, in my view.
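You can cross-check those exact values outside C# if you like. Here's a quick sketch in Python (just as an illustration - the rounding behaviour is the same IEEE 754 binary32 that C#'s float uses), which rounds a value to float precision by packing it through struct, then prints the exact decimal expansion with Decimal:

```python
import struct
from decimal import Decimal

def to_float32(x):
    """Round a Python float (binary64) to the nearest binary32 value,
    i.e. what C#'s float would store."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

a = to_float32(8.525001)   # nearest float to 8.525001
b = to_float32(8.525)      # nearest float to 8.525

print(Decimal(a))               # 8.52500057220458984375
print(Decimal(b))               # 8.5249996185302734375
print(Decimal(a) - Decimal(b))  # exactly 2**-20, one ulp at this magnitude
```

The two values are adjacent floats - there is nothing representable between them.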
It's not actually the ordering of the operations that matters here. You get exactly the same results if you switch them round:
float f = 0.075f * 37 + 5.75f;   // whole expression in one go
Console.WriteLine(DoubleConverter.ToExactString(f));

f = 5.75f;                       // split into separate statements
Console.WriteLine(DoubleConverter.ToExactString(f));
f = f + 0.075f * 37;
Console.WriteLine(DoubleConverter.ToExactString(f));
(Using my DoubleConverter class.)
The difference is that in the first version, all the information is available in one go - at compile time, in fact. The compiler does the arithmetic, and I suspect it actually performs it at a higher precision (probably as double), only reducing the overall result to 32 bits at the end.
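That suspicion can be tested by simulating both evaluation strategies - note this is my sketch of the hypothesis, not an inspection of what the compiler actually does, and it again uses Python purely to get at the exact binary32 rounding. The "runtime" path rounds every intermediate result back to float; the "compile-time" path does the arithmetic in double and rounds just once at the end:

```python
import struct
from decimal import Decimal

def to_float32(x):
    """Round a binary64 value to the nearest binary32 (C#'s float)."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

c = to_float32(0.075)   # the value 0.075f actually stores

# Runtime-style evaluation: each intermediate result is a float.
runtime = to_float32(to_float32(c * 37) + 5.75)

# Compile-time hypothesis: fold the constants in double precision,
# then reduce to 32 bits once at the end.
folded = to_float32(c * 37 + 5.75)

print(Decimal(runtime))  # 8.5249996185302734375   -> displays as 8.525
print(Decimal(folded))   # 8.52500057220458984375  -> displays as 8.525001
```

The runtime path happens to hit an exact halfway case on the final addition, which round-to-even resolves downwards - hence the two different displayed results, one ulp apart.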