Question

I am aware of the inherent imprecision of floats. What I'm confused about is why I would get back "100" from something I would expect to resolve to 99.999999 or so. How could 33.33 * 3 ever possibly yield 100?

Here is my code. If I say

float x = 100.0/3.0;
printf("%f",x*3.0);

The output is "99.999996". Yet if I say

printf("%f",(100.0/3)*3);

The output is "100". Shouldn't they be identical? I would expect x to resolve to (100.0/3.0), exactly what's written there in plaintext -- yet they yield two different results.


Solution

The problem is that your second expression is not equivalent to the first one: it uses doubles throughout, while the first one has a conversion to float after the division, forcing the intermediate result to lower precision.
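To see where the precision goes, here is a minimal sketch (assuming the usual IEEE-754 float and double types; exact digits may differ on other platforms) that prints the quotient before and after it is squeezed into a float:

#include <stdio.h>

int main(void)
{
    double q = 100.0 / 3.0;   /* quotient computed and kept in double     */
    float  x = q;             /* same quotient rounded to float precision */

    printf("%.15f\n", q);           /* roughly 33.333333333333336 */
    printf("%.15f\n", (double)x);   /* roughly 33.333332061767578 */
    return 0;
}

The digits lost when the value is stored in x are exactly what make x*3.0 come out as 99.999996 instead of 100.000000.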

To build a fully equivalent expression, add a cast to float after the division, like this:

printf("%f", ((float)(100.0/3.0))*3.0);
//             ^^^^^

This produces the same output as your first example, i.e. "99.999996".

If you use double for x in your first example, you get the output 100.000000, too:

double x = 100.0/3.0;
printf("%f",x*3.0);

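Putting the variants side by side in one self-contained program (again assuming IEEE-754 float and double; the printed digits are what a typical platform produces):

#include <stdio.h>

int main(void)
{
    float  xf = 100.0 / 3.0;   /* quotient rounded to float */
    double xd = 100.0 / 3.0;   /* quotient kept in double   */

    printf("%f\n", xf * 3.0);                    /* typically 99.999996  */
    printf("%f\n", ((float)(100.0/3.0)) * 3.0);  /* typically 99.999996  */
    printf("%f\n", xd * 3.0);                    /* typically 100.000000 */
    printf("%f\n", (100.0/3) * 3);               /* typically 100.000000 */
    return 0;
}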

Licensed under: CC-BY-SA with attribution