Problem

I am puzzled: I have no explanation for why this test passes when using the double data type but fails when using the float data type. Consider the following snippet of code.

float total = 0.00;

for ( int i = 0; i < 100; i++ ) total += 0.01;

One would expect total to be 1.00; however, it is equal to 0.99. Why is this the case? I compiled with both GCC and Clang, and both compilers produce the same result.


Solution 2

The decimal value 0.01 is expressed in binary as the series: a1*(1/2) + a2*(1/2)^2 + a3*(1/2)^3 + etc., where each aN is a zero or a one.

I leave it to you to work out the specific values of a1, a2, and so on, and how many fractional bits (aN) would be required. In some cases a decimal fraction cannot be represented by a finite series of (1/2)^n terms.

For this series to sum to exactly 0.01 in decimal, the aN would have to go on beyond the number of bits stored in a float (the word size minus the bits used for the sign and exponent). In fact, if you do the calculation, 0.01 turns out to be an infinite repeating binary fraction (0.000000101000111101011100...), so it can never be represented exactly in a float, and not even in a double. A double simply has enough extra bits that its approximation is very much closer, so the error accumulated over 100 additions stays far smaller.
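
To make this concrete, here is a minimal sketch (assuming a hosted C compiler with IEEE-754 float and double) that prints the value actually stored for the constant 0.01 in each type; neither is exact, but double is much closer:

#include <stdio.h>

int main(void){
    float  f = 0.01f;  /* nearest float  to 1/100 */
    double d = 0.01;   /* nearest double to 1/100 */

    /* Twenty digits are enough to expose the representation error. */
    printf("float : %.20f\n", f);   /* typically 0.00999999977648258209 */
    printf("double: %.20f\n", d);   /* typically 0.01000000000000000021 */

    return 0;
}

The float is off from 1/100 around the eighth significant digit, the double only around the seventeenth, which is why one hundred float additions drift visibly while one hundred double additions barely do.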

Other Tips

Try this:

#include <stdio.h>

int main(void){
    float total = 0.00;
    int i;

    /* 0.01 is a double constant; each addition rounds the sum back to float. */
    for (i = 0; i < 100; i++)
        total += 0.01;

    printf("%f\n", total);

    /* Exact equality fails: the accumulated value is only close to 1.0. */
    if (total == 1.0)
        puts("Precise");
    else
        puts("Rounded");

    return 0;
}

At least on most machines, you'll get the output "Rounded". In other words, the result merely happens to be close enough that, when printed with %f's default six digits, it can look like exactly 1.00, but it really isn't. Change total to a double and you'll still get the same result: the sum is still not exactly 1.0, so the comparison still fails.
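
If you want to see the hidden error directly, a small variation of the program above (same assumptions) prints the running total with far more digits than %f's default six, using both a float and a double accumulator:

#include <stdio.h>

int main(void){
    float  ftotal = 0.0f;
    double dtotal = 0.0;
    int i;

    for (i = 0; i < 100; i++){
        ftotal += 0.01f;  /* accumulate the float approximation */
        dtotal += 0.01;   /* accumulate the double approximation */
    }

    /* Six digits can hide the error; twenty digits expose it. */
    printf("float : %.20f\n", ftotal);
    printf("double: %.20f\n", dtotal);

    return 0;
}

On a typical IEEE-754 machine the float total comes out near 0.99999934 and the double total near 1.0000000000000007, so neither compares equal to 1.0.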
