I have the following test cases:

#include <stdio.h>

int main() {
    double x = 3.987;

    printf("x = %lf\n", x);
    printf("(double) (long) (x) = %lf\n", (double) (long) (x));
    printf("(x*100)/100 = %lf\n", (x*100)/100);
    printf("(double) (long) (x*100)/100 = %lf\n", (double) (long) (x*100)/100);
    printf("(double) (long) (x*10)/10 = %lf\n", (double) (long) (x*10)/10);

    return 0;
}

The output is:

x = 3.987000
(double) (long) (x) = 3.000000
(x*100)/100 = 3.987000
(double) (long) (x*100)/100 = 3.980000
(double) (long) (x*10)/10 = 3.900000

It seems to me that multiplying by 100 and then dividing by 100 should cancel each other out, but it actually decreases the precision. How does this work exactly?


Solution 2

Casting to long has higher precedence than dividing by 100. So

(double) (long) (x*100)/100

actually is equivalent to

((double) (long) (x*100)) / 100

not to

(double) (long) ((x*100)/100)
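A quick way to see the difference is to print both groupings side by side; this is a minimal sketch reusing the question's x = 3.987:

#include <stdio.h>

int main() {
    double x = 3.987;

    /* cast binds tighter: 398.7 is truncated to 398 first, then divided */
    printf("%lf\n", (double) (long) (x*100)/100);    /* prints 3.980000 */

    /* explicit parentheses: divide back to 3.987 first, then truncate to 3 */
    printf("%lf\n", (double) (long) ((x*100)/100));  /* prints 3.000000 */

    return 0;
}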

Other tips

In some places you're casting to long, which is an integer type. For example, in the last case you multiply 3.987 by 10 and get 39.87. Casting that to long truncates it to 39, and dividing by 10 gives 3.9.
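Broken into separate steps the same truncation is easier to see; the intermediate variable names here are just for illustration:

#include <stdio.h>

int main() {
    double x = 3.987;
    double scaled = x * 10;                   /* 39.870000 */
    long truncated = (long) scaled;           /* fractional part is discarded: 39 */
    double result = (double) truncated / 10;  /* 3.900000 */

    printf("%lf -> %ld -> %lf\n", scaled, truncated, result);
    return 0;
}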

What do you want to achieve by using (long)?

Licensed under: CC-BY-SA with attribution