Question

Is there any difference in computational precision between these two cases:
1) x = y / 1000d;
2) x = y * 0.001d;

Edit: I shouldn't have added the C# tag. The question is only from a 'floating-point' point of view. I don't want to know which is faster; I need to know which case will give me better precision.


Solution

No, they're not the same - at least not with C#, using the version I have on my machine (just standard .NET 4.5.1) on my processor - there are enough subtleties involved that I wouldn't like to claim it'll do the same on all machines, or with all languages. This may very well be a language-specific question after all.

Using my DoubleConverter class to show the exact value of a double, and after a few bits of trial and error, here's a C# program which at least on my machine shows a difference:

using System;

class Program
{
    static void Main(string[] args)
    {
        double input = 9;
        double x1 = input / 1000d;
        double x2 = input * 0.001d;

        Console.WriteLine(x1 == x2);
        Console.WriteLine(DoubleConverter.ToExactString(x1));
        Console.WriteLine(DoubleConverter.ToExactString(x2));
    }
}

Output:

False
0.00899999999999999931998839741709161899052560329437255859375
0.009000000000000001054711873393898713402450084686279296875

I can reproduce this in C with the Microsoft C compiler - apologies if it's horrendous C style, but I think it at least demonstrates the differences:

#include <stdio.h>

int main(int argc, char **argv) {
    double input = 9;
    double x1 = input / 1000;
    double x2 = input * 0.001;
    printf("%s\r\n", x1 == x2 ? "Same" : "Not same");
    printf("%.18f\r\n", x1);
    printf("%.18f\r\n", x2);
    return 0;
}

Output:

Not same
0.008999999999999999
0.009000000000000001

I haven't looked into the exact details, but it makes sense to me that there is a difference, because dividing by 1000 and multiplying by "the nearest double to 0.001" aren't the same logical operation... because 0.001 can't be exactly represented as a double. The nearest double to 0.001 is actually:

0.001000000000000000020816681711721685132943093776702880859375

... so that's what you end up multiplying by. You're losing information early, and hoping that it corresponds to the same information that you lose otherwise by dividing by 1000. It looks like in some cases it isn't.
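
A minimal sketch along the same lines, assuming ordinary IEEE 754 doubles and no DoubleConverter to hand: comparing the raw 64-bit patterns and the round-trip ("R") strings shows that, for this input, the two results are adjacent doubles, one bit apart.

using System;

class BitPatterns
{
    static void Main()
    {
        double input = 9;
        double x1 = input / 1000d;   // one correctly-rounded division
        double x2 = input * 0.001d;  // 0.001d is rounded first, then the product is rounded again

        // Raw IEEE 754 bit patterns; for this input they differ in the last bit.
        Console.WriteLine(BitConverter.DoubleToInt64Bits(x1).ToString("X16"));
        Console.WriteLine(BitConverter.DoubleToInt64Bits(x2).ToString("X16"));

        // "R" round-trips the value with enough digits to tell the two apart.
        Console.WriteLine(x1.ToString("R"));
        Console.WriteLine(x2.ToString("R"));
    }
}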

OTHER TIPS

You are programming in base 10, but floating point is base 2. You CAN represent 1000 in base 2, but you cannot represent 0.001 in base 2, so you have chosen awkward numbers for your question. On a computer, x/1000 != x*0.001; you might get lucky most of the time with rounding and extra precision, but it is not a mathematical identity.

Now maybe that was your question: maybe you wanted to know why x/1000 != x*0.001. The answer is that this is a binary computer working in base 2, not base 10, and there are conversion problems with 0.001 when going to base 2: you cannot exactly represent that fraction in an IEEE floating-point number.

In base 10 we know that if a fraction has a factor of 3 in the denominator (and no matching factor in the numerator to cancel it out), we end up with an infinitely repeating pattern; we simply cannot represent that number accurately with a finite set of digits.

1/3 = 0.33333...

The same problem arises when you try to represent 1/10 in base 2. Since 10 = 2*5, the factor of 2 is fine (1/2 is representable), but the factor of 5 is the real problem (1/5 is not).

1/10th (1/1000 works the same way). Elementary long division:

       0.000110011
     ----------
1010 | 1.000000
         1010
       ------
          1100 
          1010
          ----
            10000
             1010
             ----
              1100
              1010
              ----
                10

We have to keep pulling down zeros until we get 10000 (decimal 16); 1010 (decimal 10) goes into it one time, remainder 6, then we drop the next zero. 10 goes into 12 one time, remainder 2, and the pattern repeats, so you end up with 0011 0011 0011 ... repeated forever. Floating point has a fixed number of bits, so we cannot represent an infinite pattern.
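
A short sketch of the consequences, assuming ordinary IEEE 754 doubles: because values like 0.1 and 0.001 are stored only as the nearest double, familiar decimal identities fail.

using System;

class BaseTwoFractions
{
    static void Main()
    {
        // None of 0.1, 0.2 or 0.3 is exactly representable in base 2.
        Console.WriteLine(0.1 + 0.2 == 0.3);           // False
        Console.WriteLine((0.1 + 0.2).ToString("R"));  // 0.30000000000000004

        // The same goes for 0.001: summing the nearest double to 1/1000
        // a thousand times lets the representation error accumulate.
        double sum = 0;
        for (int i = 0; i < 1000; i++)
        {
            sum += 0.001;
        }
        Console.WriteLine(sum == 1.0);                 // typically False
        Console.WriteLine(sum.ToString("R"));
    }
}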

Now, if your question is about something like whether dividing by 4 is the same as multiplying by 1/4, that is a different question. The answer is that it should be the same: 0.25 is exactly representable in base 2, so a divide consumes more cycles and/or logic than a multiply but works out to the same answer in the end.
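
A quick check of that power-of-two case (a sketch, assuming IEEE 754 doubles): 0.25 is exactly 2^-2, so x / 4 and x * 0.25 are both the correctly rounded result of the same exact value and compare equal.

using System;

class PowerOfTwoCheck
{
    static void Main()
    {
        // 0.25 is stored exactly, so dividing by 4 and multiplying by 0.25
        // round the same mathematical result to the same double.
        double[] samples = { 9.0, 3.14159, 123456.789, 1e-300 };
        foreach (double x in samples)
        {
            Console.WriteLine("{0}: {1}", x, (x / 4) == (x * 0.25));  // True for each sample
        }
    }
}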

Probably not. The compiler (or the JIT) is likely to convert the first case to the second anyway, since multiplication is typically faster than division. You would have to check this by compiling the code (with or without optimizations enabled) and then examining the generated IL with a tool like IL Disassembler or .NET Reflector, and/or examining the native code with a debugger at runtime.

No, there is no difference, except if you set a custom rounding mode.

gcc produces ((double)0.001 - (double)1.0/1000) == 0.0e0

When the compiler converts 0.001 to binary, it divides 1 by 1000. It uses a software floating-point simulation compatible with the target architecture to do this.
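
The same check, sketched in C# for comparison (assuming IEEE 754 doubles): the compile-time constant 0.001 and a runtime division 1.0 / 1000 both round to the nearest double to 1/1000, so they compare equal.

using System;

class CompileTimeConstant
{
    static void Main()
    {
        double thousand = 1000;                 // kept in a variable so the division happens at run time
        double fromLiteral = 0.001;             // constant converted by the compiler
        double fromDivision = 1.0 / thousand;   // correctly-rounded division

        Console.WriteLine(fromLiteral == fromDivision);   // True: same nearest double to 1/1000
        Console.WriteLine(BitConverter.DoubleToInt64Bits(fromLiteral) ==
                          BitConverter.DoubleToInt64Bits(fromDivision));
    }
}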

For higher precision there are long double (80-bit) and software simulation of arbitrary precision.

PS: I used gcc on a 64-bit machine, with both SSE and the x87 FPU.

PPS: With some optimizations, 1/1000.0 could be more precise on x87, since x87 uses an 80-bit internal representation and 1000 == 1000.0 exactly. That holds if you use the result for further calculations promptly; if you return it or write it to memory, the 80-bit value is computed and then rounded to 64 bits. But SSE is the more common choice for double.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow