As far as I understand, Decimal is twice as large as Double (128 bits vs. 64 bits), so it can represent numbers with greater precision. But it also uses a base-10 number system instead of binary. Maybe that last trait of Decimal explains the following results?

Microsoft (R) F# Interactive version 11.0.60610.1
Copyright (c) Microsoft Corporation. All Rights Reserved.

For help type #help;;

> let x = 1.0m / 3.0m ;;
val x : decimal = 0.3333333333333333333333333333M

> x * 3.0m ;;
val it : decimal = 0.9999999999999999999999999999M

> let y = 1.0 / 3.0 ;;
val y : float = 0.3333333333

> y * 3.0 ;;
val it : float = 1.0

> it = 1.0 ;;
val it : bool = true

As you can see, Double prints as 1.0 again after division by 3.0 and multiplication by 3.0. I tried different divisors, and the situation is the same.
(Note for those who don't know F#: float is basically a synonym for double in F#.)


Solution

The nice thing about the type decimal, displayed in our usual decimal notation, is that it is WYSIWYG: by printing enough decimal digits (and 0.3333333333333333333333333333M certainly looks like enough), you can see the exact number the machine is working with. It is no surprise that three times that makes 0.9999999999999999999999999999M: you can do it with pen and paper and reproduce the result (see note 2 below).
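To convince yourself that nothing is hidden behind the printed digits, you can decode the stored value directly. Below is a small F# Interactive sketch of mine (not part of the original answer); System.Decimal.GetBits is a standard .NET call that returns the 96-bit integer and the power-of-ten scale making up a decimal:

let x = 1.0m / 3.0m
let bits = System.Decimal.GetBits(x)          // [| low; mid; high; flags |] of the 96-bit integer
let scale = (bits.[3] >>> 16) &&& 0xFF        // the flags word holds the power-of-ten scale
printfn "bits = %A  scale = %d" bits scale    // the integer 3333333333333333333333333333, scaled by 10^-28
printfn "x * 3.0m = %M" (x * 3.0m)            // exactly 0.9999999999999999999999999999M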

In binary, it would require many more decimal digits to see the exact number being represented, and they are usually not all printed (but the situation would be just as simple if they were). It is only a coincidence that, in this case, the binary multiplication of 3.0 by 1.0 / 3.0 makes 1.0. The property holds for some numbers but does not have to hold for all numbers. In fact, the result may not be 1.0, and your language may be printing fewer decimal digits than would reveal this. An exponential form 1.DD…DDEXXX with 16 digits after the dot suffices to distinguish all double-precision numbers, although it does not show the exact value of the number.
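If you want to see the coincidence fail, a quick scan of integer divisors in F# Interactive (a sketch of mine, not from the answer) reports the ones for which the round trip does not give back exactly 1.0; %.16e prints one digit before the point and 16 after, which is enough to tell any two distinct doubles apart:

// Scan divisors d and report those for which (1.0 / d) * d <> 1.0 exactly.
for i in 1 .. 100 do
    let d = float i
    let p = (1.0 / d) * d
    if p <> 1.0 then printfn "d = %g  (1.0/d)*d = %.16e" d p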

So, in summary:

  • decimal is WYSIWYG: you got 0.99… because you multiplied 0.33… by 3;
  • the result in binary may not be 1.0, and may only print as such because of the limited default number of decimals your language uses for binary floating-point;
  • even if it is 1.0, that is a coincidence that might not have happened with another number instead of 3.0.

Miscellaneous notes

  1. If F# is like OCaml in this respect, you can print enough decimals to distinguish 1.0 from another float with Printf.printf "%.16e".
  2. F#'s decimal type is WYSIWYG but you have to remember that some numbers have 28 digits of precision and most have 29. See supercat's answer or the comments below for details.
  3. The hexadecimal notation has the same WYSIWYG property for binary floating-point as the decimal notation has for decimal. C99, of all languages and years, has the best support for fine floating-point manipulation, and it supports hexadecimal for input and output.

An example:

#include <stdio.h>

int main(){
  double d = 1 / 3.0;
  printf("%a\n%a\n", d, 3*d);
}

Executing produces:

$ gcc -std=c99 t.c && ./a.out 
0x1.5555555555555p-2
0x1p+0

With pen and paper, we can multiply 0x1.5555555555555p-2 by 3. We obtain 0x3.FFFFFFFFFFFFFp-2, or 0x1.FFFFFFFFFFFFF8p-1 after normalization. This number is not representable exactly as a binary64 floating-point number (it has too many significant digits), and the “nearest” representable number, returned by the multiplication, is 1.0. (The rule that ties must be rounded to the nearest even number is applied. Of the two equally near alternatives 0x1.FFFFFFFFFFFFFp-1 and 1.0, the 1.0 result is the “even” one.)
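For a cross-check in F# itself, here is a small sketch of mine (not part of the answer) that compares bit patterns directly; BitConverter.DoubleToInt64Bits and Int64BitsToDouble are standard .NET calls:

let d = 1.0 / 3.0
let product = 3.0 * d
printfn "product = %.16e" product
// Check whether the product is bit-for-bit identical to 1.0.
printfn "same bits as 1.0: %b" (System.BitConverter.DoubleToInt64Bits(product) = System.BitConverter.DoubleToInt64Bits(1.0))
// The other candidate in the tie, 0x1.FFFFFFFFFFFFFp-1, is the largest double below 1.0,
// reachable by decrementing the bit pattern of 1.0.
let belowOne = System.BitConverter.Int64BitsToDouble(System.BitConverter.DoubleToInt64Bits(1.0) - 1L)
printfn "largest double below 1.0 = %.16e" belowOne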

Additional answer

The behavior you are observing with double is attributable to the fact that the result of multiplying 1/3 by three has a different scale from 1/3. The situation is analogous to what one would see if, always keeping exactly three significant figures, one were to compute 1.00/7.00 (yielding .143) and multiply that result by 7 (the exact product of which would be 1.001, but which gets rounded to 1.00). Essentially, the division picks up a significant figure (the quotient is accurate to 0.001 even though the original number was only accurate to 0.01), which allows the multiplication to yield the correct result.
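The analogy can be played out mechanically. Below is a toy F# sketch of my own (roundSig is a made-up helper, not something from this answer) that rounds every intermediate result to three significant figures:

// Hypothetical helper: round x to n significant figures.
let roundSig (n : int) (x : float) =
    if x = 0.0 then 0.0
    else
        let shift = 10.0 ** float (n - 1 - int (floor (log10 (abs x))))
        System.Math.Round(x * shift) / shift

let q = roundSig 3 (1.0 / 7.0)   // .143
let p = roundSig 3 (q * 7.0)     // the exact product 1.001 rounds back down to 1.00
printfn "q = %g  p = %g" q p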

With type Decimal, even if x/y cannot be stored exactly, the value of (x/y)*y (where 1 < y < 10) will often equal x if the multiplication causes a change of scale requiring a rounding step. The reason that (1D/3D)*3D does not yield 1D is that while Decimal values above roughly 7.923 lose a decimal place at the right for each power of ten by which they exceed that quantity, those below 0.7922 do not gain places. Thus, dividing a value in the range 7.923 to 23.76 by 3 and then multiplying by 3 will yield the original value; likewise if one uses a value in the range 79.23 to 237.6, etc. Division of a value below 7.923 by any value greater than one is generally not a reversible operation, except when the result is an exact multiple of 10^-28.
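A quick F# Interactive check of the ranges described above (the sample values are mine, chosen to fall inside and outside those ranges):

// Does dividing by 3 and multiplying by 3 give back the original decimal?
let roundTrips (x : decimal) = (x / 3.0m) * 3.0m = x
printfn "1.0m   -> %b" (roundTrips 1.0m)     // below ~7.923: the round trip fails
printfn "10.0m  -> %b" (roundTrips 10.0m)    // in 7.923 .. 23.76: it succeeds
printfn "100.0m -> %b" (roundTrips 100.0m)   // in 79.23 .. 237.6: it succeeds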
