Question

All the methods in System.Math take double as parameters and return double. The constants are also of type double. I checked out MathNet.Numerics, and the same seems to be the case there.

Why is this? Especially for constants. Isn't decimal supposed to be more exact? Wouldn't that often be kind of useful when doing calculations?


Solution

This is a classic speed-versus-accuracy trade-off.

However, keep in mind that for π, for example, the most digits you will ever need is 41:

The largest number of digits of pi that you will ever need is 41. To compute the circumference of the universe with an error less than the diameter of a proton, you need 41 digits of pi †. It seems safe to conclude that 41 digits is sufficient accuracy in pi for any circle measurement problem you're likely to encounter. Thus, in the over one trillion digits of pi computed in 2002, all digits beyond the 41st have no practical value.
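For perspective, here is a minimal C# sketch (the output comment assumes a recent .NET runtime) showing how many of those digits a double such as Math.PI actually carries: roughly 15 to 17 significant decimal digits.

    // Minimal sketch: the "G17" round-trip format prints every digit the
    // double actually stores.
    using System;

    class PiDigits
    {
        static void Main()
        {
            Console.WriteLine(Math.PI.ToString("G17"));   // 3.1415926535897931
        }
    }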

In addition, decimal and double have a slightly different internal storage structure. Decimals are designed to store base-10 data, whereas doubles (and floats) are made to hold binary data. On a binary machine (like every computer in existence) a double will have fewer wasted bits when storing any number within its range.
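A minimal sketch of that storage difference: 0.1 is a base-10 fraction with no finite binary expansion, so the double holds the nearest binary value while the decimal holds 0.1 exactly.

    // Minimal sketch: base-2 vs base-10 storage of the same literal.
    using System;

    class BaseTwoVsBaseTen
    {
        static void Main()
        {
            double  d = 0.1;
            decimal m = 0.1m;

            Console.WriteLine(d.ToString("G17"));   // 0.10000000000000001 (nearest binary fraction)
            Console.WriteLine(m);                   // 0.1 (stored exactly in base 10)
        }
    }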

Also consider:

System.Double      8 bytes    Approximately ±5.0e-324 to ±1.7e+308 with 15 or 16 significant figures
System.Decimal    16 bytes    Approximately ±1.0e-28 to ±7.9e+28 with 28 or 29 significant figures

As you can see, decimal has a smaller range, but a higher precision.
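A minimal sketch of that trade-off (output comments assume a recent .NET runtime's default formatting):

    // Minimal sketch: double trades precision for range, decimal the reverse.
    using System;

    class RangeVsPrecision
    {
        static void Main()
        {
            Console.WriteLine(double.MaxValue);    // ~1.8e+308: enormous range
            Console.WriteLine(decimal.MaxValue);   // 79228162514264337593543950335 (~7.9e+28)

            Console.WriteLine(1.0 / 3.0);          // ~15-16 significant digits
            Console.WriteLine(1.0m / 3.0m);        // 0.3333333333333333333333333333 (28-29 digits)
        }
    }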

OTHER TIPS

No: decimals are no more "exact" than doubles, or for that matter, any type. The concept of "exactness" (when speaking about numerical representations in a computer) is what is wrong. Any type is absolutely 100% exact at representing some numbers. Unsigned bytes are 100% exact at representing the whole numbers from 0 to 255, but they're no good for fractions, for negatives, or for integers outside that range.

Decimals are 100% exact at representing a certain set of base-10 values. Doubles (since they store their value using binary IEEE exponential representation) are exact at representing a set of binary numbers. Neither is any more exact than the other in general; they are simply for different purposes.
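A minimal sketch of that point: each type is exact for its own set of values, and neither is exact for a value like 1/3.

    // Minimal sketch: which literals each type can represent exactly.
    using System;

    class ExactnessDemo
    {
        static void Main()
        {
            // 0.5 = 1/2 is a binary fraction: exact in both types.
            Console.WriteLine(0.5.ToString("G17"));            // 0.5
            Console.WriteLine(0.5m);                           // 0.5

            // 0.1 = 1/10 is a base-10 fraction: exact only in decimal.
            Console.WriteLine(0.1.ToString("G17"));            // 0.10000000000000001
            Console.WriteLine(0.1m);                           // 0.1

            // 1/3 is neither: exact in neither type.
            Console.WriteLine((1.0 / 3.0).ToString("G17"));    // 0.33333333333333331
            Console.WriteLine(1m / 3m);                        // 0.3333333333333333333333333333
        }
    }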

To elaborate a bit further, since I seem to not be clear enough for some readers...

If you take every number which is representable as a decimal and mark every one of them on a number line, between every adjacent pair of them there is an additional infinity of real numbers which are not representable as a decimal. The exact same statement can be made about the numbers which can be represented as a double. If you marked every decimal on the number line in blue and every double in red, then except for the integers there would be very few places where the same value was marked in both colors. In general, for 99.99999% of the marks (please don't nitpick my percentage), the blue set (decimals) is a completely different set of numbers from the red set (the doubles).

This is because, by our very definition, the blue set is a base-10 mantissa/exponent representation, and a double is a base-2 mantissa/exponent representation. Any value represented as a base-2 mantissa and exponent (1.00110101001 x 2 ^ (-11101001101001)) means: take the mantissa value (1.00110101001) and multiply it by 2 raised to the power of the exponent (when the exponent is negative, this is equivalent to dividing by 2 to the power of the absolute value of the exponent). This means that where the exponent is negative (or where any portion of the mantissa is a fractional binary), the number cannot be represented as a decimal mantissa and exponent, and vice versa.
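A minimal sketch of that base-2 layout, pulling the sign, exponent and mantissa fields of a double out with BitConverter:

    // Minimal sketch: the raw IEEE 754 fields of the double 0.1.
    using System;

    class DoubleBits
    {
        static void Main()
        {
            long bits = BitConverter.DoubleToInt64Bits(0.1);

            long sign     = (bits >> 63) & 1;
            long exponent = ((bits >> 52) & 0x7FF) - 1023;   // stored with a bias of 1023
            long mantissa = bits & 0xFFFFFFFFFFFFF;          // 52 stored fraction bits

            Console.WriteLine($"sign={sign} exponent={exponent} mantissa=0x{mantissa:X}");
            // sign=0 exponent=-4 mantissa=0x999999999999A
        }
    }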

For any arbitrary real number that falls randomly on the real number line, it will either be closer to one of the blue decimals or to one of the red doubles.

Decimal is more precise but has a smaller range. You would generally use double for physics and mathematical calculations, but you would use decimal for financial and monetary calculations.
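A minimal sketch of why money favours decimal: repeatedly adding ten cents accumulates binary rounding error in double but none in decimal.

    // Minimal sketch: summing 0.1 ten times in each type.
    using System;

    class MoneyDemo
    {
        static void Main()
        {
            double  d = 0.0;
            decimal m = 0.00m;
            for (int i = 0; i < 10; i++)
            {
                d += 0.1;     // nearest-binary approximation of 0.1, ten times
                m += 0.10m;   // exactly ten cents, ten times
            }

            Console.WriteLine(d == 1.0);    // False: accumulated rounding error
            Console.WriteLine(m == 1.00m);  // True
        }
    }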

See the following articles on MSDN for details.

Double http://msdn.microsoft.com/en-us/library/678hzkk9.aspx

Decimal http://msdn.microsoft.com/en-us/library/364x0z75.aspx

It seems like most of the responses here to "it does not do what I want" boil down to "but it's faster". Well, so is ANSI C plus the GMP library, but nobody is advocating that, right?

If you particularly want to control accuracy, then there are other languages which have taken the time to implement exact precision in a user-controllable way:

http://www.doughellmann.com/PyMOTW/decimal/

If precision is really important to you, then you are probably better off using languages that mathematicians would use. If you do not like Fortran then Python is a modern alternative.

Whatever language you are working in, remember the golden rule: avoid mixing types. Convert a and b to the same type before you attempt a operator b.
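A minimal C# sketch of that rule (the price/tax names are just illustrative): the compiler refuses to combine decimal and double implicitly, so convert one operand before the operation.

    // Minimal sketch: C# will not mix decimal and double in one expression.
    using System;

    class MixingTypes
    {
        static void Main()
        {
            decimal price   = 19.99m;
            double  taxRate = 0.0825;               // arrived from a double-based source

            // decimal total = price * taxRate;     // compile error: operands of different types

            decimal total = price * (decimal)taxRate;   // convert first, then operate
            Console.WriteLine(total);
        }
    }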

If I were to hazard a guess, I'd say those functions leverage low-level math functionality (perhaps in C) that does not use decimals internally, and so returning a decimal would require a cast from double to decimal anyway. Besides, the purpose of the decimal value type is to ensure accuracy; these functions do not and cannot return 100% accurate results without infinite precision (e.g., irrational numbers).
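A minimal sketch of what that implies in practice: System.Math has no decimal overloads, so calling it from decimal code means a round trip through double anyway.

    // Minimal sketch: Math.Sqrt works in double, so decimal callers must cast.
    using System;

    class MathWithDecimal
    {
        static void Main()
        {
            decimal x = 2.0m;

            // There is no Math.Sqrt(decimal); cast out to double and back.
            decimal root = (decimal)Math.Sqrt((double)x);

            Console.WriteLine(root);   // only ~15 digits of sqrt(2) are meaningful
        }
    }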

Neither decimal nor float nor double is good enough if you require something to be precise. Furthermore, decimal is so expensive and overused out there that it is becoming a regular joke.

If you work in fractions and require ultimate precision, use fractions. It's the same old rule: convert once, and only when necessary. Your rounding rules will also vary per app, domain and so on, but surely you can find an odd example or two where it is suitable. But again, if you want fractions and ultimate precision, the answer is not to use anything but fractions. Consider that you might want a feature of arbitrary precision as well.
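A minimal sketch of that idea, not a production type: an exact fraction built on System.Numerics.BigInteger, so every intermediate result stays a true rational with no rounding.

    // Minimal sketch: exact rational arithmetic on top of BigInteger.
    using System;
    using System.Numerics;

    readonly struct Fraction
    {
        public readonly BigInteger Num, Den;

        public Fraction(BigInteger num, BigInteger den)
        {
            if (den.IsZero) throw new DivideByZeroException();
            if (den.Sign < 0) { num = -num; den = -den; }
            var g = BigInteger.GreatestCommonDivisor(BigInteger.Abs(num), den);
            Num = num / g;
            Den = den / g;
        }

        public static Fraction operator +(Fraction a, Fraction b)
            => new Fraction(a.Num * b.Den + b.Num * a.Den, a.Den * b.Den);

        public static Fraction operator *(Fraction a, Fraction b)
            => new Fraction(a.Num * b.Num, a.Den * b.Den);

        public override string ToString() => $"{Num}/{Den}";
    }

    class FractionDemo
    {
        static void Main()
        {
            var oneTenth = new Fraction(1, 10);
            var sum = oneTenth;
            for (int i = 0; i < 9; i++) sum += oneTenth;   // ten times 1/10

            Console.WriteLine(sum);   // 1/1, exact at every intermediate step
        }
    }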

The actual problem with the CLR in general is that it is awkward, and plain broken, to implement a library that deals with numerics in a generic fashion, largely due to bad primitive design and a shortcoming of the most popular compiler for the platform. It's almost the same as the Java fiasco.

double just turns out to be the best compromise covering most domains, and it works well, despite the fact that the MS JIT is still incapable of utilising CPU technology that is about 15 years old now.


Double is a built-in type. It is supported by the FPU/SSE core (formerly known as the "math coprocessor"), which is why it is blazingly fast, especially at multiplication and scientific functions.

Decimal is actually a complex structure, consisting of several integers.
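A minimal sketch of that structure: decimal.GetBits exposes the four 32-bit integers (a 96-bit coefficient plus a sign/scale word) that every decimal operation has to manipulate in software.

    // Minimal sketch: the four ints behind the decimal value 1.23.
    using System;

    class DecimalLayout
    {
        static void Main()
        {
            int[] parts = decimal.GetBits(1.23m);

            // parts[0..2] hold the 96-bit coefficient 123; parts[3] packs the
            // sign and the scale (2 here, meaning "divide by 10^2").
            Console.WriteLine(string.Join(", ", parts));   // 123, 0, 0, 131072
        }
    }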

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow