Question

Maybe this is a very basic question, but I am really interested to know what actually happens.

For example, if we do the following in C#:

object obj = "330.1500249000119";
var val = Convert.ToDouble(obj);

The val becomes: 330.15002490001189

The question is: why is the last 9 replaced by 89? Can we stop this from happening? And is this precision dependent on the current culture?


Solution

This has nothing to do with culture. Some numbers cannot be represented exactly in base 2, just as 1/3 cannot be represented exactly in base 10 by 0.3333333.

Note that in your specific case you are putting in more digits than the data type allows: a Double gives you 15-16 significant digits (depending on the value), and your number goes beyond that.
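
To see this, you can print the parsed value with the "G17" round-trip format, which shows every digit the double actually stores (a quick sketch, assuming a culture that uses '.' as the decimal separator, as in the question):

object obj = "330.1500249000119";
var val = Convert.ToDouble(obj);
// "G17" prints enough digits to reproduce the stored bits exactly
Console.WriteLine(val.ToString("G17")); // 330.15002490001189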

Instead of a Double, you can use a Decimal in this case:

object obj = "330.1500249000119";
var val = Convert.ToDecimal(obj);

OTHER TIPS

A decimal would retain the precision.

object obj = "330.1500249000119";
var val = Convert.ToDecimal(obj);

The "issue" you are having is floating point representation.

http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

No, you can't stop it from happening. You are parsing a value that has more digits than the data type can represent.

The precision does not depend on the culture. A double always has the same precision.

So, if you don't want it to happen, simply don't do it: if you don't want the effects of the limited precision of floating-point numbers, don't use floating-point numbers. If you use a Decimal (a base-10 type) instead, it can represent this value exactly.
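
A minimal sketch of the difference (again assuming a '.'-decimal culture for the input string):

object obj = "330.1500249000119";
var asDouble = Convert.ToDouble(obj);   // nearest representable base-2 value
var asDecimal = Convert.ToDecimal(obj); // exact base-10 value
Console.WriteLine(asDouble.ToString("G17")); // 330.15002490001189
Console.WriteLine(asDecimal);                // 330.1500249000119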

A CPU represents a double in 8 bytes, divided into 1 sign bit, 11 bits for the exponent ("the range") and 52 bits for the mantissa ("the precision"). You have limited range and limited precision.
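
For illustration, you can inspect those three fields in C# with BitConverter.DoubleToInt64Bits (a quick sketch; the field boundaries follow the standard IEEE 754 binary64 layout described above):

double d = 330.1500249000119;
long bits = BitConverter.DoubleToInt64Bits(d);
long sign     = (bits >> 63) & 0x1;     // 1 bit
long exponent = (bits >> 52) & 0x7FF;   // 11 bits, biased by 1023
long mantissa = bits & 0xFFFFFFFFFFFFF; // 52 bits, implicit leading 1 not stored
Console.WriteLine($"sign={sign}, exponent={exponent - 1023}, mantissa={Convert.ToString(mantissa, 2).PadLeft(52, '0')}");
// exponent - 1023 is 8 for this value, since 2^8 <= 330.15... < 2^9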

The C constant DBL_DIG in <float.h> tells you that such a double can only represent 15 digits reliably, not more. But this number is entirely dependent on your C library and CPU.

330.1500249000119 contains 16 significant digits, so at 15 digits it gets rounded to 330.150024900012. The stored value, 330.15002490001189, is only one off in the 16th digit, which is good; normally you should expect to see the rounded ...1.2 rather than ...1.189.
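
You can see this rounding directly in C# (a sketch; "G15" mirrors the 15 reliable digits, "G17" exposes what is actually stored):

var val = Convert.ToDouble("330.1500249000119");
Console.WriteLine(val.ToString("G15")); // 330.150024900012   -- the 15 reliable digits
Console.WriteLine(val.ToString("G17")); // 330.15002490001189 -- the extra digits show the rounding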

For the exact mathematics behind this, read David Goldberg, “What Every Computer Scientist Should Know About Floating-Point Arithmetic,” ACM Computing Surveys 23, 1 (March 1991), 5-48. It is worth reading if you are interested in the details, but it requires some background in computer science. http://www.validlab.com/goldberg/paper.pdf

You can stop this particular case from happening by using wider floating-point types, such as long double or __float128, or by using a CPU such as SPARC64 or S390, which implements long double as a 128-bit __float128 (about 33 significant decimal digits) natively in hardware.

Yes, using an UltraSparc/Niagara or an IBM S390 is culture.

The usual answer is: use long double, dude. That gives you two more bytes on Intel (18 digits), several more on PowerPC (31 digits), and a full 128-bit type on SPARC64/S390 (about 33 digits).

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow