Question

I'm working on something and I've got a problem which I do not understand.

double d = 95.24 / (double)100;
Console.Write(d); //Break point here

The console output is 0.9524 (as expected), but if I inspect 'd' after stopping the program, the debugger shows 0.95239999999999991.

I have tried every cast possible and the result is the same. The problem is that I use 'd' elsewhere, and this precision problem makes my program fail.

So why does it do that? How can I fix it?

Solution

http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems

The short of it is that a floating-point number is stored in what amounts to base-2 scientific notation: a significand with a single digit in front of the binary point, multiplied by an integer power of two. This allows numbers to be stored in a relatively compact format; the downside is that the conversion from base 10 to base 2 and back can introduce error, because many exact base-10 fractions (such as 95.24 or 0.9524) have no exact base-2 representation.
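You can see this without the debugger by asking for a round-trip string. A small sketch (the default output can differ slightly between runtime versions; .NET Framework rounds the display to 15 significant digits, newer runtimes print the shortest round-trippable form):

double d = 95.24 / (double)100;

Console.WriteLine(d);                 // 0.9524 on .NET Framework (display rounding)
Console.WriteLine(d.ToString("G17")); // 0.95239999999999991 -- the value the double actually holds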

To mitigate this, whenever a value needs to behave like an exact base-10 number, use decimal instead of double. decimal is a 128-bit floating-point type that stores base-10 digits, so a value like 0.9524 is represented exactly. The cost is a reduced range (it can only represent numbers up to about ±7.9E28, versus double's ±1.8E308; still plenty for most non-astronomical, non-physics programs) and twice the memory footprint of a double.
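For the calculation in the question, a sketch of what that looks like (note the m suffix, which makes the literals decimal):

decimal d = 95.24m / 100m;

Console.WriteLine(d);            // 0.9524
Console.WriteLine(d == 0.9524m); // True -- 0.9524 is exactly representable in decimal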

OTHER TIPS

Use decimal instead of double.

A very good article that explains this in depth: What Every Computer Scientist Should Know About Floating-Point Arithmetic. It is not specific to C#; it covers floating-point arithmetic in general.

You could use a decimal instead (note the m suffix on both literals; without it, 95.24 is a double, and C# will not let you divide a double by a decimal):

decimal d = 95.24m / 100m;
Console.Write(d); //Break point here

Try:

double d = Math.Round(95.24 / (double)100, 4);

Console.Write(d);
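Note that the rounded value is still a double, so it holds the double closest to 0.9524 rather than 0.9524 exactly; it should, however, now compare equal to the literal. A rough sketch:

double d = Math.Round(95.24 / (double)100, 4);

Console.WriteLine(d);           // 0.9524
Console.WriteLine(d == 0.9524); // expected: True -- both sides are the double nearest to 0.9524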

edit: Or use a decimal, yeah. I was just trying to keep the answer as similar to the question as possible :)

Licensed under: CC-BY-SA with attribution