Question

I have the following code sample:

        float val = 16777216.0F;
        var badResult = Convert.ToDecimal(val);
        // badResult has the value 16777220

Why is this precision lost? The value specified is 2^24, a value which float can represent exactly. Are there any .NET libraries I can use to get this conversion to work correctly without having to roll my own ICustomFormatter?

Thanks!

Edit: this is the ugly code I used as a solution:

        var goodResult = Convert.ToDecimal((double)val);

Solution

From the documentation for System.Single:

By default, a Single value contains only 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.

The value given is indeed correct to 7 significant digits. While the exact value of the float is in fact the one you've given, it seems reasonable for the conversion to decimal to preserve only the number of digits which are known to be correct, with the final digit being rounded.
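A minimal sketch comparing the two conversion paths (the expected values are the ones from the question; the class and method names are just illustrative):

```csharp
using System;

class ConversionDemo
{
    static void Main()
    {
        float val = 16777216.0F; // 2^24, exactly representable in a 32-bit float

        // The Single overload of Convert.ToDecimal rounds to at most
        // 7 significant digits, so the last digit is lost.
        decimal badResult = Convert.ToDecimal(val);
        Console.WriteLine(badResult);   // 16777220

        // Widening float -> double is exact, and the Double overload
        // keeps up to 15 significant digits, so the value survives.
        decimal goodResult = Convert.ToDecimal((double)val);
        Console.WriteLine(goodResult);  // 16777216
    }
}
```

This is why the workaround in the question's edit behaves correctly: the cast to double loses nothing, and only the choice of Convert.ToDecimal overload changes.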

OTHER TIPS

The float type (System.Single) does represent this value exactly, since 2^24 fits in its 24-bit significand; the problem is that Convert.ToDecimal(Single) rounds its result to at most 7 significant digits, which is also why val.ToString() displays 1.677722E+07. If you cast to double (System.Double) first, the widening conversion is lossless and the Double overload preserves up to 15 significant digits, so it works as expected.
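To convince yourself that the float itself is exact, a quick check (a hypothetical snippet, not from the original post):

```csharp
using System;

class ExactnessCheck
{
    static void Main()
    {
        float val = 16777216.0F;

        // 2^24 fits in float's 24-bit significand, so comparing against
        // the exact double value succeeds: no precision was lost in the float.
        Console.WriteLine(val == 16777216.0);   // True

        // 2^24 + 1, by contrast, cannot be represented: floats near 2^24
        // are spaced 2 apart, and the literal rounds back down to 2^24.
        float next = 16777217.0F;
        Console.WriteLine(next == 16777216.0F); // True
    }
}
```

So the rounding happens in the conversion to decimal, not in the float itself.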

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow