How does C# evaluate floating point in hover-over and the Immediate Window versus compiled code?

StackOverflow https://stackoverflow.com/questions/1405486


Question

I am seeing something odd with storing doubles in a dictionary, and am confused as to why.

Here's the code:

            Dictionary<string, double> a = new Dictionary<string, double>();
            a.Add("a", 1e-3);

            if (1.0 < a["a"] * 1e3)
                Console.WriteLine("Wrong");

            if (1.0 < 1e-3 * 1e3)
                Console.WriteLine("Wrong");

The second if statement works as expected: 1.0 is not less than 1.0. The first if statement, however, evaluates as true. The very odd thing is that when I hover over the if, IntelliSense tells me false, yet the code happily moves into the Console.WriteLine.

This is for C# 3.5 in Visual Studio 2008.

Is this a floating point accuracy problem? Then why does the second if statement work? I feel I am missing something very fundamental here.

Any insight is appreciated.

Edit2 (Re-purposing question a little bit):

I can accept the math precision problem, but my question now is: why does the hover-over evaluate properly? The same is true of the Immediate Window. I paste the code from the first if statement into the Immediate Window and it evaluates to false.

Update

First of all, thanks very much for all the great answers.

I am also having problems recreating this in another project on the same machine. Looking at the project settings, I see no differences. Looking at the IL between the projects, I see no differences. Looking at the disassembly, I see no apparent differences (besides memory addresses). Yet when I debug the original project, I see: [screenshot of the problem]

The Immediate Window tells me the if is false, yet the code falls into the conditional.

At any rate, the best answer, I think, is to prepare for floating point arithmetic in these situations. The reason I couldn't let this go has more to do with the debugger's calculations differing from the runtime. So thanks very much to Brian Gideon and stephentyrone for some very insightful comments.


Solution

It is a floating-point precision problem.

The second statement works because the compiler evaluates the constant expression 1e-3 * 1e3 before emitting the .exe.

Look at it in ILDasm or Reflector; you will see that the compiler emitted something like

            if (1.0 < 1.0)
                Console.WriteLine("Wrong");
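One way to see the difference yourself is the sketch below (a hedged illustration; the second result depends on the JIT and CPU, and may differ on x64 where the JIT uses SSE2 for floating point):

            // Constant expression: the compiler folds 1e-3 * 1e3 to exactly 1.0
            // before emitting the .exe, so this prints False.
            Console.WriteLine(1.0 < 1e-3 * 1e3);

            // Non-constant operand: the multiplication happens at run time, where
            // an x87-based JIT may keep the product in extended precision, so this
            // can print True in a 32-bit build.
            double x = 1e-3;
            Console.WriteLine(1.0 < x * 1e3);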

OTHER TIPS

The problem here is quite subtle. The C# compiler doesn't (always) emit code that does the computation in double, even when that's the type that you've specified. In particular, it emits code that does the computation in "extended" precision using x87 instructions, without rounding intermediate results to double.

Depending on whether 1e-3 is evaluated as a double or long double, and whether the multiplication is computed in double or long double, it is possible to get any of the following three results:

  • (long double)1e-3 * 1e3 computed in long double is 1.0 - epsilon
  • (double)1e-3 * 1e3 computed in double is exactly 1.0
  • (double)1e-3 * 1e3 computed in long double is 1.0 + epsilon

Clearly, the first comparison, the one that is failing to meet your expectations, is being evaluated in the manner described in the third scenario I listed. 1e-3 is being rounded to double either because you are storing it and loading it again, which forces the rounding, or because C# recognizes 1e-3 as a double-precision literal and treats it that way. The multiplication is being evaluated in long double simply because that is the way the compiler generates the code here.

The multiplication in the second comparison is either being evaluated using one of the other two methods (you can figure out which by trying "1 > 1e-3 * 1e3"), or the compiler is rounding the result of the multiplication before comparing it with 1.0 when it evaluates the expression at compile time.

It may be possible to tell the compiler not to use extended precision through a build setting; targeting a platform whose JIT uses SSE2 for floating point (x64, for example) may also work.
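A related mitigation (a hedged sketch reusing the dictionary from the question, not something the original poster tried): the C# specification says an explicit cast can be used to force a floating-point value back to the exact precision of its type, so casting the product before comparing gives the expected result regardless of how the multiplication is scheduled.

            Dictionary<string, double> a = new Dictionary<string, double>();
            a.Add("a", 1e-3);

            // The explicit cast forces the product back to double precision
            // before the comparison, per the language specification's note on
            // floating-point precision.
            if (1.0 < (double)(a["a"] * 1e3))
                Console.WriteLine("Wrong");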

See the answers here

Umm...strange. I am not able to reproduce your problem. I am using C# 3.5 and Visual Studio 2008 as well. I have typed in your example exactly as it was posted and I am not seeing either Console.WriteLine statement execute.

Also, the second if statement is getting optimized out by the compiler. When I examine both the debug and release builds in ILDASM/Reflector I see no evidence of it. That makes sense because I get a compiler warning saying unreachable code was detected on it.
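For reference, a minimal illustration (not the poster's actual build output) of why that warning appears: the condition is a compile-time constant false, so the body is unreachable.

            // 1e-3 * 1e3 folds to 1.0 at compile time, and 1.0 < 1.0 is a
            // constant false, so the compiler reports the branch as unreachable
            // and omits it from the emitted IL.
            if (1.0 < 1e-3 * 1e3)
                Console.WriteLine("Wrong");   // warning CS0162: Unreachable code detected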

Finally, I do not see how this could be a floating point precision issue anyway. Why would the C# compiler statically evaluate two doubles differently than the CLR would at runtime? If that were really the case then one could make the argument that the C# compiler has a bug.

Edit: After giving this a little more thought I am even more convinced that this is not a floating point precision issue. You must have either stumbled across a bug in the compiler or the debugger, or the code you posted is not exactly representative of the actual code that is running. I am highly skeptical of a bug in the compiler, but a bug in the debugger seems more likely. Try rebuilding the project and running it again. Maybe the debugging information compiled along with the exe got out of sync or something.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow