Question

I am using a debug build and getting different results on the same machine depending on whether or not I run under the debugger. I am using the excellent TestDriven.Net to run the unit tests.

  • "run" with TestDriven.Net or the external NUnit runner produces the same result
  • "run with debugger" with TestDriven.Net produces different results

The code is

  • A complex iterative mesh deformation routine involving significant computation at the limits of floating point precision
  • C#, VS2012, targeting .NET 3.5
  • Single threaded
  • Only debug build, no release version has been built
  • Same machine, no power saving/SpeedStep or other such feature that I am aware of
  • Vanilla C# - No unsafe code, unmanaged libraries, platform invoke etc.
  • No debugger checks in code or strange third-party libraries

I have not tracked back to the first point of difference (tricky without a debugger!), but given how iterative the code is, it's input-sensitive and the tiniest difference will grow to significant proportions given enough time.
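To illustrate the kind of sensitivity I mean (this is a toy chaotic iteration, not the actual deformation code), a starting difference on the order of one ulp becomes clearly visible after a hundred iterations:

    using System;

    class SensitivityDemo
    {
        // Toy illustration, not the real mesh code: a starting difference of
        // roughly one ulp grows into a visible difference after ~100 steps
        // of a chaotic iteration (the logistic map at r = 3.9).
        static void Main()
        {
            double x1 = 0.3;
            double x2 = 0.3 + 1e-16;   // on the order of one ulp away from 0.3

            for (int i = 0; i < 100; i++)
            {
                x1 = 3.9 * x1 * (1.0 - x1);
                x2 = 3.9 * x2 * (1.0 - x2);
            }

            Console.WriteLine(x1 - x2); // no longer negligibly small
        }
    }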

I am aware of how fragile floating-point reproducibility is across compilers, platforms and architectures, but I am disappointed to find that the debugger is one of the factors that can throw this off.

Do I just have to accept this as a fact of life, or is there any advice you can offer?


Solution

Do I just have to accept this as a fact of life, or is there any advice you can offer?

You have to accept it as a fact of life. Floating-point code can be optimized differently in different situations. In particular, in some cases the JIT compiler can use a representation with more precision/accuracy (e.g. 80-bit floating point) for intermediate operations. The situations under which the JIT compiler will do this depend on the architecture, optimization settings etc. There can be any number of subtleties about what you do with a variable (and whether it's a local variable or not) which can affect this. Running under a debugger changes JIT optimization very significantly in general - not just for floating point - so I'm not at all surprised by this: with optimizations disabled, intermediate values that would otherwise stay in extended-precision registers are more likely to be spilled to memory and truncated to 64 bits.
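If bit-for-bit repeatability matters more to you than speed, one general workaround is to force intermediate results through explicit double casts - a minimal sketch with an illustrative method name is below. The compiler emits a conv.r8 instruction for each such cast, which the runtime is required to truncate to 64-bit precision, so the truncation happens whether or not the debugger is attached:

    // Sketch: forcing intermediate results back to 64-bit precision.
    // DeformStep is an illustrative name, not a real API.
    static double DeformStep(double x, double a, double b)
    {
        // Without the casts, x * a may be held at extended precision in a
        // register in an optimized run but truncated when spilled to memory
        // in a debugger run - the casts make the truncation happen either way.
        double t = (double)(x * a);
        return (double)(t + b);
    }

This deliberately discards the extra precision (and costs a little speed), but it makes the two runs agree. On the 64-bit JIT, which does its arithmetic in SSE registers at the declared precision, this particular source of variation is far less likely to appear in the first place.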

If you perform floating point comparisons with a certain tolerance, it should be fine - it's very rarely a good idea to do exact equality comparisons on floating point types anyway. Of course it's possible that you're actually performing a non-equality comparison where the differences become significant, but I've rarely come across that as a problem.
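As a minimal sketch of that (the 1e-9 tolerance is an arbitrary placeholder - pick one that suits the scale of your data):

    // Tolerance-based comparison instead of exact equality.
    static bool NearlyEqual(double expected, double actual, double tolerance)
    {
        return Math.Abs(expected - actual) <= tolerance;
    }

    // e.g. in a test: Assert.IsTrue(NearlyEqual(expected, actual, 1e-9));
    // NUnit's delta overload expresses the same idea: Assert.AreEqual(expected, actual, 1e-9);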

Licensed under: CC-BY-SA with attribution