Do I just have to accept this as a fact of life, or is there any advice you can offer?
You have to accept it as a fact of life. Floating point code can be optimized differently in different situations. In particular, in some cases the JIT compiler can use a representation with more precision (e.g. 80-bit floating point) for intermediate operations. The situations under which the JIT compiler does this depend on the architecture, optimization settings and so on, and any number of subtleties about what you do with a variable (and whether or not it's a local variable) can affect it.

Running under a debugger changes JIT optimization settings very significantly in general - not just for floating point - so I'm not at all surprised by this.
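The 80-bit behaviour itself depends on the CPU and the JIT, so it's hard to reproduce on demand, but the underlying point - that how intermediate results are evaluated and rounded changes the final answer - is easy to demonstrate. Here's a minimal sketch (in Java purely for illustration, since your platform isn't shown) using the fact that floating point addition isn't associative:

```java
// Illustrative class name only - not from your code.
public class EvaluationOrder {
    public static void main(String[] args) {
        double a = 1e16, b = -1e16, c = 1.0;

        // Mathematically these are the same sum, but the rounding of the
        // intermediate result differs, so the final values differ.
        System.out.println((a + b) + c); // 1.0
        System.out.println(a + (b + c)); // 0.0  (b + c rounds back to -1e16)
    }
}
```

Extra intermediate precision has the same kind of effect: keep a value in an 80-bit register and you round to 64 bits once at the end; spill it to a 64-bit variable and you round earlier, so the results can differ in the last few bits.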
If you perform floating point comparisons with a certain tolerance, it should be fine - it's very rarely a good idea to do exact equality comparisons on floating point types anyway. Of course it's possible that you're actually performing a non-equality comparison where the differences become significant, but I've rarely come across that as a problem.
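As a sketch of what a tolerance-based comparison might look like (nearlyEqual, absTol and relTol are names made up for illustration; pick tolerances that suit your data):

```java
public class ApproxEquals {
    // Hypothetical helper, not from any library: combined absolute/relative
    // tolerance check. The absolute tolerance handles values near zero,
    // the relative tolerance scales with the magnitude of the inputs.
    static boolean nearlyEqual(double a, double b, double absTol, double relTol) {
        double diff = Math.abs(a - b);
        if (diff <= absTol) {
            return true;
        }
        return diff <= relTol * Math.max(Math.abs(a), Math.abs(b));
    }

    public static void main(String[] args) {
        double x = 0.1 + 0.2;                                  // 0.30000000000000004
        System.out.println(x == 0.3);                          // false: exact comparison
        System.out.println(nearlyEqual(x, 0.3, 1e-12, 1e-9));  // true: within tolerance
    }
}
```

The right tolerance depends entirely on the scale of your values and how much error your calculations can accumulate, so treat the numbers above as placeholders rather than recommendations.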