Question

I have a program written in C#, and some parts are written in native C/C++. I use doubles to calculate some values, and sometimes the results are wrong because the precision is too small. After some investigation I figured out that someone is setting the floating-point precision to 24 bits. My code works fine when I reset the precision to at least 53 bits (using _fpreset or _controlfp), but I still need to figure out who is responsible for setting the precision to 24 bits in the first place.
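For reference, a minimal sketch of the workaround mentioned above, assuming MSVC on 32-bit x86 (where the _MCW_PC precision-control bits are actually honoured); the function name is just illustrative:

    #include <float.h>
    #include <stdio.h>

    void restore_double_precision()
    {
        unsigned int current = 0;
        _controlfp_s(&current, 0, 0);            // read-only query of the control word

        if ((current & _MCW_PC) == _PC_24)       // someone dropped us to 24-bit precision
            printf("FPU precision was reduced to 24 bits\n");

        unsigned int unused = 0;
        _controlfp_s(&unused, _PC_53, _MCW_PC);  // restore 53-bit (double) precision
    }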

Any ideas how I could achieve this?

Solution

This is caused by the default Direct3D device initialisation. You can tell Direct3D not to mess with the FPU precision by passing the D3DCREATE_FPU_PRESERVE flag to CreateDevice. There is also a managed code equivalent to this flag (CreateFlags.FpuPreserve) if you need it.
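A minimal sketch of passing that flag during Direct3D 9 device creation (assuming an existing IDirect3D9 pointer and window handle; the helper name and presentation parameters are just illustrative):

    #include <d3d9.h>

    IDirect3DDevice9* CreateDeviceFpuPreserve(IDirect3D9* d3d, HWND hWnd)
    {
        D3DPRESENT_PARAMETERS pp = {};
        pp.Windowed      = TRUE;
        pp.SwapEffect    = D3DSWAPEFFECT_DISCARD;
        pp.hDeviceWindow = hWnd;

        IDirect3DDevice9* device = nullptr;
        HRESULT hr = d3d->CreateDevice(
            D3DADAPTER_DEFAULT,
            D3DDEVTYPE_HAL,
            hWnd,
            // FPU_PRESERVE tells D3D to leave the FPU control word alone
            D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_FPU_PRESERVE,
            &pp,
            &device);

        return SUCCEEDED(hr) ? device : nullptr;
    }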

More information can be found at Direct3D and the FPU.

OTHER TIPS

What about doing a binary search through your program, partitioning its execution and determining which call reduces the precision? A small check helper is sketched below.
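A minimal sketch of such a check, again assuming MSVC on 32-bit x86 and a hypothetical helper name; sprinkle it between suspected calls and see where the report first fires:

    #include <float.h>
    #include <stdio.h>

    void check_fpu_precision(const char* where)
    {
        unsigned int cw = 0;
        _controlfp_s(&cw, 0, 0);                 // read-only query of the control word
        unsigned int pc = cw & _MCW_PC;
        if (pc != _PC_53 && pc != _PC_64)        // precision no longer suitable for doubles
            printf("precision reduced somewhere before: %s\n", where);
    }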

Is your code using DirectX or XNA at all? I've certainly heard that there are problems due to that - some DirectX initialization code (possibly only in the managed wrapper?) reduces the precision.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow