Question

At work we have MSVS2010 Ultimate, and I'm writing a program which runs exhaustive simulations using real numbers. I'm getting non-trivial round-off errors and I've already taken reasonable steps to ensure my algorithm is as numerically stable as possible.

I'd like to switch to 128-bit quadruple-precision floating-point numbers (long double, right?) to see how much of a difference it makes.

I've replaced all relevant instances of double with long double, recompiled, and re-ran my dummy simulation, but I get exactly the same results as before.
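For reference, a quick check along these lines reports what long double actually provides on a given toolchain:

    #include <iostream>
    #include <limits>

    int main() {
        // Under MSVC, long double is just another name for the 64-bit double
        // format, so this typically prints 8 bytes and 53 mantissa bits; a
        // genuinely wider type would report larger values.
        std::cout << "sizeof(long double): " << sizeof(long double) << " bytes\n";
        std::cout << "mantissa bits:       "
                  << std::numeric_limits<long double>::digits << '\n';
        return 0;
    }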

These are my (Debug) compiler options, as shown on the project's C/C++ property page:

/ZI /nologo /W3 /WX- /Od /Oy- /D "_MBCS" /Gm /EHsc /RTC1 /GS /fp:precise /Zc:wchar_t /Zc:forScope /Fp"Debug\FFTU.pch" /Fa"Debug\" /Fo"Debug\" /Fd"Debug\vc100.pdb" /Gd /analyze- /errorReport:queue

My dev CPU is a Core2 Duo T7300 but the target machine will be an i7. Both installations are Windows 7 64-bit.


Solution

The reason nothing changed is that Visual C++ maps long double onto the same 64-bit format as double, so the type substitution is a no-op. You could switch to a non-Microsoft compiler such as gcc, Borland, or Intel; those all recognize long double as 80-bit extended precision, the native internal format of the 8087. That is not true 128-bit quadruple precision, but it does give you 64 mantissa bits instead of 53.
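As a rough illustration (assuming a toolchain where long double really is 80-bit extended, such as gcc on x86), something along these lines makes the extra mantissa bits visible:

    #include <iostream>
    #include <iomanip>
    #include <limits>

    int main() {
        // Under a compiler whose long double is the 80-bit x87 extended format
        // (e.g. gcc on x86), the 1e-17 term survives the addition below; under
        // MSVC, where long double is the same 64-bit format as double, both
        // results print identically.
        double      d  = 1.0  + 1e-17;   // below double's epsilon (~2.2e-16), so lost
        long double ld = 1.0L + 1e-17L;  // representable with a 64-bit mantissa

        std::cout << std::setprecision(20)
                  << "double      : " << d  << '\n'
                  << "long double : " << ld << '\n'
                  << "mantissa bits: "
                  << std::numeric_limits<double>::digits      << " vs "
                  << std::numeric_limits<long double>::digits << '\n';
        return 0;
    }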

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow