Can a calculation of floating point differ on different processors? (+passing doubles between C# and C)

StackOverflow https://stackoverflow.com/questions/2335529

Question

I have an application written in C# that also invokes some C code. The C# code takes a double as input, performs some calculations on it, passes it to the native layer, which performs its own calculations on it, and then passes the result back to the C# layer.

If I run the same exe/DLLs on different machines (all of them x64 Intel), is it possible that the final result I get will differ between machines?
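For context, here is a minimal sketch of the kind of native entry point involved, assuming a plain C DLL called from C# via P/Invoke; the function name and the calculation are hypothetical placeholders, not the actual application code:

```c
/* Hypothetical native entry point: the C# layer would P/Invoke a
 * function like this, passing in the double it computed and reading
 * the result back. */
#include <math.h>

#ifdef _WIN32
#  define EXPORT __declspec(dllexport)
#else
#  define EXPORT
#endif

/* Receives the double computed in C#, does its own (placeholder) work,
 * and returns the result to the managed side. */
EXPORT double native_transform(double value)
{
    return sqrt(value) * 2.0;
}
```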


Solution

If you use the same executable(s), the results should be the same. However, it is worth noting that floating-point behaviour is usually highly customizable through a number of persistent settings (infinity mode, rounding mode, etc.). This means that the same floating-point instruction can produce different results depending on the current combination of settings. If your application makes sure that all of these settings are reset to the same values at the beginning of execution, the results should be the same. However, if some of these settings are not reset, or depend on external parameters (such as environment variables), then in some circumstances you might observe different results on different machines.
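A minimal sketch of this effect, assuming a C99 compiler with <fenv.h> support: the same division rounds differently depending on the rounding mode currently in effect, so resetting the mode to a known state at startup is one way to keep results reproducible.

```c
/* The same arithmetic can round differently depending on the current
 * floating-point environment (C99 <fenv.h>). */
#include <fenv.h>
#include <stdio.h>

int main(void)
{
    /* volatile so the division happens at run time, not at compile time */
    volatile double x = 1.0, y = 3.0;
    double a, b;

    fesetround(FE_TONEAREST);   /* the usual default */
    a = x / y;

    fesetround(FE_UPWARD);      /* a different persistent setting */
    b = x / y;

    /* The two results typically differ in the last bit. */
    printf("to-nearest: %.20f\n", a);
    printf("upward:     %.20f\n", b);

    fesetround(FE_TONEAREST);   /* restore a known state before other code runs */
    return 0;
}
```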

Other tips

The hardware itself should all do it the same way, assuming it implements IEEE floating-point operations, and I think most (all?) does.

http://en.wikipedia.org/wiki/IEEE_754-2008

Most modern hardware is standardised, as is the definition of double. You can check that both sides are using the same representation by checking the memory footprint of each variable, e.g. sizeof(x).
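As a quick sanity check (a minimal sketch, assuming the native side is plain C), you can print the size and mantissa width of double and compare them with what the C# side expects (sizeof(double) in C# is 8):

```c
/* Confirm the native double matches the 8-byte IEEE 754 binary64 layout
 * that the C# side assumes. */
#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("sizeof(double) = %zu bytes\n", sizeof(double)); /* expect 8 */
    printf("DBL_MANT_DIG   = %d\n", DBL_MANT_DIG);          /* expect 53 for binary64 */
    return 0;
}
```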

There should also be some information you can poll in float.h.
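For example, a small C program can dump the relevant <float.h> values, which should match across machines that use IEEE 754 binary64:

```c
/* <float.h> exposes the properties of the platform's floating-point
 * types, which can be compared across machines. */
#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD); /* 0: operations use the type's own precision */
    printf("DBL_MANT_DIG    = %d\n", DBL_MANT_DIG);    /* 53 for IEEE 754 binary64 */
    printf("DBL_DIG         = %d\n", DBL_DIG);         /* 15 decimal digits of precision */
    printf("DBL_EPSILON     = %g\n", DBL_EPSILON);
    return 0;
}
```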

From what I remember, int tends to be more problematic in terms of consistency. Some platforms default to 2 bytes, others to 4, but you could always use long to pin down the size.
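A short sketch of that size difference, with the fixed-width types from <stdint.h> shown as one further way to make the size explicit (my own addition, not part of the original answer):

```c
/* Sizes of the basic integer types vary across platforms; the
 * fixed-width types from <stdint.h> remove that ambiguity. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    printf("sizeof(int)     = %zu\n", sizeof(int));     /* 2 or 4 depending on platform */
    printf("sizeof(long)    = %zu\n", sizeof(long));    /* 4 or 8 depending on platform */
    printf("sizeof(int32_t) = %zu\n", sizeof(int32_t)); /* always 4 */
    return 0;
}
```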

License: CC-BY-SA with attribution
Not affiliated with StackOverflow