Question

I have a Fortran code that I need to run on a server. I noticed that the results differ slightly between the two machines. Looking into it, I found that the difference arises from a function that returns a real value to a double precision variable. On the local machine I use an old compiler (GNU f95 4.1.2) and on the remote machine I use ifort.

Solution

You should expect small differences between the same program compiled by different compilers. Finite-precision arithmetic doesn't obey the rules we expect of real numbers: in particular, floating-point addition is not associative, so if the compilers evaluate an expression in a different order, the results may differ slightly.
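As a minimal sketch of that effect (the values here are chosen only to make it visible, and are not tied to your program), regrouping a single-precision sum changes the result:

    program assoc
      implicit none
      real :: a, b, c
      a = 1.0e8
      b = -1.0e8
      c = 1.0e-4
      ! Equal for real numbers, but not in finite precision:
      ! in a + (b + c), c is absorbed into b and lost.
      print *, (a + b) + c   ! approximately 1.0E-04
      print *, a + (b + c)   ! exactly 0.0
    end program assoc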

That said, gfortran 4.1 is very old, to the point of being obsolete. I wouldn't use a version of gfortran earlier than 4.3, and I strongly recommend upgrading.

OTHER TIPS

Note that your real value is probably a 32-bit floating-point number while the double precision value is 64-bit. I suspect that the difference in results is due to the different ways in which the two compilers fill in the extra bits of the double precision variable when it is assigned a real value. However, the default size of a Fortran real is compiler-dependent and can be changed by compiler options, so check your documentation and compilation flags.
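As a sketch of the situation described in the question (the function name f is hypothetical), assigning a single-precision result to a double precision variable does not recover any precision:

    program widen
      implicit none
      double precision :: d
      ! f returns a default (typically 32-bit) real; converting it to
      ! 64 bits adds no information, so d carries only single-precision
      ! accuracy in its low-order bits.
      d = f()
      print *, d   ! ~0.3333333432674408, not 1/3 to double precision
    contains
      real function f()
        f = 1.0 / 3.0   ! computed and rounded in single precision
      end function f
    end program widen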

double precision is considered old-fashioned in modern Fortran (kind parameters are now preferred), but it is still required to provide more precision than a default real, though successive Fortran standards are silent on how much more precision is to be provided. It is reasonable to expect most compilers on most computers to use 64 bits for double precision, but you might not want to bet your mortgage on a reasonable expectation.
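If portability of precision matters to you, here is a sketch of the kind-parameter approach (Fortran 90 and later, so it should work with both of your compilers):

    program kinds
      implicit none
      ! Request a kind with at least 15 decimal digits and a decimal
      ! exponent range of at least 307 -- what double precision usually
      ! means in practice -- without relying on compiler defaults.
      integer, parameter :: dp = selected_real_kind(p=15, r=307)
      real(dp) :: x
      x = 1.0_dp / 3.0_dp
      print *, precision(x), range(x)   ! report the kind actually chosen
    end program kinds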

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow