Question

I'm using Codeblocks + GNU Fortran.

The problem is that I have calculations like:

SQRT(1-COS*COS)

And when I perform these calculations many times (a few million), sometimes the value under the square root is negative, so I get NaNs in the result.

My investigation has shown that when the square root is taken of a negative number, COS equals -1. So Fortran apparently evaluates -1*-1 incorrectly: there should be exactly 0 under the square root, but there isn't.

Is there a way to solve this problem? This concerns not only the Pythagorean trigonometric identity but anything under a square root of the form

SQRT(1-x*x)

with x in the range [-1, 1].

Basically, COST is defined like this in my program (I apologize for the somewhat lengthy introduction before COST itself, but that's how it goes):

XDET = 0.
YDET = 0.
ZDET = 50.
RADIUS = 1.

x = RADIUS*sqrt(omega)  !omega=random number in uniform distribution [0,1]
y = 0.
z = 1.E-20

DW=SQRT((XDET-X)**2+(YDET-Y)**2+(ZDET-Z)**2)
DWW = 1./DW
AN2=(ZDET-Z)*DWW

COST = AN2
if(COST > 1. ) COST = 1.
if(COST < -1.) COST = -1.
SINT = SQRT(1.-COST*COST)

By the way, AN2 sometimes came out as exactly zero, which led to NaNs as well, before I trapped it.

P.S. I also have a bug where EXP(X), with X greater than 90, shows up as INFINITY.


Solution

The explanation for what your PS identifies as a bug is simple:

exp(90.0) > 3.4028235 x 10^38

and 3.4028235 x 10^38 is the largest positive number that a single-precision floating-point number can represent with any accuracy.

This analysis does, of course, assume that your variable x is an IEEE 32-bit floating-point number.
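A minimal sketch (assuming gfortran with default real kinds and no floating-point trapping enabled) that makes the overflow visible, and shows that switching to double precision avoids it:

```fortran
program exp_overflow
  implicit none
  real             :: xs
  double precision :: xd

  xs = 90.0
  xd = 90.0d0

  ! Largest finite single-precision value: about 3.4028235e38.
  print *, 'huge(single) = ', huge(xs)

  ! exp(90) is roughly 1.22e39, which exceeds huge(xs),
  ! so the single-precision result overflows to Infinity.
  print *, 'exp(90), single precision: ', exp(xs)

  ! Double precision tops out near 1.8e308, so exp(90) fits easily.
  print *, 'exp(90), double precision: ', exp(xd)
end program exp_overflow
```

Declaring the argument as double precision (or a `real(kind=...)` with sufficient range) is the usual fix when the exponent can legitimately exceed about 88.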

Note, too, that the expression ZDET-Z will never, in single precision, differ from ZDET. Mathematically, 50.0 - 1.0e-20 = 49.99999999999999999999, but representing that exactly exceeds the available precision, and the result rounds back to 50.0.
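To see this absorption directly, here is a small sketch (variable names follow the question):

```fortran
program absorption
  implicit none
  real :: zdet, z

  zdet = 50.0
  z    = 1.0e-20

  ! The spacing between adjacent single-precision values near 50.0
  ! is about 4e-6; any term smaller than that is simply absorbed.
  print *, 'epsilon(zdet)     = ', epsilon(zdet)
  print *, 'zdet - z == zdet ?', (zdet - z == zdet)   ! prints T
end program absorption
```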

While I still can't see how 1.-COST*COST could ever be negative, your use of floating-point arithmetic doesn't reassure me that there aren't subtle mistakes in the parts of the code you haven't shown us.

Other tips

My guess would be that this is due to the limited precision of floating-point numbers. Take a look here: What Every Computer Scientist Should Know About Floating-Point Arithmetic.

Simple solution:

xx = x*x
if ( xx .gt. 1.e0 ) xx = 1.e0 ! 1.d0 for double precision

y = sqrt( 1.e0 - xx ) ! Again, 1.d0 for double precision

Or, as a one-liner:

y = sqrt( 1.e0 - min( x*x, 1.e0 ) )
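An equivalent guard (my own variant, not part of the answer above) clamps the difference itself, which also covers the case where rounding pushes 1.e0 - x*x slightly below zero:

```fortran
program clamp_demo
  implicit none
  real :: cost, sint

  cost = -1.0   ! worst case from the question

  ! max(..., 0.e0) pins any negative rounding residue at exactly zero,
  ! so sqrt never sees a negative argument and never returns NaN.
  sint = sqrt( max( 1.e0 - cost*cost, 0.e0 ) )
  print *, 'sint = ', sint
end program clamp_demo
```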

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow