If `x` and `y` are very close to `DBL_MIN`, the calculations are susceptible to underflow or extreme loss of precision: if `x` is very close to `DBL_MIN`, for example, `x * x` may be `0.0`, or (for somewhat larger values) it may result in what is called gradual underflow, with extreme loss of precision. E.g. with IEEE double (most, if not all, desktop and laptop PCs), `1E-300 * 1E-300` will be `0.0`. Obviously, if this happens for both `x * x` and `y * y`, you'll end up with `n == 0.0`, even if `x` and `y` are both positive.
In C++11, there is a function `hypot` which will solve the problem for `n`; if `x * x` is `0.0`, however, `d1` will still be `0.0`. You'll probably get better results with `(x / n) * x` (but I think there may still be limit cases where you'll end up with `0.0` or gradual underflow; I've not analyzed it sufficiently to be sure). A better solution would be to scale the data differently, to avoid such limit cases.