This isn't a NumPy issue. Your expectation that the result should be `-1` is incorrect.
You're effectively computing the definite integral of a function f(x) on a large interval around 5100. The function you're integrating simplifies to `factor * x * gaussian(x) / (1 - factor * x)`. We can easily give a back-of-the-envelope estimate of the integral's value for the factors you're using: the quantity `factor * x / (1 - factor * x)` varies fairly slowly across the range of interest, which is roughly 5095 to 5105; outside this range the gaussian renders the contribution negligible. So to a first approximation we can treat that quantity as a constant. Then we're left with that constant times the integral of `gaussian(x)`, which will be near enough 1. Thus the expected output should be somewhere in the region of `factor * 5100 / (1 - factor * 5100)`. (Though that approximation won't work so well when `factor` is close to `1 / 5100`, since the denominator then approaches zero.)
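To sanity-check the constant approximation, here's a minimal sketch. The exact gaussian from the question isn't shown here, so it's assumed to be a normalised normal pdf centred at 5100 with standard deviation 1 (consistent with the 5095 to 5105 range of interest), and `FACTOR` is set to an arbitrary small example value:

```python
import math

MU, SIGMA = 5100.0, 1.0   # assumed gaussian parameters (not from the question)
FACTOR = 0.0001           # example value

def gaussian(x):
    # normalised normal pdf, so its integral over the real line is 1
    return math.exp(-0.5 * ((x - MU) / SIGMA) ** 2) / (SIGMA * math.sqrt(2 * math.pi))

def f(x):
    # the integrand after simplification
    return FACTOR * x * gaussian(x) / (1 - FACTOR * x)

# crude trapezoidal integration over MU +/- 10 sigma, where essentially
# all of the gaussian's mass lies
a, b, n = MU - 10 * SIGMA, MU + 10 * SIGMA, 20_000
h = (b - a) / n
integral = h * ((f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n)))

# back-of-the-envelope estimate: treat factor*x/(1 - factor*x) as constant
estimate = FACTOR * MU / (1 - FACTOR * MU)
print(integral, estimate)  # the two agree to several decimal places
```

The agreement confirms that the slowly varying factor really can be pulled out of the integral to first order.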
So, for example, in the case that `factor` is `0.0001`, `factor * 5100 / (1 - factor * 5100)` has value around `1.0408163265306123`. This is close enough to the answer you're seeing to make it plausible that NumPy is doing more-or-less the right thing.
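As a quick check of that arithmetic (plain Python, no NumPy needed):

```python
factor = 0.0001
estimate = factor * 5100 / (1 - factor * 5100)
print(estimate)  # ~ 1.0408163265306123, i.e. 0.51 / 0.49
```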