Question

I'm using the following function to approximate the derivative of a function at a point:

class PrecisionError(Exception):
    # Raised when h is too small for the float format to resolve f(x+h) - f(x).
    pass

def prime_x(f, x, h):
    # Forward-difference approximation of the derivative of f at x.
    if f(x + h) != f(x) and h != 0.0:
        return (f(x + h) - f(x)) / h
    else:
        raise PrecisionError("Mantissa is 16 digits")

As a test I'm passing fx as f and 3.0 as x, where fx is:

import math

def fx(x):
    return math.exp(x) * math.sin(x)

Its derivative is exp(x)*(sin(x)+cos(x)). Now, according to Google and to my calculator,

exp(3)*(sin(3)+cos(3)) = -17.050059.

So far so good. But when I decided to test the function with small values for h I got the following:

print prime_x(fx, 3.0, 10**-5)
-17.0502585578
print prime_x(fx, 3.0, 10**-10)
-17.0500591423
print prime_x(fx, 3.0, 10**-12)
-17.0512493014
print prime_x(fx, 3.0, 10**-13)
-17.0352620898
print prime_x(fx, 3.0, 10**-16)
__main__.PrecisionError: Mantissa is 16 digits

Why does the error increase as h decreases (past a certain point)? I was expecting the opposite, right up until f(x+h) became equal to f(x).


Solution

Floating-point arithmetic (and integer arithmetic and fixed-point arithmetic) has a certain granularity: values can only change by a certain step size. For the IEEE-754 64-bit binary format, that step size is about 2^-52 times the value (about 2.22·10^-16). That is very small for physical measurements.
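
You can inspect that step size directly. A minimal sketch in Python: sys.float_info.epsilon is the standard-library name for the gap between 1.0 and the next representable double.

import sys

eps = sys.float_info.epsilon    # step size at 1.0, about 2.22e-16
print(eps)                      # 2.220446049250313e-16
print(1.0 + eps == 1.0)         # False: eps is the smallest visible step
print(1.0 + eps / 2 == 1.0)     # True: half a step rounds away entirely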

However, when you make h very small, the difference between f(x) and f(x+h) is not very large compared to the step size. The difference can only be an integral multiple of the step size.

When the derivative is d, the change in f over an interval h is about h·d. Even if you calculate f(x) and f(x+h) as accurately as the floating-point format allows, the computed value of their difference must be a multiple of the step size s, so it must be round(h·d/s)·s, where round(y) is y rounded to the nearest integer. Clearly, as you make h smaller, h·d/s gets smaller, so the effect of rounding it to an integer becomes relatively larger.
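
To see this numerically, here is a small sketch reusing the question's fx; the helper fprime_true is just the analytic derivative exp(x)*(sin(x)+cos(x)), added here only as a reference value.

import math

def fx(x):
    return math.exp(x) * math.sin(x)

def fprime_true(x):
    # Analytic derivative, used only for comparison.
    return math.exp(x) * (math.sin(x) + math.cos(x))

x = 3.0
for h in (1e-5, 1e-8, 1e-10, 1e-12, 1e-13):
    approx = (fx(x + h) - fx(x)) / h
    print("h=%g  approx=%.10f  error=%.1e" % (h, approx, abs(approx - fprime_true(x))))

Its output reproduces the pattern in the question: the error shrinks down to roughly h = 1e-8 and grows again below that.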

Another way of looking at this is that, for a given f(x), there is a certain amount of error in calculating values around f(x). As you make h smaller, f(x+h) - f(x) gets smaller, but the error stays the same. Therefore, the error increases relative to h.
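
To put rough numbers on that fixed error: the computed difference is off by about one step size s near f(3), so after dividing by h the derivative is off by about s/h. This is only an order-of-magnitude sketch; it ignores the rounding of x + h and of the two function evaluations, and for larger h the truncation error of the forward difference dominates instead.

import math
import sys

x = 3.0
f_at_x = math.exp(x) * math.sin(x)          # f(3), about 2.83
s = sys.float_info.epsilon * abs(f_at_x)    # step size near f(3), about 6.3e-16
for h in (1e-10, 1e-12, 1e-13):
    # s/h estimates the rounding contribution to the error in the derivative.
    print("h=%g  rounding error ~ %.1e" % (h, s / h))

These estimates grow as h shrinks at the same 1/h rate as the errors observed in the question.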

Other tips

When you subtract two numbers that are almost equal, the result has far fewer significant digits than either input. This loss of significance (often called catastrophic cancellation) reduces the precision of the overall result.

Suppose you have the following two numbers, good to 15 decimal places:

  1.000000000000001
- 1.000000000000000
= 0.000000000000001

See what happened? The result only has one good digit.
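
The same effect is easy to reproduce in Python; the literals below are the two numbers above.

a = 1.000000000000001
b = 1.000000000000000
print(a - b)    # prints 1.1102230246251565e-15, not 1e-15: only the
                # leading digit is meaningful, the rest is rounding noise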
