Question

n = number of iterations

For some reason this code needs a lot more iterations than other approaches to reach a comparably accurate result. Can anyone explain why this is happening? Thanks.

    n,s,x=1000,1,0
    for i in range(0,n,2):
        x+=s*(1/(1+i))*4
        s=-s
    print(x)

Solution

As I mentioned in a comment, the only way to speed this up is to transform the sequence. Here's a very simple way, related to the Euler transformation (see roippi's link): for the sum of an alternating sequence, create a new sequence consisting of the average of each pair of successive partial sums. For example, given the alternating sequence

a0 -a1 +a2 -a3 +a4 ...

where all the a's are positive, the sequence of partial sums is:

s0=a0  s1=a0-a1  s2=a0-a1+a2  s3=a0-a1+a2-a3  s4=a0-a1+a2-a3+a4 ...

and then the new derived sequence is:

(s0+s1)/2  (s1+s2)/2  (s2+s3)/2  (s3+s4)/2 ...

That can often converge faster - and the same idea can be applied to the derived sequence. That is, create yet another new sequence by averaging its terms. This can be carried on indefinitely. Here I'll take it one more level:

    from math import pi

    def leibniz():
        # Yield successive partial sums of the Leibniz series 4/1 - 4/3 + 4/5 - ...
        from itertools import count
        s, x = 1.0, 0.0
        for i in count(1, 2):
            x += 4.0*s/i
            s = -s
            yield x

    def avg(seq):
        # Yield the average of each pair of successive terms of seq.
        a = next(seq)
        while True:
            b = next(seq)
            yield (a + b) / 2.0
            a = b

    base = leibniz()
    d1 = avg(base)   # first derived sequence
    d2 = avg(d1)     # second derived sequence
    d3 = avg(d2)     # third derived sequence

    for i in range(20):
        x = next(d3)
        print("{:.6f} {:8.4%}".format(x, (x - pi)/pi))

Output:

    3.161905  0.6466%
    3.136508 -0.1619%
    3.143434  0.0586%
    3.140770 -0.0262%
    3.142014  0.0134%
    3.141355 -0.0076%
    3.141736  0.0046%
    3.141501 -0.0029%
    3.141654  0.0020%
    3.141550 -0.0014%
    3.141623  0.0010%
    3.141570 -0.0007%
    3.141610  0.0005%
    3.141580 -0.0004%
    3.141603  0.0003%
    3.141585 -0.0003%
    3.141599  0.0002%
    3.141587 -0.0002%
    3.141597  0.0001%
    3.141589 -0.0001%

So after just 20 terms, we've already got pi to about 6 significant digits. The base Leibniz sequence is still at about 2 digits correct:

    >>> next(base)
    3.099944032373808

That's an enormous improvement. A key point here is that the partial sums of the base Leibniz sequence give approximations that alternate between "too big" and "too small". That's why averaging them gets closer to the truth. The same (alternating between "too big" and "too small") is also true of the derived sequences, so averaging their terms also helps.
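
As a quick illustrative sketch of that alternation (a few lines of Python, not part of the solution above): the first few partial sums swing above and below pi, while their pairwise averages already land much closer.

    # First few partial sums of the Leibniz series...
    sums = []
    total, sign = 0.0, 1.0
    for i in range(1, 10, 2):
        total += 4.0 * sign / i
        sign = -sign
        sums.append(total)

    # ...alternate around pi: 4.000, 2.667, 3.467, 2.895, 3.340
    print(["{:.3f}".format(s) for s in sums])

    # Their pairwise averages are already much closer: 3.333, 3.067, 3.181, 3.117
    print(["{:.3f}".format((a + b) / 2) for a, b in zip(sums, sums[1:])])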

That's all hand-wavy, of course. Rigorous justification probably isn't something you're interested in ;-)

Other tips

That is because you are using the Leibniz series, which is known to converge very (very) slowly.
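
As a rough sketch of just how slowly: the truncation error after n terms shrinks only like about 1/n, so each additional correct digit costs roughly ten times as many iterations.

    from math import pi

    # Error of the n-term Leibniz partial sum is roughly 1/n,
    # so about 10**6 terms are needed for ~6 correct digits.
    def leibniz_partial(n):
        return sum(4.0 * (-1) ** k / (2 * k + 1) for k in range(n))

    for n in (10, 100, 1000, 10000):
        print(n, abs(leibniz_partial(n) - pi))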
