Question

I implemented the Madhava–Leibniz series to calculate pi in Python, and then in Cython to improve the speed. The Python version:

from __future__ import division  # make / do true division under Python 2
pi = 0
l = 1      # odd denominator of the current term
x = True   # sign flag: add the term when True, subtract when False
while True:
    if x:
        pi += 4/l
    else:
        pi -= 4/l
    x = not x
    l += 2
    print str(pi)

The Cython version:

cdef float pi = 0.0
cdef float l = 1.0            # odd denominator of the current term
cdef unsigned short x = True  # sign flag, as in the Python version
while True:
    if x:
        pi += 4.0/l
    else:
        pi -= 4.0/l
    x = not x
    l += 2
    print str(pi)

When I stopped the Python version, it had correctly calculated pi to 3.141592. The Cython version eventually ended up at 3.141597, followed by more digits that I don't remember (my terminal crashed), but they were incorrect. Why are the Cython version's calculations incorrect?

Solution

You are using float in the Cython version -- that's single precision! Use double instead, which (funnily enough) corresponds to Python's float. The C type float has only about 7 significant decimal digits, whereas double, and hence Python's float, has about 15-16. That matches what you observed: your Cython result went wrong around the seventh significant digit.
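Concretely, the fix is just the two declarations; the rest of the loop stays the same:

cdef double pi = 0.0  # was: cdef float pi = 0.0
cdef double l = 1.0   # was: cdef float l = 1.0

You can also see the single-precision limit from plain Python by round-tripping pi through a C float with the struct module (a quick demonstration, not part of the fix):

import struct

# the nearest single-precision value to pi; only the first ~7 digits survive
f32 = struct.unpack('f', struct.pack('f', 3.141592653589793))[0]
print repr(f32)  # 3.1415927410125732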

OTHER TIPS

If you want to increase speed, note that you can simplify the logic by unrolling your loop once, like so:

cdef double pi = 0.0
cdef double L = 1.0  # denominator of the positive term in each pair

while True:
    # handle one positive and one negative term per iteration,
    # so no sign flag is needed
    pi += 4.0/L - 4.0/(L+2.0)
    L += 4.0
    print str(pi)
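Since 4.0/L - 4.0/(L+2.0) equals 8.0/(L*(L+2.0)), you could even fold each pair of terms into a single division (an equivalent variant, not from the original answer):

cdef double pi = 0.0
cdef double L = 1.0

while True:
    # 4/L - 4/(L+2) = (4*(L+2) - 4*L) / (L*(L+2)) = 8/(L*(L+2))
    pi += 8.0/(L*(L+2.0))
    L += 4.0
    print str(pi)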

Also note that you don't have to call print inside the loop; it probably takes ten times longer than the rest of the calculation.
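For example, you can run a fixed number of iterations and report once at the end (a sketch; the iteration count is an arbitrary choice):

cdef double pi = 0.0
cdef double L = 1.0
cdef long i

for i in range(100000000):  # 10^8 pairs of terms, chosen arbitrarily
    pi += 4.0/L - 4.0/(L+2.0)
    L += 4.0

print str(pi)  # print once, outside the loop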

How do you know when it's finished? Have you considered that the value for pi oscillates about the true value, so whenever you stop the code you could be holding a value that is too high (or too low)? For an alternating series with decreasing terms, the error is bounded by the magnitude of the first omitted term, which here is 4/(2n+1) after n terms, so convergence is also very slow.
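You can watch the oscillation in a few lines of plain Python (math.pi is used here only as a reference value to compare against):

from math import pi as reference

total = 0.0
sign = 1.0
denom = 1.0
for _ in range(8):
    total += sign * 4.0 / denom
    sign = -sign
    denom += 2.0
    # successive partial sums land alternately above and below the true value
    print str(total), 'high' if total > reference else 'low'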

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow