Question

This code estimates the value of pi and compares the estimate to the real value of pi to within a certain accuracy, defined as 'c'. It then decreases 'c' to a smaller number and does the calculation again.

The values of c are 0.01, 0.001, 0.0001, 0.00001.

What I am trying to do is run the whole process 10 times and find the average of 'd', which is the number of draws the code needs to reach the accuracy level I want.

import math
import random
pi = math.pi

n = 0
d = 0
ratios = []
xs = []
ys = []
c = 0.1
simulating = True

while c >= 0.0001:

    while simulating:
        x=random.random()
        y=random.random()
        xs.append(x)
        ys.append(y)
        if x**2 + y**2 <= 1.0:
            n += 1
        d += 1
        ratio = 4*n*1./d
        ratios.append(ratio)
        if abs(ratio-pi) / pi <= c:
            print "Draws Needed: ", d
            break

    c = c*.1
    print c       

Solution

Here are our corrections:

from __future__ import division
import math
import random

pi = math.pi          # use the public constant instead of the private random._pi
error = 0.1
inCircle, Total = 0, 0
while error >= 0.0001:
    print '%g ...' % error
    while True:
        # draw a point in the unit square and test it against a circle
        # of radius 0.5 centred at (0.5, 0.5)
        x, y = random.random(), random.random()
        if (0.5 - x)**2 + (0.5 - y)**2 <= 0.25: inCircle += 1
        Total += 1
        estimate = 4 * inCircle / Total
        # stop once the relative error of the estimate is small enough
        if abs(estimate / pi - 1) <= error:
            print '{est.} %g vs. {pi} %g after %d trials, {err} %g\n' % \
                  (estimate, pi, Total, error)
            break
    error *= 0.1

Results:

0.1 ...
{est.} 3.33333 vs. {pi} 3.14159 after 6 trials, {err} 0.1

0.01 ...
{est.} 3.11765 vs. {pi} 3.14159 after 68 trials, {err} 0.01

0.001 ...
{est.} 3.14286 vs. {pi} 3.14159 after 70 trials, {err} 0.001

0.0001 ...
{est.} 3.1417 vs. {pi} 3.14159 after 247 trials, {err} 0.0001
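
The corrections above make a single pass per accuracy level, while the question also asks for repeating the whole process 10 times and averaging the number of draws 'd' needed at each level. A minimal sketch of that extension (the trials_to_accuracy helper and the runs count of 10 are our own additions, not part of the original answer):

from __future__ import division
import math
import random

def trials_to_accuracy(error):
    # Count how many random points are needed before the running
    # estimate of pi falls within the given relative error.
    inCircle, total = 0, 0
    while True:
        x, y = random.random(), random.random()
        if (0.5 - x)**2 + (0.5 - y)**2 <= 0.25:
            inCircle += 1
        total += 1
        estimate = 4 * inCircle / total
        if abs(estimate / math.pi - 1) <= error:
            return total

runs = 10
error = 0.1
while error >= 0.0001:
    # average the number of draws over several independent runs
    draws = [trials_to_accuracy(error) for _ in range(runs)]
    print('error %g: average draws over %d runs = %g' % (error, runs, sum(draws) / runs))
    error *= 0.1

Because each call to the helper starts from fresh counters, the draw counts are independent between runs, which also avoids the accumulation problem in the original code.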