Question

I have derived and implemented an equation for an expected value. To check that my code is free of errors, I have run a Monte Carlo computation a number of times and shown that it converges to the same value as the equation I derived.

Now that I have the data, how can I visualize it? Is this even the correct test to do? Can I give a measure of how sure I am that the results are correct?


Solution 2

Why not just do a simple t-test? From your theoretical equation you have the true mean, mu_0, and your simulator has mean mu_1. Note that we can't calculate mu_1 exactly; we can only estimate it using the sample average. So our hypotheses are:

H_0: mu_0 = mu_1  and H_1: mu_0 does not equal mu_1

The test statistic is the usual one-sample test statistic, i.e.

T = (mu_0 - x)/(s/sqrt(n))

where

  • mu_0 is the value from your equation
  • x is the average from your simulator
  • s is the sample standard deviation of the simulator output
  • n is the number of values used to calculate the mean.

In your case n is going to be large, so this is equivalent to a Normal (z) test. We reject H_0 when T falls outside the interval (-3, 3), i.e. when |T| > 3, which corresponds to a p-value below 0.01.
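As a concrete sketch of this test (the numbers here are purely illustrative: the derived value is taken to be E[X] = 0.5 and the "simulator" is stood in for by uniform draws on (0, 1)):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the derived equation gives mu_0 = 0.5, and the
# simulator output is stood in for by uniform draws on (0, 1).
mu_0 = 0.5
samples = rng.uniform(0.0, 1.0, 10_000)

n = samples.size
x_bar = samples.mean()
s = samples.std(ddof=1)                       # sample standard deviation

T = (x_bar - mu_0) / (s / math.sqrt(n))
p_value = math.erfc(abs(T) / math.sqrt(2))    # two-sided p under the Normal approximation

print(f"T = {T:.3f}, p = {p_value:.3f}")
```

With a correct simulator, |T| should land inside (-3, 3) the vast majority of the time; a value far outside that interval points to a bug or a wrong derivation.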

A couple of comments:

  1. You can't "prove" that the means are equal; you can only fail to reject H_0.
  2. You mentioned that you want to test a number of values. One possible solution is to apply a Bonferroni-type correction: you tighten your significance threshold to alpha/N, where N is the number of tests you are running.
  3. Make your sample size as large as possible. Since we don't have any idea of the variability in your Monte Carlo simulation, it's impossible to say "use n = ...".
  4. The rule "reject when |T| > 3" giving a p-value below 0.01 just comes from the tails of the Normal distribution.
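The Bonferroni correction from point 2 can be sketched as follows (the three derived means and the simulator stand-ins are made up for illustration):

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: the same check repeated for N = 3 parameter settings,
# each with its own derived mean; the values are illustrative only.
derived_means = [0.5, 1.0, 2.0]
N = len(derived_means)
alpha = 0.01
alpha_per_test = alpha / N                    # Bonferroni-corrected threshold

rejections = []
for mu_0 in derived_means:
    samples = mu_0 + rng.normal(0.0, 1.0, 5_000)   # stand-in simulator output
    n, x_bar, s = samples.size, samples.mean(), samples.std(ddof=1)
    T = (x_bar - mu_0) / (s / math.sqrt(n))
    p = math.erfc(abs(T) / math.sqrt(2))
    rejections.append(p < alpha_per_test)
    print(f"mu_0={mu_0}: T={T:+.2f}, p={p:.3f}")
```

Each individual test now has to clear the stricter threshold alpha/N, which keeps the overall false-rejection rate at roughly alpha across all N comparisons.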

OTHER TIPS

It's not clear what you mean by visualising the data, but here are some ideas.

If your Monte Carlo simulation is correct, then the Monte Carlo estimator for your quantity is just the mean of the samples. The variance of the estimator (how far the average will typically be from the 'correct' value) scales inversely with the number of samples you take, so as long as you take enough, you'll get arbitrarily close to the correct answer. Use a moderate number of samples (1,000 should suffice if the quantity is univariate) and look at the average. If it doesn't agree with your theoretical expectation, then you have an error somewhere, either in your derivation or in your simulation.
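A minimal version of that check, assuming (for illustration only) that the derived expectation is E[X] = 0.5 for X ~ U(0, 1):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy example: E[X] = 0.5 for X ~ U(0, 1); 1,000 samples, as suggested above.
expected = 0.5
samples = rng.uniform(0.0, 1.0, 1_000)

# Running mean after each sample; the last entry is the final estimate.
running_mean = np.cumsum(samples) / np.arange(1, samples.size + 1)
print(f"estimate after 1000 samples: {running_mean[-1]:.4f} (expected {expected})")
```

Plotting `running_mean` against the sample count is also a natural visualization: the curve should settle onto a horizontal line at the derived value.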

You can also use a histogram of your samples, again if they're one-dimensional. The distribution of samples in the histogram should match the theoretical distribution you're taking the expectation of.
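A numeric version of the histogram comparison, under the illustrative assumption that the expectation is taken over X ~ N(0, 1):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical: the samples feeding the expectation are N(0, 1) draws, so the
# histogram should track the standard Normal density.
samples = rng.normal(0.0, 1.0, 50_000)

counts, edges = np.histogram(samples, bins=20, range=(-3.0, 3.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
theoretical = np.exp(-centers ** 2 / 2) / np.sqrt(2 * np.pi)   # N(0, 1) density

worst = np.max(np.abs(counts - theoretical))
print(f"largest bin-wise gap between histogram and density: {worst:.3f}")
```

For a plot, the same `counts` can be drawn as bars with the theoretical density overlaid as a curve; a systematic mismatch in shape suggests the sampler is drawing from the wrong distribution.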

If you know the variance in the same way as you know the expectation, you can also compute the sample variance (the mean squared difference between each sample and the known expectation) and check that it matches as well.
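This variance check is one line in practice; here is a sketch, again using U(0, 1) as a stand-in, whose derived mean is 1/2 and derived variance is 1/12:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy example: for X ~ U(0, 1) the derived mean is 1/2 and the derived
# variance is 1/12; both values are illustrative.
mu, var = 0.5, 1.0 / 12.0
samples = rng.uniform(0.0, 1.0, 20_000)

# Mean squared difference from the *known* mean, matching the definition above.
sample_var = np.mean((samples - mu) ** 2)
print(f"sample variance: {sample_var:.4f} (derived: {var:.4f})")
```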

EDIT: to put something more 'formal' in the answer!

If M_n(X) is your Monte Carlo estimator for E[X] based on n samples, then as n -> inf, abs(M_n(X) - E[X]) -> 0 by the law of large numbers. The variance of M_n(X) is inversely proportional to n, although the constant depends on what M is an estimator for. You can construct a simple check from this: every 100 iterations, compute the mean of your samples and take the absolute difference between it and your theoretical E[X]. If this difference shrinks roughly like 1/sqrt(n), you're probably error free. If not, you have an issue either in your theoretical result or in your Monte Carlo estimator.
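The every-100-iterations check can be sketched like this (using, as before, the illustrative assumption E[X] = 0.5 for X ~ U(0, 1)):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy run of the check described above: record |running mean - E[X]|
# every 100 iterations.
expected = 0.5
samples = rng.uniform(0.0, 1.0, 10_000)

running_mean = np.cumsum(samples) / np.arange(1, samples.size + 1)
checkpoints = np.arange(100, samples.size + 1, 100)
errors = np.abs(running_mean[checkpoints - 1] - expected)

print(f"error at n=100:   {errors[0]:.4f}")
print(f"error at n=10000: {errors[-1]:.4f}")
# The error is noisy step to step, but its trend shrinks roughly like 1/sqrt(n).
```

Plotting `errors` against `checkpoints` on log-log axes makes the 1/sqrt(n) trend easy to eyeball: it should appear as a noisy line of slope about -1/2.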

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow