Question

When can we say that a Monte Carlo algorithm solves a problem?

To quote from the Wikipedia article on Monte Carlo algorithms:

For instance, the Solovay–Strassen primality test is used to determine whether a given number is a prime number. It always answers true for prime number inputs; for composite inputs, it answers false with probability at least ½ and true with probability less than ½.

What would happen if the Solovay–Strassen test answered true for only 1% of composite inputs?

Would we then still say that it solves the problem of testing primality?

Or is there some requirement, such as that a Monte Carlo algorithm must answer true for more than half of the cases?


Solution

In general, Monte Carlo methods are used to solve a wide variety of problems. In this particular case you want to learn whether a random variable is the constant 1 or not. The idea is straightforward: sample the random variable multiple times (each sample independent of the previous ones to avoid bias) and check whether all results were 1. If at least one result was 0, we know for sure that the random variable is not the constant 1 (in the Solovay–Strassen context, the number is composite).
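
As a minimal sketch of that amplification loop (not an actual Solovay–Strassen implementation), assume a hypothetical zero-argument function `sample_once()` that models one run of the one-sided test:

```python
import random

def monte_carlo_accepts(sample_once, n_rounds):
    """Accept only if every independent sample came out 1 (True).

    `sample_once` is a hypothetical stand-in for one round of the one-sided
    test (e.g. one round of Solovay-Strassen). A single 0 is conclusive proof
    that the random variable is not the constant 1; seeing only 1s merely
    makes acceptance very likely to be correct.
    """
    for _ in range(n_rounds):
        if not sample_once():
            return False  # one 0 is enough: definitely not the constant 1
    return True

# Toy usage: a variable that is 1 with probability 0.5 ("composite" behaviour);
# after 50 rounds a false accept happens with probability 2**-50.
print(monte_carlo_accepts(lambda: random.random() < 0.5, 50))
```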

It is important to emphasize that, since a Monte Carlo algorithm is randomized, it is said to solve the problem if the probability of returning a wrong answer is below some threshold (a small number we will call $\epsilon$).

What happens if all outcomes were 1? There is a chance that the variable really is the constant 1, but there is also a chance that we were simply unlucky and saw only 1s even though a 0 was possible. If the probability of sampling a 1 is $p < 1$, then after $n$ independent samples the probability of getting all 1s is $P_n = p^n$. Note that as $n$ increases, $P_n \rightarrow 0$. We can specify a threshold, say $\epsilon = 10^{-10}$, such that if $P_n < \epsilon$ (i.e. the probability of a false positive is less than $\epsilon$) we are satisfied with the result.
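
Solving $P_n = p^n < \epsilon$ for $n$ makes the required number of samples explicit (the inequality direction flips because $\ln p < 0$):

$$p^n < \epsilon \iff n \ln p < \ln \epsilon \iff n > \frac{\ln \epsilon}{\ln p}.$$

For example, with $p = \tfrac{1}{2}$ and $\epsilon = 10^{-10}$, already $n = 34$ samples suffice, since $2^{-34} \approx 5.8 \cdot 10^{-11} < 10^{-10}$.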


Now the answer to your question: $\forall p < 1,\ \forall \epsilon > 0,\ \exists n : p^n < \epsilon$. What is this telling you exactly?

Whatever the probability $p$ of sampling a 1 is, as long as it is less than $1$ (for example $p = 0.99$, $p = 0.01$, or $p = 0.5$), for any threshold $\epsilon$ there exists an $n$ such that if we run the experiment $n$ times (sample the random variable $n$ times independently) we fail with probability at most $\epsilon$. So Monte Carlo can be applied for any non-degenerate value of $p$; only the number $n$ of samples must be adjusted to satisfy the $\epsilon$ threshold.
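
Here is a small sketch that computes the smallest such $n$; the values of $p$ and $\epsilon$ are chosen purely for illustration:

```python
import math

def rounds_needed(p, eps):
    """Smallest n with p**n < eps, i.e. the first integer above log(eps)/log(p).

    Here p is the per-round probability of seeing a 1 on a variable that is
    not the constant 1 (a false positive surviving one round), and eps is the
    overall error budget.
    """
    n = math.ceil(math.log(eps) / math.log(p))
    while p ** n >= eps:  # guard against exact-boundary / rounding cases
        n += 1
    return n

for p in (0.99, 0.5, 0.01):
    print(f"p = {p}: {rounds_needed(p, 1e-10)} rounds")
```

Even for $p = 0.99$ the required $n$ is only a couple of thousand rounds, compared with a few dozen for $p = 0.5$: a larger per-round error probability just costs more repetitions, it never makes the problem unsolvable.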

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange