Question

I am given the following randomized algorithm for SAT,

  • Input: A satisfiable CNF formula $\varphi$.
  • Output: An assignment $\rho$ such that $\rho \models \varphi$.

The algorithm works as follows:

  1. Choose an arbitrary assignment $\rho$.
  2. As long as $\rho \not\models \varphi$:
    1. Choose a clause $C$ in $\varphi$ not satisfied by $\rho$, uniformly at random.
    2. Choose a variable $x \in \operatorname{VAR}(C)$ uniformly at random.
    3. Flip the value of $\rho(x)$ (set $\rho(x) = \overline{\rho(x)}$).
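
A minimal Python sketch of this procedure may help make it concrete (my own illustration; the signed-integer clause encoding and the function name are choices made for this sketch, not part of the exercise):

```python
import random

def random_walk_sat(clauses, num_vars, rng=random.Random(0)):
    """The random-walk procedure above, for a satisfiable CNF formula.

    Each clause is a list of nonzero integers: the literal v means
    "variable v is true", and -v means "variable v is false".
    Variables are numbered 1..num_vars.
    """
    # 1. Choose an arbitrary assignment rho (here: uniformly at random).
    rho = {v: rng.choice([True, False]) for v in range(1, num_vars + 1)}

    def satisfied(clause):
        return any(rho[abs(lit)] == (lit > 0) for lit in clause)

    while True:  # 2. As long as rho does not satisfy the formula...
        unsatisfied = [c for c in clauses if not satisfied(c)]
        if not unsatisfied:
            return rho
        clause = rng.choice(unsatisfied)   # 2.1 random unsatisfied clause
        x = abs(rng.choice(clause))        # 2.2 random variable of that clause
        rho[x] = not rho[x]                # 2.3 flip it

# Example: (x1 or x2) and (not x1 or x2) and (x1 or not x2), satisfied by x1 = x2 = True.
print(random_walk_sat([[1, 2], [-1, 2], [1, -2]], num_vars=2))
```

On a satisfiable 2-CNF formula this loop terminates with probability 1; the exercise is to bound the expected number of iterations.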

We have to prove that, for a 2-CNF formula, the algorithm has polynomial expected running time.


I have proven that for a fixed assignment $\alpha$ such that $\alpha \models \varphi$, with probability $p \geq 1/2$ each iteration decreases the number of variables on which $\alpha$ and $\rho$ differ by one. With probability $1-p$, the assignments $\rho$ and $\alpha$ come to differ on one extra variable.

Now I have to prove that the algorithm finishes in an expected polynomial number of steps. I was able to add one more layer of abstraction. Let $X_i$ be the random variable counting the number of steps needed to make $\rho = \alpha$ when $\rho$ and $\alpha$ differ in exactly $i$ variables. Then $$E[X_i] = 1 + p\,E[X_{i-1}] + (1-p)\,E[X_{i+1}],$$ with $E[X_i] \leq E[X_{i+1}]$ for all $i$, and $E[X_0] = 0$. We need to find a polynomial bound for $E[X_i]$.

Since $p \geq 1/2$ and $E[X_{i-1}] \leq E[X_{i+1}]$, the following must hold: $$E[X_i] \leq 1 + \frac{E[X_{i-1}] + E[X_{i+1}]}{2}.$$

This can now be seen as a walk on the integer line: in each step we move either one step to the left or one step to the right, and the probability of moving left is at least one half. We have to prove that we reach $0$ on the line within an expected polynomial number of steps (polynomial in the starting position).
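
For intuition, here is a small simulation of this walk (an empirical sanity check only, with $p = 1/2$ and a "bounce" at $n$, since the two assignments can differ in at most $n$ variables); the observed mean hitting times grow roughly like $n^2$:

```python
import random

def hitting_time(n, start, rng):
    """Steps until an unbiased walk on {0,...,n}, reflecting at n, reaches 0."""
    pos, steps = start, 0
    while pos > 0:
        pos += -1 if (pos == n or rng.random() < 0.5) else 1
        steps += 1
    return steps

rng = random.Random(0)
for n in (10, 20, 40):
    trials = [hitting_time(n, n, rng) for _ in range(2000)]
    print(n, sum(trials) / len(trials))  # roughly n^2
```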

Any help on this problem is very appreciated :)

Solution

The behavior when $p = 1/2$ and when $p > 1/2$ is rather different. When $p > 1/2$, each step moves you $2p-1$ to the left in expectation, so you hit the origin after a linear expected number of steps. When $p = 1/2$, the situation is more complicated.
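
To make the $p > 1/2$ case quantitative (a standard drift argument, spelled out here for completeness): let $T$ be the time for the unrestricted walk to take one net step to the left. Either the first step goes left, or it goes right and the walk must then descend two levels, so $$ E[T] = 1 + (1-p) \cdot 2E[T], \qquad\text{i.e.}\qquad E[T] = \frac{1}{2p-1}. $$ Starting from position $i$, the origin is thus reached after $i/(2p-1)$ expected steps; the bounce at $n$ can only speed this up, so $E[X_i] \leq i/(2p-1)$.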

Consider a random walk on the line started at the origin. The number of walks of length $2n$ which never move right of the origin is $\frac{1}{n+1}\binom{2n}{n}$ (the Catalan numbers), so the probability that you move right of the origin for the first time after exactly $2n+1$ steps is $$ q_{2n+1} = \frac{1}{2^{2n+1}} \cdot \frac{1}{n+1}\binom{2n}{n} = \Theta\left(\frac{1}{n^{3/2}}\right). $$ Therefore the expected number of steps until you first move right of the origin is $$ \sum_{n=0}^\infty (2n+1) q_{2n+1} = \sum_{n=0}^\infty \Theta\left(\frac{1}{\sqrt{n+1}}\right) = \infty. $$ On the other hand, the expected number of visits to the origin is $$ \sum_{n=0}^\infty \frac{1}{2^{2n}}\binom{2n}{n} = \sum_{n=0}^\infty \Theta\left(\frac{1}{\sqrt{n+1}}\right) = \infty, $$ so you will move right of the origin infinitely many times.
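
As a quick numerical check of the $\Theta(1/n^{3/2})$ estimate (an illustration added here, not part of the original argument), $n^{3/2} q_{2n+1}$ approaches the constant $1/(2\sqrt{\pi}) \approx 0.282$:

```python
from math import comb, pi, sqrt

def q(n):
    """q_{2n+1}: probability of first moving right of the origin at step 2n+1."""
    return comb(2 * n, n) // (n + 1) / 2 ** (2 * n + 1)

for n in (10, 100, 1000):
    print(n, n ** 1.5 * q(n))  # tends to 1/(2*sqrt(pi))
print(1 / (2 * sqrt(pi)))      # ~ 0.2821
```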

The conclusion is that a random walk started at $i > 0$ will hit the origin almost surely, but the expected time to reach the origin is infinite.

Your case is, however, different, since you "bounce" at $i = n$: when at position $n$, you always move to $n-1$. You can think of the situation in the following way. Although the actual set of positions is $0,\ldots,n$, we imagine reflecting this set about the point $n$, adding $n$ new positions $n+1,\ldots,2n$. Here position $n+k$ really plays the role of position $n-k$. We can run the random walk as usual, declaring victory upon reaching either position $0$ or position $2n$.

After $t$ steps, the number of left-moves has distribution $\mathrm{Bin}(t,1/2)$, so the displacement from the initial position is $2\,\mathrm{Bin}(t,1/2) - t$, which is close to a normal distribution $N(0,t)$. Therefore after $t$ steps, we expect to be at a distance of about $\Theta(\sqrt{t})$ from where we started (this can be shown formally by considering the squared displacement and using Chebyshev's inequality, which requires an estimate of the fourth moment of a binomial random variable). In your case, we start at some position $i \in \{1,\ldots,n\}$. After $\Theta(n^2)$ steps, we expect to be about $\Theta(n)$ away from where we started, and so to have hit either $0$ or $2n$. Therefore the process should converge in $O(n^2)$ steps.
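
The $\Theta(\sqrt{t})$ claim is easy to check empirically (again, just a sanity check): the root-mean-square displacement after $t$ unbiased steps is close to $\sqrt{t}$:

```python
import random

rng = random.Random(1)
for t in (100, 400, 1600):
    # displacement after t steps: (#right - #left) = 2 * Bin(t, 1/2) - t
    sq = [(2 * sum(rng.random() < 0.5 for _ in range(t)) - t) ** 2
          for _ in range(2000)]
    print(t, (sum(sq) / len(sq)) ** 0.5)  # close to sqrt(t)
```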


We can also solve the recurrence directly when $p = 1/2$. Recall that the recurrence is $$ E[X_i] = 1 + \frac{E[X_{i-1}] + E[X_{i+1}]}{2}, $$ with boundary conditions $E[X_0] = E[X_{2n}] = 0$.

Let $N_i = E[X_i] + i^2$. Then $$ N_i = 1 + \frac{N_{i-1} + N_{i+1} - (i-1)^2 - (i+1)^2}{2} + i^2 = \frac{N_{i-1}+N_{i+1}}{2}, $$ with boundary conditions $N_0 = 0$ and $N_{2n} = 4n^2$.

Let $M_i = N_i - 2ni$. Then $M_i$ satisfies the same recurrence as $N_i$, with boundary conditions $M_0 = M_{2n} = 0$. It is easy to check that the minimum and maximum of the sequence $M_0,\ldots,M_{2n}$ must be attained at the endpoints (since $\max(M_{i-1},M_{i+1}) \geq M_i$ and $\min(M_{i-1},M_{i+1}) \leq M_i$), and so $M_i \equiv 0$. Therefore $N_i = 2ni$, and so $$ E[X_i] = 2ni - i^2 = i(2n-i) = n^2 - (n-i)^2 \leq n^2, $$ a polynomial (indeed quadratic) bound.
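
One can confirm this closed form by relaxing the boundary-value recurrence numerically (an illustrative check; plain fixed-point iteration is used here instead of a tridiagonal solver):

```python
n = 10
m = 2 * n
E = [0.0] * (m + 1)
# Iterate E[i] = 1 + (E[i-1] + E[i+1]) / 2 with E[0] = E[2n] = 0 until it settles.
for _ in range(20000):
    E = [0.0] + [1 + (E[i - 1] + E[i + 1]) / 2 for i in range(1, m)] + [0.0]
for i in range(m + 1):
    print(i, round(E[i], 2), i * (m - i))  # matches i * (2n - i)
```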

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange