Asymptotics of a divide-and-conquer-type recurrence with non-constant weight split between subproblems and lower-order fudge terms

cs.stackexchange https://cs.stackexchange.com/questions/126572

  •  29-09-2020

Question

While trying to analyse the runtime of an algorithm, I arrive at a recurrence of the following type:

$$\begin{cases} T(n) = \Theta(1), & \text{for small enough $n$;}\\ T(n) \leq T(a_n n + h(n)) + T((1-a_n)n+h(n)) + f(n), & \text{for larger $n$,} \end{cases}$$ where:

  • $a_n$ is unknown and can depend on $n$ but is bounded by two constants $0<c_1\leq a_n \leq c_2 < 1$.
  • $h(n)$ is some "fudge term" that is negligible compared to $n$ (say $h(n) = O(n^\epsilon)$ for some $0\leq \epsilon < 1$).

If $a_n$ were a constant, then I could use the Akra–Bazzi method to get a result. On the other hand, if the fudge term were absent, then some type of recursion-tree analysis would be straightforward.

To make things a bit more concrete, here is the recurrence I want to get the asymptotic growth of:

$$\begin{cases} T(n) = 1, & \text{for $n = 1$;}\\ T(n) \leq T(a_n n + \sqrt n) + T((1-a_n)n+\sqrt n) + n, & \text{for $n \geq 2$,} \end{cases}$$ where $\frac{1}{4} \leq a_n \leq \frac{3}{4}$ for all $n\geq 1$.
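The conjectured growth rate is easy to sanity-check numerically. The Python sketch below (my own, not part of the question) evaluates the recurrence with the unknown $a_n$ pinned to the extreme allowed value $1/4$, and prints $T(n)/(n\log_2 n)$:

```python
import math

def T(n, a=0.25, base=64):
    # Evaluate T(n) <= T(a*n + sqrt(n)) + T((1-a)*n + sqrt(n)) + n
    # with the unknown a_n replaced by a fixed split a.  The base case
    # must exceed 16 so that (3/4)*n + sqrt(n) < n and recursion terminates.
    if n <= base:
        return n  # stands in for the Theta(1) base case
    return (T(a * n + math.sqrt(n), a, base)
            + T((1 - a) * n + math.sqrt(n), a, base) + n)

for n in [10**3, 10**4, 10**5, 10**6]:
    print(n, round(T(n) / (n * math.log2(n)), 3))
```

The printed ratios stay bounded as $n$ grows, consistent with $\Theta(n\log n)$; other fixed splits in $[1/4, 3/4]$ behave the same way.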

I tried various guesses at upper bounds and various transformations. Everything tells me this should work out to $O(n\log n)$, and I can argue informally that it does (though I might be wrong). But I can only prove $O(n^{1+\epsilon})$ (for any $\epsilon>0$), and this feels like something that a generalization of the Master theorem à la Akra–Bazzi should be able to handle.

Any suggestions on how to tackle this type of recurrence?


Solution

According to the OP, to complete the proof we need to show that, for large enough $n$, $$ (a_n n + \sqrt{n})^{a_n + 1/\sqrt{n}}((1-a_n)n + \sqrt{n})^{1-a_n + 1/\sqrt{n}} \leq n. $$ (This is the inequality driving the inductive step for $T(n) = O(n\log n)$: taking logarithms and multiplying by $n$, it says exactly that $(a_n n + \sqrt{n})\log(a_n n + \sqrt{n}) + ((1-a_n)n + \sqrt{n})\log((1-a_n)n + \sqrt{n}) \leq n\log n$.)

Taking out a factor of $n^{1+2/\sqrt{n}}$, we get $$ (a_n + 1/\sqrt{n})^{a_n + 1/\sqrt{n}} (1-a_n + 1/\sqrt{n})^{1-a_n + 1/\sqrt{n}} \leq n^{-2/\sqrt{n}}. $$ Dividing by $(1+2/\sqrt{n})^{1+2/\sqrt{n}}$ (note that the two exponents on the left sum to $1+2/\sqrt{n}$), we get $$ \left( \frac{a_n + 1/\sqrt{n}}{1 + 2/\sqrt{n}} \right)^{a_n + 1/\sqrt{n}} \left( \frac{1-a_n + 1/\sqrt{n}}{1 + 2/\sqrt{n}} \right)^{1-a_n + 1/\sqrt{n}} \leq \frac{1}{n^{2/\sqrt{n}} (1+2/\sqrt{n})^{1+2/\sqrt{n}}}. $$ Raising to the power $1/(1+2/\sqrt{n})$ and defining $p = (a_n + 1/\sqrt{n})/(1 + 2/\sqrt{n})$ (so that $1-p = (1-a_n + 1/\sqrt{n})/(1 + 2/\sqrt{n})$), we get $$ p^p (1-p)^{1-p} \leq \frac{1}{n^{(2/\sqrt{n})/(1+2/\sqrt{n})}(1+2/\sqrt{n})}. $$
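Every step in this chain is a monotone transformation (dividing through by a positive quantity, raising to a positive power), so the final inequality holds if and only if the original one does. This equivalence can be sanity-checked numerically; in the Python sketch below the function names (`lhs_orig`, `lhs_final`, `rhs_final`) are mine, not from the answer:

```python
import math

def lhs_orig(n, a):
    # Left-hand side of the original inequality (to be compared with n)
    s = 1 / math.sqrt(n)
    return (a * n + math.sqrt(n)) ** (a + s) * \
           ((1 - a) * n + math.sqrt(n)) ** (1 - a + s)

def lhs_final(n, a):
    # p^p (1-p)^(1-p) with p = (a + 1/sqrt(n)) / (1 + 2/sqrt(n))
    s = 1 / math.sqrt(n)
    p = (a + s) / (1 + 2 * s)
    return p ** p * (1 - p) ** (1 - p)

def rhs_final(n):
    # Right-hand side of the final inequality
    s = 1 / math.sqrt(n)
    return 1 / (n ** (2 * s / (1 + 2 * s)) * (1 + 2 * s))

# The rewritten inequality should hold exactly when the original does.
for n in [10**2, 10**4, 10**6]:
    for a in [0.25, 0.4, 0.5, 0.75]:
        assert (lhs_orig(n, a) <= n) == (lhs_final(n, a) <= rhs_final(n))
```

(At $n = 100$ both versions of the inequality actually fail for these sample splits, which is consistent: the claim is only for large enough $n$.)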

The left-hand side is $e^{-H(p)}$, where $H(p) = -p\ln p - (1-p)\ln(1-p)$ is the binary entropy function (in nats). Since $H$ is smallest at the endpoints of the allowed range of $p$, the left-hand side is maximized when $a_n = 1/4$ (or, symmetrically, $a_n = 3/4$), at which point $p \approx 1/4$. Since $(1/4)^{1/4} (3/4)^{3/4} < 1$, for large enough $n$ we can bound the left-hand side by some constant $\theta < 1$. As for the right-hand side, it tends to $1$ as $n\to\infty$. Indeed, taking the negative logarithm, we get $$ \frac{2\log n}{\sqrt{n} + 2} + O\left(\frac1{\sqrt{n}}\right) \longrightarrow 0. $$ In particular, for large enough $n$ the right-hand side will be larger than $\theta$, completing the proof.
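To put numbers on the two bounds (a quick sketch of my own, not part of the answer): the worst-case left-hand side is $\theta = (1/4)^{1/4}(3/4)^{3/4} \approx 0.570$, while the right-hand side climbs toward $1$:

```python
import math

def lhs_max(p):
    # p^p (1-p)^(1-p) = exp(-H(p)), with H the binary entropy in nats
    return p ** p * (1 - p) ** (1 - p)

theta = lhs_max(0.25)  # worst case over the allowed range, about 0.570

def rhs(n):
    # Right-hand side of the final inequality
    s = 1 / math.sqrt(n)
    return 1 / (n ** (2 * s / (1 + 2 * s)) * (1 + 2 * s))

for n in [10**2, 10**4, 10**6, 10**8]:
    print(n, round(rhs(n), 4))  # tends to 1 from below
```

Already around $n \approx 10^3$ the right-hand side exceeds $\theta$, so the "large enough $n$" in the argument is quite modest.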

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange