Question

I'm a little confused about the analysis of binary search. In almost every paper, the writer assumes that the array size $n$ is always $2^k$. I do understand that the worst-case time complexity becomes $\log(n)$ under this assumption. But what if $n \neq 2^k$?

For example, if $n = 24$, halving repeatedly gives
5 iterations for size 24,
4 iterations for size 12,
3 iterations for size 6,
2 iterations for size 3,
1 iteration for size 1.

So how do we get the result $k = \log n$ in this example (and, of course, in every similar example where $n \neq 2^k$)?
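Here is a quick check I wrote (my own sketch, not from any of the papers), assuming the middle index is $\lfloor (lo+hi)/2 \rfloor$ and that the worst case always continues into the larger remaining part:

```python
def worst_case_iterations(n: int) -> int:
    """Worst-case loop iterations of binary search on n elements
    (assumes mid = (lo + hi) // 2 and always recursing into the larger part)."""
    lo, hi, steps = 0, n - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if mid - lo >= hi - mid:   # left part is at least as large
            hi = mid - 1
        else:                      # right part is larger
            lo = mid + 1
    return steps

for n in [24, 12, 6, 3, 1]:
    print(n, worst_case_iterations(n))   # 24 5, 12 4, 6 3, 3 2, 1 1
```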


Solution

$n=2^k$ is a simplifying assumption that makes binary search "come out even". If the array has a length that is not a power of two, you have to break ties when choosing the middle element, the two "halves" are not equally large, and not all search paths lead to the same number of steps.
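As a small illustration (my own sketch, not part of the original answer; it assumes the common tie-breaking rule $mid = \lfloor (lo+hi)/2 \rfloor$), here is a plain binary search that reports how many iterations each key needs on an array of length $6$:

```python
def search_steps(arr, key):
    """Standard binary search; returns the number of loop iterations for this key."""
    lo, hi, steps = 0, len(arr) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2       # tie-breaking: round the midpoint down
        if arr[mid] == key:
            break
        elif arr[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

arr = list(range(6))               # n = 6 is not a power of two
print([search_steps(arr, k) for k in arr])   # -> [2, 3, 1, 3, 2, 3]
```

Different keys need different numbers of steps, which is exactly the tie-breaking/unequal-halves effect described above.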

The worst-case number of steps on an array of size $n$ can certainly be bounded from above by the worst-case number of steps on an array of size $2^k$ with $n \leq 2^k$. If we choose $k$ as small as possible, we obtain $k = \lceil \log n \rceil$.
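Applied to the $n = 24$ from the question (this instantiation is mine, not from the original answer): the smallest such $k$ satisfies $2^4 = 16 < 24 \leq 32 = 2^5$, so $k = \lceil \log_2 24 \rceil = \lceil 4.58\ldots \rceil = 5$, matching the five iterations counted above.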

With $\lceil \log n \rceil \leq \log(n) + 1$ we see that the worst-case number of steps is indeed asymptotically equal to $\log n$.

Licensed under: CC-BY-SA with attribution