This looks like a standard binary search algorithm to me, which is known to have O(log n) complexity (except it returns whether the value was found, not its position). You basically visit at most log n entries of the array:
- first you visit and check the element in the middle, and see whether it's the one you sought,
- if not, you limit your search to the lower or the upper half of the array,
- so you keep slicing the array in half and choosing only one of the parts, until only 1 element is left (worst-case scenario; the sought element may be found earlier),
- how many times can you do that? n/2/2/2/2/.../2 = 1, and the number of /2's would be log n.
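The steps above can be sketched like this (a hypothetical reconstruction in Python, since the original code isn't shown; it returns whether the value was found, not its position):

```python
def contains(data, value):
    """Binary search: return True if value is in the sorted list data."""
    min_i, max_i = 0, len(data) - 1
    while min_i <= max_i:
        mid = (min_i + max_i) // 2   # visit the element in the middle
        if data[mid] == value:
            return True              # found the one we sought
        elif data[mid] < value:
            min_i = mid + 1          # limit the search to the upper half
        else:
            max_i = mid - 1          # limit the search to the lower half
    return False                     # interval shrank to nothing

print(contains([1, 3, 5, 7, 9, 11], 7))   # True
print(contains([1, 3, 5, 7, 9, 11], 4))   # False
```

Each iteration halves the `min_i..max_i` interval, which is exactly the n/2/2/.../2 argument above.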
It's not a strict analysis. There are likely to be some off-by-one errors, and I did no boundary-condition checking, but the final result would surely be O(log n).
Of course, we have to assume that the data array is sorted and that n = max - min (and min < max). Otherwise the result would be garbage.
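As a quick illustration of the "otherwise garbage" point, here is Python's stdlib bisect module (which performs the same binary search) run on sorted versus unsorted input:

```python
import bisect

sorted_data = [1, 3, 5, 9]
i = bisect.bisect_left(sorted_data, 3)                    # binary search assumes sorted input
print(i < len(sorted_data) and sorted_data[i] == 3)       # True: 3 is found

unsorted_data = [5, 1, 9, 3]                              # NOT sorted
j = bisect.bisect_left(unsorted_data, 3)
print(j < len(unsorted_data) and unsorted_data[j] == 3)   # False, even though 3 is present
```

The halving logic discards the half that "should" contain the value, so on unsorted data it can miss elements that are really there.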
BTW: f(n) = O(log n) means that log n is (from some point on) an upper bound on f(n), up to some positive constant factor. f(n) = O(n log n) would mean the same for n log n. So yeah, if f(n) is bounded by log n, it surely is bounded by n log n (since f(n) <= c1(log n) <= c2(n log n) for all n greater than some N), but that's not the answer you are looking for.
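The inequality chain can also be checked numerically (an illustrative sanity check, not a proof; c1 = c2 = 1 here):

```python
import math

# If f(n) <= c1 * log(n), then f(n) <= c1 * n * log(n) as well,
# because log(n) <= n * log(n) whenever n >= 1 (so that log(n) >= 0).
for n in range(2, 10_000):
    assert math.log(n) <= n * math.log(n)
print("log n <= n log n holds for all tested n")
```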