Question

What is the Big-O of division on most modern-day ISAs? Is there some kind of optimization, or is it the naive O(numerator/denominator)? I'm writing code that relies heavily on the modulus operation.

For example, what are the relative times taken to perform 10/5, 20/5 and 40/5? Do modern processors from Intel, nVidia, Qualcomm etc. have the same Big-O for division?

NOTE: I could be wrong here by assuming that division is O(size of numerator) and this question may not make any sense at all. Please correct me if that's the case.

Solution

This question is not well posed, but it is also not that "stupid", so I will try to answer and clarify a few points:

Almost all modern CPUs/GPUs have a division instruction. Since it operates on the default word size, it does not matter how fast it is: in terms of Big-O it is constant, so it is always O(1). This is even true for embedded processors, microcontrollers and similar chips that do not have a division instruction, because there the division is emulated in software, and that software emulation is bounded by the size of a word, so it always takes a constant number of steps (which means it is also O(1)). A minimal sketch of such an emulation is shown below.
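To illustrate why even a software emulation is O(1), here is a minimal sketch of unsigned division via shift-and-subtract (binary long division). This is an illustrative routine, not the code any particular compiler or library emits: the loop runs exactly once per bit of the word, regardless of the operand values.

```c
#include <stdint.h>
#include <stdio.h>

/* Shift-and-subtract (restoring) division for 32-bit unsigned integers.
 * The loop runs exactly 32 times -- once per bit of the word -- no matter
 * what the operand values are, which is why even emulated division is O(1).
 * The denominator must be nonzero. */
static uint32_t soft_divide(uint32_t numerator, uint32_t denominator,
                            uint32_t *remainder_out)
{
    uint32_t quotient = 0;
    uint32_t remainder = 0;

    for (int bit = 31; bit >= 0; bit--) {
        remainder = (remainder << 1) | ((numerator >> bit) & 1u);
        if (remainder >= denominator) {
            remainder -= denominator;
            quotient |= 1u << bit;
        }
    }

    if (remainder_out)
        *remainder_out = remainder;
    return quotient;
}

int main(void)
{
    uint32_t r;
    /* 10/5, 20/5 and 40/5 all go through the same 32 loop iterations. */
    printf("10/5 = %u\n", soft_divide(10, 5, &r));
    printf("20/5 = %u\n", soft_divide(20, 5, &r));
    printf("40/5 = %u\n", soft_divide(40, 5, &r));
    return 0;
}
```

Note that 10/5, 20/5 and 40/5 from the question all take exactly the same number of iterations here, which is the point.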

The exception is when speaking about operations performed on data larger than a word. This happens e.g. when talking about BigInt libraries. But in this case ALL operations (addition, multiplication, ...) are no longer O(1); they depend on the size of the numbers.
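As a rough illustration of why BigInt operations scale with the size of the number, here is a sketch of "short division" of a multi-limb number by a single-word divisor. The limb layout and the function name are made up for this example, and real libraries such as GMP use more sophisticated algorithms, but the key point survives: the loop visits every limb once, so the cost is O(n) in the size of the number.

```c
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* Divide a big number, stored as an array of 32-bit limbs with the most
 * significant limb first, by a single nonzero 32-bit divisor.
 * The quotient replaces the limbs in place; the remainder is returned.
 * One pass over n limbs => O(n), unlike word-sized division (O(1)). */
static uint32_t bigint_div_small(uint32_t *limbs, size_t n, uint32_t divisor)
{
    uint64_t remainder = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t cur = (remainder << 32) | limbs[i];
        limbs[i] = (uint32_t)(cur / divisor);   /* quotient limb */
        remainder = cur % divisor;              /* carried into the next limb */
    }
    return (uint32_t)remainder;
}

int main(void)
{
    /* A 128-bit number made of four 32-bit limbs, most significant first. */
    uint32_t value[4] = { 0x00000001u, 0x23456789u, 0xABCDEF01u, 0x2345678Au };
    uint32_t rem = bigint_div_small(value, 4, 7);
    printf("quotient limbs: %08X %08X %08X %08X, remainder: %u\n",
           value[0], value[1], value[2], value[3], rem);
    return 0;
}
```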

But attention: Big-O does not say anything about the real computation time. It only describes the asymptotic behaviour, neglecting constant factors. That means that even when two algorithms are both O(n), the time difference between them can be a factor of 1000 (or one million, or whatever you want). Division is the best example: it and, say, addition are both O(1), but division usually takes far more cycles/time to execute than addition.
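If you want to see that constant-factor difference on your own machine, a crude micro-benchmark like the following sketch can show it. The loop count is arbitrary, and the cycle counts in the comments are typical ballpark assumptions, not guarantees; the exact ratio depends on the CPU and the compiler.

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* Both loops perform the same number of O(1) operations, but the division
 * loop usually takes several times longer per iteration than the addition
 * loop, even though both are "constant time" in the Big-O sense. */
#define ITERATIONS 100000000u

int main(void)
{
    volatile uint32_t sink = 0;   /* volatile keeps the loops from being optimized away */
    uint32_t x = 123456789u;

    clock_t t0 = clock();
    for (uint32_t i = 1; i <= ITERATIONS; i++)
        sink = x + i;             /* addition: typically around 1 cycle of latency */
    clock_t t1 = clock();
    for (uint32_t i = 1; i <= ITERATIONS; i++)
        sink = x / i;             /* division: typically tens of cycles */
    clock_t t2 = clock();

    printf("addition loop: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("division loop: %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    (void)sink;
    return 0;
}
```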
