Question

I am contemplating writing a fixed-point arithmetic library, and in order to decide how much optimization the library itself should do (through expression templates), I started questioning how much the optimizer will already do. Take the following example:

// This is a totally useless function to exemplify my point
void Compare(FixedPoint a, FixedPoint b) {
    if (a / b > 10) {
        // ... do stuff
    }
}

Now, in this function, a typical implementation of the FixedPoint class will turn the condition into

if (((a_ << N) / b_) > (10 << N)) {
    // ... do stuff
}

Where N is the number of fractional bits. That expression could mathematically be transformed into:

(a_ > 10*b_)

even though this transformation will not result in the same behavior when you consider integer overflow. The users of my library will presumably care about the mathematical equivalence and would rather have the reduced version (possibly provided through expression templates).
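To make the two forms concrete, here is a minimal sketch. The representation (a raw_ member holding the value scaled by 2^N) and all names are assumptions for illustration; the question does not show the actual class.

#include <cstdint>

// Assumed representation: value == raw_ / 2^N, with N fractional bits.
constexpr int N = 16;

struct FixedPoint {
    std::int32_t raw_;
};

// Shift-based form, roughly what a typical operator/ plus operator>
// would expand to. Widening to 64 bits keeps the scaling well defined;
// done directly in 32 bits, a_ << N overflows for most inputs.
// Assumes b.raw_ != 0.
bool shiftForm(FixedPoint a, FixedPoint b) {
    std::int64_t scaled = static_cast<std::int64_t>(a.raw_) * (std::int64_t{1} << N);
    return scaled / b.raw_ > (std::int64_t{10} << N);
}

// Algebraically reduced form: a / b > 10  <=>  a > 10 * b. This holds
// only for b > 0 (the inequality flips for negative b) and only while
// 10 * b.raw_ does not overflow -- exactly the caveats discussed above.
bool reducedForm(FixedPoint a, FixedPoint b) {
    return a.raw_ > 10 * b.raw_;
}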

Now, the question is: will the optimizer dare to do that optimization itself, even though the behavior is not strictly the same? Should I bother with such optimizations? Note that these optimizations aren't trivial; in reality, you rarely have to do any bit shifts at all in fixed-point arithmetic if you actually apply them.


Solution

That will depend on whether a_ and b_ are signed or unsigned types.

In C and C++, signed overflow is technically undefined behavior, while unsigned arithmetic is defined to wrap around modulo 2^N.
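A short illustration of the difference (a standard-C++ example, not from the original answer):

#include <climits>
#include <cstdio>

int main() {
    unsigned int u = UINT_MAX;
    // Well defined: unsigned arithmetic wraps modulo 2^N, so this prints 0.
    std::printf("%u\n", u + 1u);

    int s = INT_MAX;
    // Undefined behavior: the standard gives s + 1 no meaning here, and
    // an optimizer is allowed to assume it never happens.
    // std::printf("%d\n", s + 1);  // deliberately left commented out
    (void)s;
}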

Nevertheless, some compilers refuse to perform that optimization, because many programs rely on two's-complement wraparound behavior for signed overflow.

Good modern compilers have an option to enable or disable this particular assumption, namely that signed integers won't overflow. Which setting is the default varies from compiler to compiler.

With GCC, for example, see options -fstrict-overflow/-fno-strict-overflow and the related warning -Wstrict-overflow.
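To see what that assumption buys the optimizer, a classic illustration (a hypothetical example, not from the original answer) is the x + 1 > x idiom:

// With signed-overflow assumptions active (e.g. GCC with -fstrict-overflow,
// which higher optimization levels have historically enabled), the compiler
// may fold this entire test to 'true': x + 1 > x can only be false if the
// signed addition overflows, which the compiler assumes cannot happen.
// -Wstrict-overflow can warn when such a simplification fires.
bool alwaysTrue(int x) {
    return x + 1 > x;
}

// The unsigned analogue must respect wraparound: for x == UINT_MAX,
// x + 1 wraps to 0 and the comparison is false, so it cannot be folded.
bool notAlwaysTrue(unsigned x) {
    return x + 1 > x;
}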
