Question

Which operation should be faster on an x86 CPU under Linux, and what is the typical difference (in %):

unsigned int x, y, z;
x = y / z;

or:

double x, y, z;
x = y / z;

The operation on double will be executed by the FPU, and the CPU may continue with other instructions in the meantime. Is that correct? Does it depend on compilation flags (I'm using gcc with the -O3 flag)?

Solution

If your work is inherently integer-based, the int-to-float and float-to-int conversions may wipe out any performance benefit. C's default float-to-int conversion (truncation) can be particularly slow on older Intel chips.

Apart from that, there are correctness issues with this approach: a double cannot represent every 64-bit integer exactly, and floating-point rounding behaves differently from C's truncating integer division. That alone is probably sufficient reason not to do it.
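To make the correctness hazard concrete, here is a small illustration (my own example, not part of the original answer, and using 64-bit integers where the effect is easy to trigger): an integer just above 2^53 has no exact double representation, so routing the division through floating point silently changes the result.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* 2^53 + 1 has no exact double representation, so the value is
       silently rounded before the division even happens. */
    uint64_t y = (1ULL << 53) + 1;   /* 9007199254740993 */
    uint64_t z = 1;

    uint64_t exact   = y / z;                             /* 9007199254740993 */
    uint64_t via_fpu = (uint64_t)((double)y / (double)z); /* 9007199254740992 */

    printf("integer division: %llu\n", (unsigned long long)exact);
    printf("via double:       %llu\n", (unsigned long long)via_fpu);
    return 0;
}

For 32-bit unsigned int the round trip through double is usually lossless, since a double's 53-bit mantissa holds any 32-bit value exactly, but the wider the integer type, the more values fall into this trap.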

OTHER TIPS

An individual floating-point division instruction will take longer than an integer one. However, if you're doing lots of independent divisions in a row, the two will reach approximately the same rate, due to pipelining (on a modern x86, at least).

Oh, and yes, the CPU can be getting on with other operations whilst the FPU is busy.

Integral operations are generally faster than their floating-point counterparts. The difference depends mostly on the hardware: some platforms don't even have an FPU.

Such a simple operation shouldn't depend on your operating system or on compiler flags at all: it should compile down to a few straightforward assembly instructions.

The best way to find out how long any operation takes is to consult your platform's instruction-set manual or to run a benchmark, as sketched below.
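A minimal benchmark sketch along those lines (the loop count, the volatile divisors, and the dependent update are my own choices, not anything from the answer):

#include <stdio.h>
#include <time.h>

int main(void) {
    const long N = 100000000L;

    /* volatile divisors stop gcc -O3 from replacing the division by a
       constant with a cheaper multiply; feeding each result into the
       next division measures latency rather than pipelined throughput */
    volatile unsigned int udiv = 3u;
    volatile double ddiv = 3.0;

    unsigned int ui = 0xFFFFFFFFu;
    clock_t t0 = clock();
    for (long i = 0; i < N; i++)
        ui = ui / udiv + 1000000000u;
    clock_t t1 = clock();

    double d = 1e18;
    clock_t t2 = clock();
    for (long i = 0; i < N; i++)
        d = d / ddiv + 1e18;
    clock_t t3 = clock();

    /* printing the accumulators keeps the loops from being optimized away */
    printf("unsigned: %.2f s (x=%u)\n", (double)(t1 - t0) / CLOCKS_PER_SEC, ui);
    printf("double:   %.2f s (x=%g)\n", (double)(t3 - t2) / CLOCKS_PER_SEC, d);
    return 0;
}

To measure throughput instead of latency (the pipelining effect mentioned above), you would split the work across several independent accumulators; a serious measurement would also pin the CPU frequency and inspect the generated assembly.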

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow