Question

From what I've heard, integer values narrower than the processor's word size are promoted when you operate on them (with +, -, &, and so on). The long type is supposed to be the word size. On my compiler, int is 32-bit and long is 64-bit.

Does this mean long is more efficient than int despite requiring more memory?

One more thing: do compound assignment operators also promote their operands? And what about the increment and decrement operators?


Solution

It doesn't matter, and it isn't something you can control. Choose the narrowest type that is wide enough to represent the values you need; that is the best you can do in any circumstance.

The language guarantees that the result of an operation will be correct, and the compiler will choose the most efficient path to that result it can find. That may or may not involve changing integer sizes at some stage.
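To make the question's point concrete, here is a minimal sketch (assuming a common platform where short is 16-bit and int is 32-bit) of how the usual arithmetic conversions apply to +, to compound assignment, and to ++ alike, while the observable result stays pinned down by the language; the values chosen are arbitrary illustrations:

```c
#include <stdio.h>

int main(void)
{
    short a = 30000, b = 30000;

    /* Both operands of + are promoted to int before the addition (assuming
       int is wider than short), so the intermediate value 60000 fits and
       the stored result is correct. */
    int sum = a + b;

    unsigned char c = 200;

    /* Compound assignment behaves as if written c = c + 100: the addition
       is performed in int after promotion, then the result (300) is
       converted back to unsigned char, wrapping to 44. */
    c += 100;

    /* Increment follows the same pattern: promote, add 1, convert back. */
    c++;

    printf("sum = %d, c = %d\n", sum, c);   /* sum = 60000, c = 45 */
    return 0;
}
```

Whether the compiler actually emits widening instructions for any of this is its business; only the results above are guaranteed.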

The processor may perform its own internal transformations as well, again without changing the result, and again it is out of your hands.

OTHER TIPS

It's going to be hard for anyone to give you a straightforward answer. The best you can do in this case is to profile and see for yourself.

But unless you perform millions of operations on these numbers, I doubt it will matter which is faster. Write readable code first, profile, and optimize only afterwards.

I have also heard that testing an int in an if can be faster than testing a bool, because the bool gets promoted to int first (counter-intuitive, I know), but nobody declares an int instead of a bool just for the sake of performance (unless, perhaps, after profiling).

In principle, plain int is meant to be the "fast, normal-use integer", while long, when it is a different type, can mean "extended range, but possibly slower".

What actually happens depends strongly on the platform you are working on.

On several microcontrollers I have used, int is 16-bit and long is 32-bit, and operations on long take more than one processor instruction. On "classic" 32-bit x86, long and int are usually the same size, so there is no difference at all. On x86_64, depending on portability concerns, long may be 32 or 64 bits; in terms of instruction count per operation they are the same, but the larger size can matter if you have to read and store big arrays of integers, because 32-bit integers may perform better simply because more of them fit in cache. (And there are many more considerations you could make; optimization is often counter-intuitive, especially on x86.)
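If you want to see that memory-traffic effect for yourself, a rough sketch of the kind of measurement you could run is below; the helper names, array size, and timing method are arbitrary choices, and the numbers will vary with machine, compiler, and flags, so treat it as a starting point for profiling rather than as proof:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)   /* 16M elements: large enough to spill out of cache */

/* Sum an array of 32-bit integers. */
static int64_t sum32(const int32_t *a, size_t n)
{
    int64_t s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Sum an array of 64-bit integers. */
static int64_t sum64(const int64_t *a, size_t n)
{
    int64_t s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

int main(void)
{
    int32_t *a32 = malloc(N * sizeof *a32);   /* ~64 MiB */
    int64_t *a64 = malloc(N * sizeof *a64);   /* ~128 MiB */
    if (!a32 || !a64)
        return 1;

    for (size_t i = 0; i < N; i++) {
        a32[i] = (int32_t)i;
        a64[i] = (int64_t)i;
    }

    clock_t t0 = clock();
    int64_t s32 = sum32(a32, N);
    clock_t t1 = clock();
    int64_t s64 = sum64(a64, N);
    clock_t t2 = clock();

    printf("32-bit: sum=%lld, %.3fs\n", (long long)s32,
           (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("64-bit: sum=%lld, %.3fs\n", (long long)s64,
           (double)(t2 - t1) / CLOCKS_PER_SEC);

    free(a32);
    free(a64);
    return 0;
}
```

The per-element arithmetic is identical in both loops; any difference you measure comes mostly from how much memory has to move through the cache hierarchy.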

So, long story short: don't overthink this issue. If you need a "normal" integer that is reasonably fast and whose range is adequate for your application, just use int. If you need a minimum guaranteed size, look at the typedefs in <stdint.h>, which, besides exact-width integers, also provides "fastest integer with at least this width" types.
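For instance, a minimal sketch of the three flavours of typedef <stdint.h> offers (exact-width, least-width, and "fast") might look like this; the sizes actually printed depend on the implementation:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Exact width: exactly 32 bits, or the typedef does not exist at all. */
    int32_t exact = 100000;

    /* At least 32 bits: the smallest such type the implementation offers. */
    int_least32_t least = 100000;

    /* At least 32 bits: the type the implementation considers fastest;
       on some 64-bit platforms this is a 64-bit type, on most 32-bit
       targets it is 32-bit. */
    int_fast32_t fast = 100000;

    printf("int32_t:       %zu bytes\n", sizeof exact);
    printf("int_least32_t: %zu bytes\n", sizeof least);
    printf("int_fast32_t:  %zu bytes\n", sizeof fast);
    return 0;
}
```

The "fast" typedefs are the standard's way of expressing exactly the trade-off discussed above, so you can state your intent instead of guessing whether int or long is quicker on a given target.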

But as always, the usual rule applies: if you have performance problems, first profile, then optimize.

Does this mean long is more efficient than int despite requiring more memory?

That depends entirely on your computer. There is no single integer type that is universally optimal. If it's critical to performance, you'll need to check with a profiler or similar analysis tool. If it isn't critical to performance, you shouldn't care.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow