Right, the compiler will optimise away things that it can calculate at compile time, and if you have a loop that only iterates once (e.g. `for (i = 0; i < 1; i++)`), it will remove the loop completely.
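As a minimal sketch of that (assuming GCC or Clang at `-O1` or higher), the loop below is normally reduced to nothing more than a single call to `printf`; you can confirm it by looking at the generated assembly with `-S`:

```c
#include <stdio.h>

int main(void)
{
    /* A loop that can only ever run once: an optimising compiler will
       typically drop the loop machinery and emit just the body. */
    for (int i = 0; i < 1; i++) {
        printf("iteration %d\n", i);
    }
    return 0;
}
```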
As to integer sizes, whether it's better to use `long` or `int` really depends on what you are trying to achieve. On x86-64, for example, a 64-bit operation takes an extra prefix byte (the REX prefix) to indicate that the following instruction is a 64-bit rather than a 32-bit instruction. If the compiler made `int` 64 bits wide, the code would become (a little bit) larger and thus fit less nicely into the caches, etc, etc. On x86-64 there is no speed difference between 16-, 32- and 64-bit operations for 99% of operations, multiply and divide being some of the obvious exceptions: the bigger the number, the longer it takes to multiply or divide it (actually, it's the number of bits set in the number that affects the multiply time, and I believe divide as well). On the other hand, if you are, for example, using the values to perform bitmask operations and such, `long` gives you 64-bit operations, which need half as many instructions to process the same amount of data; see the sketch below. That is a clear advantage, so it is "right" to use `long` in this case, even if it adds an extra byte per instruction.
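Here is a hedged illustration of that bitmask point: counting set bits in a buffer 64 bits at a time versus 32 bits at a time. The function names are made up, and the popcount builtins are GCC/Clang-specific; the point is only that the 64-bit version performs half as many loads and operations over the same data.

```c
#include <stdint.h>
#include <stddef.h>

/* 64-bit walk: one load + one popcount per 8 bytes of data. */
uint64_t count_bits_64(const uint64_t *buf, size_t nwords)
{
    uint64_t total = 0;
    for (size_t i = 0; i < nwords; i++)
        total += (uint64_t)__builtin_popcountll(buf[i]);
    return total;
}

/* 32-bit walk: twice as many iterations to cover the same bytes. */
uint64_t count_bits_32(const uint32_t *buf, size_t nwords)
{
    uint64_t total = 0;
    for (size_t i = 0; i < nwords; i++)
        total += (uint64_t)__builtin_popcount(buf[i]);
    return total;
}
```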
Also bear in mind that very often `int` is used for "smaller numbers", so for a lot of things the extra width of a 64-bit `int` would simply be wasted, and would take up extra data-cache space, etc, etc. So `int` stays 32 bits partly to keep the size of large integer arrays and such at a reasonable size.
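To put rough numbers on that (a sketch; note that `long` is 8 bytes on LP64 systems such as 64-bit Linux, but still 4 bytes on 64-bit Windows):

```c
#include <stdio.h>

int main(void)
{
    enum { N = 1000000 };

    /* One million "smaller numbers": about 4 MB as 32-bit ints,
       about 8 MB if they were stored as 64-bit values instead. */
    printf("int  array: %zu bytes\n", (size_t)N * sizeof(int));
    printf("long array: %zu bytes\n", (size_t)N * sizeof(long));
    return 0;
}
```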