Question

The following resource says that the size of an int/pointer can vary depending on the compiler:

http://www.c4learn.com/c-programming/c-size-of-pointer-variable/

Why is this?

I understand that C defines only the minimum and maximum range a type must be able to hold, but why would one compiler choose to make int, say, 2 bytes while another makes it 4? What would be the advantage of one over the other?


Solution

Whilst the "why" can be answered with "Because the standard says so", one could make the argument that the standard could be written differently, to guarantee a particular size.

However, the purpose of C and C++ is to produce very fast code on all machines. If the compiler had to make sure that int has an "unnatural size" for a given machine, that would require extra instructions. For nearly all code that is not necessary; all you care about is that it's "big enough for what I want to do". So, to give the compiler a good chance to generate good code, the standard specifies only minimum sizes, avoiding the need for the compiler to emit extra code just to make int (and other types) behave in a very specific way.
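
A minimal sketch (assuming a C99 toolchain) of what this means in practice: the actual size of int is whatever the implementation picked, and when you genuinely need a fixed width you ask for it explicitly via <stdint.h> rather than relying on plain int:

```c
#include <stdio.h>
#include <limits.h>
#include <stdint.h>

int main(void)
{
    /* The standard only guarantees minimum ranges (int must hold at least
       -32767..32767), so sizeof(int) may be 2, 4, or something else entirely. */
    printf("sizeof(int) = %zu bytes, INT_MAX = %d\n", sizeof(int), INT_MAX);

    /* When an exact width actually matters, C99 <stdint.h> provides it
       (int32_t is optional, but present on targets that have such a type). */
    int32_t exactly_32 = 0;
    int_least16_t at_least_16 = 0;
    printf("sizeof(int32_t) = %zu, sizeof(int_least16_t) = %zu\n",
           sizeof exactly_32, sizeof at_least_16);
    return 0;
}
```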

One of the many benefits of C and C++ is that there are compilers targeting a vast range of machines, from little 8- and 16-bit microcontrollers to large, 64-bit multi-core processors like the ones in a PC - and of course some 18-, 24- or 36-bit machines too. If your machine has a 36-bit native size, you wouldn't be very happy if, just because some standard says so, you got half the performance in integer math due to extra instructions and couldn't use the top 4 bits of an int...

A small microprocessor with 8-bit registers often has support for 16-bit additions and subtractions (and perhaps also multiplication and division), but 32-bit math would involve doubling up on those instructions [and more work for multiplication and division]. So 16-bit integers (2 bytes) make much more sense on such a small processor - particularly since memory is probably not very large either, so storing 4 bytes for every integer is a bit of a waste. On a 32- or 64-bit machine, memory is most likely a lot larger, so larger integers aren't much of a drawback, and 32-bit integer operations are the same speed as smaller ones - in some cases "better": on x86, for example, a simple 16-bit operation such as addition or subtraction requires an extra prefix byte to say "make this 16-bit", so math on 16-bit integers takes up more code space.
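
As a hedged illustration of that trade-off, C99's <stdint.h> lets you say which side of it you want: int_least16_t asks for the smallest type with at least 16 bits (good when memory is tight), while int_fast16_t asks for whatever 16-bit-or-wider type the implementation considers fastest - on a typical desktop that is often wider than 16 bits, though the actual sizes are implementation-defined:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* "least" types: smallest width that holds at least 16 bits.
       "fast" types: whatever width the implementation deems fastest,
       which may well be 32 or 64 bits on a desktop CPU but 16 bits
       on a small microcontroller. */
    printf("int_least16_t: %zu bytes\n", sizeof(int_least16_t));
    printf("int_fast16_t : %zu bytes\n", sizeof(int_fast16_t));
    return 0;
}
```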

Other tips

Because the C Standard says:

(C99, 6.2.5p5) "A 'plain' int object has the natural size suggested by the architecture of the execution environment"

C only defines a minimum value for the largest value an int can hold (INT_MAX) and a maximum value for the smallest value an int can hold (INT_MIN).
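
A small sketch of how to check what the implementation actually chose, using only <limits.h> (the values printed will differ between compilers and targets, but must respect the guaranteed limits of -32767 and 32767):

```c
#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* The standard requires INT_MAX >= 32767 and INT_MIN <= -32767;
       the exact values below are implementation-defined. */
    printf("INT_MIN = %d\n", INT_MIN);
    printf("INT_MAX = %d\n", INT_MAX);
    printf("int is %zu bits wide\n", CHAR_BIT * sizeof(int));
    return 0;
}
```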

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow