Question

I know that the compiler and the processor architecture also play a role in this. But this is more of a "can it" rather than a "does it" question.

I have already tried to research this, but all I could find out is that int16_t and int8_t are used when you need an exact, fixed width for your data.

But what I want to know is: can an int16_t put less strain on, say, an 80 MHz microprocessor and perform better than an int? Or is it most likely just going to perform the same?

Solution

Even in older C standards, according to Wikipedia, int is guaranteed to be at least 16 bits wide, independently of the processor architecture. This goes along with the recommendation that int be "the integer type that the target processor works with most efficiently".

So on 8- or 16-bit processor architectures, I would usually expect int to be the same size as int16_t, so both will compile to exactly the same machine instructions. On architectures with wider registers, int arithmetic may be equally or more efficient. AFAIK, certain RISC architectures in particular are optimized for 32- or 64-bit arithmetic, so 16-bit arithmetic may, in theory, be slightly slower on those architectures.
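As a side note, <stdint.h> also provides int_fast16_t, defined as the fastest integer type with at least 16 bits, which lets the implementation pick whatever width the target handles best. Here is a minimal sketch that prints which widths a given platform actually chose (assuming 8-bit bytes for the bit counts):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* int16_t is exactly 16 bits; int_fast16_t is whatever width the
       implementation considers fastest, often 32 or 64 bits on desktop
       targets and typically 16 bits on small microcontrollers. */
    printf("int          : %zu bits\n", sizeof(int) * 8);
    printf("int16_t      : %zu bits\n", sizeof(int16_t) * 8);
    printf("int_fast16_t : %zu bits\n", sizeof(int_fast16_t) * 8);
    return 0;
}
```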

However, in reality, I would not expect this to matter for the vast majority of real-world programs and machines. If you want numbers, look into the instruction tables of various Intel/AMD/VIA processor architectures; there you can see how large (or small) the differences in CPU cycles are between the 16- and 32-bit variants of the same instruction.
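If you would rather measure than reason about it, a crude micro-benchmark sketch like the one below can compare the two types on your own target. The loop bodies and iteration count are just assumptions for illustration; compile everything with the same optimization level (e.g. -O2) and treat the numbers as relative only:

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define ITERS 100000000L  /* arbitrary; just large enough to measure */

/* volatile accumulators keep the compiler from deleting the loops. */
static double time_int(void)
{
    volatile int acc = 0;
    clock_t t0 = clock();
    for (long i = 0; i < ITERS; i++)
        acc = acc + 3;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

static double time_int16(void)
{
    volatile int16_t acc = 0;
    clock_t t0 = clock();
    for (long i = 0; i < ITERS; i++)
        acc = (int16_t)(acc + 3);
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    printf("int     : %.3f s\n", time_int());
    printf("int16_t : %.3f s\n", time_int16());
    return 0;
}
```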

Other tips

It depends on the register size. If 16 bits is larger than your microprocessor's register size, ALU operations will take more than one load and store. If 16 bits is smaller than the register size, you have to do extra operations to mask and unmask the unused bits, which can be a performance hit. So an integer whose size matches the register size is the most efficient.
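To illustrate that masking point, here is a sketch of a plain 16-bit addition next to a native-width one. On a typical 32-bit target the compiler has to append a sign-extension or truncation after the 16-bit add so the result wraps like an int16_t, while the native version is usually a single add (the instruction names in the comments are assumptions about a typical ARM target):

```c
#include <stdint.h>

/* Often compiles to an add plus a sign-extension (e.g. ADD followed by
   SXTH on 32-bit ARM) so the result behaves like a 16-bit value. */
int16_t add16(int16_t a, int16_t b)
{
    return (int16_t)(a + b);
}

/* Usually a single native-width ADD, with no extra masking step. */
int add_native(int a, int b)
{
    return a + b;
}
```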

"Strain" is not the right word. A 32-bit processor will process at least 32 bits per clock cycle regardless; for 16- or 8-bit operations it simply discards the bits you are not interested in once the calculation is done. That reduces the efficiency question to alignment and fetching.

If int is 16 bits, then obviously there is no difference between int and int16_t. Since int cannot be less than 16 bits, the interesting case is where int is more than 16 bits. Being wider than 16 bits makes it more useful than int16_t; after all, it can hold more values. There is also a lot of code nowadays that assumes int is 32 bits, so an implementation that made int 16 bits would likely break a lot of existing code.

So even on an implementation where 16-bit operations are easier to handle than 32-bit ones, it is still quite likely that int is 32 bits.
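As a quick example of that assumption, the following (hypothetical) snippet is fine wherever int is 32 bits but overflows on an implementation with a 16-bit int; spelling the width out would make the intent portable:

```c
#include <stdio.h>

int main(void)
{
    /* Fits easily in a 32-bit int, but exceeds the 32767 maximum a
       16-bit int can hold; int32_t or long would be portable here. */
    int population = 100000;
    printf("%d\n", population);
    return 0;
}
```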

Licensed under: CC-BY-SA with attribution