Question

I think I have confused myself with endianness and bit-shifting; please help.

I have four 8-bit ints which I want to combine into a 32-bit int. This is what I am doing:

#include <stdint.h>

uint32_t h;
uint8_t ff[4] = {1, 2, 3, 4};

if (BIG_ENDIAN) {
    h = ((int)ff[0] << 24) | ((int)ff[1] << 16) | ((int)ff[2] << 8) | ((int)ff[3]);
}
else {
    h = ((int)ff[0] >> 24) | ((int)ff[1] >> 16) | ((int)ff[2] >> 8) | ((int)ff[3]);
}

However, this seems to produce a wrong result. With a little experimentation I realised that it should be the other way round: in the case of big endian I am supposed to shift bits to the right, and otherwise to the left. However, I don't understand WHY.

This is how I understand it. Big endian means most significant byte first (first means leftmost, right? Perhaps this is where I am wrong). So, converting an 8-bit int to a 32-bit int would prepend 24 zero bits to my existing 8 bits. So, to make it the first byte I need to shift it 24 bits to the left.

Please point out where I am wrong.


Solution

You always have to shift the 8-bit values left. Shift operators work on an integer's value, not on its memory representation, so shifting left always moves a byte toward the most-significant position, regardless of the machine's byte order. What endianness changes is which array element holds which byte of the stored value: on a little-endian machine the first byte in memory is the least significant, so you have to reverse the order of the indices, putting the fourth byte into the most-significant position and the first byte into the least-significant.

/* Cast to uint32_t (not int) so that a byte value >= 0x80 shifted
   left by 24 cannot overflow a signed int. */
if (BIG_ENDIAN) {
    h = ((uint32_t)ff[0] << 24) | ((uint32_t)ff[1] << 16) | ((uint32_t)ff[2] << 8) | (uint32_t)ff[3];
}
else {
    h = ((uint32_t)ff[3] << 24) | ((uint32_t)ff[2] << 16) | ((uint32_t)ff[1] << 8) | (uint32_t)ff[0];
}
Licensed under: CC-BY-SA with attribution