Question

I'm initializing an unsigned short int with a = 0xff (all bits are set). Then I assign b = a >> 7, which should yield 0000 0001, and it does. However, the odd thing is that when I assign c = a << 7, it isn't equal to 1000 0000. I tested this by outputting both 0x80 (which is 1000 0000) and c, and they aren't the same.

Here is some code:

unsigned short int a = 0xff;
unsigned short int b = a>>7;
unsigned short int c = a<<7; // c should == 0x80

I'm unsure what the problem is. Any help is appreciated. Thanks.

P.S. By "output" I mean output 0x80 and c in decimal and hex form.


Solution

A short int has 16 bits, not just 8.

So you likely got "0111 1111 1000 0000" (0x7f80) as the result of 0xff << 7.
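Here is a minimal sketch you can run to verify this, assuming a platform where short is 16 bits (as on typical desktop compilers):

#include <stdio.h>

int main(void)
{
    unsigned short a = 0xff;
    unsigned short c = a << 7;            /* 0x7f80: the shifted bits still fit in 16 bits */
    printf("c = %#x\n", (unsigned)c);     /* prints 0x7f80, not 0x80 */
    return 0;
}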

OTHER TIPS

Don't guess at bit widths; use <stdint.h>


A short int "usually" has 16 bits. Actually, I'm pretty sure it always has 16 except for those Martian computers, but that isn't something the standard promises.

If you want to declare types that have a specific number of bits, the conforming technique is:

#include <stdint.h>

  int8_t  a;   /* exactly  8 bits, signed   */
 uint8_t  b;   /* exactly  8 bits, unsigned */
 int16_t  x;   /* exactly 16 bits, signed   */
uint16_t  y;   /* exactly 16 bits, unsigned */

Doing this would have avoided your not-quite-right guess at the bit width of short. For some reason, Microsoft is a long way from conforming to the C99 standard, even on easy things like <stdint.h>. Fortunately, a project maintains a stdint.h for VC++.
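As an illustration (a sketch, assuming a C99 compiler or a third-party <stdint.h>), an exact-width uint8_t gives you the truncation you were expecting automatically:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t a = 0xff;
    /* a << 7 is computed as int (0x7f80), but assigning it back
       to a uint8_t keeps only the low 8 bits */
    uint8_t c = (uint8_t)(a << 7);
    printf("c = %#x\n", (unsigned)c);   /* prints 0x80 */
    return 0;
}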

I'd expect that you got 0x7f80. I think what you meant to write is:

unsigned short int c = b<<7; // c should == 0x80
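(Since b is 0xff >> 7 == 0x01, shifting it left by 7 gives exactly 0x80.)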

Unfortunately, VC++ (the C compiler) does not have inttypes.h because it does not fully support C99, so you have to use a third-party header (e.g. Paul Hsieh's stdint.h).

If you want that result, you can chop off everything in a << 7 above the 8 least-significant bits with a bitwise AND:

unsigned short int c = (a << 7) & 0xff;
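Here 0x7f80 & 0x00ff drops the high byte and leaves 0x0080, i.e. the 1000 0000 you were expecting.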