Question

I'm new to C, and this is a newbie question:

I came across this piece of code about signed int representation:

#include <stdio.h>

int main(void) {
    int a = 0x8fffffff;
    printf("%d\n", a);
    return 0;
}

and it prints -1879048193.

My current understanding of signed int is that the leftmost bit is used to indicate whether the number is negative or positive:

so 0x9 should evaluate to signed decimal -1, because its leftmost bit is 1:

#include <stdio.h>

int main(void) {
    int a = 0x9;
    printf("%d\n", a);
    return 0;
}

but it gives me decimal 9, not what I expected. Any idea why?


Solution

You're confusing the representation with what the "leftmost" (most significant) bit actually means. When the system represents an int (I'll use 32-bit) in memory, it takes up the number of bits the type defines, not however many bits it needs to hold the number. Depending on the type of variable, it could even take up a few more to put it in a good starting position for later access (padding/alignment).

Your integer could be represented like this, which has the value 9 in binary in the 4 rightmost bits:

 0b00000000000000000000000000001001

As you can see, the leftmost bit is certainly not 1. Your int might not be 32 bits, but anything wider than 4 bits will tell the same story. Making the same assumptions, you can see what happens when the most significant bit actually is set: in my test, printing 0x80000009 gave -2147483639.
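A minimal sketch of that comparison (assuming a 32-bit two's complement int; the conversion of 0x80000009 to int is implementation-defined and a compiler may warn about it):

#include <stdio.h>

int main(void) {
    int small = 0x9;          /* only the 4 low bits hold the value 9; the sign bit is 0 */
    int big   = 0x80000009;   /* same low bits, but with the most significant bit set */

    printf("%d\n", small);    /* prints 9 */
    printf("%d\n", big);      /* prints -2147483639 on a 32-bit two's complement int */
    return 0;
}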

OTHER TIPS

There are several ways of representing signed numbers in binary. You are thinking of sign-and-magnitude, which is what the IEEE floating point format uses: the most significant bit of a single-precision float or double-precision double represents the sign, as you described. Integer values in modern computers, however, are represented in two's complement.

The range of values representable in two's complement depends on how many bits are used, and the number of bits used depends on your compiler, the target you are compiling for, and the variable type you choose. An 8-bit two's complement number can represent values in the range -128 to +127. In C you would typically use the char type for a signed 8-bit value and int for a signed 32-bit value; all processors I'm aware of today represent these in two's complement. To find out how many bytes of storage your system uses for an int, use the sizeof operator; on most systems an int is 4 bytes, or 32 bits.

In an N-bit two's complement number the most significant bit (bit N-1) can, in fact, be used to determine the sign of the number, but the remaining bits are not to be interpreted as the magnitude.
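As a quick check of those claims on your own machine, here is a small sketch using sizeof and <limits.h> (the %zu format assumes a C99-or-later library):

#include <stdio.h>
#include <limits.h>

int main(void) {
    /* how many bytes (and bits) an int occupies on this system */
    printf("sizeof(int) = %zu bytes = %zu bits\n",
           sizeof(int), sizeof(int) * CHAR_BIT);

    /* the two's complement range for that width, as reported by <limits.h> */
    printf("INT_MIN = %d, INT_MAX = %d\n", INT_MIN, INT_MAX);
    return 0;
}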

See Wikipedia's article on two's complement.

One interesting fact about two's complement is that the most negative number representable in N bits has no positive counterpart in N bits; in other words, you cannot represent the absolute value of the most negative value. That most negative value has the most significant bit (bit N-1) set and all remaining bits 0, and its value is -pow(2, N-1). For N=8 the most negative value is 0x80, which is -pow(2,7), or -128. The largest positive number representable in 8-bit two's complement is 0x7F, a '0' in the most significant bit and '1' in the remaining bits, which is pow(2,7)-1, or +127.
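A short sketch of that asymmetry using signed char as the 8-bit example (assuming two's complement; negating SCHAR_MIN below is done in int after integer promotion, where the result +128 still fits):

#include <stdio.h>
#include <limits.h>

int main(void) {
    signed char most_negative = SCHAR_MIN;  /* 0x80 == -128 on two's complement */
    signed char most_positive = SCHAR_MAX;  /* 0x7F == +127 */

    printf("SCHAR_MIN = %d, SCHAR_MAX = %d\n", most_negative, most_positive);

    /* +128 does not fit in 8 signed bits; the negation is computed in int,
       so it prints 128 safely */
    printf("-(SCHAR_MIN) = %d\n", -(int)most_negative);
    return 0;
}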

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow