There are many related questions here, for example this one.
Your question is not quite right: for `signed` values the highest bit is not always 1 -- it is 1 only if the value is negative. In fact, `signed` and `unsigned` are types attributed to the exact same bit patterns, and how those bit patterns are interpreted when compared or promoted is defined by their respective types.
For example:
unsigned char u = 0xFF; // decimal 255
signed char s = 0xFF; // decimal -1
You can see that both variables hold the same bit pattern (the highest bit is set in either case), but they differ in their types.
The compiler uses a type system to know how to interpret values, and it is the task of the programmer to assign meaningful types to values. In the above example, I told the compiler that the first `0xFF` should be interpreted as an `unsigned` value (see also the include file `limits.h`) with this range:
u = 0x00; // decimal 0, the minimum
u = 0xFF; // decimal 255, UCHAR_MAX
and the second `0xFF` as a `signed` value with this range:
s = 0x00; // decimal 0
s = 0x7F; // decimal 127, SCHAR_MAX
s = 0x80; // decimal -128, SCHAR_MIN (note how 0x7F + 1 = 0x80: decimal 127 + 1 wraps to -128, an overflow)
s = 0xFF; // decimal -1
For the `printf` in your example, the `%d` tells it to expect a signed `int` value. According to the integer promotion rules of the C language, the smaller `char` type is either sign-extended (if it is a `signed` type) or zero-extended (if it is an `unsigned` type). To finish the above example:
printf("%d", u); // passes an int 0x000000FF, decimal 255, to the function
printf("%d", s); // passes an int 0xFFFFFFFF, decimal -1, to the function
More `printf` formatting specifiers are described here; for example, `%u` might be interesting for you in this context.