Question

In most programming languages where integral datatypes have a finite range, there is always one more negative number than there are positive numbers.

For instance, in C, a byte ranges from -128 to 127, and an int from -2^31 to 2^31-1 inclusive. Is there a reason why a byte is not -127 to 128, since, intuitively, positive numbers occur more frequently?


Solution

The largest positive byte value is 0111 1111 = 127:

128 | 64 | 32 | 16 | 8 | 4 | 2 | 1 |
 0  | 1  |  1 |  1 | 1 | 1 | 1 | 1 |

The most negative byte value is 1000 0000 = -128:

-128| 64 | 32 | 16 | 8 | 4 | 2 | 1 |
 1  | 0  |  0 |  0 | 0 | 0 | 0 | 0 |

In binary, the MSB (Most Significant Bit, the leftmost one) carries a negative weight: when it is set, the value is negative. This scheme is called two's complement, and it is how most computers represent signed integers in binary (base 2) notation.

For more detail, read up on binary arithmetic.
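
A minimal C sketch (my own illustration, assuming CHAR_BIT == 8 and a two's-complement signed char, which holds on essentially every modern platform) that prints the two patterns above as signed values:

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* Assumes CHAR_BIT == 8 and two's-complement signed char.
       Converting 0x80 (128) to signed char is implementation-defined,
       but yields -128 on common platforms. */
    signed char max = (signed char)0x7F;  /* 0111 1111 */
    signed char min = (signed char)0x80;  /* 1000 0000 */

    printf("0x7F as signed char: %d\n", max);   /* 127 */
    printf("0x80 as signed char: %d\n", min);   /* -128 */
    printf("SCHAR_MIN = %d, SCHAR_MAX = %d\n", SCHAR_MIN, SCHAR_MAX);
    return 0;
}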

OTHER TIPS

It's because of 2's complement notation. The sign bit is 0 for positive, 1 for negative. So, using 4 bits as a simpler example:

Positive: 0 is 0000, 1 is 0001, etc. up to 0111 as 7.

Negative: -1 is 1111, -2 is 1110, etc., down to 1000 as -8.
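
To make the weight of the sign bit explicit, here is a small C sketch (the helper nibble_value is hypothetical, not part of the answer) that applies the rule "bit 3 counts as -8, the other bits as +4, +2, +1" to all sixteen 4-bit patterns:

#include <stdio.h>

/* Interpret the low 4 bits of x as a two's-complement nibble:
   bit 3 has weight -8, bits 2..0 have weights 4, 2 and 1. */
static int nibble_value(unsigned x) {
    return (int)(x & 7u) - (int)(x & 8u);
}

int main(void) {
    for (unsigned p = 0; p < 16; p++)
        printf("%u%u%u%u -> %2d\n",
               (p >> 3) & 1, (p >> 2) & 1, (p >> 1) & 1, p & 1,
               nibble_value(p));
    return 0;
}

Running it prints 0000 -> 0 through 0111 -> 7, then 1000 -> -8 through 1111 -> -1, matching the lists above.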

In most programming languages where integral datatypes have a finite range, there is always one more negative number than positive numbers.

This is because 2's complement is almost always used.

The reason two's complement is so popular basically comes down to hardware. In particular:

a - b = a + (~b + 1)

Example (4-bit words):

0110 - 0101 = 0110 + 1010 + 1 = 0110 + 1011 = 0001 (note that the addition steps are essentially unsigned addition -- there's no special handling of the sign bit in those steps)

Basically, in hardware-land, you can turn a - b into the addition a + ~b with the initial carry set to 1. This is quite a useful trick: subtraction needs no special handling, which means it doesn't require its own circuit.
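
Here is a minimal C sketch (my own check, not from the answer) that verifies the identity using 8-bit wrapping arithmetic via uint8_t, which behaves like the hardware adder described above:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Verify a - b == a + ~b + 1 for the example values, using
       uint8_t so overflow wraps modulo 256 as it does in hardware. */
    uint8_t a = 0x06, b = 0x05;
    uint8_t diff  = (uint8_t)(a - b);
    uint8_t trick = (uint8_t)(a + (uint8_t)~b + 1);
    printf("a - b      = 0x%02X\n", diff);    /* 0x01 */
    printf("a + ~b + 1 = 0x%02X\n", trick);   /* 0x01 */
    return 0;
}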

(I know this doesn't answer your question, but it does address an untrue assumption in your question and it is too long to leave as a comment.)

Actually, the C standard does not fix the size of a byte at 8 bits.

The only thing that is assured is that char will be able to hold one character.

In the past, bytes have ranged between 5 and 9 bits, depending upon the CPU.

It is true that most of that wildness has settled down, and most systems in use today have an 8-bit byte.

// What the C standard says must be true:
sizeof(char) <= sizeof(int) <= sizeof(long)
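
A quick sketch to see what a particular platform actually picks (the exact sizes are implementation-defined; the standard only guarantees minimum ranges and the ordering shown above):

#include <stdio.h>

int main(void) {
    /* Print the sizes chosen by this implementation. */
    printf("sizeof(char)  = %zu\n", sizeof(char));
    printf("sizeof(short) = %zu\n", sizeof(short));
    printf("sizeof(int)   = %zu\n", sizeof(int));
    printf("sizeof(long)  = %zu\n", sizeof(long));
    return 0;
}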

This is why many pre-C99 (and C99) systems included the extremely useful typedefs of:

int8
uint8
int16
uint16
int32
uint32
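
For reference, C99 standardized this idea in <stdint.h> under slightly different names (int8_t, uint32_t, and so on). A minimal sketch of their use:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* C99 exact-width types: exactly N bits wide and two's complement
       wherever the implementation provides them. */
    printf("int8_t   range: %d .. %d\n", (int)INT8_MIN, (int)INT8_MAX);
    printf("uint8_t  max:   %u\n", (unsigned)UINT8_MAX);
    printf("int32_t  range: %ld .. %ld\n", (long)INT32_MIN, (long)INT32_MAX);
    printf("uint32_t max:   %lu\n", (unsigned long)UINT32_MAX);
    return 0;
}
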
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow