Question

Reading the C++ Primer, 5th edition, I noticed that it says a signed char with a value of 256 is undefined. I decided to try that, and I saw that std::cout printed nothing for that char variable.

But in C, the same thing, signed char c = 256;, gives the char c a value of 0.

I tried searching but didn't find anything.

Can someone explain to me why this is the case in C++?

Edit: I understand that 256 takes 2 bytes, but why doesn't the same thing happen in C++ as in C?


Solution 2

Edit: See T.C.'s answer below. It's better.

Signed integer overflow is undefined in C++ and C. In most implementations, the maximum value of signed char, SCHAR_MAX, is 127 and so putting 256 into it will overflow it. Most of the time you will see the number simply wrap around (to 0), but this is still undefined behavior.
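A minimal sketch of this in practice (the exact values are implementation-specific; 127 for SCHAR_MAX and the wrap to 0 are what typical two's-complement platforms give you):

#include <climits>
#include <iostream>

int main()
{
    std::cout << SCHAR_MAX << '\n';            // typically 127
    signed char c = 256;                       // 256 does not fit in a signed char
    std::cout << static_cast<int>(c) << '\n';  // typically 0: the low 8 bits of 256
}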

OTHER TIPS

The book is wildly incorrect. There's no undefined behavior in

signed char c = 256;

256 is an integer literal of type int. To initialize a signed char with it, it is converted to signed char (§8.5 [dcl.init]/17.8; all references are to N4140). This conversion is governed by §4.7 [conv.integral]:

1 A prvalue of an integer type can be converted to a prvalue of another integer type. A prvalue of an unscoped enumeration type can be converted to a prvalue of an integer type.

2 If the destination type is unsigned, [...]

3 If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined.

If signed char cannot represent 256, then conversion yields an implementation-defined value of type signed char, which is then used to initialize c. There is nothing undefined here.
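A short sketch of the distinction (the printed value is implementation-defined; 0 is what typical two's-complement implementations produce). Note that brace initialization, unlike the copy initialization above, would reject the narrowing conversion at compile time:

#include <iostream>

int main()
{
    signed char c = 256;     // well-formed: implementation-defined value, no UB
    // signed char d{256};   // ill-formed: list-initialization rejects narrowing
    std::cout << static_cast<int>(c) << '\n';  // typically prints 0
}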


When people say "signed overflow is UB", they are usually referring to the rule in §5 [expr]/p4:

If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined.

This is what makes expressions like INT_MAX + 1 UB: the operands are both ints, so the result's type is also int, but the value is outside the range of representable values. This rule does not apply here, as the only expression is 256, whose type is int, and 256 is obviously in the range of representable values for int.
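A sketch contrasting the two cases (the conversion result is implementation-defined; the widening in the last line keeps the arithmetic itself free of overflow, so no UB is executed):

#include <climits>
#include <iostream>

int main()
{
    signed char c = 256;           // conversion only: implementation-defined value, not UB
    std::cout << static_cast<int>(c) << '\n';

    // int bad = INT_MAX + 1;      // would be UB: the addition itself overflows int
    long long ok = 1LL + INT_MAX;  // fine: the arithmetic happens in long long
    std::cout << ok << '\n';
}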

You're seeing the difference between cout and printf. When you output a character with cout you don't get the numeric representation; you get the character itself. In this case the character was NUL (value 0), which doesn't appear on screen.

See the example at http://ideone.com/7n6Lqc
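In case that link no longer resolves, here is a minimal sketch of the same demonstration (the value 0 assumes the typical two's-complement conversion):

#include <cstdio>
#include <iostream>

int main()
{
    signed char c = 256;                       // typically 0 after the conversion
    std::cout << c << '\n';                    // streams the character itself: NUL shows nothing
    std::printf("%d\n", c);                    // prints the numeric value: 0
    std::cout << static_cast<int>(c) << '\n';  // cast to get the number from cout as well
}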

A char is generally 8 bits, i.e. one byte, and can therefore hold 2^8 = 256 different values: from 0 to 255 if unsigned, and (typically) from -128 to 127 if signed.

To be pedantic, the range of unsigned char is usually 0 to 255: the 256 values that one byte can hold.

If you get overflow, the values are (usually) reduced modulo 256, just as other integer types wrap modulo MAX + 1.
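For unsigned types this modulo behavior is guaranteed by the standard, not merely usual; a short sketch:

#include <iostream>

int main()
{
    unsigned char u = 300;                     // 300 mod 256 == 44, guaranteed for unsigned types
    std::cout << static_cast<int>(u) << '\n';  // prints 44
}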

@Pubby I don't know whether the C/C++ standards define the behavior when a signed integer overflows, but gcc does not always treat (x < x + 1) as true. The < operator promotes its operands to int, so x < x + 1 becomes (int)x < (int)x + (int)1.

The following code produces the output 1 0 0 0 (32-bit Linux + gcc):

#include <stdio.h>

int main(void)
{
    signed char c1, c2;
    signed int i1, i2;

    c1 = 127;        /* SCHAR_MAX on typical platforms */
    c2 = c1 + 1;     /* out of range: implementation-defined, usually wraps to -128 */

    i1 = 2147483647; /* INT_MAX with 32-bit int */
    i2 = i1 + 1;     /* undefined behavior: signed int overflow */

    /* c1 is promoted to int, so c1 + 1 is 128 and no overflow occurs: prints 1 0 */
    printf("%d %d\n", c1 < c1 + 1, c1 < c2);
    /* i1 + 1 overflows int, so the result is undefined: printed 0 0 here */
    printf("%d %d\n", i1 < i1 + 1, i1 < i2);
    return 0;
}