The book is wildly incorrect. There's no undefined behavior in `signed char c = 256;`.

`256` is an integer literal of type `int`. To initialize a `signed char` with it, it is converted to `signed char` (§8.5 [dcl.init]/17.8; all references are to N4140). This conversion is governed by §4.7 [conv.integral]:
> 1 A prvalue of an integer type can be converted to a prvalue of another integer type. A prvalue of an unscoped enumeration type can be converted to a prvalue of an integer type.
>
> 2 If the destination type is unsigned, [...]
>
> 3 If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined.
If `signed char` cannot represent 256, the conversion yields an implementation-defined value of type `signed char`, which is then used to initialize `c`. There is nothing undefined here.
When people say "signed overflow is UB", they are usually referring to the rule in §5 [expr]/p4:

> If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined.
That rule makes expressions like `INT_MAX + 1` undefined: the operands are both `int`s, so the result's type is also `int`, but the value is outside the range of representable values for `int`. It does not apply here, as the only expression involved is `256`, whose type is `int`, and 256 is obviously in the range of representable values for `int`.