Question

Does the C standard state how bit representations should be interpreted? In other words, do the following if conditions always evaluate to true? Assume sizeof (int) == 4 and CHAR_BIT == 8.

unsigned u = 0xffffffff;
if (u == 4294967295) /* do something */

int i = 0xffffffff;
if (i == -1) /* do something */

unsigned u = (int)0xffffffff;
if (u == 0xffffffff) /* do something */

int i = hex_literal;
unsigned u;
memcpy (&u, &i, sizeof (u));
if (i == u) /* do something */
if ((i & 0x7fffffff) == (u & 0x7fffffff)) /* do something */

int i = hex_literal;
unsigned u = i;
if (i == u) /* do something */

unsigned u = hex_literal;
int i = u;
if (i == u) /* do something */

int i = hex_literal;
unsigned u = hex_literal;
if (i == hex_literal && u == hex_literal) /* do something */

char c = 0xff;
if (c >> 4 == 0xf) /* do something */

signed char c = 0xff;
if (((c >> 4) & 0xff) == 0xf) /* do something */

Solution

I will make the added assumption that no types have padding bits on the implementation under discussion. Let's take them one at a time:

unsigned u = 0xffffffff;
if (u == 4294967295) /* do something */

Yes. 0xffffffff and 4294967295 denote the same value, 2^32 - 1, which fits in a 32-bit unsigned int.

int i = 0xffffffff;
if (i == -1) /* do something */

No. Conversion of an out-of-range value to a signed type gives an implementation-defined result (or raises an implementation-defined signal); see C99 6.3.1.3.
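
For contrast, the opposite direction is fully specified: converting -1 to an unsigned type always yields UINT_MAX. A minimal sketch, assuming only what the standard guarantees:

#include <assert.h>
#include <limits.h>

int main (void)
{
    unsigned u = -1;         /* well-defined: -1 is reduced modulo UINT_MAX + 1 */
    assert (u == UINT_MAX);  /* holds on every conforming implementation */
    return 0;
}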

unsigned u = (int)0xffffffff;
if (u == 0xffffffff) /* do something */

No, for the same reason as the previous example: the intermediate conversion of 0xffffffff to int is implementation-defined.
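
If the goal is simply to end up with 0xffffffff in u, dropping the intermediate cast avoids the implementation-defined step entirely; a sketch in the style of the snippets above:

unsigned u = 0xffffffffu;  /* stays unsigned throughout, never converted to int */
if (u == 0xffffffff) /* always true */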

int i = hex_literal;
unsigned u;
memcpy (&u, &i, sizeof (u));
if (i == u) /* do something */
if ((i & 0x7fffffff) == (u & 0x7fffffff)) /* do something */

Yes for the second test: the standard guarantees that each value bit of a signed type represents the same value as the corresponding bit in the object representation of the matching unsigned type. The first test also holds provided hex_literal is in the range of int, so that i is non-negative and therefore has the same representation in both types.
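
A minimal sketch of why this works, using an arbitrary in-range literal (0x12345678 is just an example value, not from the question):

#include <assert.h>
#include <string.h>

int main (void)
{
    int i = 0x12345678;           /* within the range of int, so no conversion issue */
    unsigned u;
    memcpy (&u, &i, sizeof (u));
    /* The value bits occupy the same positions in both types,
       so the copied bytes denote the same value. */
    assert (u == 0x12345678u);
    assert (i == u);
    return 0;
}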

int i = hex_literal;
unsigned u = i;
if (i == u) /* do something */

Yes. The conversion of i from int to unsigned is well-defined (the value is reduced modulo UINT_MAX + 1) and yields the same result both in the initialization of u and in the comparison, where i is converted to unsigned again by the usual arithmetic conversions.
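
A sketch of that conversion rule with a concrete negative value (-5 is an arbitrary example):

#include <assert.h>
#include <limits.h>

int main (void)
{
    int i = -5;
    unsigned u = i;              /* -5 + (UINT_MAX + 1), i.e. UINT_MAX - 4 */
    assert (u == UINT_MAX - 4);
    assert (i == u);             /* i is converted the same way in the comparison */
    return 0;
}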

unsigned u = hex_literal;
int i = u;
if (i == u) /* do something */

Yes, but only if hex_literal is in the range of non-negative values representable by an int (i.e. no greater than INT_MAX) - otherwise the implementation-defined conversion strikes again.
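
One portable way to express the intent is to test the range before converting; a sketch reusing the question's hex_literal placeholder, with INT_MAX as the boundary:

#include <limits.h>

unsigned u = hex_literal;
if (u <= INT_MAX) {
    int i = u;   /* in range, so the value is preserved exactly */
    /* here i == u is guaranteed to be true */
}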

int i = hex_literal;
unsigned u = hex_literal;
if (i == hex_literal && u == hex_literal) /* do something */

u == hex_literal will always evaluate to true, but i == hex_literal is only guaranteed to do so if hex_literal is in the range of values representable by an int.

char c = 0xff;
if (c >> 4 == 0xf) /* do something */

char may be signed or unsigned. If it is unsigned then the test will be true; if signed, then c and c >> 4 will have implementation-defined values, so it may not be true.

signed char c = 0xff;
if (((c >> 4) & 0xff) == 0xf) /* do something */

c will have an implementation-defined value, so the test may not be true.
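
If the intent of the two char examples is to inspect the low-order bits portably, working in unsigned char removes both sources of implementation-defined behaviour; a sketch:

unsigned char c = 0xff;   /* 0xff is representable in unsigned char, so no conversion issue */
if ((c >> 4) == 0xf) /* always true: c promotes to int with the value 255 */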

Note that all of your questions other than the memcpy() one pertain only to the values rather than the representation.

OTHER TIPS

For unsigned, yes. For signed types, no; the standard permits 2's complement, 1's complement or sign-magnitude representations. The relevant section of the standard (C99) is 6.2.6.2.
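
For instance, a value of -1 stored in an 8-bit signed type could have any of these object representations:

two's complement:   1111 1111
ones' complement:   1111 1110
sign and magnitude: 1000 0001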

A separate issue is that code such as unsigned u = (int)0xffffffff converts an out-of-range value to a signed type; this is not an arithmetic overflow and is not undefined, but the result of the conversion is implementation-defined (or an implementation-defined signal is raised) per section 6.3.1.3.
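
To illustrate the distinction, here is a sketch contrasting the merely implementation-defined conversion with an operation the standard really does leave undefined (neither line is something to rely on):

#include <limits.h>

int m = (int)0xffffffff;   /* implementation-defined result (or signal), not UB */
int n = INT_MAX;
n = n + 1;                 /* undefined behaviour: signed arithmetic overflow */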

Yet another issue is that code such as char c = 0xff; c >> 4 is implementation-defined for two reasons. Firstly, char can be either signed or unsigned. Secondly, if it is signed, both the initialization (an out-of-range conversion) and the right shift of a negative value are implementation-defined (sections 6.3.1.3 and 6.5.7).

Unsigned numbers have guaranteed modulo 2^n arithmetic. There is no such guarantee for signed ones.
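
A short sketch of that modulo guarantee:

#include <assert.h>
#include <limits.h>

int main (void)
{
    unsigned u = UINT_MAX;
    u = u + 1;          /* guaranteed to wrap around to 0 */
    assert (u == 0);
    return 0;
}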

Nothing is said about bit patterns. Note that 0xffffffff is not a "bit pattern"; it is a number (whose bit pattern has no meaning to the standard) which is guaranteed to satisfy x + 1 == 0 if x is a 32-bit unsigned number to which you assigned 0xffffffff.

The key thing to remember is that hex literals (e.g. 0x0F) denote a value (here: 15), not the order in which bits and bytes are stored physically.

How that value is stored is machine-dependent: some machines store the least significant byte first (little-endian), others the most significant byte first (big-endian); x86, for example, is little-endian.

But it is always true that 0x000F equals 15.
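
A sketch that makes the distinction visible: the stored byte order varies between machines, but the value does not.

#include <stddef.h>
#include <stdio.h>

int main (void)
{
    unsigned x = 0x000F;                         /* the value 15, on every machine */
    const unsigned char *p = (const unsigned char *) &x;
    size_t j;

    for (j = 0; j < sizeof (x); j++)
        printf ("%02x ", p[j]);                  /* byte order differs by machine */
    printf ("\nvalue: %u\n", x);                 /* always prints 15 */
    return 0;
}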

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow