Question

Does this simple code snippet have fully defined behavior by C99 standard?

#include <assert.h>
#include <stdint.h>

int main(void) {
  uint64_t longer = 0x122223333u;
  uint32_t shorter = longer;
  assert(shorter == 0x22223333u);
  return 0;
}

If not, what is a standard-compliant way to achieve this (putting the lower 32 bits of a uint64_t value into a uint32_t variable)?


Solution

The C99 standard says (6.3.1.3 Signed and unsigned integers, paragraph 2):

Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.

So, in this case the 64-bit value would be converted by repeatedly subtracting 0x1_0000_0000 from it until the result fits in 32 bits (underscores added for readability).

In your case, after one such subtraction the result is 0x22223333, which fits in 32 bits, so the conversion stops there. This reduction modulo 2^32 is mathematically equivalent to keeping the low 32 bits, so compilers implement it as a plain truncation; no subtraction loop is ever actually executed.

OTHER TIPS

Unsigned integer types obey the rules of modular arithmetic, with no undefined behaviour, so the assignment is perfectly valid and correct.

More than that, the optionally-available types uint32_t and uint64_t actually guarantee to have no padding and be exactly 32 or 64 bits wide, respectively.

Unsigned truncation is well defined: the value is reduced modulo 2^n. It is the signed case that is tricky; when the value does not fit in a signed target type, the result is implementation-defined (or an implementation-defined signal is raised) per C99 6.3.1.3 paragraph 3.

So yes, this is defined and the assert is true, regardless of the endianness of the machine.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow