Question

I was reading this question here about determining endianness, and the first answer baffled me somewhat.

The code used to detect big-endianness is as follows:

#include <stdint.h>   /* needed for uint32_t */

int is_big_endian(void)
{
    union {
        uint32_t i;
        char c[4];
    } bint = {0x01020304};

    /* c[0] is the lowest-addressed byte; it holds 0x01 only on a big-endian machine */
    return bint.c[0] == 1;
}

My question is how does the compiler here decide what type to use for that array of hex digits? Technically it fits equally well in either the uint32_t or the char[4].

Why not just store it in the char[4] and skip the union?

Is there some advantage to a union here that I don't see? I know this is called type-punning, but I fail to see what it gains here.


Solution

My question is how does the compiler here decide what type to use for that array of hex digits?

As with arrays and aggregate classes, the first initialiser initialises the first member, in this case i. (Of course, unlike those aggregates, it doesn't make sense to have more than one initialiser, since a union's members all share the same storage.)

Why not just store it in the char[4] and skip the union? Is there some advantage of a union here that I don't see?

The purpose of this is to initialise the 4-byte integer, then use the char array to examine the individual bytes to determine the memory order. If the most significant byte (0x01) is stored in the first byte, then the system is "big-endian"; otherwise it's "little-endian" (or perhaps something stranger).
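
For concreteness, here is a minimal, self-contained sketch of the same check (assuming a C99 compiler and <stdint.h>); it prints every byte of 0x01020304 in memory order:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    union {
        uint32_t i;
        unsigned char c[4];
    } bint = {0x01020304};

    /* A big-endian machine prints 01 02 03 04; a little-endian one prints 04 03 02 01. */
    for (int k = 0; k < 4; k++)
        printf("%02x ", bint.c[k]);
    printf("-> %s-endian\n", bint.c[0] == 1 ? "big" : "little");
    return 0;
}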

Other tips

The original C standard allowed initialising only the first member of a union. This means 0x01020304 is assigned to "i", not to "c".

C99 and later standards allow a designated initialiser for any member, like this:

union { ... } bint = { .c = {1,2,3,4} };
union { ... } bint2 = { .i = 0x01020304 };

However - as said - if no designator is given, the value initialises the first member, "i".
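
Put together, a complete sketch of both forms might look like this (assuming a C99 compiler; reading back the member that was not written is type-punning, which C defines but C++ does not):

#include <stdint.h>
#include <stdio.h>

union u {
    uint32_t i;
    unsigned char c[4];
};

int main(void)
{
    union u bint  = { .c = {1, 2, 3, 4} };  /* designates the array member */
    union u bint2 = { .i = 0x01020304 };    /* designates the integer member */

    /* What these print depends on the machine's byte order. */
    printf("bint.i     = 0x%08x\n", (unsigned)bint.i);
    printf("bint2.c[0] = %d\n", bint2.c[0]);  /* 1 only on a big-endian machine */
    return 0;
}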

Because you want to store 0x01020304 as the unsigned 32-bit integer uint32_t i, and then read its first byte back through char c[0].
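
If you want to skip the union entirely, copying the integer's bytes into a char array with memcpy achieves the same thing and is equally well-defined in C; a sketch:

#include <stdint.h>
#include <string.h>

int is_big_endian_memcpy(void)
{
    uint32_t i = 0x01020304;
    unsigned char c[4];
    memcpy(c, &i, sizeof i);  /* copies i's bytes in memory order */
    return c[0] == 1;
}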
