Gibbon1's answer is correct, but I think example code is helpful for this sort of question.
#include <stdio.h>

int main(void)
{
    /* The unsigned int and the bit-field struct share the same storage,
       so writing one bit field and reading x shows where that field lives. */
    union {
        unsigned int x;
        struct {
            unsigned int a : 1;
            unsigned int b : 10;
            unsigned int c : 20;
            unsigned int d : 1;
        } bits;
    } u;

    u.x = 0x00000000;
    u.bits.a = 1;
    printf("After changing a: 0x%08x\n", u.x);

    u.x = 0x00000000;
    u.bits.b = 1;
    printf("After changing b: 0x%08x\n", u.x);

    u.x = 0x00000000;
    u.bits.c = 1;
    printf("After changing c: 0x%08x\n", u.x);

    u.x = 0x00000000;
    u.bits.d = 1;
    printf("After changing d: 0x%08x\n", u.x);

    return 0;
}
On a little-endian x86-64 CPU using MinGW's GCC, the output is:
After changing a: 0x00000001
After changing b: 0x00000002
After changing c: 0x00000800
After changing d: 0x80000000
Since this is a union, the unsigned int (x) and the bit-field structure (a/b/c/d) occupy the same storage unit. The order of allocation of bit-fields within that unit is implementation-defined, and it decides whether u.bits.a refers to the least significant bit of x or to the most significant bit. Typically, on a little-endian machine:
u.bits.a == (u.x & 0x00000001)
u.bits.b == (u.x & 0x000007fe) >> 1
u.bits.c == (u.x & 0x7ffff800) >> 11
u.bits.d == (u.x & 0x80000000) >> 31
and on a big-endian machine:
u.bits.a == (u.x & 0x80000000) >> 31
u.bits.b == (u.x & 0x7fe00000) >> 21
u.bits.c == (u.x & 0x001ffffe) >> 1
u.bits.d == (u.x & 0x00000001)
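One way to convince yourself of the little-endian identities above is to check them directly. Here is a minimal sketch that asserts them against an arbitrary test pattern; it assumes the same LSB-first bit-field allocation as the MinGW/GCC run above, so on an implementation that allocates the other way around the asserts are expected to fail.

#include <assert.h>

int main(void)
{
    union {
        unsigned int x;
        struct {
            unsigned int a : 1;
            unsigned int b : 10;
            unsigned int c : 20;
            unsigned int d : 1;
        } bits;
    } u;

    u.x = 0xdeadbeef; /* arbitrary test pattern */

    /* These hold only with LSB-first allocation (typical on little-endian). */
    assert(u.bits.a ==  (u.x & 0x00000001));
    assert(u.bits.b == ((u.x & 0x000007fe) >> 1));
    assert(u.bits.c == ((u.x & 0x7ffff800) >> 11));
    assert(u.bits.d == ((u.x & 0x80000000) >> 31));

    return 0;
}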
What the standard is saying is that C does not require any particular endianness: big-endian and little-endian machines are both free to lay out data in the order that is most natural for their addressing scheme.
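This is also why code that has to match an externally defined layout (a network header, a hardware register) typically avoids bit fields altogether and extracts fields with shifts and masks, which the standard defines on values and which therefore behave the same everywhere. A minimal sketch using the field widths from the example above (the get_* helper names are just for illustration):

#include <stdio.h>

/* Extract the four fields with shifts and masks. These operate on the
   value of x, not on its bytes, so the result is the same on big- and
   little-endian machines. */
static unsigned int get_a(unsigned int x) { return x & 0x1u; }
static unsigned int get_b(unsigned int x) { return (x >> 1) & 0x3ffu; }
static unsigned int get_c(unsigned int x) { return (x >> 11) & 0xfffffu; }
static unsigned int get_d(unsigned int x) { return (x >> 31) & 0x1u; }

int main(void)
{
    unsigned int x = 0x80000801u; /* a = 1, b = 0, c = 1, d = 1 */
    printf("a=%u b=%u c=%u d=%u\n", get_a(x), get_b(x), get_c(x), get_d(x));
    return 0;
}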