Question

So I am programming in C++, and as far as I can tell there is no C++ equivalent to stdint.h. That's no real problem, since you can just grab a copy of stdint.h and include it, but my question is basically this:

what is the difference between these two pieces of code:

struct FREQ{
    unsigned int FREQLOHI :16;
    //etc...

};

and

struct FREQ{
    uint16_t FREQLOHI;
    //etc...
};

Other than the obvious limitations of bitfields, is there a performance or portability difference? Which is preferred?

Solution

The difference is that unsigned int may have a different size on different platforms, while uint16_t is guaranteed to be exactly 16 bits wide. This means an instance of the first (bit-field) struct may have a different size on different platforms. Bit-field access is also more expensive, since it involves extra shift and mask operations.

For example, on a platform where unsigned int is 32 bits wide, the first struct is 32 bits wide while the second struct is 16 bits.
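
As a quick check, here is a minimal sketch that prints both sizes. The struct names are mine, and the exact numbers assume a platform where unsigned int is 32 bits (typical on x86/x86-64):

#include <cstdint>
#include <iostream>

struct FreqBitfield {              // bit-field version from the question
    unsigned int FREQLOHI : 16;    // allocated inside an unsigned int
};

struct FreqFixed {                 // fixed-width version from the question
    uint16_t FREQLOHI;             // exactly 16 bits everywhere
};

int main() {
    std::cout << sizeof(FreqBitfield) << '\n'    // typically prints 4
              << sizeof(FreqFixed)    << '\n';   // typically prints 2
}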

When it comes to portability, bit-fields are in a much cleaner situation, since they are an old C language feature that was included in C++ when it was first standardized in 1998 (ISO/IEC 14882:1998). stdint.h, on the other hand, was only added to C in 1999 (ISO/IEC 9899:1999) and hence is not part of C++98. The corresponding header cstdint was later incorporated into C++ TR1, but it placed all the identifiers in the std::tr1 namespace; Boost also offers an equivalent header. The C++11 standard (ISO/IEC 14882:2011, published in September 2011) finally includes cstdint and puts all the identifiers into the std namespace. Even so, cstdint is widely supported in practice, including on many pre-C++11 toolchains.
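
For reference, here is one way a project might pull in uint16_t across those eras; treat the Boost fallback as an assumption about what your toolchain provides:

#if __cplusplus >= 201103L
    #include <cstdint>           // C++11 and later: fixed-width types in namespace std
    using std::uint16_t;
#else
    #include <boost/cstdint.hpp> // pre-C++11 fallback (assumes Boost is available)
    using boost::uint16_t;
#endif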

OTHER TIPS

Compilers generally pack bitfields together in a single word, thus reducing the overall size of your struct. This packing comes at the expense of slower access to the bitfield members. For example:

struct Bitfields
{
    unsigned int eight_bit : 8;
    unsigned int sixteen_bit : 16;
    unsigned int eight_bit_2 : 8;
};

Might be packed as

0            8                        24
-----------------------------------------------------
| eight_bit  | sixteen_bit            | eight_bit_2 |
-----------------------------------------------------

Each time you access sixteen_bit, the generated code incurs a shift and a bitwise AND operation.
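
Roughly speaking, reading sixteen_bit from that packed word costs something like the following. This is a sketch only: the actual bit-field layout is implementation-defined, and this assumes the bit positions shown in the diagram above.

#include <cstdint>

// Hypothetical hand-written equivalent of 'bf.sixteen_bit' for the
// layout diagrammed above: shift past eight_bit, then mask to 16 bits.
uint16_t read_sixteen_bit(uint32_t packed_word) {
    return static_cast<uint16_t>((packed_word >> 8) & 0xFFFFu);
}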

On the other hand, if you do

#include <cstdint>   // for uint8_t / uint16_t

struct NonBitfields
{
    uint8_t eight_bit;
    uint16_t sixteen_bit;
    uint8_t eight_bit_2;
};

then the compiler generally aligns each member to its natural alignment boundary and lays the struct out something like this:

0            8           16           24
-----------------------------------------------------
| eight_bit  |            | sixteen_bit             |
-----------------------------------------------------
| eight_bit_2|                                      |
-----------------------------------------------------

This wastes more space than the bit-field version, but the members can be accessed faster, without any bit-shifting or masking.
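
By contrast, reading the same field from the NonBitfields struct above compiles down to a plain aligned load:

#include <cstdint>

uint16_t read_sixteen_bit(const NonBitfields& s) {
    return s.sixteen_bit;   // direct aligned load, no shift or mask needed
}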


Here are some other differences:

  • You can't apply sizeof to a bitfield member.
  • You can't pass a bitfield member by reference.
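
Both restrictions show up as compile errors, as in this small sketch (the demo function is made up for illustration):

struct FREQ {
    unsigned int FREQLOHI : 16;
};

void demo(FREQ& f) {
    // sizeof(f.FREQLOHI);            // error: sizeof cannot be applied to a bit-field
    // unsigned int& r = f.FREQLOHI;  // error: cannot bind a non-const reference to a bit-field
    f.FREQLOHI = 0x1234;              // ordinary assignment is fine
}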

In terms of portability, both options should work on any standards-compliant compiler. If you mean binary portability between different platforms when writing the struct out to a file or socket, then all bets are off in either case: member padding, endianness, and the order in which bit-fields are allocated are all implementation-defined.


In terms of preference, I would opt for uint16_t instead of bitfields, unless there is a good reason for packing the fields together to save space. If I have many bools inside a struct, I'll generally use bitfields to compress those boolean flags together into the same word.
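
For example, a sketch of that flag-packing pattern (the field names are invented for illustration):

struct StatusFlags {
    unsigned int is_ready   : 1;
    unsigned int is_dirty   : 1;
    unsigned int is_visible : 1;
    unsigned int has_error  : 1;
    // four flags share a single word instead of occupying four separate bools
};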

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow