Question

Here's the reason I ask:

uint32_t Color::hex(uint32_t a_hex, bool a_readAlphaBits /*= false*/) {
    A = (a_readAlphaBits ? ((a_hex >> 24) & 0xFF) / 255.0f : 1.0f);
    R = (((a_hex >> 16) & 0xFF) / 255.0f);
    G = (((a_hex >> 8) & 0xFF) / 255.0f);
    B = (((a_hex) & 0xFF) / 255.0f);
    return hex();
}

0xFF0000 represents pure red when read as RGB, but when read as RGBA it becomes 0x00FF0000, whose alpha byte is zero, making the color completely transparent.

Because I'd like to let the user enter either RGB or RGBA, and I know of no way to automatically distinguish 0x00000000 from 0x000000 at compile time or run time, I have to add a "readAlphaBits" flag: if set to true it reads the extra bits, otherwise alpha defaults to 1.0f.

Ideally I would like this to be automatically detected. Does anyone know of a way to determine how the literal was written?

I have an idea that involves differentiating based on a macro that goes

#define C_HEX(x) 0xFF##x

So that if someone enters C_HEX(000000) or C_HEX(00000000), the value is prefixed in such a way that with six digits the fully qualified literal is 32 bits wide, while with eight digits it is 40 bits wide. I haven't tried this, but even if it does distinguish the call at compile time I would prefer not to do something like that; I think a flag might be better in that case anyway. And if a user forgets to use the macro it would be rather lame, even if I can detect the omission from the width of the supplied value.
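
For what it's worth, the detection would presumably have to go by how wide the pasted value ends up, something along these lines (static_assert and constexpr here assume C++11, and hasAlphaDigits is just a placeholder name):

// Pasting 0xFF in front keeps a six-digit argument within 32 bits, while an
// eight-digit argument spills into bits 32-39, which is detectable by value.
#define C_HEX(x) 0xFF##x

constexpr bool hasAlphaDigits(unsigned long long prefixed) {
    return prefixed > 0xFFFFFFFFULL; // only possible with eight pasted digits
}

static_assert(!hasAlphaDigits(C_HEX(000000)), "six digits: no alpha byte");
static_assert(hasAlphaDigits(C_HEX(00000000)), "eight digits: alpha byte present");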

But I can't think of much else in terms of detection.

Halp!

EDIT:

The ideal calling code would look like this (I have a similar hex based constructor for Color):

  • Color(0xFF0000); //red rgba with alpha being 1.0
  • Color(0x88FF0000); //red rgba with alpha being .533333333

Solution

I have two suggestions, neither of which is perfect.

First is to modify your original function so that an alpha value of 00 results in the default alpha of 1.0:

uint32_t Color::hex(uint32_t a_hex, bool a_readAlphaBits /*= false*/) {
    A = a_readAlphaBits || (a_hex & 0xff000000) ? ((a_hex >> 24) & 0xFF) / 255.0f : 1.0f;
    R = ((a_hex >> 16) & 0xFF) / 255.0f;
    G = ((a_hex >> 8) & 0xFF) / 255.0f;
    B = ((a_hex) & 0xFF) / 255.0f;
    return hex();
}

This leaves you with having to specify the extra parameter if you want any fully transparent color.
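
Concretely, with that change the calls behave something like this (the variable is just for illustration, assuming a default-constructed Color):

Color c;
c.hex(0xFF0000);         // alpha byte is zero, so A falls back to 1.0f
c.hex(0x88FF0000);       // non-zero alpha byte, so A = 0x88 / 255.0f
c.hex(0x00FF0000, true); // a fully transparent color still needs the flag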

Now you can use a macro to specify both parameters, going by the number of digits.

#define C_HEX(x) 0x##x, (sizeof(#x)>7)
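
For example, assuming the Color constructor takes the same two parameters as hex() above, the macro expands as follows (sizeof(#x) counts the digits plus the terminating '\0'):

Color opaqueRed(C_HEX(FF0000));        // -> Color(0xFF0000, (sizeof("FF0000") > 7))   i.e. false
Color translucentRed(C_HEX(88FF0000)); // -> Color(0x88FF0000, (sizeof("88FF0000") > 7)) i.e. true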

OTHER TIPS

Without having to rely on C++11 or newer, you can simply read the color as a string and parse it. That is of course a little slower than the alternatives, but since it only has to happen on user input, speed is not a factor here. An advantage is that this is easier to understand and maintain than using macros and bit shifts.
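
A minimal sketch of that approach, assuming the input arrives as a C string such as "FF0000" or "88FF0000" (the function name and the lack of error handling are purely illustrative):

#include <cstdlib>
#include <cstring>

void parseHexColor(const char* text, float& A, float& R, float& G, float& B) {
    if (std::strncmp(text, "0x", 2) == 0 || std::strncmp(text, "0X", 2) == 0)
        text += 2; // skip an optional 0x prefix

    const std::size_t digits = std::strlen(text);
    const unsigned long value = std::strtoul(text, 0, 16);

    // The string length preserves exactly the information the integer literal
    // loses: more than six digits means the caller supplied an alpha byte.
    A = (digits > 6) ? ((value >> 24) & 0xFF) / 255.0f : 1.0f;
    R = ((value >> 16) & 0xFF) / 255.0f;
    G = ((value >> 8) & 0xFF) / 255.0f;
    B = (value & 0xFF) / 255.0f;
}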

With C++11 you can use a literal template operator:

template< char ... c >
rgba_type operator "" _rgb ();

or a raw literal operator, which receives the literal as a null-terminated string of source characters:

rgba_type operator "" _rgb ( const char * token );

But then you have to take the trouble of parsing the hexadecimal digits yourself, and if you want this to happen at compile time (as opposed to at runtime) it all has to be constexpr. That is less painful in the upcoming C++14, but parsing hex digits by hand is probably overkill.
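
For what it's worth, a runtime (non-constexpr) raw literal operator that just counts the digits could look roughly like this; the struct name and the assumption that the literal is always written with a 0x prefix are illustrative, not part of the answer:

#include <cstdlib>
#include <cstring>

struct hex_color { float A, R, G, B; }; // stands in for rgba_type above

hex_color operator "" _rgb(const char* token) { // token is the literal as typed, e.g. "0x88FF0000"
    const std::size_t digits = std::strlen(token) - 2; // assumes a 0x prefix
    const unsigned long value = std::strtoul(token, 0, 16);

    hex_color c;
    c.A = (digits > 6) ? ((value >> 24) & 0xFF) / 255.0f : 1.0f;
    c.R = ((value >> 16) & 0xFF) / 255.0f;
    c.G = ((value >> 8) & 0xFF) / 255.0f;
    c.B = (value & 0xFF) / 255.0f;
    return c;
}

// hex_color opaqueRed = 0xFF0000_rgb;        // six digits -> A == 1.0f
// hex_color translucentRed = 0x88FF0000_rgb; // eight digits -> A ~= 0.533f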

It would be much more reasonable to let the user specify ARGB with zero being opaque so they can omit the high-order bits. If that is the wrong format for your application, you can let a much simpler user-defined literal translate it for you:

constexpr rgba_type operator "" _rgb ( unsigned long long argb_opaque )
    { return ( ( ~ argb_opaque >> 24 ) & 0x000000FF )
           | ( ( argb_opaque << 8 ) & 0xFFFFFF00 ); }
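
Assuming rgba_type is an alias for a 32-bit unsigned integer, calling code under that zero-means-opaque convention would look something like:

auto opaqueRed = 0xFF0000_rgb;        // alpha bits omitted -> low byte comes out as 0xFF (opaque)
auto translucentRed = 0x77FF0000_rgb; // 0x77 in the high byte -> ~0x77 = 0x88 in the low byte
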
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow