Problem

I wrote a generic function to convert a binary reflected Gray code to standard binary. I used an algorithm I found on this page. Here is the aforementioned algorithm:

unsigned short grayToBinary(unsigned short num)
{
        unsigned short temp = num ^ (num>>8);
        temp ^= (temp>>4);
        temp ^= (temp>>2);
        temp ^= (temp>>1);
        return temp;
}
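
For reference, the 4-bit Gray code 1101 (decimal 13) should decode to binary 1001 (decimal 9), which a minimal check like the following can confirm:

#include <cassert>

int main()
{
    // The 4-bit Gray code 1101 (13) decodes to the binary value 1001 (9).
    assert(grayToBinary(13) == 9);
    return 0;
}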

Then I modified the code so that it would work for any standard unsigned type. Here is what I wrote:

template<typename Uint>
Uint grayToBinary(Uint value)
{
    for (Uint mask = sizeof(Uint)*4 ; mask ; mask >>= 1)
    {
        value ^= value >> mask;
    }
    return value;
}
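
A quick way to sanity-check the template is to compare a narrow instantiation against a wide one over every 16-bit input; a minimal sketch of such a check, assuming the template above is in scope:

#include <cstdint>
#include <iostream>

int main()
{
    // Decode every 16-bit Gray code with a 16-bit and a 64-bit instantiation;
    // the results should agree, since the extra high bits are all zero.
    for (std::uint32_t value = 0; value <= 0xFFFFu; ++value)
    {
        auto narrow = grayToBinary(static_cast<std::uint16_t>(value));
        auto wide   = grayToBinary(static_cast<std::uint64_t>(value));
        if (narrow != wide)
        {
            std::cout << "mismatch at " << value << "\n";
            return 1;
        }
    }
    std::cout << "all 16-bit inputs agree\n";
    return 0;
}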

The algorithm seems to work fine for every standard unsigned type. However, when writing it, I instinctively used sizeof(Uint)*4, since it made sense that the loop's starting mask would depend on the size of the type, but the truth is that I have no idea what sizeof(Uint)*4 actually represents. For now, it is a magic number that happens to work, but I am unable to explain why it works with *4 and not with any other coefficient.

Does anybody know what this magic number actually corresponds to?

Solution

4 happens to be 8 / 2 [citation needed], or CHAR_BIT / 2.

Your Gray code decoding algorithm starts by taking the left half of the given integer type and shifting it onto the right half, which is a shift of sizeof(type) * (CHAR_BIT / 2) bits to the right; that is exactly the value you are seeing.
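
To make that concrete: for a two-byte unsigned short on a platform with 8-bit bytes (an assumption, but the common case), the half-width shift works out to 8 bits:

#include <climits>

// Worked example, assuming CHAR_BIT == 8 and sizeof(unsigned short) == 2:
//   sizeof(unsigned short) * 4              == 2 * 4       == 8
//   sizeof(unsigned short) * (CHAR_BIT / 2) == 2 * (8 / 2) == 8
// i.e. half of the 16 bits, which is the first shift performed by the
// hand-written grayToBinary above.
static_assert(sizeof(unsigned short) * (CHAR_BIT / 2) == 8,
              "half-width shift for a two-byte type with 8-bit bytes");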

As pointed out in the comments, std::numeric_limits<type>::digits / 2 would be the more idiomatic solution for C++.
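
A sketch of the template with that change (the loop body stays the same; only the initial mask expression differs):

#include <limits>

template<typename Uint>
Uint grayToBinary(Uint value)
{
    // std::numeric_limits<Uint>::digits is the number of value bits in Uint,
    // so digits / 2 expresses the half-width shift without assuming that a
    // byte has 8 bits.
    for (Uint mask = std::numeric_limits<Uint>::digits / 2; mask; mask >>= 1)
    {
        value ^= value >> mask;
    }
    return value;
}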
