I recently devised a new algorithm to add two Gray codes. Unfortunately, it is still slower than the naive double-conversion solution and is also slower than Harold Lucal's algorithm (the one in the accepted answer). But any new solution to a problem is welcome, right?
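For reference, the naive double-conversion approach mentioned above can be sketched like this (`to_gray`, `from_gray` and `add_gray_naive` are illustrative names of my own, not part of the algorithm below; the conversions are the standard `n ^ (n >> 1)` encoding and its XOR prefix-scan inverse):

```c
#include <limits.h>

// Standard binary -> Gray conversion
unsigned to_gray(unsigned n)
{
    return n ^ (n >> 1u);
}

// Standard Gray -> binary conversion (XOR prefix scan)
unsigned from_gray(unsigned g)
{
    for (unsigned shift = 1u; shift < CHAR_BIT * sizeof(unsigned); shift <<= 1u) {
        g ^= g >> shift;
    }
    return g;
}

// Naive addition: decode both operands, add, re-encode
unsigned add_gray_naive(unsigned lhs, unsigned rhs)
{
    return to_gray(from_gray(lhs) + from_gray(rhs));
}
```

And here is the new algorithm: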
// lhs and rhs are encoded as Gray codes
unsigned add_gray(unsigned lhs, unsigned rhs)
{
    // Highest power of 2 in lhs and rhs
    unsigned lhs_base = hyperfloor(lhs);
    unsigned rhs_base = hyperfloor(rhs);

    if (lhs_base == rhs_base) {
        // If lhs and rhs are equal, return lhs * 2
        if (lhs == rhs) {
            return (lhs << 1u) ^ __builtin_parity(lhs);
        }
        // Else return base*2 + (lhs - base) + (rhs - base)
        return (lhs_base << 1u) ^ add_gray(lhs_base ^ lhs, lhs_base ^ rhs);
    }

    // It's easier to operate from the greatest value
    if (lhs_base < rhs_base) {
        swap(&lhs, &rhs);
        swap(&lhs_base, &rhs_base);
    }

    // Compute lhs + rhs
    if (lhs == lhs_base) {
        return lhs ^ rhs;
    }

    // Compute (lhs - base) + rhs
    unsigned tmp = add_gray(lhs ^ lhs_base, rhs);
    if (hyperfloor(tmp) < lhs_base) {
        // Compute base + (lhs - base) + rhs
        return lhs_base ^ tmp;
    }

    // Here, hyperfloor(lhs) == hyperfloor(tmp)
    // Compute hyperfloor(lhs) * 2 + ((lhs - hyperfloor(lhs)) + rhs) - hyperfloor(lhs)
    return (lhs_base << 1u) ^ (lhs_base ^ tmp);
}
The algorithm relies on the following utility functions to work correctly:
// Swap two values
void swap(unsigned* a, unsigned* b)
{
    unsigned temp = *a;
    *a = *b;
    *b = temp;
}
#include <limits.h> // for CHAR_BIT

// Isolate the most significant bit
unsigned isomsb(unsigned x)
{
    for (unsigned i = 1u ; i <= CHAR_BIT * sizeof(unsigned) / 2u ; i <<= 1u) {
        x |= x >> i;
    }
    return x & ~(x >> 1u);
}
// Return the greatest power of 2 not higher than x,
// where both x and the power of 2 are encoded in
// Gray code
unsigned hyperfloor(unsigned x)
{
    unsigned msb = isomsb(x);
    return msb | (msb >> 1u);
}
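To make `hyperfloor` concrete: the Gray code of a power of 2 has the pattern `11` followed by zeroes (for instance, the Gray code of 8 is `1100`, i.e. 12), so `isomsb` finds the top set bit and `hyperfloor` adds the bit right below it. Here is a quick self-contained check (it repeats the two functions above so that the snippet compiles on its own):

```c
#include <limits.h>

// Isolate the most significant bit (same as above)
unsigned isomsb(unsigned x)
{
    for (unsigned i = 1u ; i <= CHAR_BIT * sizeof(unsigned) / 2u ; i <<= 1u) {
        x |= x >> i;
    }
    return x & ~(x >> 1u);
}

// Gray-code hyperfloor (same as above)
unsigned hyperfloor(unsigned x)
{
    unsigned msb = isomsb(x);
    return msb | (msb >> 1u);
}
```

For example, the Gray codes of 8 through 15 are 12, 13, 15, 14, 10, 11, 9, 8, and `hyperfloor` maps every one of them to 12, the Gray code of 8.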
So, how does it work?
I have to admit, it's quite a wall of code for something as "simple" as an addition. It is mostly based on observations about bit patterns in Gray codes; that is, I didn't prove anything formally, but I have yet to find a case where the algorithm doesn't work (overflow aside: the algorithm does not handle overflow at all). Here are the main observations used to construct it; assume that every value below is a Gray code:
- 2 * n = (n << 1) ⊕ parity(n)
- If a is a power of 2 and a > b, then a ⊕ b = a + b
- Consequently, if a is a power of 2 and a < b, then a ⊕ b = a - b. This only works if b < 2 * a, though.
- If a and b have the same hyperfloor but are not equal, then a + b = (hyperfloor(a) << 1) ⊕ ((hyperfloor(a) ⊕ a) + (hyperfloor(b) ⊕ b)).
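These observations are easy to check numerically. Here is a small sketch that tests the first three against the standard `n ^ (n >> 1)` conversion (`to_gray` and the `check_*` helpers are mine, written only for verification; the fourth observation could be checked the same way):

```c
#include <limits.h>

// Standard binary -> Gray conversion, used as the reference
unsigned to_gray(unsigned n)
{
    return n ^ (n >> 1u);
}

// Parity (number of set bits modulo 2), computed by XOR folding
unsigned parity(unsigned x)
{
    for (unsigned shift = CHAR_BIT * sizeof(unsigned) / 2u; shift; shift >>= 1u) {
        x ^= x >> shift;
    }
    return x & 1u;
}

// Observation 1: in Gray code, 2 * n == (n << 1) ^ parity(n)
int check_double(unsigned n)
{
    unsigned g = to_gray(n);
    return to_gray(2u * n) == ((g << 1u) ^ parity(g));
}

// Observation 2: if a == 2^k and a > b, then a ^ b == a + b (in Gray code)
int check_add_pow2(unsigned k, unsigned b)
{
    unsigned a = 1u << k;
    if (b >= a) return 1; // precondition not met, nothing to check
    return (to_gray(a) ^ to_gray(b)) == to_gray(a + b);
}

// Observation 3: if a == 2^k and a < b < 2 * a, then a ^ b == b - a (in Gray code)
int check_sub_pow2(unsigned k, unsigned b)
{
    unsigned a = 1u << k;
    if (!(a < b && b < 2u * a)) return 1; // precondition not met
    return (to_gray(a) ^ to_gray(b)) == to_gray(b - a);
}
```

Looping these checks over all small values of k and b never finds a counter-example on my end, but that is still testing, not a proof.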
Basically, this means that we know how to multiply by 2, how to add a power of 2 to a smaller Gray code, and how to subtract a power of 2 from a Gray code that is greater than that power of 2 but smaller than the next power of 2. Everything else consists of tricks so that we can always reason in terms of equal values or powers of 2.
If you want more details/information, you can also check this Q&A on Code Review, which proposes a modern C++ implementation of the algorithm as well as some optimizations (as a bonus, there are some nice MathJax equations that we can't have here :D).