Question

In the Google Protocol Buffers encoding overview, they introduce something called "ZigZag encoding", which maps signed numbers of small magnitude to unsigned numbers of small magnitude.

For example

Encoded => Plain
0 => 0
1 => -1
2 => 1
3 => -2
4 => 2
5 => -3
6 => 3

And so on. The encoding function they give for this is rather clever:

(n << 1) ^ (n >> 31) // for a 32-bit integer

I understand how this works; however, I cannot for the life of me figure out how to reverse it and decode back into signed 32-bit integers.
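
For reference, here is a quick sketch of the encoding direction. It assumes a two's-complement target where right-shifting a signed int is an arithmetic shift (true of virtually all mainstream compilers), and it casts before the left shift because shifting a negative signed int left is undefined behavior in C:

#include <stdio.h>

int main(void)
{
  int n;

  for (n = -3; n <= 3; n++) {
    /* n >> 31 smears the sign bit across the word: all 1s for
       negative n, all 0s otherwise (on arithmetic-shift targets). */
    unsigned int encoded = ((unsigned int)n << 1) ^ (unsigned int)(n >> 31);
    printf("%d => %u\n", n, encoded);
  }

  return 0;
}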


Solution

Try this one:

(n >> 1) ^ (-(n & 1))

Edit:

I'm posting some sample code for verification:

#include <stdio.h>

int main()
{
  unsigned int n;
  int r;

  for(n = 0; n < 10; n++) {
    /* n >> 1 removes the sign flag from the LSB; -(n & 1) is all 1s
       for odd n (negative originals), so the XOR undoes the bit-flip. */
    r = (n >> 1) ^ (-(n & 1));
    printf("%u => %d\n", n, r);
  }

  return 0;
}

I get the following results:

0 => 0
1 => -1
2 => 1
3 => -2
4 => 2
5 => -3
6 => 3
7 => -4
8 => 4
9 => -5

OTHER TIPS

How about

(n >> 1) - (n & 1) * n
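
As a quick sanity check, this subtraction form can be compared against the accepted XOR form. Both wrap in unsigned arithmetic, and converting the result back to int assumes a two's-complement platform:

#include <assert.h>
#include <stdio.h>

int main(void)
{
  unsigned int n;

  for (n = 0; n < 16; n++) {
    /* Both expressions produce the same unsigned bit pattern,
       which reinterprets as the decoded signed value. */
    int a = (int)((n >> 1) ^ (-(n & 1)));
    int b = (int)((n >> 1) - (n & 1) * n);
    assert(a == b);
    printf("%u => %d\n", n, b);
  }

  return 0;
}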

Here's yet another way of doing the same thing, just for explanation purposes (you should obviously use 3lectrologos' one-liner).

You just have to notice that you XOR with a number that is either all 1s (equivalent to a bitwise NOT) or all 0s (equivalent to doing nothing). That's what (-(n & 1)) yields, and it's what Google's "arithmetic shift" remark explains.

int zigzag_to_signed(unsigned int zigzag)
{
    int abs = (int) (zigzag >> 1);

    if (zigzag % 2)
        return ~abs;
    else
        return abs;
}

unsigned int signed_to_zigzag(int value) /* "signed" is a C keyword, so the parameter needs another name */
{
    unsigned int abs = (unsigned int) value << 1;

    if (value < 0)
        return ~abs;
    else
        return abs;
}

So, in order to have lots of 0s in the most significant positions, zigzag encoding uses the LSB as the sign bit and the remaining bits as the absolute value (exactly the absolute value for non-negative integers, and the absolute value minus 1 for negative numbers, due to two's-complement representation).
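
As a usage sketch (assuming the two functions above are defined in the same file), a round trip through both helpers should return the original value:

#include <stdio.h>

int main(void)
{
    int v;

    /* Encoding then decoding should be the identity. */
    for (v = -5; v <= 5; v++) {
        unsigned int z = signed_to_zigzag(v);
        printf("%d => %u => %d\n", v, z, zigzag_to_signed(z));
    }

    return 0;
}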

After fiddling with the accepted answer proposed by 3lectrologos, I couldn't get it to work when starting with unsigned longs (in C# this is a compiler error, since unary minus is not defined for unsigned types). I came up with something similar instead:

( value >> 1 ) ^ ( ~( value & 1 ) + 1 )

This works great for any language that represents negative numbers in 2's complement (e.g. .NET).
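
To see why, note that ~x + 1 is the two's-complement negation of x, so this variant is bit-for-bit identical to the accepted one-liner. A quick check in C (used here for consistency with the rest of the page):

#include <assert.h>

int main(void)
{
  unsigned int n;

  /* ~(n & 1) + 1 and -(n & 1) produce the same bit pattern,
     so both decoders agree on every input. */
  for (n = 0; n < 16; n++)
    assert(((n >> 1) ^ (~(n & 1) + 1)) == ((n >> 1) ^ (-(n & 1))));

  return 0;
}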

I have found a solution; unfortunately, it's not the one-line beauty I was hoping for:

uint signMask = u << 31;          // move the sign flag (LSB) to the top bit
int iSign = *((Int32*)&signMask); // reinterpret as signed
iSign >>= 31;                     // arithmetic shift: all 1s or all 0s
signMask = *((UInt32*)&iSign);    // back to unsigned

UInt32 a = (u >> 1) ^ signMask;   // flip the bits iff the value was negative
return *((Int32*)&a);

I'm sure there are some super-efficient bitwise operations that do this faster, but the function is straightforward. Here's a Python implementation of the encoding direction (signed to zigzag):

def encode(n):
  # Maps a signed value to its unsigned zigzag encoding.
  if n < 0:
    return (2 * abs(n)) - 1
  else:
    return 2 * n

>>> [encode(n) for n in [0, -1, 1, -2, 2, -3, 3, -4, 4]]
[0, 1, 2, 3, 4, 5, 6, 7, 8]
Licensed under: CC-BY-SA with attribution