How do you determine if there is an even number of 1s in a number's binary representation using C? [duplicate]

StackOverflow https://stackoverflow.com/questions/19373034

30-06-2022

Question

There are already questions about counting how many 1s there are in a number, but this question is about judging whether the number of 1s is even or odd.

Loop and conditional (including switch) statements are not allowed. Also, division, multiplication, and modulus operators should be avoided. To be more specific, we may assume that it's a 32-bit unsigned integer.

Actually I've got an implementation already, but I cannot work out the reason why it works. Any proof of its correctness or any new idea would be very helpful.

int even_ones(unsigned x)
{
    x ^= x>>16;   /* fold the top 16 bits onto the bottom 16 */
    x ^= x>>8;    /* fold the low 16 bits down to 8 */
    x ^= x>>4;    /* ... down to 4 */
    x ^= x>>2;    /* ... down to 2 */
    x ^= x>>1;    /* bit 0 is now the XOR of all 32 bits */

    return !(x & 1);   /* 1 if the count of 1 bits is even, else 0 */
}

Solution

I assume you know what the exclusive-or operation ^ does: if two bits have the same value, the result is 0; otherwise it is 1. So when I have two one-bit numbers, A and B, then A^B will be zero if A and B are both 0 or both 1. In other words, A^B is zero exactly when the total number of 1s in A and B is even.

Now let's do this two bits at a time: C and D are two-bit numbers. Here are the possible combinations:

C    D    C^D
00   00   00   even
01   00   01    odd
10   00   10    odd
11   00   11   even
00   01   01    odd
01   01   00   even
10   01   11   even
11   01   10    odd
00   10   10    odd
01   10   11   even
10   10   00   even
11   10   01    odd
00   11   11   even
01   11   10    odd
10   11   01    odd
11   11   00   even

As you can see, in every case the operation halves the number of bits under consideration, and the result has an odd number of 1 bits exactly when you started with an odd number (pairs of 1s cancel out, but all the other bits carry through unchanged).
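If you'd rather check this mechanically than read the table, here is a short exhaustive test (my own sketch, not part of the original answer) that verifies the property for every pair of 2-bit values:

#include <assert.h>

int main(void)
{
    /* For all 2-bit values C and D: the parity of 1s in C and D
       together equals the parity of 1s in C ^ D. */
    for (unsigned c = 0; c < 4; c++)
        for (unsigned d = 0; d < 4; d++) {
            unsigned ones_cd = (c & 1) + (c >> 1) + (d & 1) + (d >> 1);
            unsigned x = c ^ d;
            unsigned ones_x = (x & 1) + (x >> 1);
            assert((ones_cd & 1) == (ones_x & 1));
        }
    return 0;   /* reaching here means the table is consistent */
}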

Now it should be obvious why the same thing continues to be true when you start with larger numbers (4-bit, 8-bit, 16-bit). In essence, your algorithm starts with a 32-bit number and splits it into two 16-bit numbers. By getting rid of "double 1s", it cuts the number of bits in half; it then operates on the half that remains, and repeats until there is just a single bit left. By testing whether that bit is a one (odd) or a zero (even), you get your answer.

In case it's not clear, an operation like x ^= x>>16 actually shifts the top 16 bits into the bottom 16 and produces an exclusive OR there. It doesn't actually clear the top bits, so "a mess is left behind". But the algorithm ignores that mess in what follows. See the following (starting with 8 bits for simplicity):

x =      abcdefgh 
x >> 4 = 0000abcd
new x  = abcdijkl
x >> 2 = 00abcdij
new x  = abmnopqr
x >> 1 = 0abmnopq
new x  = astuvwxy

In this, the last digit y is the XOR of r and q, which in turn are the XOR of l,j and k,i; these in turn are the XOR of h,d, f,b, g,c, and e,a respectively. As you can see, you ended up with the XOR of all the bits; and as I explained above, that means either "all even" or "all odd", depending on whether the least significant bit is now a 1 or a 0.
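To watch the "mess" build up on a concrete value, here is a small trace harness (my own illustration; the function body is the one from the question) that prints x after each fold:

#include <stdio.h>

int even_ones_traced(unsigned x)
{
    printf("start : %08X\n", x);
    x ^= x >> 16; printf(">>16  : %08X\n", x);   /* upper bits become junk */
    x ^= x >> 8;  printf(">>8   : %08X\n", x);
    x ^= x >> 4;  printf(">>4   : %08X\n", x);
    x ^= x >> 2;  printf(">>2   : %08X\n", x);
    x ^= x >> 1;  printf(">>1   : %08X\n", x);   /* only bit 0 matters now */
    return !(x & 1);
}

int main(void)
{
    /* 0xB7 = 10110111 has six 1 bits, so the result should be 1 (even). */
    printf("even_ones(0xB7) = %d\n", even_ones_traced(0xB7));
    return 0;
}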

I hope this helped.

OTHER TIPS

Here are several variants to quickly compute the parity of a byte or a word. Exactly which method will be fastest depends on your CPU and how fast the different basic operations are relative to one another. So if this is somehow the bottleneck of your application, you should profile each one to find out which one runs best on your target machine.
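As one concrete illustration (assuming GCC or Clang; this is a compiler extension, not standard C), the compiler itself can often compute parity with an intrinsic that lowers to a single instruction on many targets:

/* __builtin_parity returns 1 when its argument has an odd number of
   1 bits (GCC/Clang builtin; an assumption about your toolchain). */
int even_ones_intrinsic(unsigned x)
{
    return !__builtin_parity(x);
}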

Your solution is very similar to the "compute parity in parallel" implementation, just without the clever optimization at the end. What's going on here is that at each step, you're XORing half of the bits together with the other half of the bits until you have just one bit left. The parity of a word is 0 if there are an even number of 1's and 1 if there are an odd number of 1's; or equivalently, the parity of a word is just the XOR of all of the bits in the word.
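For reference, that final optimization usually replaces the last two folds with the 16-bit constant 0x6996 used as a lookup table: bit n of 0x6996 is 1 exactly when the 4-bit value n has an odd number of 1 bits. A sketch of the combined version (my reconstruction, not code from the answer):

int even_ones_fast(unsigned x)
{
    x ^= x >> 16;
    x ^= x >> 8;
    x ^= x >> 4;                         /* parity of all 32 bits is now
                                            the parity of the low nibble */
    return !((0x6996 >> (x & 0xF)) & 1); /* look up the nibble's parity */
}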

Since the XOR operator is commutative and associative, we can reorder how all of the bits in the word are XORed together. So instead of calculating the XOR of all the bits by individually XORing each bit into the result, we XOR the higher half of the bits with the lower half of the bits, reducing the number of bits we care about by half; when there's one bit left, we're done.
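For contrast, the bit-at-a-time version that the folding replaces might look like this (it uses a loop, so it violates the question's constraints, but it makes the reordering argument concrete):

int even_ones_naive(unsigned x)
{
    unsigned parity = 0;
    for (int i = 0; i < 32; i++)
        parity ^= (x >> i) & 1;   /* XOR each bit into the result, one by one */
    return !parity;
}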

Note that this XORs the most significant 16 bits with the least significant 16 bits:

x ^= x>>16;

Then the series continues by XORing the second least significant 8 bits with the least significant 8 bits; note that the most significant 16 bits are now simply junk, and whatever happens there can be ignored:

x ^= x>>8;

And so on: we continue by XORing the second least significant 4 bits with the least significant 4 bits, then 2 bits with 2 bits, until only 1 bit is left. By now every bit except the least significant one is junk, and the final line just ANDs with 1 to extract that bit and negates it for the evenness test.

Perhaps, it's easier to follow if you write it this way:

int even_ones(unsigned x)
{
    unsigned a = (x ^ x>>16) & 0x0000FFFF;  /* the masks clear the junk bits */
    unsigned b = (a ^ a>>8)  & 0x000000FF;
    unsigned c = (b ^ b>>4)  & 0x0000000F;
    unsigned d = (c ^ c>>2)  & 0x00000003;
    unsigned e = (d ^ d>>1)  & 0x00000001;
    return !(e & 1);
}

Why does this work? Because XOR is bitwise addition without carry, i.e. addition modulo 2; so XORing all the bits together yields the count of 1 bits modulo 2, which is exactly the parity.
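A quick sanity check (a sketch, assuming the even_ones above is in scope) compares the folding result against an explicit bit count:

#include <assert.h>

int even_ones(unsigned x);   /* the function defined above */

int main(void)
{
    for (unsigned v = 0; v < (1u << 16); v++) {
        unsigned n = v, count = 0;
        while (n) { count += n & 1; n >>= 1; }  /* count the 1 bits */
        assert(even_ones(v) == !(count & 1));   /* the two must agree */
    }
    return 0;
}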

Hope this helps:

enum {
    EVEN,
    ODD
} even_odd;

/* num_of_ones was left undefined in the original answer; assuming a
   GCC/Clang build, the popcount builtin is one loop-free way to
   supply it. */
static unsigned num_of_ones(unsigned int num)
{
    return (unsigned)__builtin_popcount(num);
}

unsigned int check_even_odd_no_of_ones(unsigned int num)
{
    if (num_of_ones(num) & 1)
        return ODD;
    else
        return EVEN;
}

Thank you

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow