Question

I know and understand the result.

For example:

    7 (decimal) = 00000111 (binary)
    7 >> 2      = 00000001 (binary)
    00000001 (binary) is the same as 7 / 4 = 1
    So 7 >> 2 = 7 / 4

But I'd like to know how this logic was created.
Can anyone elaborate on this logic?
(Maybe it just popped up in a genius's head?)

And are there any other similar tricks like this?


Solution

It didn't just "pop up" in a genius's head. Right-shifting a binary number divides it by 2, and left-shifting multiplies it by 2. This is because 10 in binary is 2. Multiplying a number by 10 (in any base: binary, decimal, or hexadecimal) appends a 0 digit to it, which is effectively a left shift. Similarly, dividing by 10 (i.e. by 2 in binary) removes the last digit, which is effectively a right shift. That is how the logic really works.

There is plenty of such bit-twiddlery (a word I invented a minute ago) in the computer world.

Here is one for starters: http://graphics.stanford.edu/~seander/bithacks.html

And this is my favorite book on bit-twiddlery: http://www.amazon.com/Hackers-Delight-Edition-Henry-Warren/dp/0321842685/ref=dp_ob_image_bk

OTHER TIPS

It is actually defined that way in the C standard.

From section 6.5.7:

The result of E1 >> E2 is E1 right-shifted E2 bit positions. [...] the value of the result is the integral part of the quotient of E1 / 2^E2

On most architectures, x >> 2 is only equal to x / 4 for non-negative numbers. For negative numbers, it usually rounds in the opposite direction.

Compilers have always been able to optimize x / 4 into x >> 2. This technique is called "strength reduction", and even the oldest compilers could do it. So there is no benefit to writing x / 4 as x >> 2.

Elaborating on Aniket Inge's answer:

Number: 307₁₀ = 100110011₂

How multiplying by 10 works in the decimal system:

10 * (307₁₀)

= 10 * (3*10^2 + 7*10^0)

= 3*10^(2+1) + 7*10^(0+1)

= 3*10^3 + 7*10^1

= 3070₁₀

= 307₁₀ << 1

Similarly, multiplying by 2 in binary:

2 * (100110011₂)

= 2 * (1*2^8 + 1*2^5 + 1*2^4 + 1*2^1 + 1*2^0)

= 1*2^(8+1) + 1*2^(5+1) + 1*2^(4+1) + 1*2^(1+1) + 1*2^(0+1)

= 1*2^9 + 1*2^6 + 1*2^5 + 1*2^2 + 1*2^1

= 1001100110₂

= 100110011₂ << 1

I think you are confused by the "2" in:

7 >> 2

and are thinking it should divide by 2.

The "2" here means shift the number ("7" in this case) "2" bit positions to the right.

Shifting a number "1" bit position to the right has the effect of dividing it by 2:

8 >> 1 = 4    // In binary: (00001000) >> 1 = (00000100)

and shifting a number "2" bit positions to the right has the effect of dividing it by 4:

8 >> 2 = 2    // In binary: (00001000) >> 2 = (00000010)

It's inherent in the binary number system used in computers.

A similar logic: left-shifting n times means multiplying by 2^n.

An easy way to see why it works is to look at the familiar decimal (base-ten) number system. 050 is fifty; shift it to the right and it becomes 005, five, which is equivalent to dividing it by 10. The same goes for shifting left: 050 becomes 500, five hundred, which is equivalent to multiplying it by 10.

All the other numeral systems work the same way.

They do that because shifting is more efficient than actual division. You're just moving all the digits to the right or left, which logically multiplies or divides by 2 per shift.

If you're wondering why 7/4 = 1, that's because the fractional part of the result (3/4) is truncated so that the result is an integer.

Just my two cents: I did not see any mention of the fact that shifting right does not always produce the same result as dividing by 2. Since right-shifting rounds toward negative infinity and integer division rounds toward zero, some values (like -1 in two's complement) will not behave as expected when divided.

It's because the >> and << operators shift the binary digits.

Binary value 1000 is the double  of binary value 0100
Binary value 0010 is the quarter of binary value 1000

You can call it an idea of a genius mind or just the need of the computer language.

To my understanding, a computer as a device never really divides or multiplies numbers; at bottom it only adds and shifts bits from here to there. You can write an algorithm that tells your computer to multiply or subtract, but when the logic reaches the actual processing, the result comes from shifting and adding bits.

You can simply think of it this way: to get the result of a number divided by 4, the computer right-shifts its bits by two places and returns the result:

7 in 8-bit binary = 00000111
Shift Right 2 places = 00000001 // (Which is for sure equal to Decimal 1)

Further examples:
//-- We can divide 9 by four by Right Shifting 2 places
9 in 8-bit binary = 00001001
Shift right 2 places: 00000010 // (Which is equal to 9/4 or Decimal 2)

A person with deep knowledge of assembly-language programming can explain it with more examples. If you want to know the actual reasoning behind all this, I suggest studying bit-level arithmetic and the assembly language of a computer.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow