Question

I was going through mit's opencourseware related to performance engineering.

The quickest method (requiring the fewest clock cycles) for finding the minimum of two numbers (say x and y) is stated as:

min = y ^ ((x ^ y) & -(x < y))

The output of the expression x < y can be 0 or 1 (assuming C is being used), which then becomes -0 or -1 after negation. I understand that XOR can be used to swap two numbers.

Questions:

1. How is -0 different from 0 and -1 in terms of binary?
2. How is that result used with the AND operator to get the minimum?

Thanks in advance.


Solution

-false = -0 = 0; in two's complement there is no separate -0, since negating zero just gives zero. -true = -1, which in two's complement is all ones: 11111111b for an 8-bit value (255 if read as unsigned), or 0xFFFFFFFF for a 32-bit int.
If x >= y (so x < y is false), the line becomes:

min = y ^ ((x ^ y) & -false) = y ^ ((x ^ y) & 0) = y ^ 0 = y

And if x < y, the line becomes:

min = y ^ ((x ^ y) & -true) = y ^ ((x ^ y) & 11111111b) = y ^ (x ^ y) = x

since XORing with all ones leaves x ^ y unchanged, and y ^ x ^ y = x.

Hence min holds the actual minimum in both cases.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow