Question

I have some code that combines two hex values and stores the result in a new unsigned char. The code looks like the following:

unsigned char OldSw = 0x1D;
unsigned char NewSw = 0xF0;
unsigned char ChangedSw;

ChangedSw = (OldSw ^ ~NewSw) & ~OldSw;

So what I know is:

0x1D = 0001 1101

0xF0 = 1111 0000

I'm confused about what the ChangedSw line is doing. I know it will give the output 0x02, but I can't figure out how it gets there.

Solution

ChangedSw = (OldSw ^ ~NewSw) & ~OldSw;

It means "zero one part of OldSw and inverse other part". NewSw indicates what bits of OldSw to zero and what bits to inverse. Namely, 1's in NewSw indicate bits to be zeroed, 0's indicate bits to be inverted.

The operation is carried out in two steps.

Step 1. Invert bits.

(OldSw ^ ~NewSw):

  0001 1101    (OldSw = 0x1D)
^ 0000 1111    (~NewSw = ~0xF0 = 0x0F)
  ---------
  0001 0010    (0x12)

Notice that we inverted exactly the bits that were 0's in the original NewSw.
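
If you want to double-check this intermediate value yourself, here is a one-line sketch (the step1 name is just illustrative, not part of the original code; the cast keeps only the low 8 bits after C promotes the operand of ~ to int):

unsigned char step1 = OldSw ^ (unsigned char)~NewSw; /* 0x1D ^ 0x0F = 0x12 */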

Step 2. Zero the bits that were not inverted in the previous step.

& ~OldSw:

  0001 0010    (result of step 1)
& 1110 0010    (~OldSw = ~0x1D = 0xE2)
  ---------
  0000 0010    (0x02)

Notice that this leaves the inverted bits unchanged and zeroes all the rest.
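
Putting the two steps together, here is a small compilable sketch that prints the intermediate and final values (the step1 variable is an illustrative name, not part of the original code):

#include <stdio.h>

int main(void)
{
    unsigned char OldSw = 0x1D;
    unsigned char NewSw = 0xF0;
    unsigned char ChangedSw;

    /* Step 1: invert the bits of OldSw where NewSw has 0's */
    unsigned char step1 = OldSw ^ (unsigned char)~NewSw; /* 0x1D ^ 0x0F = 0x12 */

    /* Step 2: zero the bits that were 1 in OldSw */
    ChangedSw = step1 & (unsigned char)~OldSw;           /* 0x12 & 0xE2 = 0x02 */

    printf("step1     = 0x%02X\n", step1);     /* prints 0x12 */
    printf("ChangedSw = 0x%02X\n", ChangedSw); /* prints 0x02 */
    return 0;
}

As an aside, a quick truth-table check shows the whole expression simplifies to ~(OldSw | NewSw): any bit that is 1 in either input comes out 0, and any bit that is 0 in both comes out 1, which also gives 0x02 here.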

OTHER TIPS

The first part, (OldSw ^ ~NewSw), works out to 0x12, i.e. 0001 0010. So when it is ANDed with ~OldSw (1110 0010), the operation looks like this:

  0001 0010
& 1110 0010
  ---------
  0000 0010

So the output will be 0x02. The tilde operator (~) is the bitwise one's complement: it flips every bit of its operand.
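
One C detail worth knowing about ~ (a general language note, not something from the original answers): applying ~ to an unsigned char first promotes the value to int, so ~NewSw is really a negative int; the result only ends up as 8 bits because storing it back into an unsigned char truncates it. A quick sketch:

#include <stdio.h>

int main(void)
{
    unsigned char NewSw = 0xF0;
    printf("%d\n", ~NewSw);                    /* -241: ~ is applied after promotion to int */
    printf("0x%02X\n", (unsigned char)~NewSw); /* 0x0F: only the low 8 bits survive */
    return 0;
}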
