Question

I'm going through a bunch of C++ interview questions to make sure there's nothing obvious that I don't know. So far I haven't found anything I didn't already know, except this:

long value;
//some stuff
value &= 0xFFFF;

The question is "what's wrong with this code?", and it hints that the answer has something to do with target architectures.

Unless the answer is just "value isn't initialized", I can't see any problem. As far as I can tell, it just keeps the two least significant bytes of the value, and long is guaranteed to be at least 2 bytes, so there's no problem there.

Could it possibly be that long might only be 2 bytes on the target architecture, and you might be losing the sign bit? Or perhaps that the 0xFFFF is an int and int is only 2 bytes?

Thanks in advance.

Solution

The problem with this code is that it performs a bitwise operation on a signed value. The result of such an operation on a negative value varies greatly across integer representations.

For example, consider the following program:

#include <iostream>

int main(void)
{
    long value;
    value = -1; // Some stuff
    value &= 0xffff;
    std::cout << "Value = " << value << std::endl;
}

On a two's-complement architecture the result is:

Value = 65535

On a one's-complement architecture the result is:

Value = 65534

On a sign-and-magnitude architecture the result is:

Value = 1
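
If you want the mask to behave the same way on all three representations, one portable approach is to do the masking in an unsigned type, where the conversion from a negative value is defined as modulo-2^N arithmetic. A minimal sketch of that idea (the unsigned detour and the variable name uvalue are mine, not part of the original snippet):

#include <iostream>

int main()
{
    long value = -1; // Some stuff

    // Converting to unsigned long is defined as value mod 2^N,
    // so -1 becomes all-one bits on every representation.
    unsigned long uvalue = static_cast<unsigned long>(value);
    uvalue &= 0xffff;

    // 65535 fits in any long, so the conversion back is safe.
    value = static_cast<long>(uvalue);

    std::cout << "Value = " << value << std::endl;
}

This prints Value = 65535 regardless of how the machine represents negative numbers.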

OTHER TIPS

It's hard to know what the interviewer expected you to say. We sort of just have to guess.

My guess is that on some architectures, 0xFFFF will be a signed 16-bit value, while long is a signed 32-bit value. When the constant is widened so that it can be used to mask the long value, it will be sign extended and become 0xFFFFFFFFL, which isn't what you intended at all.

Addendum: The code as written works correctly on all three of the compilers I currently use, so this is indeed a guessing game of trying to figure out what the interviewer intended. A properly standards-compliant 16-bit compiler would also generate correct code, so we are left guessing whether there is something we missed, whether the example is not in fact broken, or whether the interviewer once used a 16-bit compiler that treated 0xFFFF as a signed quantity when forced to extend it to a long. It would be interesting to ask him.
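
To make that sign-extension scenario concrete, here is a sketch that uses a fixed-width 16-bit type to stand in for a compiler treating 0xFFFF as a signed 16-bit quantity (the int16_t detour is purely illustrative; it does not appear in the original code):

#include <cstdint>
#include <iostream>

int main()
{
    long value = 0x12345678L;

    // Stand-in for a compiler that treats 0xFFFF as signed 16-bit:
    // stored in an int16_t, the value wraps to -1 (two's complement).
    std::int16_t mask = 0xFFFF;

    // When mask is widened to long it is sign extended to all ones,
    // so the AND no longer masks off anything at all.
    value &= mask;

    std::cout << std::hex << "Value = 0x" << value << std::endl;
}

Instead of the intended 0x5678, this prints Value = 0x12345678, because the sign-extended mask is all ones.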

May I say, there is nothing wrong with this code unless the context or the intention of the surrounding code is known!

This might be a shot in the dark, but assuming that long vs. int isn't the issue (others have posted answers covering just that) and that 0xFFFF spans the whole width of the type, wouldn't the AND just leave value unchanged? Isn't the bit manipulation redundant in that case?

The only other thing I can see is that the person asking the question wanted you to realize that the upper bits of the long are simply discarded by masking with 0xFFFF.

To me this comes across as a bit of a wacky question, at least as it's presented here; or maybe it's just so obvious that we are all overthinking it :)

I suspect we are all thinking too hard.

What's wrong with this code is that value is not initialized. The question is: do you realize that &= (or +=, -=, /=, etc.) is meaningless when used on an uninitialized value?

If value is initialized, then the behavior is well defined: the least significant 16 bits are preserved and the rest are zeroed out.
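
As a sketch of that distinction (the variable names bad and good are mine):

#include <iostream>

int main()
{
    long bad;          // indeterminate value: never written
    // bad &= 0xFFFF;  // undefined behavior: &= must read bad first

    long good = 42;    // initialized, so the operation is well defined
    good &= 0xFFFF;    // keeps the low 16 bits; good is still 42

    std::cout << "good = " << good << std::endl;
}

The commented-out line is the interview snippet's problem in miniature: the compound assignment has to read the old value before it can mask it.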

Sounds like big-endian vs. little-endian to me: http://en.wikipedia.org/wiki/Endianness

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow