Question

The ISO C standard allows three encoding methods for signed integers: two's complement, one's complement and sign/magnitude.

What's an efficient or good way to detect the encoding at runtime (or some other time if there's a better solution)? I want to know this so I can optimise a bignum library for the different possibilities.

I plan on calculating this and storing it in a variable each time the program runs so it doesn't have to be blindingly fast - I'm assuming the encoding won't change during the program run :-)

Solution

You just have to check the low-order bits of the constant -1 with something like -1 & 3. This evaluates to

  1 for sign and magnitude,
  2 for one's complement and
  3 for two's complement.
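For instance, a minimal runtime sketch of this check (the enum and function names are my own, not from the answer):

    #include <stdio.h>

    enum encoding { SIGN_MAGNITUDE = 1, ONES_COMPLEMENT = 2, TWOS_COMPLEMENT = 3 };

    static enum encoding detect_encoding(void)
    {
        /* The bit pattern of -1 differs between the three representations,
         * so its two low-order bits identify the encoding. */
        return (enum encoding)(-1 & 3);
    }

    int main(void)
    {
        switch (detect_encoding()) {
        case SIGN_MAGNITUDE:  puts("sign/magnitude");   break;
        case ONES_COMPLEMENT: puts("one's complement"); break;
        case TWOS_COMPLEMENT: puts("two's complement"); break;
        }
        return 0;
    }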

It should even be possible to do this in a preprocessor expression inside #if/#else constructs.
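A rough sketch of the preprocessor variant (the macro names are mine, and this assumes the preprocessor's integer arithmetic matches the target's representation):

    /* Choose a code path at translation time based on the representation of -1. */
    #if (-1 & 3) == 3
    #  define BIGNUM_TWOS_COMPLEMENT 1
    #elif (-1 & 3) == 2
    #  define BIGNUM_ONES_COMPLEMENT 1
    #elif (-1 & 3) == 1
    #  define BIGNUM_SIGN_MAGNITUDE 1
    #else
    #  error "could not determine the signed integer encoding"
    #endif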

OTHER TIPS

Detecting one's complement should be pretty simple -- for a positive x, something like if (-x == ~x). Detecting two's complement should be just about as easy: if (-x == ~x + 1). If it's neither of those, then it must be sign/magnitude.
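Roughly like this, with a positive non-zero x (a sketch of the tests above, not something I've run on a non-two's-complement machine):

    #include <stdio.h>

    int main(void)
    {
        int x = 1;  /* any positive value works for these identities */

        if (-x == ~x)
            puts("one's complement");
        else if (-x == ~x + 1)
            puts("two's complement");
        else
            puts("sign/magnitude");
        return 0;
    }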

Why not do it at compile time? You could have the build scripts/makefile compile a test program if need be, but then use the preprocessor to do conditional compilation. This also means performance is much less important, because it only runs once per compile, rather than once per run.
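For example, the build could compile and run a small probe once and capture its output as a compiler flag (the probe below and the flag names are hypothetical, not from the answer):

    #include <stdio.h>

    /* Probe program: the build script compiles and runs this once, then
     * passes the printed flag back to the compiler for the real build. */
    int main(void)
    {
        switch (-1 & 3) {
        case 1: puts("-DSIGNED_ENCODING_SIGN_MAGNITUDE"); break;
        case 2: puts("-DSIGNED_ENCODING_ONES_COMPLEMENT"); break;
        case 3: puts("-DSIGNED_ENCODING_TWOS_COMPLEMENT"); break;
        }
        return 0;
    }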

Get a pointer to an int that holds a distinctive bit pattern. Cast it to a pointer to unsigned int and then examine the bit values.

Doing this with a couple of carefully chosen values should do what you want.
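Something along these lines, perhaps (a sketch; signed and unsigned variants of the same type may alias each other, so the cast is permitted):

    #include <stdio.h>

    int main(void)
    {
        int value = -1;                              /* a distinctive bit pattern */
        unsigned int bits = *(unsigned int *)&value; /* reinterpret the bits */

        /* Again, the two low-order bits distinguish the three encodings. */
        printf("low-order bits of -1: %u\n", bits & 3u);
        return 0;
    }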

I guess you'd store a negative number as an int into a char array large enough to hold it and compare the array with the various representations to find out.

But um... unsigned integers don't have a sign, do they?
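In code, that idea might look roughly like this (a sketch using memcpy into an unsigned char buffer; the printed bytes would then be compared against the patterns expected for each encoding):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int value = -1;
        unsigned char bytes[sizeof value];

        /* Copy the object representation so each byte can be compared
         * against the pattern expected for each encoding. */
        memcpy(bytes, &value, sizeof value);

        for (size_t i = 0; i < sizeof bytes; ++i)
            printf("%02x ", bytes[i]);
        putchar('\n');
        return 0;
    }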

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow