Problem

I am trying to convert 65529 from an unsigned int to a signed int. I tried doing a cast like this:

unsigned int x = 65529;
int y = (int) x;

But y is still returning 65529 when it should return -7. Why is that?


Solution

It seems like you are expecting int and unsigned int to be 16-bit integers. That's apparently not the case. Most likely, int is a 32-bit integer on your platform - which is large enough to avoid the wrap-around that you're expecting.

Note that there is no fully C-compliant way to do this because casting between signed/unsigned for values out of range is implementation-defined. But this will still work in most cases:

unsigned int x = 65529;
int y = (short) x;      //  If short is a 16-bit integer.

or alternatively:

unsigned int x = 65529;
int y = (int16_t) x;    //  This is defined in <stdint.h>

Other tips

I know it's an old question, but it's a good one, so how about this?

unsigned short int x = 65529U;
short int y = *(short int*)&x;   // reinterpret the same 16 bits as signed

printf("%d\n", y);

@Mysticial got it. A short is usually 16-bit and will illustrate the answer:

#include <stdio.h>

int main()
{
    unsigned int x = 65529;
    int y = (int) x;
    printf("%d\n", y);

    unsigned short z = 65529;
    short zz = (short)z;
    printf("%d\n", zz);
}

65529
-7
Press any key to continue . . .


A little more detail. It's all about how signed numbers are stored in memory. Do a search for two's complement notation for more detail, but here are the basics.

So let's look at 65529 decimal. It can be represented as FFF9h in hexadecimal. We can also represent that in binary as:

11111111 11111001

When we assign 65529 to a short (short zz = (short)z; above), the value doesn't fit, and the bit pattern FFF9 is interpreted as a signed value. In two's complement notation, the top bit signifies whether a signed value is positive or negative. In this case, you can see the top bit is a 1, so it is treated as a negative number. That's why it prints out -7.

For an unsigned short, we don't care about sign since it's unsigned. So when we print it out using %d, we use all 16 bits, so it's interpreted as 65529.
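
Here is a minimal sketch (assuming a 16-bit unsigned short) that shows the sign bit directly:

#include <stdio.h>

int main(void)
{
    unsigned short z = 65529;                  // bit pattern 11111111 11111001

    printf("top bit = %u\n", (z >> 15) & 1u);  // prints 1: the sign bit is set
    printf("as signed: %d\n", (short)z);       // prints -7 on a two's-complement machine
    return 0;
}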

To understand why, you need to know that the CPU represents signed numbers using two's complement (most do, though perhaps not all).

    signed char n = 1; // 0000 0001 =  1
    n = ~n + 1;        // 1111 1110 + 0000 0001 = 1111 1111 = -1

Also, the types int and unsigned int can be of different sizes depending on your CPU. When you need exact widths, use the fixed-width types:

   #include <stdint.h>
   int8_t ibyte;
   uint8_t ubyte;
   int16_t iword;
   //......

The representation of the values 65529u and -7 is identical for 16-bit ints. Only the interpretation of the bits is different.
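
A small sketch to verify this (assuming 16-bit short and two's complement):

#include <stdio.h>

int main(void)
{
    unsigned short u = 65529u;
    short s = -7;

    // Both lines print fff9: the same 16 bits, just interpreted differently.
    printf("%04x\n", (unsigned)u);
    printf("%04x\n", (unsigned)(unsigned short)s);
    return 0;
}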

For larger ints and these values, you need to sign-extend; one way is with bitwise operations:

int y = (int)(x | 0xffff0000u); // assumes 16 to 32 extension, x is > 32767

If speed is not an issue, or divide is fast on your processor,

int y = ((int)(x * 65536u)) / 65536;

The multiply shifts left 16 bits (again, assuming 16 to 32 extension), and the divide shifts right maintaining the sign.
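
Putting both variants into a small check (assuming 32-bit int and a two's-complement representation; the casts themselves remain implementation-defined):

#include <stdio.h>

int main(void)
{
    unsigned int x = 65529u;                 // only the low 16 bits are meaningful here

    int a = (int)(x | 0xffff0000u);          // OR in the high bits: sign extension by hand
    int b = ((int)(x * 65536u)) / 65536;     // multiply shifts left 16, divide shifts back with sign

    printf("%d %d\n", a, b);                 // prints -7 -7 on a typical machine
    return 0;
}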

You are expecting that your int type is 16 bits wide, in which case you would indeed get a negative value. But most likely it is 32 bits wide, so a signed int can represent 65529 just fine. You can check this by printing sizeof(int).
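
For example:

#include <stdio.h>

int main(void)
{
    // On most modern platforms this prints 4 (bytes), i.e. a 32-bit int.
    printf("sizeof(int) = %zu\n", sizeof(int));
    return 0;
}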

To answer the question posted in the comment above - try something like this:

unsigned short int x = 65529U;
short int y = (short int)x;

printf("%d\n", y);

or

unsigned short int x = 65529U;
short int y = 0;

memcpy(&y, &x, sizeof(short int));  // memcpy is declared in <string.h>
printf("%d\n", y);

Since unsigned values are used to represent positive numbers, the conversion can be done by setting the most significant bit to 0, so the result will not be interpreted as a negative two's complement value. One caveat is that this loses information for numbers near the maximum of the unsigned type.

template <typename TUnsigned, typename TSigned>
TSigned UnsignedToSigned(TUnsigned val)
{
    // Clear the most significant bit so the result can never be negative.
    return val & ~((TUnsigned)1 << ((sizeof(TUnsigned) * 8) - 1));
}