Problem

Wow, I thought I knew my C++, but this is weird.

This function returns an unsigned int, so I thought that meant I would never get a negative number returned, right?

The function determines how many hours ahead of or behind UTC you are. I'm in Sydney, Australia, so I am GMT+10, which means UTC = LocalTime + (-10). Therefore GetTimeZoneInformation correctly determines that I am -10.

BUT my function returns an unsigned int, so shouldn't it return 10, not -10?

unsigned int getTimeZoneBias()
{
    TIME_ZONE_INFORMATION tzInfo;
    DWORD res  = GetTimeZoneInformation( &tzInfo );

    if ( res == TIME_ZONE_ID_INVALID )
    {
        return (INT_MAX/2); 
    }

    return (unsigned int)(tzInfo.Bias / 60);  // convert from minutes to hours
}

TCHAR ch[200];
_stprintf( ch, _T("A: %d\n"), getTimeZoneBias()); // this prints out A: -10
debugLog += _T("Bias: ") + tstring(ch) + _T("\r\n");

Solution

Here's what I think is happening:

The value of tzInfo.Bias / 60 is actually -10 (0xFFFFFFF6 as a 32-bit pattern). On most systems, casting a signed integer to an unsigned integer of the same size does not change the bit pattern.

So the function still returns 0xFFFFFFF6.

But when you print it out, you're printing it back as a signed integer, so it prints -10. If you printed it as an unsigned integer, you'd get 4294967286.
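
A minimal standalone sketch of that behavior (not from the original program; the value -10 stands in for tzInfo.Bias / 60, and a 32-bit int is assumed, as on Windows):

#include <cstdio>

int main()
{
    int bias = -10;                             // what tzInfo.Bias / 60 evaluates to in Sydney
    unsigned int u = (unsigned int)bias;        // same bit pattern: 0xFFFFFFF6

    std::printf("as signed:   %d\n", (int)u);   // prints -10
    std::printf("as unsigned: %u\n", u);        // prints 4294967286
    std::printf("hex:         0x%X\n", u);      // prints 0xFFFFFFF6
    return 0;
}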

What you're probably trying to do is get the absolute value of the time difference, i.e. convert this -10 into a 10. In that case you should return abs(tzInfo.Bias / 60).
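
For illustration, here's roughly how the function could look with that change (a sketch only, keeping the original name and the Win32 calls already used in the question):

#include <windows.h>
#include <climits>   // INT_MAX
#include <cstdlib>   // std::abs

// Returns the magnitude of the UTC offset in hours; the sign is discarded.
unsigned int getTimeZoneBias()
{
    TIME_ZONE_INFORMATION tzInfo;
    DWORD res = GetTimeZoneInformation( &tzInfo );

    if ( res == TIME_ZONE_ID_INVALID )
    {
        return (INT_MAX/2);
    }

    // tzInfo.Bias is a signed value in minutes (e.g. -600 for Sydney).
    // Take the absolute value first, then convert, so the result is 10
    // rather than 4294967286.
    return static_cast<unsigned int>(std::abs(tzInfo.Bias / 60));
}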

Other tips

You are trying to print an unsigned int as a signed int. Change %d to %u:

_stprintf( ch, _T("A: %u\n"), getTimeZoneBias());
                       ^

The problem is that, on most computers, integers aren't inherently positive or negative; it's all in how the bits are interpreted.

So a large unsigned integer can be indistinguishable from a negative integer of small absolute value.

One error is in your format string. It should be:

_T("A: %u\n")

The function does return a non-negative integer. However, by using the wrong printf specifier, you're causing it to be read off the stack as a signed integer. In other words, the bits are interpreted wrongly. I believe this is also undefined behavior.

As other people have pointed out, when you cast to an unsigned int, you are telling the compiler to take the bit pattern of the int and use it as an unsigned int. If your computer uses two's complement, as most do, then your number will be interpreted as UINT_MAX - 9 (that is, 2^32 - 10, or 4294967286) instead of the 10 you expected. When you use the %d format specifier, printf interprets that same bit pattern as an int again instead of an unsigned int, which is why you still see -10.
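
A quick standalone check of that arithmetic (in C++ the conversion to unsigned is defined modulo 2^N, so the first check holds for any width; the second assumes a 32-bit unsigned int):

#include <cassert>
#include <climits>

int main()
{
    unsigned int u = static_cast<unsigned int>(-10);
    assert(u == UINT_MAX - 9);      // 2^N - 10, for any width N of unsigned int
    assert(u == 4294967286u);       // the concrete value when unsigned int is 32 bits
    return 0;
}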

If you want the absolute value of an integer, you should try to get it mathematically instead of using a cast.
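
As a small contrast (hypothetical values, assuming the Sydney case of -600 minutes from the question and a 32-bit unsigned int):

#include <cstdio>
#include <cstdlib>

int main()
{
    long biasMinutes = -600;                                 // what GetTimeZoneInformation reports for UTC+10
    long hours = biasMinutes / 60;                           // -10

    unsigned int viaCast = (unsigned int)hours;              // 4294967286: bit pattern reused as unsigned
    unsigned int viaAbs  = (unsigned int)std::labs(hours);   // 10: mathematical absolute value

    std::printf("cast: %u  abs: %u\n", viaCast, viaAbs);
    return 0;
}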
