Converting integer to binary shows different outputs when number is declared as uint32 and uint64

StackOverflow https://stackoverflow.com/questions/23086392

  •  04-07-2023

Question

I was trying to convert an integer into its equivalent binary representation.

I was using the following algorithm

void decimal_to_binary(uint32_t number)
{
    char bitset[32];
    for(uint32_t i=0; i<32; ++i)
    {
        if((number & (1 << i)) != 0)
        {
            bitset[31-i] = '1';
        }
        else
        {
            bitset[31-i] = '0';
        }
    }
    for(uint32_t i=0; i<32; ++i)                                                                                                               
    {
        cout << bitset[i];
    }
    cout << "\n";
}

When I run this function with, say, '5' declared as a uint32_t, I get the right result:

decimal_to_binary(5)
00000000000000000000000000000101

But when I declare the number as uint64_t and also change the size of bitset to 64, the results are quite different.

Here is the code that does the same thing:

void decimal_to_binary(uint64_t number)
{
    char bitset[64];
    for(uint64_t i=0; i<64; ++i)
    {
        if((number & (1 << i)) != 0)
        {
            bitset[63-i] = '1';
        }
        else
        {
            bitset[63-i] = '0';
        }
    }
    for(uint64_t i=0; i<64; ++i)
    {
        cout << bitset[i];
    }
    cout << "\n";
}

decimal_to_binary(5)
0000000000000000000000000000010100000000000000000000000000000101

I see the same result as the one I got with uint32_t, but two copies placed one beside the other.

This got me wondering: how is a uint64_t implemented in a programming language like C++?

I tried to get some more details by looking at the stdint header file, but the link there didn't help me out much.

Thanks in advance for your time!!

Was it helpful?

The solution

The (1 << i) in your 64-bit code might be using a regular 32-bit int for the 1 (the default word size).

So the 1 is shifted out completely. I don't understand how this produces the output you supplied though :)

Use 1ull for the constant (unsigned long long)
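
For illustration, here is a minimal sketch of the 64-bit function with the constant widened to unsigned long long; this is a reconstruction assuming the rest of the original function stays as posted, with the headers it needs added:

#include <cstdint>
#include <iostream>
using std::cout;

void decimal_to_binary(uint64_t number)
{
    char bitset[64];
    for(uint64_t i=0; i<64; ++i)
    {
        // 1ull is at least 64 bits wide, so shifting it by up to 63 is well defined
        if((number & (1ull << i)) != 0)
        {
            bitset[63-i] = '1';
        }
        else
        {
            bitset[63-i] = '0';
        }
    }
    for(uint64_t i=0; i<64; ++i)
    {
        cout << bitset[i];
    }
    cout << "\n";
}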

Other tips

The problem lies in this line:

if((number & (1 << i)) != 0)

The << operator's result type is the (promoted) type of its left operand, which is apparently 32 bits wide on your implementation. Shifting a value by an amount equal to or greater than its width in bits yields undefined behavior.

To fix it use

if((number & (static_cast<uint64_t>(1) << i)) != 0) 
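
As a quick illustration of why the cast matters, here is a small standalone check (a sketch; the sizeof values in the comments assume a typical platform where int is 32 bits):

#include <cstdint>
#include <iostream>

int main()
{
    // The result of << has the (promoted) type of its left operand.
    std::cout << sizeof(1 << 3) << "\n";                        // typically 4: plain int
    std::cout << sizeof(static_cast<uint64_t>(1) << 3) << "\n"; // 8: uint64_t
    // With a 32-bit left operand, shifting by 32 or more is undefined behavior,
    // so (1 << 40) is not a valid way to build a mask for the upper bits.
    return 0;
}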

Shifting 1 by 32 bits or more is undefined behavior if it's only a 32-bit number. Undefined behavior means that it can do anything. As Raymond Chen said, it's probably limiting the right-hand operand to 31 (by bitwise-ANDing it with 31). That's why you get two copies of the lower half of the 64-bit value. Try shifting number to the right instead of 1 to the left:

void decimal_to_binary(uint64_t number)
{
    char bitset[64];
    for(size_t i=0; i<64; ++i)
    {
        if((number & 1) != 0)
        {
            bitset[63-i] = '1';
        }
        else
        {
            bitset[63-i] = '0';
        }
        number >>= 1;
    }
    for(size_t i=0; i<64; ++i)
    {
        cout << bitset[i];
    }
    cout << "\n";
}
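
For completeness, a self-contained version of the right-shift approach with a small main as a usage check (my own sketch; the expected output is my verification, not taken from the post):

#include <cstddef>
#include <cstdint>
#include <iostream>
using std::cout;

// Same idea as above: inspect the lowest bit, then shift number right each pass.
void decimal_to_binary(uint64_t number)
{
    char bitset[64];
    for(size_t i=0; i<64; ++i)
    {
        bitset[63-i] = (number & 1) ? '1' : '0';
        number >>= 1;
    }
    for(size_t i=0; i<64; ++i)
    {
        cout << bitset[i];
    }
    cout << "\n";
}

int main()
{
    decimal_to_binary(5); // expected: sixty-one '0' characters followed by 101
    return 0;
}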
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow