Question

I would like to read in a number, say a float, and allow the user to see what bit pattern is responsible for their input. How do I allow a variable to be printed or stored as an int or array as simple binary values instead of 0-9 or a-z, etc?

This doesn't do what I want it to. It instead gives an int with digits 0-9, which is obviously not a binary number.

#include <iostream>
using namespace std;

int main() {
    cout << "Please enter a float number." << endl;
    float number;
    cin >> number;

    int bits = *((int*) &number);

    cout << number << endl;
    cout << bits << endl;

    return 0;
}

No correct solution

Other suggestions

The easiest (and C-friendly) way to do what you're trying to do is to employ a pointer to char and use it to access the individual bytes of the float variable:

unsigned char *b = (unsigned char *)&number;

Then iterate over the bytes:

for (size_t i = 0; i < sizeof number; i++)
{
    printf("%02x", b[i]);
}

Note that this approach prints out a hexadecimal value, but that's directly convertible to a binary representation if you really want to do it that way:

for (size_t i = 0; i < sizeof number; i++)
{
    for (int j = 0; j < CHAR_BIT; j++)   /* CHAR_BIT is in <limits.h> */
    {
        printf("%d", (b[i] >> j) & 1);   /* walks each byte LSB-first */
    }
}

template <class T>
std::string to_binary(const T &t)
{
    const char *bytes = reinterpret_cast<const char *>(&t);

    std::string result;
    result.reserve(sizeof(t) * CHAR_BIT);

    for (int i = sizeof(t) - 1; i >= 0; --i)
    {
        for (int j = CHAR_BIT - 1; j >= 0; --j)
            result += (bytes[i] & (1 << j) ? '1' : '0');
    }

    return result;
}

will return a string containing the bits of a variable, starting with the most-significant bit.

For data of types such as int or char, you can just print them with "%x" to get their hexadecimal representation. But a floating-point number is different: you usually need a union here. For example, to get the byte-level representation of 1.2 as a double, you could do something like:

#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char *argv[])
{
    union {
        double number;
        unsigned char bytes[sizeof(double)];
    } double_bytes;

    double_bytes.number = 1.2;
    for (size_t i = 0; i < sizeof(double); i++) {
        printf("%x ", double_bytes.bytes[i]);
    }
    printf("\n");

    exit(EXIT_SUCCESS);
}

This version also prints the binary representation, which is sometimes harder to read than hexadecimal:

#include <stdio.h>
#include <stdlib.h>

char *
byte2bin(char buf[10], unsigned char ch)
{
    static const char *bins[] = {
        "0000", "0001", "0010", "0011",
        "0100", "0101", "0110", "0111",
        "1000", "1001", "1010", "1011",
        "1100", "1101", "1110", "1111",
    };

    sprintf(buf, "%s %s", bins[(ch & 0xf0)>>4], bins[ch & 0xf]);
    return buf;
}

int
main(int argc, char *argv[])
{
    union {
        double number;
        unsigned char bytes[sizeof(double)];
    } double_bytes;

    double_bytes.number = 1.2;
    for (size_t i = 0; i < sizeof(double); i++) {
        printf("%x ", (unsigned int)double_bytes.bytes[i] & 0xff);
    }
    printf("\n");

    for (size_t i = 0; i< sizeof(double); i++) {
        char buf[10] = { '\0' };
        printf("%s ", byte2bin(buf, double_bytes.bytes[i]));
    }
    printf("\n");

    exit(EXIT_SUCCESS);
}

Compiled with GCC 4.7.2: gcc -std=c99

Output:

$ ./a.out 
33 33 33 33 33 33 f3 3f 
0011 0011 0011 0011 0011 0011 0011 0011 0011 0011 0011 0011 1111 0011 0011 1111

The spaces in the output make it a little easier on human eyes and minds.

You should consider using bit fields.

Wikipedia has a pretty good explanation of them.

You need to determine the size of the float (or any other type) and then cast the address of the value to unsigned int*. I give a tested example below, which prints the float in hex on my system.

Edited to add display of DW and Bits:

float myVal = 1.0;

cout << "In raw DW/Bin:" << endl;

for (unsigned int loop = 0; loop < sizeof(float) / sizeof(unsigned int); loop++)
{
    unsigned int val = reinterpret_cast<unsigned int*>(&myVal)[loop];

    cout << hex << val << " - " << bitset<32>(val) << endl;
}

Output:

In raw DW/Bin:
3f800000 - 00111111100000000000000000000000
Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow