Question

I am trying to decode some data sent from a big-endian machine to the decoder, which is residing on a little-endian machine. I haven't worked with this very much, and I feel like I am confusing myself.

I use bitsets to print my data so that I can see exactly how it is coming out for a particular 32-bit structure, and I can see that the data I need is in the middle of the bit sequence.

Now, I know that if you have a 32-bit value, to go from big to little, you reverse the byte ordering. If I do that, my numbers are not ending up where I expect them to be (done by hand).

So, for example, I have a 32-bit unsigned int. I know that it is coming from my big-endian machine as 0x50000000. When I print this using a bitset on the little-endian machine with cout << "packSpare 32: " << bitset<32>(Data.pack_Spare).to_string() << endl; I get 0x00005000. So it looks more like it swapped the first two bytes, in order, with the second two bytes, in order.

I originally had a struct like this:

#pragma pack(push, 1)
struct PACK_SPARE
{
    int         Spare3:28;
    int         Heading_Reference:1;
    int         Spare2:1;
    int         H_Valid:1;
    int         Spare5:1;
};
#pragma pack(pop)

This is the reverse order of how it is sent from the big-endian machine, but I noticed the issue with the apparent swapping of bits, so I wanted to just pull the whole thing in as 32 bits, swap it, and then print the data. Now I am just using int pack_Spare;

Is it just taking the two 16-bit chunks and swapping those, rather than doing the swap on the entire 32-bit value? Sorry if this doesn't make sense; like I said, I am kind of confused.

EDIT: This data is not coming over a network. I am streaming bits from a video file and storing this data into its corresponding values. So I guess my question is: if I have a 32-bit int and fread data into that variable, why does bitset show the two 16-bit groupings of my 32-bit int swapped, rather than the bytes reversed? I'm expecting 0x50000000, but I get 0x00005000 (0000 and 5000 got swapped, instead of what I was expecting from typical endian swapping, which reverses the order of all bytes).


Solution 3

Although I don't often recommend Wikipedia for anything beyond an introduction, I did find a good endianness discussion here (in answer to your question, "why is it swapping the two 16-bit groupings of my 32-bit int, rather than doing it by byte?").

The discussion occurs about two-thirds of the way down the page at that link; look for the diagram showing how the bytes of a multi-byte value map to memory addresses under each byte ordering.

There are several other interesting things regarding endianness on that same page.
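
To illustrate the same point in code, here is a small sketch (my own, not from that page) that prints how a 32-bit value actually sits in memory on the decoding machine, and what a full byte reversal looks like, for comparison with the 16-bit swap you observed:

    #include <bitset>
    #include <cstdint>
    #include <cstring>
    #include <iostream>

    int main()
    {
        std::uint32_t value = 0x50000000u;

        // Look at the storage byte by byte (lowest address first); on a
        // little-endian machine the 0x50 byte sits at the highest address,
        // so this prints 0 0 0 50.
        unsigned char bytes[sizeof value];
        std::memcpy(bytes, &value, sizeof value);
        for (unsigned char b : bytes)
            std::cout << std::hex << static_cast<unsigned>(b) << ' ';
        std::cout << '\n';

        // A full byte reversal of the 32-bit value, which is what
        // "big-endian to little-endian" means for a plain 32-bit integer:
        // 0x50000000 becomes 0x00000050.
        std::uint32_t reversed = ((value & 0x000000FFu) << 24) |
                                 ((value & 0x0000FF00u) << 8)  |
                                 ((value & 0x00FF0000u) >> 8)  |
                                 ((value & 0xFF000000u) >> 24);

        std::cout << std::bitset<32>(reversed) << '\n';
        return 0;
    }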

Other Tips

Just use htonl() and co. for converting between network and native machine byte order, then apply plain bitwise operations such as shifting and masking. You don't need to mess around with raw memory access.

You will need to convert to the network byte ordering (using htonX()) when sending and then convert it back when reading (using ntohX()).

NOTE: The links provided are for the Windows versions of the functions. POSIX systems have similar signatures, though.
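
For example, here is a minimal sketch of that approach; the exact bit positions below are assumptions for illustration only, so adjust the shifts and masks to your real wire layout:

    #include <cstdint>
    #include <cstring>
    #include <iostream>

    #ifdef _WIN32
    #include <winsock2.h>   // ntohl() on Windows
    #else
    #include <arpa/inet.h>  // ntohl() on POSIX
    #endif

    int main()
    {
        // Bytes exactly as they sit in the big-endian stream
        // (0x50000000 on the sending machine).
        unsigned char buf[4] = { 0x50, 0x00, 0x00, 0x00 };

        std::uint32_t raw;
        std::memcpy(&raw, buf, sizeof raw);   // what fread() would leave in the int
        std::uint32_t host = ntohl(raw);      // network (big-endian) -> host byte order

        // Hypothetical field layout: bit 0 = Spare5, bit 1 = H_Valid,
        // bit 2 = Spare2, bit 3 = Heading_Reference, bits 4..31 = Spare3.
        unsigned h_valid           = (host >> 1) & 0x1u;
        unsigned heading_reference = (host >> 3) & 0x1u;
        std::uint32_t spare3       = (host >> 4) & 0x0FFFFFFFu;

        std::cout << "H_Valid=" << h_valid
                  << " Heading_Reference=" << heading_reference
                  << " Spare3=0x" << std::hex << spare3 << '\n';
        return 0;
    }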

One way I used to rearrange things was with a union. I'm new to endianness, but I soon figured out what I needed, so the fastest way I came up with (less CPU-intensive, I believe) was the following code:

// Overlays a value of type T with a byte array so the same storage can be
// read either as the value or byte by byte.
template <class T, const int size = sizeof (T)>
class cTypeConvert final
{
    public:
        union
        {
            T    val;       // the value in the machine's native byte order
            char c [size];  // the same storage, one byte at a time
        } uTC;
};

In my case, working from a string of chars, I just have to fill the cTypeConvert::uTC.c[] array from the last index down to 0 and then read the result as the type I instantiate this template with. The best part is that I can use it for any of the standard types I may need, and it will always work.

Beware, though: if you have to cast its type to char before using this template, you might end up wasting a bit more CPU than with the methods described above.
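
For completeness, here is a brief usage sketch, which is my reading of how the template above is meant to be used (with the byte values from the question): fill c[] from the last index down to 0 with the bytes as they arrive from the big-endian stream, then read the converted value back through val.

    #include <cstdint>
    #include <iostream>

    // Assumes the cTypeConvert template defined above is in scope.
    int main()
    {
        // Raw bytes exactly as they appear in the big-endian stream (0x50000000).
        unsigned char raw[4] = { 0x50, 0x00, 0x00, 0x00 };

        cTypeConvert<std::uint32_t> conv;
        for (unsigned i = 0; i < sizeof raw; ++i)
            conv.uTC.c[sizeof raw - 1 - i] = static_cast<char>(raw[i]); // last index down to 0

        // On a little-endian machine this prints 50000000.
        std::cout << std::hex << conv.uTC.val << '\n';
        return 0;
    }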

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow