Question

Can anyone explain verbosely what this accomplishes? I'm trying to learn C and am having a hard time wrapping my head around it.

void tonet_short(uint8_t *p, unsigned short s) {
  p[0] = (s >> 8) & 0xff;
  p[1] = s & 0xff;
}

void tonet_long(uint8_t *p, unsigned long l)
{
  p[0] = (l >> 24) & 0xff;
  p[1] = (l >> 16) & 0xff;
  p[2] = (l >> 8) & 0xff;
  p[3] = l & 0xff;
}

Solution

Verbosely, here it goes:

As a direct answer: both of them store the bytes of a variable inside an array of bytes, from left to right (most significant byte first). tonet_short does this for unsigned short variables, which consist of 2 bytes; tonet_long does it for unsigned long variables, which in this code are assumed to consist of 4 bytes.

I will explain tonet_long; tonet_short is just a shorter variation of it that you'll hopefully be able to derive yourself:

When the bits of an unsigned variable are bitwise-shifted, they move towards the given side by the given number of positions, and the vacated bits are filled with zeros. I.e.:

unsigned char asd = 10; //which is 0000 1010 in base 2
asd <<= 2;              //shifts the bits of asd 2 places to the left
asd;                    //it is now 0010 1000 which is 40 in base 10

Keep in mind that this is for unsigned variables; the results may differ for signed variables.

The bitwise-and operator & compares the corresponding bits of its two operands: the result bit is 1 (true) if both bits are 1, and 0 (false) if either or both of them are 0; it does this for each bit position. Example:

unsigned char asd = 10; //0000 1010
unsigned char qwe = 6;  //0000 0110
asd & qwe;              //0000 0010 <-- this is what it evaluates to, which is 2

Now that we know the bitwise-shift and bitwise-and, let's get to the first line of the function tonet_long:

p[0] = (l >> 24) & 0xff;

Here, since l is an unsigned long (assumed to be 4 bytes here), l >> 24 evaluates to the top 4 * 8 - 24 = 8 bits of l, i.e. the first byte of l, moved down to the lowest positions. You can visualize the process like this:

abcd efgh   ijkl mnop   qrst uvwx   yz.. ....   //letters and dots stand for
                                                //unknown zeros and ones
//shift this 24 times towards right
0000 0000   0000 0000   0000 0000   abcd efgh

Note that we do not change the l, this is just the evaluation of l >> 24, which is temporary.

Then 0xff (hexadecimal, base 16), which in binary is just 0000 0000 0000 0000 0000 0000 1111 1111, gets bitwise-anded with the bitwise-shifted l. It goes like this:

0000 0000   0000 0000   0000 0000   abcd efgh
&
0000 0000   0000 0000   0000 0000   1111 1111
=
0000 0000   0000 0000   0000 0000   abcd efgh

Since a & 1 depends strictly on a, it evaluates to a itself; and the same goes for the rest of the bits. So for this first byte the mask looks like a redundant operation, and it really is. It will, however, matter for the rest. This is because, for example, when you evaluate l >> 16, it looks like this:

0000 0000   0000 0000   abcd efgh   ijkl mnop

Since we want only the ijkl mnop part, we have to discard abcd efgh, and that is done by the 0000 0000 bits that 0xff has in the corresponding positions.

I hope this helps; the rest works the same way as what we've covered so far.

OTHER TIPS

These routines convert 16 and 32 bit values from native byte order to standard network(big-endian) byte order. They work by shifting and masking 8-bit chunks from the native value and storing them in order into a byte array.

If I see it right, it basically switches the order of the bytes in the short and in the long (on a little-endian machine this reverses the byte order of the number) and stores the result at an address which hopefully has enough space :)

explain verbosely - OK...

    void tonet_short(uint8_t *p, unsigned short s) {

short is typically a 16-bit value (max: 0xFFFF).
uint8_t is an unsigned 8-bit value, and p is a pointer to some number of unsigned 8-bit values (from the code we're assuming at least 2 sequential ones).

  p[0] = (s >> 8) & 0xff;

This takes the "top half" of the value in s and puts it in the first element in the array p. So let's assume s==0x1234.
First s is shifted by 8 bits (s >> 8 == 0x0012)
then it's AND'ed with 0xFF and the result is stored in p[0]. (p[0] == 0x12)

  p[1] = s & 0xff;

Now note that the shift in the previous line never changed the original value of s, so s still holds 0x1234. Thus in this second line we simply do another bitwise AND, and p[1] gets the "lower half" of s (p[1] == 0x34).

The same applies for the other function you have there, but it's a long instead of a short, so we're assuming p in this case has enough space for all 32-bits (4x8) and we have to do some extra shifts too.

This code is used to serialize a 16-bit or 32-bit number into bytes (uint8_t). For example, to write them to disk, or to send them over a network connection.

A 16-bit value is split into two parts. One containing the most-significant (upper) 8 bits, the other containing least-significant (lower) 8 bits. The most-significant byte is stored first, then the least-significant byte. This is called big endian or "network" byte order. That's why the functions are named tonet_.

The same is done for the four bytes of a 32-bit value.

The & 0xff operations are actually redundant: when a 16-bit or 32-bit value is converted to an 8-bit value, only the lower 8 bits (0xff) are kept anyway, so the mask is applied implicitly.

The bit-shifts are used to move the needed byte into the lowest 8 bits. Consider the bits of a 32-bit value:

AAAAAAAABBBBBBBBCCCCCCCCDDDDDDDD

The most significant byte is the 8 bits named A. In order to move them into the lowest 8 bits, the value has to be right-shifted by 24.

The names of the functions are a big hint... "to net short" and "to net long".

If you think about decimal... say we have a two pieces of paper so small we can only write one digit on each of them, we can therefore use both to record all the numbers from 0 to 99: 00, 01, 02... 08, 09, 10, 11... 18, 19, 20...98, 99. Basically, one piece of paper holds the "tens" column (given we're in base 10 for decimal), and the other the "units".

Memory works like that where each byte can store a number from 0..255, so we're working in base 256. If you have two bytes, one of them's going to be the "two-hundred-and-fifty-sixes" column, and the other the "units" column. To work out the combined value, you multiply the former by 256 and add the latter.

On paper we write numbers with the more significant ones on the left, but on a computer it's not clear if a more significant value should be in a higher or lower memory address, so different CPU manufacturers picked different conventions.

Consequently, some computers store 258 - which is 1 * 256 + 2 - as low=1 high=2, while others store low=2 high=1.

What these functions do is rearrange the memory from whatever your CPU happens to use to a predictable order - namely, the more significant value(s) go into the lower memory addresses, and eventually the "units" value is put into the highest memory address. This is a consistent way of storing the numbers that works across all computer types, so it's great when you want to transfer the data over the network; if the receiving computer uses a different memory ordering for the base-256 digits, it can move them from network byte ordering to whatever order it likes before interpreting them as CPU-native numbers.

So, "to net short" packs the most significant 8 bits of s into p[0] - the lower memory address. It didn't actually need the & 0xff: after taking the 16 input bits and shifting them 8 to the "right", all the left-hand 8 bits are guaranteed to be 0 anyway, which is exactly the effect of & 0xFF - for example:

      1010 1111 1011 0111  // = 0xAFB7 = 10*16^3 + 15*16^2 + 11*16 + 7 = 44983
>>8   0000 0000 1010 1111  // move right 8, with left-hand values becoming 0
 0xff 0000 0000 1111 1111  // we're going to AND the above with this
 &    0000 0000 1010 1111  // the bits that were on in both of the above 2 values
                           // (here the AND never changes the value)
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow