Question

I am trying to understand and implement a simple file system based on FAT12. I am currently looking at the following snippet of code and it's driving me crazy:

    int getTotalSize(char * mmap)
    {
        int *tmp1 = malloc(sizeof(int));
        int *tmp2 = malloc(sizeof(int));
        int retVal;

        *tmp1 = mmap[19];
        *tmp2 = mmap[20];
        printf("%d and %d read\n", *tmp1, *tmp2);
        retVal = *tmp1 + ((*tmp2) << 8);
        free(tmp1);
        free(tmp2);
        return retVal;
    };

From what I've read so far, the FAT12 format stores integers in little-endian format, and the code above is getting the size of the file system, which is stored in the 19th and 20th bytes of the boot sector.

However, I don't understand why

  retVal = *tmp1+((*tmp2)<<8); 
works. Is the bitwise << 8 converting the second byte to decimal? Or to big-endian format? Why is it only doing it to the second byte and not the first one?

The bytes in question are (in little-endian format):

40 0B

I tried converting them manually by switching the order first to

0B 40

and then converting from hex to decimal, and I get the right output. I just don't understand how adding the first byte to the bitwise shift of the second byte does the same thing. Thanks!


Solution

The use of malloc() here is seriously facepalm-inducing. Utterly unnecessary, and a serious "code smell" (makes me doubt the overall quality of the code). Also, mmap clearly should be unsigned char (or, even better, uint8_t).
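For reference, here is a malloc-free sketch of the same logic, assuming the buffer is accessed as unsigned bytes (as suggested above); the parameter name is just illustrative:

    #include <stdint.h>

    /* Sketch: same logic without malloc(); "boot" is assumed to point at the
       boot sector and is read as unsigned bytes so values are never sign-extended. */
    int getTotalSize(const uint8_t *boot)
    {
        uint8_t lsb = boot[19];     /* low byte of the 16-bit total sector count */
        uint8_t msb = boot[20];     /* high byte */
        return lsb + (msb << 8);    /* little-endian: low byte + high byte * 256 */
    }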

That said, the code you're asking about is pretty straightforward.

Given two byte-sized values a and b, there are two ways of combining them into a 16-bit value (which is what the code is doing): you can either consider a to be the least-significant byte, or b.

Using boxes, the 16-bit value can look either like this:

+---+---+
| a | b |
+---+---+

or like this, if you instead consider b to be the most significant byte:

+---+---+
| b | a |
+---+---+

The way to combine the lsb and the msb into a 16-bit value is simply:

result = (msb * 256) + lsb;

UPDATE: The 256 comes from the fact that that's the "worth" of each successively more significant byte in a multibyte number. Compare it to the role of 10 in a decimal number (to combine two single-digit decimal numbers c and d you would use result = 10 * c + d).

Consider msb = 0x01 and lsb = 0x00, then the above would be:

result = 0x1 * 256 + 0 = 256 = 0x0100

You can see that the msb byte ended up in the upper part of the 16-bit value, just as expected.
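Applying the same arithmetic to the two bytes you quoted (0x40 and 0x0B), a small self-contained sketch:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t lsb = 0x40;                  /* byte 19 from the question */
        uint8_t msb = 0x0B;                  /* byte 20 from the question */
        uint16_t result = msb * 256 + lsb;   /* combine msb and lsb */

        printf("0x%04X = %u\n", (unsigned)result, (unsigned)result);  /* 0x0B40 = 2880 */
        return 0;
    }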

Your code is using << 8 to do bitwise shifting to the left, which is the same as multiplying by 2^8, i.e. 256.

Note that result above is a value, i.e. not a byte buffer in memory, so its endianness doesn't matter.
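If you do want to see the difference between the value and its in-memory representation, here is a short sketch; which byte order gets printed depends on the machine running it:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        uint16_t value = 0x0B40;              /* the combined value itself */
        uint8_t bytes[2];

        memcpy(bytes, &value, sizeof value);  /* inspect its byte layout */
        /* Prints "40 0B" on a little-endian machine, "0B 40" on a big-endian one. */
        printf("%02X %02X\n", bytes[0], bytes[1]);
        return 0;
    }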

Other tips

I see no problem combining individual digits or bytes into larger integers.

Let's do decimal with 2 digits: 1 (least significant) and 2 (most significant):

  1 + 2 * 10 = 21 (10 is the system base)

Let's now do base-256 with 2 digits: 0x40 (least significant) and 0x0B (most significant):

  0x40 + 0x0B * 0x100 = 0x0B40 (0x100=256 is the system base)
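A quick sketch showing that the same formula covers both bases:

    #include <stdio.h>

    int main(void)
    {
        int decimal = 1 + 2 * 10;             /* two decimal digits -> 21 */
        int sectors = 0x40 + 0x0B * 0x100;    /* two bytes -> 0x0B40 == 2880 */

        printf("%d %d\n", decimal, sectors);  /* prints: 21 2880 */
        return 0;
    }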

The problem, however, likely lies somewhere else: in how 12-bit integers are stored in FAT12.

A 12-bit integer occupies 1.5 8-bit bytes. And in 3 bytes you have 2 12-bit integers.

Suppose, you have 0x12, 0x34, 0x56 as those 3 bytes.

In order to extract the first integer you only need to take the first byte (0x12) and the 4 least significant bits of the second (0x04) and combine them like this:

0x12 + ((0x34 & 0x0F) << 8) == 0x412

In order to extract the second integer you need to take the 4 most significant bits of the second byte (0x03) and the third byte (0x56) and combine them like this:

(0x56 << 4) + (0x34 >> 4) == 0x563
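Put together, a sketch of a helper that extracts the n-th 12-bit entry from a FAT12 table could look like this (the name fat12_entry and its signature are only illustrative):

    #include <stdint.h>

    /* Illustrative helper: every two 12-bit FAT entries share 3 bytes,
       exactly as in the example above. */
    uint16_t fat12_entry(const uint8_t *fat, unsigned n)
    {
        unsigned off = (n / 2) * 3;   /* start of the 3-byte group holding entry n */

        if (n % 2 == 0)               /* even entry: byte 0 + low nibble of byte 1 */
            return fat[off] + ((fat[off + 1] & 0x0F) << 8);
        else                          /* odd entry: high nibble of byte 1 + byte 2 */
            return (fat[off + 1] >> 4) + (fat[off + 2] << 4);
    }

With the bytes 0x12, 0x34, 0x56 this returns 0x412 for entry 0 and 0x563 for entry 1, matching the two results above.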

If you read the official Microsoft's document on FAT (look up fatgen103 online), you'll find all the FAT relevant formulas/pseudo code.

The << operator is the left-shift operator. It takes the value on the left of the operator and shifts it by the number on the right side of the operator.

So in your case, it shifts the value of *tmp2 eight bits to the left, and combines it with the value of *tmp1 to generate a 16-bit value from two eight-bit values.

For example, let's say you have the integer 1. This is, in 16-bit binary, 0000000000000001. If you shift it left by eight bits, you end up with the binary value 0000000100000000, i.e. 256 in decimal.
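A minimal sketch of that shift:

    #include <stdio.h>

    int main(void)
    {
        int one = 1;
        printf("%d\n", one << 8);   /* 1 shifted left by eight bits prints 256 */
        return 0;
    }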

The presentation (i.e. binary, decimal or hexadecimal) has nothing to do with it. All integers are stored the same way on the computer.

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow