Question

This morning I lost a bunch of files, but because the volume they were on was both internally and externally defragmented, all of the information necessary for a 100% recovery is available; I just need to fill in the FAT where required.

I wrote a program to do this and tested it on a copy of the FAT that I dumped to a file. It works perfectly, except that for a few of the files (17 out of 526) the FAT chain is one cluster too long and is therefore cross-linked with the next file.

Fortunately I know exactly what the problem is. I used ceil in my EOF calculation because even a single byte over will require a whole extra cluster:

//Cluster    is the starting cluster of the file
//Size       is the size (in bytes) of the file
//BPC        is the number of bytes per cluster
//NumClust   is the number of clusters in the file
//EOF        is the last cluster of the file’s FAT chain

DWORD NumClust = ceil( (float)(Size / BPC) );
DWORD EOF      = Cluster + NumClust;

This algorithm works fine for everything except files whose size is an exact multiple of the cluster size, in which case the chain ends up one cluster too long.
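
For example, with a hypothetical 4096-byte cluster and a file starting at cluster 100 (made-up numbers, just to show the arithmetic):

#include <math.h>
#include <stdio.h>

typedef unsigned long DWORD;

int main(void)
{
    DWORD Cluster = 100, BPC = 4096;  /* hypothetical starting cluster and cluster size */

    /* 8191 bytes: the integer division truncates to 1, so EOF = 101.
       That happens to be the real last cluster (100 and 101 are in use). */
    DWORD Size     = 8191;
    DWORD NumClust = (DWORD)ceil( (float)(Size / BPC) );
    printf("EOF = %lu\n", Cluster + NumClust);  /* prints 101 */

    /* 8192 bytes: exactly 2 clusters (100 and 101), so the last cluster is
       still 101, but the formula gives EOF = 102 -- one cluster too far. */
    Size     = 8192;
    NumClust = (DWORD)ceil( (float)(Size / BPC) );
    printf("EOF = %lu\n", Cluster + NumClust);  /* prints 102 */

    return 0;
}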

I thought about it for a while but am at a loss for a way to fix this. It seems like it should be simple, but somehow it is surprisingly tricky.

What formula would work for files of any size?


Solution

Perhaps ceil( (float)((Size - 1) / BPC) )?

If everything is an integral type, even better would be ((Size - 1) / BPC) + 1.
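
A minimal sketch of that all-integer version, keeping the original variable names (the DWORD typedef and the helper function name are mine, and Size is assumed to be at least 1, since a zero-byte file has no chain at all):

typedef unsigned long DWORD;   /* stand-in for the Windows DWORD */

/* Last cluster of a chain that starts at Cluster and holds Size bytes.
   Integer arithmetic only; assumes Size >= 1. */
DWORD LastCluster(DWORD Cluster, DWORD Size, DWORD BPC)
{
    DWORD NumClust = ((Size - 1) / BPC) + 1;   /* true cluster count        */
    return Cluster + NumClust - 1;             /* last cluster in the chain */
}

Note that with a true cluster count the last cluster is Cluster + NumClust - 1, not Cluster + NumClust; the original formula only appeared to work because the truncating integer division cancelled the missing - 1 for sizes that were not exact multiples of the cluster size.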

Other tips

If you want the number of clusters, it is (Size + BPC - 1) / BPC, with all integral data types.

ceil( (float)(Size / BPC) ) does integer division, then casts that to float.

You need ceil( (float)Size / BPC ) to do the division correctly. But using floats here seems like a bad idea in the first place... see the other answers for an integer solution.
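
A rough sketch of both points, with made-up function names (and assuming Size + BPC - 1 does not overflow the type):

#include <math.h>

typedef unsigned long DWORD;   /* stand-in for the Windows DWORD */

/* All-integer round-up division: no floating point involved at all. */
DWORD NumClustersInt(DWORD Size, DWORD BPC)
{
    return (Size + BPC - 1) / BPC;
}

/* Float version with the cast in the right place: divide first, then round up.
   Still fragile, because a float carries only about 24 bits of precision, so
   very large sizes can round to the wrong cluster count. */
DWORD NumClustersFloat(DWORD Size, DWORD BPC)
{
    return (DWORD)ceil( (float)Size / BPC );
}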

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow