Question

I have a product that does online backups using rdiff. What currently happens is:

  1. Copy the file to a staging area (so the file won't disappear or be modified while we work on it)

  2. Hash the original file and compute an rdiff signature (used for delta differencing)

  3. Compute an rdiff delta difference (if we have no prior version, this step is skipped)

  4. Compress & encrypt the resulting delta difference

Currently, these phases are performed distinctly from one another. The end result is that we iterate over the file multiple times. For small files this is not a big deal (especially given disk caching), but for big files (tens or even hundreds of GB) it is a real performance killer.

I want to consolidate all of these steps into one read/write pass.

To do so, we have to be able to perform all of the above steps in a streaming fashion, while still preserving all of the "outputs" -- the file hash, the rdiff signature, and the compressed & encrypted delta difference file. This will entail reading a block of data from the source file (say, 100 KB?), then iterating over that block in memory to update the hash, update the rdiff signature, and do delta differencing, and then writing the output to a compress/encrypt output stream. The goal is to greatly minimize the amount of disk thrashing we do.
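The single read pass described above might be sketched like this. This is a minimal sketch, not the product's code: the FNV-1a hash is a stand-in for the real hash/signature/delta/encrypt consumers, and `one_pass` is a hypothetical name. The point it demonstrates is the property the whole design relies on: incremental updates over chunks must give the same result as a one-shot pass.

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in incremental consumer: a FNV-1a hash state. In the real
 * pipeline there would be one state each for the file hash, the rdiff
 * signature, the delta job, and the compress/encrypt output stream. */
typedef struct { uint32_t h; } hash_state;

static void hash_init(hash_state *s) { s->h = 2166136261u; }

static void hash_update(hash_state *s, const uint8_t *p, size_t n) {
    for (size_t i = 0; i < n; i++) { s->h ^= p[i]; s->h *= 16777619u; }
}

/* One pass over the data: every consumer sees each chunk before the
 * next chunk is read, so the source is traversed exactly once. */
static uint32_t one_pass(const uint8_t *data, size_t len, size_t chunk) {
    hash_state hs;
    hash_init(&hs);
    for (size_t off = 0; off < len; off += chunk) {
        size_t n = (len - off < chunk) ? (len - off) : chunk;
        hash_update(&hs, data + off, n);
        /* ...in the real loop, also feed the signature, delta, and
         * compress/encrypt states with (data + off, n) here... */
    }
    return hs.h;
}
```

Any hash, signature, or cipher API that exposes an init/update/final style interface can be slotted into this loop.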

Currently we use rdiff.exe (a thin layer on top of the underlying librsync library) to calculate signatures and generate binary deltas. This means these steps run in a separate process, and in one shot rather than in a streaming fashion.

How can I get this to do what I need using the librsync library?


Solution

You can probably skip step 1 completely. The file can't be deleted while it's open, and choosing appropriate locking flags when opening it can prevent it from being modified as well. For example, the CreateFile function takes a dwShareMode argument.
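For example, something along these lines (a Windows-only sketch; the path is illustrative). `FILE_SHARE_READ` lets other processes keep reading the file but denies them write and delete access for as long as the handle is open:

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    HANDLE h = CreateFileA(
        "C:\\data\\bigfile.bin",        /* illustrative path */
        GENERIC_READ,
        FILE_SHARE_READ,                /* others may read, not write/delete */
        NULL,
        OPEN_EXISTING,
        FILE_FLAG_SEQUENTIAL_SCAN,      /* hint: one forward pass */
        NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "open failed: %lu\n", GetLastError());
        return 1;
    }
    /* ...stream the file here, then... */
    CloseHandle(h);
    return 0;
}
```

`FILE_FLAG_SEQUENTIAL_SCAN` is optional, but it tells the cache manager you intend exactly the kind of single forward pass you are building.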

You need to compute the entire rdiff signature before you can start creating the rdiff delta. You can avoid reading the entire file by computing signatures and then deltas for each (say) 100 MB block of the file at a time. You will lose some compression efficiency this way*. You might also consider switching from rdiff to xdelta, which can create a delta file in a single pass over the input.
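With librsync itself, the streaming interface is the `rs_job_t` API rather than the one-shot whole-file helpers that rdiff.exe wraps. A sketch of pumping a signature job one chunk at a time (assuming the librsync 2.x API; the 64 KB buffer sizes are arbitrary):

```c
#include <librsync.h>
#include <stdio.h>
#include <string.h>

/* Sketch: drive a librsync signature job incrementally. The same
 * rs_job_iter() loop pattern works for delta and patch jobs too. */
static int stream_signature(FILE *in, FILE *sig_out) {
    rs_job_t *job = rs_sig_begin(RS_DEFAULT_BLOCK_LEN, 0 /* full strong sum */,
                                 RS_BLAKE2_SIG_MAGIC);
    char inbuf[64 * 1024], outbuf[64 * 1024];
    rs_buffers_t bufs;
    memset(&bufs, 0, sizeof bufs);

    rs_result r;
    do {
        /* Refill the input buffer once the job has drained it. */
        if (bufs.avail_in == 0 && !bufs.eof_in) {
            size_t n = fread(inbuf, 1, sizeof inbuf, in);
            bufs.next_in = inbuf;
            bufs.avail_in = n;
            bufs.eof_in = (n == 0);
        }
        bufs.next_out = outbuf;
        bufs.avail_out = sizeof outbuf;

        r = rs_job_iter(job, &bufs);

        /* Whatever the job produced goes to the signature file; in your
         * pipeline it could instead feed the compress/encrypt stream. */
        fwrite(outbuf, 1, (size_t)(bufs.next_out - outbuf), sig_out);
    } while (r == RS_BLOCKED);

    rs_job_free(job);
    return r == RS_DONE ? 0 : -1;
}
```

For the delta stage, the same loop drives a `rs_loadsig_begin()` job (to read the old file's signature back in) followed by a `rs_delta_begin()` job, so each chunk from your single read pass can be fed straight through.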

Compression and encryption can be done in parallel with computing the delta. If the compression and encryption are done by separate programs, those programs often allow reading from standard input and writing to standard output. The easiest way to use this is with pipes, for example in a batch file:

rdiff signature oldfile oldfile.sig
rdiff delta oldfile.sig newfile | gzip -c | gpg -e -r ... > compressed_encrypted_delta

If you use libraries for compression/encryption in your program, you will need to choose libraries that support streaming operation.

*Or lose a lot of efficiency if data is moved around in the file. If someone prepends 100 MB to a 10 GB file, rdiff will produce a delta file of about 100 MB. rdiff done in blocks of 100 MB or less at a time will produce about 10 GB of delta. Blocks of 200 MB will produce about 5 GB of delta, since only half the data in each block comes from the corresponding block of the old version of the file.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow