Every access to the hard drive (or any other stream) is costly.
Convert your code to use a cached buffer of the next X
(say, 1024) bytes to read, and Y
(say, 1024) bytes to write.
I didn't exactly understand what your code is supposed to be doing, but say you want to copy a file between streams; then your function should be something in the spirit of:
private const int BUFFER_SIZE = 1024;

void copy(BinaryReader inStream, BinaryWriter outStream)
{
    byte[] cache = new byte[BUFFER_SIZE];
    int readCount = 0;

    // Read up to BUFFER_SIZE bytes at a time; Read returns the number
    // of bytes actually read, or 0 at the end of the stream.
    while ((readCount = inStream.Read(cache, 0, BUFFER_SIZE)) != 0)
    {
        outStream.Write(cache, 0, readCount);
    }
}
In this example, BUFFER_SIZE
should be neither too small (so bulk reading and writing stay efficient) nor too large (so it doesn't overflow your memory).
In your code example you are reading one byte at a time (i.e. BUFFER_SIZE = 1
), and that is what slows your application down.
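For completeness, here is a minimal, self-contained sketch of the same buffered-copy loop, run against in-memory streams so you can try it without touching the disk (the class name and the use of MemoryStream are mine, not part of your code):

```csharp
using System;
using System.IO;

class BufferedCopyDemo
{
    private const int BUFFER_SIZE = 1024;

    // Same buffered-copy loop as above: read up to BUFFER_SIZE bytes,
    // write exactly as many as were read, until the reader is exhausted.
    static void Copy(BinaryReader inStream, BinaryWriter outStream)
    {
        byte[] cache = new byte[BUFFER_SIZE];
        int readCount;
        while ((readCount = inStream.Read(cache, 0, BUFFER_SIZE)) != 0)
        {
            outStream.Write(cache, 0, readCount);
        }
    }

    static void Main()
    {
        // 3000 bytes forces the loop to run three times (1024 + 1024 + 952),
        // exercising both full and partial reads.
        byte[] source = new byte[3000];
        new Random(42).NextBytes(source);

        using (var input = new MemoryStream(source))
        using (var output = new MemoryStream())
        {
            Copy(new BinaryReader(input), new BinaryWriter(output));
            Console.WriteLine(output.Length); // 3000
        }
    }
}
```

The same loop works for any Stream-backed reader/writer pair, including FileStream, since it never assumes a read fills the whole buffer.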
EDIT: Added the code you needed to write:
public Boolean Patch(string path)
{
    const int BUFFER_SIZE = 512;

    // VERY IMPORTANT: the using block guarantees the stream is closed even
    // if the throw below executes; without it, an exception would leave the
    // stream open after the function returns.
    using (FileStream fs = new FileStream(path, FileMode.Open))
    {
        BinaryReader br = new BinaryReader(fs);
        BinaryWriter bw = new BinaryWriter(fs);

        if (fs.Length != this.rawdata.Length)
            throw new ArgumentException();

        byte[] cache = new byte[BUFFER_SIZE];
        int readCount = 0, location = 0;
        while ((readCount = br.Read(cache, 0, BUFFER_SIZE)) != 0)
        {
            int changeLength = 0; // length of the current run of differing bytes
            for (int j = 0; j < readCount; j++)
            {
                if (cache[j] != rawdata[j + location])
                {
                    changeLength++;
                }
                else if (changeLength > 0)
                {
                    // A run of differences just ended: overwrite it in one go,
                    // then restore the read position.
                    fs.Position = location + j - changeLength;
                    bw.Write(rawdata, location + j - changeLength, changeLength);
                    fs.Position = location + j;
                    changeLength = 0;
                }
            }
            // Flush a run of differences that reaches the end of the chunk
            // (or of the file); the loop above only writes a run once a
            // matching byte follows it.
            if (changeLength > 0)
            {
                fs.Position = location + readCount - changeLength;
                bw.Write(rawdata, location + readCount - changeLength, changeLength);
                fs.Position = location + readCount;
            }
            location += readCount;
        }
        bw.Flush();
        return true;
    }
}