Question

In my application I need to continuously write data chunks (around 2 MB each) about every 50 ms to a large file (around 2-7 GB). This is done in a sequential, circular way: I write chunk after chunk into the file, and when I reach the end of the file I start again at the beginning.

Currently I'm doing it as follows:

In C# I call File.OpenWrite once to open the file for writing and set its size with SetLength. When I need to write a chunk, I pass the SafeFileHandle to the unmanaged WriteFile (kernel32.dll), along with an OVERLAPPED structure that specifies the position within the file where the chunk has to be written. The chunk itself is stored in unmanaged memory, so I have an IntPtr which I can pass to WriteFile.
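
For reference, a minimal sketch of the setup described above might look like the following (the 4 GB file size, the error handling and the wrap-around logic are illustrative assumptions, not your exact code):

    using System;
    using System.IO;
    using System.Runtime.InteropServices;
    using System.Threading;
    using Microsoft.Win32.SafeHandles;

    class ChunkWriter
    {
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool WriteFile(
            SafeFileHandle hFile,
            IntPtr lpBuffer,
            uint nNumberOfBytesToWrite,
            out uint lpNumberOfBytesWritten,
            ref NativeOverlapped lpOverlapped);

        const int ChunkSize = 2 * 1024 * 1024;          // ~2 MB per chunk
        const long FileSize = 4L * 1024 * 1024 * 1024;  // illustrative 4 GB ring file

        readonly FileStream _stream;
        long _position;                                  // next write offset

        public ChunkWriter(string path)
        {
            // Open once and pre-size the file so the space is allocated up front.
            _stream = File.OpenWrite(path);
            _stream.SetLength(FileSize);
        }

        public void WriteChunk(IntPtr unmanagedChunk)
        {
            // The OVERLAPPED structure carries the target offset; on a handle opened
            // without FILE_FLAG_OVERLAPPED the call still completes synchronously.
            var overlapped = new NativeOverlapped
            {
                OffsetLow  = (int)(_position & 0xFFFFFFFF),
                OffsetHigh = (int)(_position >> 32)
            };

            if (!WriteFile(_stream.SafeFileHandle, unmanagedChunk, ChunkSize,
                           out uint written, ref overlapped))
            {
                throw new IOException("WriteFile failed",
                                      Marshal.GetHRForLastWin32Error());
            }

            // Advance and wrap around at the end of the file (circular writing).
            _position += ChunkSize;
            if (_position + ChunkSize > FileSize)
                _position = 0;
        }
    }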

Now I'd like to know if and how I can make this process more efficient. Any ideas?

Some questions in detail:

  • Will changing from file I/O to a memory-mapped file help? (A sketch of that approach follows this list.)
  • Are there NTFS-specific optimizations I can apply?
  • Are there useful parameters when creating the file that I'm missing? (Maybe an unmanaged call with special parameters.)
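
Regarding the first bullet, a memory-mapped variant could look roughly like the sketch below (purely illustrative assumptions: the fixed 4 GB capacity and the copy from unmanaged memory via Buffer.MemoryCopy, which requires compiling with /unsafe):

    using System;
    using System.IO.MemoryMappedFiles;

    class MappedRingWriter : IDisposable
    {
        const int ChunkSize = 2 * 1024 * 1024;          // ~2 MB per chunk
        const long FileSize = 4L * 1024 * 1024 * 1024;  // illustrative 4 GB ring file

        readonly MemoryMappedFile _mmf;
        readonly MemoryMappedViewAccessor _view;
        long _position;

        public MappedRingWriter(string path)
        {
            // Creating the mapping with a capacity also sizes the backing file.
            _mmf = MemoryMappedFile.CreateFromFile(path, System.IO.FileMode.OpenOrCreate,
                                                   null, FileSize);
            _view = _mmf.CreateViewAccessor(0, FileSize);
        }

        public unsafe void WriteChunk(IntPtr unmanagedChunk)
        {
            byte* mapped = null;
            _view.SafeMemoryMappedViewHandle.AcquirePointer(ref mapped);
            try
            {
                // Copy the unmanaged chunk straight into the mapped view;
                // the OS flushes dirty pages to disk in the background.
                Buffer.MemoryCopy((void*)unmanagedChunk, mapped + _position,
                                  FileSize - _position, ChunkSize);
            }
            finally
            {
                _view.SafeMemoryMappedViewHandle.ReleasePointer();
            }

            // Advance and wrap around at the end of the file.
            _position += ChunkSize;
            if (_position + ChunkSize > FileSize)
                _position = 0;
        }

        public void Dispose()
        {
            _view.Dispose();
            _mmf.Dispose();
        }
    }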

Solution

Using better hardware will probably be the most cost-efficient way to increase file-writing performance. A paper from Microsoft Research answers most of your questions: Sequential File Programming Patterns and Performance with .NET. Its source code (C#) is also available for download if you want to run the paper's tests on your own machine.

In short:

  • The default (buffered) behavior provides excellent performance on a single disk.
  • Unbuffered I/O should be tested if you have a disk array; it can improve write speed by a factor of eight (see the sketch below for how such a handle is typically opened).
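
Unbuffered I/O isn't exposed directly by FileStream, so the usual route is to open the handle yourself via CreateFile with FILE_FLAG_NO_BUFFERING and wrap it. The following is only a rough sketch of that pattern (the flag values are standard Win32 constants; adding FILE_FLAG_WRITE_THROUGH is my assumption, not something the paper mandates). Keep in mind that with FILE_FLAG_NO_BUFFERING all buffer addresses, transfer sizes and file offsets must be multiples of the volume's sector size.

    using System;
    using System.IO;
    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;

    static class UnbufferedFile
    {
        const uint GENERIC_WRITE            = 0x40000000;
        const uint OPEN_ALWAYS              = 4;
        const uint FILE_FLAG_NO_BUFFERING   = 0x20000000;
        const uint FILE_FLAG_WRITE_THROUGH  = 0x80000000;

        [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern SafeFileHandle CreateFile(
            string lpFileName,
            uint dwDesiredAccess,
            uint dwShareMode,
            IntPtr lpSecurityAttributes,
            uint dwCreationDisposition,
            uint dwFlagsAndAttributes,
            IntPtr hTemplateFile);

        public static FileStream OpenUnbuffered(string path)
        {
            // FILE_FLAG_NO_BUFFERING bypasses the system cache; writes go straight
            // to the device, which is where a disk array can show its full speed.
            SafeFileHandle handle = CreateFile(
                path, GENERIC_WRITE, 0, IntPtr.Zero, OPEN_ALWAYS,
                FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH, IntPtr.Zero);

            if (handle.IsInvalid)
                throw new IOException("CreateFile failed",
                                      Marshal.GetHRForLastWin32Error());

            // Wrap the handle so the existing code can keep calling WriteFile
            // on stream.SafeFileHandle exactly as before.
            return new FileStream(handle, FileAccess.Write);
        }
    }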

This thread on social.msdn might also be of interest.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow