Question

Writing a StringBuilder to a file asynchronously. This code takes control of a file, writes a stream to it, and releases it. It handles requests from asynchronous operations, which may arrive at any time.

The Filepath is set per class instance (so the lock object is per instance), but there is potential for conflict, since these instances may share Filepaths. That sort of conflict, as well as all other conflicts from outside the class instance, would be dealt with by retries.

Is this code suitable for its purpose? Is there a better way to handle this that means less (or no) reliance on the catch-and-retry mechanic?

Also, how do I avoid catching exceptions that have occurred for other reasons?

public string Filepath { get; set; }
private Object locker = new Object();

public async Task WriteToFile(StringBuilder text)
{
    int timeOut = 100;
    Stopwatch stopwatch = new Stopwatch();
    stopwatch.Start();
    while (true)
    {
        try
        {
            //Wait for resource to be free
            lock (locker)
            {
                using (FileStream file = new FileStream(Filepath, FileMode.Append, FileAccess.Write, FileShare.Read))
                using (StreamWriter writer = new StreamWriter(file, Encoding.Unicode))
                {
                    writer.Write(text.ToString());
                }
            }
            break;
        }
        catch
        {
            //File not available, conflict with other class instances or application
        }
        if (stopwatch.ElapsedMilliseconds > timeOut)
        {
            //Give up.
            break;
        }
        //Wait and Retry
        await Task.Delay(5);
    }
    stopwatch.Stop();
}

Solution

How you approach this is going to depend a lot on how frequently you're writing. If you're writing a relatively small amount of text fairly infrequently, then just use a static lock and be done with it. That might be your best bet in any case because the disk drive can only satisfy one request at a time. Assuming that all of your output files are on the same drive (perhaps not a fair assumption, but bear with me), there's not going to be much difference between locking at the application level and the lock that's done at the OS level.

So if you declare locker as:

static object locker = new object();

You'll be assured that there are no conflicts with other threads in your program.

If you want this thing to be bulletproof (or at least reasonably so), you can't get away from catching exceptions. Bad things can happen. You must handle exceptions in some way. What you do in the face of error is something else entirely. You'll probably want to retry a few times if the file is locked. If you get a bad path or filename error or disk full or any of a number of other errors, you probably want to kill the program. Again, that's up to you. But you can't avoid exception handling unless you're okay with the program crashing on error.
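
To avoid swallowing unrelated errors, catch only the exception type that a sharing violation actually raises (IOException) and let everything else propagate. Here is a sketch along those lines, reusing your Filepath property and locker field; maxAttempts is an arbitrary illustrative limit, and note that a few non-transient errors (e.g. DirectoryNotFoundException) also derive from IOException, so they will be retried a few times before surfacing:

```csharp
public async Task WriteToFile(StringBuilder text)
{
    const int maxAttempts = 20;
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            lock (locker)
            {
                File.AppendAllText(Filepath, text.ToString(), Encoding.Unicode);
            }
            return; // success
        }
        catch (IOException)
        {
            // Sharing violations surface as IOException. Anything retrying
            // can't fix (UnauthorizedAccessException, ArgumentException for
            // a malformed path, ...) is not caught here and propagates.
            if (attempt >= maxAttempts)
                throw; // give up and let the caller see the error
        }
        await Task.Delay(5); // wait before retrying
    }
}
```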

By the way, you can replace all of this code:

                using (FileStream file = new FileStream(Filepath, FileMode.Append, FileAccess.Write, FileShare.Read))
                using (StreamWriter writer = new StreamWriter(file, Encoding.Unicode))
                {
                    writer.Write(text.ToString());
                }

With a single call:

File.AppendAllText(Filepath, text.ToString(), Encoding.Unicode);

Assuming you're using .NET 2.0 or later. (The overload that takes an Encoding lets you keep the Encoding.Unicode your StreamWriter was using; the two-argument overload writes UTF-8.) See File.AppendAllText.
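
One caveat, not from the original answer: File.AppendAllText blocks the calling thread, and you can't await inside a lock. If you want the write itself to be asynchronous, a SemaphoreSlim (available from .NET 4.0, with WaitAsync from .NET 4.5) can serve as an awaitable lock. A sketch, reusing your Filepath property:

```csharp
private static readonly SemaphoreSlim gate = new SemaphoreSlim(1, 1);

public async Task WriteToFile(StringBuilder text)
{
    await gate.WaitAsync(); // awaitable equivalent of taking the lock
    try
    {
        // useAsync: true so WriteAsync is genuinely asynchronous I/O.
        using (var file = new FileStream(Filepath, FileMode.Append, FileAccess.Write,
                                         FileShare.Read, 4096, useAsync: true))
        using (var writer = new StreamWriter(file, Encoding.Unicode))
        {
            await writer.WriteAsync(text.ToString());
        }
    }
    finally
    {
        gate.Release();
    }
}
```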

One other way you could handle this is to have the threads write their messages to a queue, and have a dedicated thread that services that queue. You'd have a BlockingCollection of messages and associated file paths. For example:

class LogMessage
{
    public string Filepath { get; set; }
    public string Text { get; set; }
}

BlockingCollection<LogMessage> _logMessages = new BlockingCollection<LogMessage>();

Your threads write data to that queue:

_logMessages.Add(new LogMessage { Filepath = "foo.log", Text = "this is a test" });

You start a long-running background task that does nothing but service that queue:

foreach (var msg in _logMessages.GetConsumingEnumerable())
{
    // of course you'll want your exception handling in here
    File.AppendAllText(msg.Filepath, msg.Text);
}
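
The "long-running background task" part can look like this (a sketch: TaskCreationOptions.LongRunning hints to the scheduler that the task deserves its own thread, and calling CompleteAdding at shutdown lets the foreach drain the remaining messages and exit):

```csharp
// Start once, at application startup.
Task consumer = Task.Factory.StartNew(() =>
{
    foreach (var msg in _logMessages.GetConsumingEnumerable())
    {
        // exception handling goes here, as noted above
        File.AppendAllText(msg.Filepath, msg.Text);
    }
}, TaskCreationOptions.LongRunning);

// At shutdown: stop accepting new messages, drain the queue, then exit.
_logMessages.CompleteAdding();
consumer.Wait();
```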

Your potential risk here is that threads create messages too fast, causing the queue to grow without bound because the consumer can't keep up. Whether that's a real risk in your application is something only you can say. If you think it might be a risk, you can put a maximum size (number of entries) on the queue so that if the queue size exceeds that value, producers will wait until there is room in the queue before they can add.
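
A bounded queue is just a constructor argument (the capacity of 10,000 here is an arbitrary illustration); TryAdd with a timeout gives you a drop-instead-of-block policy if you prefer that:

```csharp
// Producers block in Add() once 10,000 messages are queued (back-pressure).
BlockingCollection<LogMessage> _logMessages =
    new BlockingCollection<LogMessage>(boundedCapacity: 10000);

// Or: give up after 50 ms instead of blocking indefinitely.
// (msg is a LogMessage built as shown above.)
if (!_logMessages.TryAdd(msg, TimeSpan.FromMilliseconds(50)))
{
    // Queue stayed full; message is dropped (count it, log it elsewhere, ...).
}
```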

OTHER TIPS

You could also use a ReaderWriterLock; it is considered a more 'appropriate' way to control thread safety when dealing with read/write operations...

To debug my web apps (when remote debugging fails) I use the following ('debug.txt' ends up in the \bin folder on the server):

public static class LoggingExtensions
{
    static ReaderWriterLock locker = new ReaderWriterLock();
    public static void WriteDebug(string text)
    {
        try
        {
            locker.AcquireWriterLock(int.MaxValue);

            string dir = Path.GetDirectoryName(
                System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase)
                .Replace("file:\\", "");
            System.IO.File.AppendAllLines(Path.Combine(dir, "debug.txt"), new[] { text });
        }
        finally
        {
            locker.ReleaseWriterLock();
        }
    }
}

Hope this saves you some time.
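
For what it's worth, on .NET 3.5 and later ReaderWriterLockSlim is the recommended replacement for ReaderWriterLock. A sketch of the same helper using it (same hypothetical debug.txt target, with the path-building shortened for clarity):

```csharp
public static class LoggingExtensions
{
    static readonly ReaderWriterLockSlim locker = new ReaderWriterLockSlim();

    public static void WriteDebug(string text)
    {
        locker.EnterWriteLock();
        try
        {
            System.IO.File.AppendAllLines("debug.txt", new[] { text });
        }
        finally
        {
            locker.ExitWriteLock();
        }
    }
}
```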

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow