Question

I have a question here, that I'm hoping some super expert will be able to explain. I have a small HttpHandler. It takes in a file id from a query string, finds a corresponding document path (from the database), and returns the file contents in the response.

The relevant code looks something like this:

context.Response.Clear();
context.Response.AddHeader("content-disposition", "attachment;filename=" + file.DisplayName);
context.Response.AddHeader("content-type", mimetype);

string path = context.Server.MapPath(DocumentUploadPath + "/" + file.FileName);

using (FileStream stream = File.Open(path, FileMode.Open, FileAccess.Read))
{
    byte[] buffer = new byte[32768];
    int bytesRead = 0;

    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        context.Response.OutputStream.Write(buffer, 0, bytesRead);
    }
}
context.Response.OutputStream.Flush();

After checking our error logs today, we noticed we're getting some IOExceptions along the lines of:

The process cannot access the file 'file_path_here' because it is being used by another process.

To test, I set up a simple Parallel.For loop that repeatedly requests the file. Sure enough, the errors were easily reproducible. It looks like so:

static void Main(string[] args)
{
    Parallel.For(0, 1000, ctr =>
    {
        GetFile();
    });
}

static void GetFile()
{
    string fileUrl = "url_to_file.ashx?Id=blah";

    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(fileUrl);

    // Dispose the response and reader even if reading throws.
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
    {
        string responseText = reader.ReadToEnd();
        Console.WriteLine(response.StatusDescription);
    }
}

I have modified the handler code to retry if it encounters that exception, so the problem itself is alleviated, but that leads me to my general question:
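(For reference, the retry I added is roughly the sketch below. The helper name, attempt count, and delay are illustrative values for this example, not the exact code from the handler.)

```csharp
using System;
using System.IO;
using System.Threading;

static class LockedFileReader
{
    // Re-attempt the read a few times when the file is transiently
    // locked by another request; rethrow on the final attempt.
    public static byte[] ReadWithRetry(string path, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return File.ReadAllBytes(path);
            }
            catch (IOException) when (attempt < maxAttempts)
            {
                Thread.Sleep(100); // brief pause before the next attempt
            }
        }
    }
}
```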

How is it that my server can send thousands of concurrent copies of index.html without a problem, but breaks when attempting this (assuming no caching)? Is the server handling the retries for me, or is it somehow able to perform lots of simultaneous reads of index.html?


Solution

I think if you add FileShare.Read, it will solve your problem.

See the explanation: http://msdn.microsoft.com/en-us/library/system.io.fileshare(v=vs.110).aspx

A typical use of this enumeration is to define whether two processes can simultaneously read from the same file. For example, if a file is opened and Read is specified, other users can open the file for reading but not for writing.

using (FileStream stream = File.Open(path, FileMode.Open, FileAccess.Read, FileShare.Read))
{
   .....
}
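To see the difference concretely, here's a small self-contained console sketch (the temp file name is made up for the example). The three-argument File.Open overload defaults the share mode to FileShare.None, which is exactly why concurrent requests in the handler collide; passing FileShare.Read lets a second reader in:

```csharp
using System;
using System.IO;

class FileShareDemo
{
    public static void Main()
    {
        string path = Path.Combine(Path.GetTempPath(), "fileshare-demo.txt");
        File.WriteAllText(path, "hello");

        // Default share mode (FileShare.None): the first handle locks
        // out everyone else, so a second open throws IOException.
        using (var exclusive = File.Open(path, FileMode.Open, FileAccess.Read))
        {
            try
            {
                using (File.Open(path, FileMode.Open, FileAccess.Read)) { }
            }
            catch (IOException)
            {
                Console.WriteLine("second open failed under FileShare.None");
            }
        }

        // FileShare.Read on both handles: concurrent readers coexist.
        using (var first = File.Open(path, FileMode.Open, FileAccess.Read, FileShare.Read))
        using (var second = File.Open(path, FileMode.Open, FileAccess.Read, FileShare.Read))
        {
            Console.WriteLine("both readers open under FileShare.Read");
        }
    }
}
```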

Other tips

The C# "FileStream" constructor allows you to specify "FileShare.Read":

http://msdn.microsoft.com/en-us/library/system.io.fileshare%28v=VS.90%29.aspx

FileStream fs = 
   new FileStream(name, FileMode.Open, FileAccess.Read, FileShare.Read);

I believe the same overload is available on the static "File.Open" method, as shown in the accepted solution.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow