Question

I currently have a download site for my school that is based on .NET. We offer anything from antivirus, AutoCAD, SPSS, and Office to a number of other large applications for students to download. It's currently set up to handle them in one of two ways: anything over 800 MB is directly accessible through a separate website, while anything under 800 MB is secured behind .NET code that uses a FileStream to feed it to the user in 10,000-byte chunks. I have all sorts of issues with feeding downloads this way... I'd like to be able to secure the large downloads, but the .NET site just can't handle it, and the smaller files will often fail. Is there a better approach to this?

edit - I just wanted to update on how I finally solved this: I ended up adding my download directory as a virtual directory in IIS and specifying a custom HTTP handler. The handler grabbed the file name from the request and checked permissions based on that, then either redirected the user to an error/login page or let the download continue. I've had no problems with this solution, and I've been running it for probably seven months now, serving files several gigabytes in size.
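For anyone wanting to do the same thing, a minimal sketch of such a handler might look like the following. Everything here is illustrative, not the exact code I run: CheckPermission is a hypothetical placeholder for your own authorization lookup, and the login page URL is made up. Serving the file via TransmitFile (also mentioned in the tips below) keeps the transfer out of a managed buffer loop:

    using System.IO;
    using System.Web;

    public class SecureDownloadHandler : IHttpHandler
    {
        public bool IsReusable
        {
            get { return true; }
        }

        public void ProcessRequest(HttpContext context)
        {
            // The virtual directory maps the request straight to a file on disk.
            string physicalPath = context.Request.PhysicalPath;
            string fileName = Path.GetFileName(physicalPath);

            // Hypothetical permission check keyed on the requested file name.
            if (!CheckPermission(context.User, fileName))
            {
                context.Response.Redirect("~/login.aspx");
                return;
            }

            context.Response.ContentType = "application/octet-stream";
            context.Response.AddHeader("Content-Disposition",
                "attachment; filename=" + fileName);

            // Hand the transfer to IIS instead of chunking it ourselves.
            context.Response.TransmitFile(physicalPath);
        }

        private static bool CheckPermission(
            System.Security.Principal.IPrincipal user, string fileName)
        {
            // Placeholder: consult your own database/role mapping here.
            return user != null && user.Identity.IsAuthenticated;
        }
    }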


Solution

I have two recommendations:

  • Increase the buffer size so that there are fewer iterations

AND/OR

  • Do not call IsClientConnected on every iteration (a sketch combining both changes follows the quote below).

The reason is that, according to Microsoft's guidelines:

Response.IsClientConnected has some costs, so only use it before an operation that takes at least, say 500 milliseconds (that's a long time if you're trying to sustain a throughput of dozens of pages per second). As a general rule of thumb, don't call it in every iteration of a tight loop
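Applied to the loop from the question, both suggestions might look something like this. The 64 KB buffer and the check-every-16-iterations interval are arbitrary illustrative values, and iStream/context are the same variables as in the code posted further down:

    const int bufferSize = 65536; // 64 KB instead of 10 KB: far fewer iterations
    byte[] buffer = new byte[bufferSize];
    int length;
    int iteration = 0;

    while ((length = iStream.Read(buffer, 0, bufferSize)) > 0)
    {
        // Poll the relatively expensive IsClientConnected property
        // only on every 16th pass instead of on every iteration.
        if (iteration++ % 16 == 0 && !context.Response.IsClientConnected)
        {
            break;
        }

        context.Response.OutputStream.Write(buffer, 0, length);
        context.Response.Flush();
    }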

OTHER TIPS

If you are having performance issues and you are delivering files that exist on the filesystem (versus a DB), use the HttpResponse.TransmitFile function.
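With TransmitFile, the whole read/write loop in the code below collapses to a few lines. A sketch, reusing the question's localfilename and originalFilename variables:

    context.Response.Clear();
    context.Response.ContentType = "application/octet-stream";
    context.Response.AddHeader("Content-Disposition",
        "attachment; filename=" + originalFilename);

    // IIS streams the file from disk itself: no managed buffer loop,
    // no per-chunk Flush, no IsClientConnected polling.
    context.Response.TransmitFile(localfilename);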

As for the failures, you likely have a bug. If you post the code, you may get a better response.

Look into BitTorrent. It's designed specifically for this sort of thing and is quite flexible.

What's wrong with using a robust web server (like Apache) and letting it deal with the files? Just as you now offload larger files to a separate web server, why not serve the smaller files the same way too?

Are there some hidden requirements preventing this?

Ok, this is what it currently looks like...

    Stream iStream = null;

    // Buffer to read 10K bytes in chunk:
    byte[] buffer = new byte[10000];

    // Length of the file:
    int length;

    // Total bytes to read:
    long dataToRead;

    if (File.Exists(localfilename))
    {
        try
        {
            // Open the file.
            iStream = new System.IO.FileStream(localfilename, System.IO.FileMode.Open,
                System.IO.FileAccess.Read, System.IO.FileShare.Read);

            // Total bytes to read:
            dataToRead = iStream.Length;

            context.Response.Clear();
            context.Response.Buffer = false;
            context.Response.ContentType = "application/octet-stream";
            Int64 fileLength = iStream.Length;
            context.Response.AddHeader("Content-Length", fileLength.ToString());
            context.Response.AddHeader("Content-Disposition",
                "attachment; filename=" + originalFilename);

            // Read the bytes.
            while (dataToRead > 0)
            {
                // Verify that the client is connected.
                if (context.Response.IsClientConnected)
                {
                    // Read the data into the buffer.
                    length = iStream.Read(buffer, 0, 10000);

                    // Write the data to the current output stream.
                    context.Response.OutputStream.Write(buffer, 0, length);

                    // Flush the data to the HTML output.
                    context.Response.Flush();

                    buffer = new byte[10000];
                    dataToRead = dataToRead - length;
                }
                else
                {
                    // Prevent infinite loop if the user disconnects.
                    dataToRead = -1;
                }
            }
            iStream.Close();
            iStream.Dispose();
        }
        catch (Exception ex)
        {
            if (iStream != null)
            {
                iStream.Close();
                iStream.Dispose();
            }
            if (ex.Message.Contains("The remote host closed the connection"))
            {
                context.Server.ClearError();
                context.Trace.Warn("GetFile", "The remote host closed the connection");
            }
            else
            {
                context.Trace.Warn("IHttpHandler", "DownloadFile: - Error occurred");
                context.Trace.Warn("IHttpHandler", "DownloadFile: - Exception", ex);
            }
            context.Response.Redirect("default.aspx");
        }
    }

There are a lot of licensing restrictions... for example, we have an Office 2007 license agreement that says any technical staff on campus can download and install Office, but students cannot. So we don't let students download it. That's why our solution was to hide those downloads behind .NET.

Amazon S3 sounds ideal for what you need, but it is a commercial service, and files are served from their servers.

You should try contacting Amazon and asking for academic pricing, even if they don't advertise one.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow