Question

I have written an application that implements a file copy. I was wondering why, when copying from one network drive to another network drive, the copy times are huge (20-30 minutes for a 300 MB file) with the following code:

    public static void CopyFileToDestination(string source, string dest)
    {
        _log.Debug(string.Format("Copying file {0} to {1}", source, dest));
        DateTime start = DateTime.Now;

        string destinationFolderPath = Path.GetDirectoryName(dest);
        if (!Directory.Exists(destinationFolderPath))
        {
            Directory.CreateDirectory(destinationFolderPath);
        }
        if (File.Exists(dest))
        {
            File.Delete(dest);
        }

        FileInfo sourceFile = new FileInfo(source);
        if (!sourceFile.Exists)
        {
            throw new FileNotFoundException("source = " + source);
        }

        long totalBytesToTransfer = sourceFile.Length;

        if (!CheckForFreeDiskSpace(dest, totalBytesToTransfer))
        {
            throw new ApplicationException(string.Format("Unable to copy file {0}: Not enough disk space on drive {1}.",
                source, dest.Substring(0, 1).ToUpper()));
        }

        long bytesTransferred = 0;

        using (FileStream reader = sourceFile.OpenRead())
        {
            using (FileStream writer = new FileStream(dest, FileMode.OpenOrCreate, FileAccess.Write))
            {
                byte[] buf = new byte[64 * 1024];
                int bytesRead = reader.Read(buf, 0, buf.Length);
                double lastPercentage = 0;
                while (bytesRead > 0)
                {
                    double percentage = ((float)bytesTransferred / (float)totalBytesToTransfer) * 100.0;
                    writer.Write(buf, 0, bytesRead);
                    bytesTransferred += bytesRead;
                    if (Math.Abs(lastPercentage - percentage) > 0.25)
                    {
                        System.Diagnostics.Debug.WriteLine(string.Format("{0} : Copied {1:#,##0} of {2:#,##0} MB ({3:0.0}%)",
                            sourceFile.Name,
                            bytesTransferred / (1024 * 1024),
                            totalBytesToTransfer / (1024 * 1024),
                            percentage));
                        lastPercentage = percentage;
                    }
                    bytesRead = reader.Read(buf, 0, buf.Length);
                }
            }
        }

        System.Diagnostics.Debug.WriteLine(string.Format("{0} : Done copying", sourceFile.Name));
        _log.Debug(string.Format("{0} copied in {1:#,##0} seconds", sourceFile.Name, (DateTime.Now - start).TotalSeconds));
    }

However, with a simple File.Copy, the time is as expected.

Does anyone have any insight? Could it be because we are making the copy in small chunks?

Solution

Changing the size of your buf variable doesn't change the size of the buffer that FileStream.Read or FileStream.Write uses when communicating with the file system. To see any benefit from a larger buffer, you have to specify the buffer size when you open the FileStream.

As I recall, the default buffer size is 4K. Performance testing I did some time ago showed that the sweet spot is somewhere between 64K and 256K, with 64K being more consistently the best choice.

You should change your sourceFile.OpenRead() call to:

new FileStream(sourceFile.FullName, FileMode.Open, FileAccess.Read, FileShare.None, BufferSize)

Change the FileShare value if you don't want exclusive access, and declare BufferSize as a constant equal to whatever buffer size you want. I use 64*1024.
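For example, matching the 64K figure above:

    private const int BufferSize = 64 * 1024;  // 64 KB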

Also, change the way you open your output file to:

new FileStream(dest, FileMode.Create, FileAccess.Write, FileShare.None, BufferSize)

Note that I used FileMode.Create rather than FileMode.OpenOrCreate. If you use OpenOrCreate and the source file is smaller than the existing destination file, I don't think the file is truncated when you're done writing. So the destination file would contain extraneous data.
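As a quick illustration of that failure mode (the path variable here is just a scratch file for the demo, not part of the original code), writing a shorter payload through FileMode.OpenOrCreate leaves the old tail of the file in place:

    File.WriteAllText(path, "0123456789");   // existing 10-byte destination file

    using (var fs = new FileStream(path, FileMode.OpenOrCreate, FileAccess.Write))
    {
        // Write only 2 bytes over the 10-byte file; OpenOrCreate does not truncate.
        fs.Write(new byte[] { (byte)'A', (byte)'B' }, 0, 2);
    }

    Console.WriteLine(File.ReadAllText(path));  // "AB23456789" - stale data survives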

That said, I wouldn't expect this to change your copy time from 20-30 minutes down to the few seconds that it should take. I suppose it could if every low-level read requires a network call. With the default 4K buffer, you're making 16 read calls to the file system in order to fill your 64K buffer. So by increasing your buffer size you greatly reduce the number of OS calls (and potentially the number of network transactions) your code makes.
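For the 300 MB file in the question, that works out to roughly 76,800 low-level reads with the default 4 KB buffer versus about 4,800 with a 64 KB buffer.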

Finally, there's no need to check to see if a file exists before you delete it. File.Delete silently ignores an attempt to delete a file that doesn't exist.
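Putting those pieces together, a minimal sketch of the copy with both streams opened on a 64K internal buffer might look like the following (the BufferSize constant and the omission of the progress reporting are simplifications, not the original code):

    private const int BufferSize = 64 * 1024;

    public static void CopyFileToDestination(string source, string dest)
    {
        // Larger FileStream buffers mean far fewer low-level read/write calls,
        // which matters most when each call may become a network round trip.
        using (var reader = new FileStream(source, FileMode.Open, FileAccess.Read, FileShare.None, BufferSize))
        using (var writer = new FileStream(dest, FileMode.Create, FileAccess.Write, FileShare.None, BufferSize))
        {
            var buf = new byte[BufferSize];
            int bytesRead;
            while ((bytesRead = reader.Read(buf, 0, buf.Length)) > 0)
            {
                writer.Write(buf, 0, bytesRead);
            }
        }
        // FileMode.Create creates or truncates the destination, so no File.Delete is needed.
    }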

OTHER TIPS

Call the SetLength method on your writer stream before the actual copying; this should reduce the number of operations performed by the target disk.

Like so

writer.SetLength(totalBytesToTransfer);

You may need to set the stream's position back to the start after calling this method by using Seek. Check the position of the stream after calling SetLength; it should still be zero.

writer.Seek(0, SeekOrigin.Begin); // Not sure on that one
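Taken together, the pre-allocation step would sit right after the writer stream is opened, roughly like this:

    writer.SetLength(totalBytesToTransfer);   // pre-allocate the destination file

    // SetLength is not expected to move the position when extending a new file,
    // but rewind defensively as described above.
    if (writer.Position != 0)
    {
        writer.Seek(0, SeekOrigin.Begin);
    }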

If that is still too slow, use the Win32 SetFileValidData function.
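SetFileValidData is a Win32 API, so it has to be called through P/Invoke, and it only succeeds when the process holds the SE_MANAGE_VOLUME_NAME privilege (typically an elevated administrator). A rough sketch, assuming a hand-written NativeMethods wrapper:

    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;

    internal static class NativeMethods
    {
        // Fails unless the caller holds the SE_MANAGE_VOLUME_NAME privilege.
        [DllImport("kernel32.dll", SetLastError = true)]
        internal static extern bool SetFileValidData(SafeFileHandle hFile, long validDataLength);
    }

    // After SetLength, mark the whole range as valid so Windows can skip zero-filling:
    // NativeMethods.SetFileValidData(writer.SafeFileHandle, totalBytesToTransfer);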

License: CC-BY-SA with attribution