Question

Maybe this is not a good question to post, but I am somewhat desperate. I have a small piece of code with what looks like a memory leak, and I don't know how to overcome it. Please help.

        var nPiece = stream.Length / BufferLen;
        var lastPieceLen = stream.Length - (nPiece * BufferLen);

        for (int i = 0; i < nPiece + 1; i++)
        {
            var buffer = new byte[i != nPiece ? BufferLen : lastPieceLen];
            stream.Read(buffer, 0, buffer.Length); 
            using (var chunk = new MemoryStream(buffer))
                SsClient.SendChunk(i != nPiece ? (i + 1).ToString() : ((i + 1) + "last"), SsSession, chunk);
            buffer = null;
            GC.Collect();
        }

I split a large stream into smaller chunks and send them to a WCF service via the SsClient.SendChunk() method. Say I have a 700 MB file: I split it into seven 100 MB chunks and send them one by one. But after this method is done, around 400 MB remains in memory, which I see as a memory leak. When I debug it, I see memory fill up with each chunk right after the web service's SendChunk() call, and when the method has finished I'm left with the leaked memory. GC.Collect() doesn't seem to help either. I don't even understand why 400 MB of a 700 MB file is left over; maybe that's a clue, I don't know.

Any ideas?


Solution

    for (int i = 0; i < nPiece + 1; i++)
    {
         var buffer = new byte[i != nPiece ? BufferLen : lastPieceLen];

You don't have a memory leak. You've merely got a bad case of the munchies. You are gobbling up memory in a hurry by allocating the buffer inside the for() loop, and it is not subtle either: buffers of 100 megabytes don't fall from the sky. Any object larger than 85,000 bytes is allocated from the Large Object Heap, not the regular generational GC heap. LOH allocations do not get compacted (they are too large) and do not get collected frequently; reclaiming them requires a gen #2 collection, and those don't happen very often.
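A minimal sketch of that threshold, assuming a current .NET runtime: the GC reports objects on the Large Object Heap as generation 2 from the moment they are allocated, while a small array starts in generation 0.

```csharp
using System;

class LohDemo
{
    static void Main()
    {
        // Small array: starts life in gen 0 of the ordinary GC heap.
        var small = new byte[1000];

        // Large array: anything over ~85,000 bytes is allocated on the
        // Large Object Heap, which the GC reports as generation 2.
        var large = new byte[100 * 1024 * 1024];

        Console.WriteLine(GC.GetGeneration(small)); // typically 0
        Console.WriteLine(GC.GetGeneration(large)); // typically 2
    }
}
```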

Allocating like this also has delayed effects, which is the reason you don't see GC.Collect() do much to reduce the number. You are using a lot of RAM, and Windows doesn't unmap memory pages until it has to, usually because another process needs them. You are also gobbling up a lot of virtual memory address space, and again the Windows memory manager is in no hurry to de-allocate that space. Which is okay: it is virtual, it doesn't cost anything. It otherwise isn't clear which number you are looking at. The biggest reason GC.Collect() has little effect is that the MemoryStream still holds a reference to the array; calling its Dispose() method doesn't change that. Last but not least, the jitter removes null assignments to local variables when you build and run the Release version, so the `buffer = null` statement accomplishes nothing.

A simple workaround is to re-use the buffer: allocate it outside of the for() loop. It will help a lot if you also adjust SendChunk() to take a length argument. And simply reduce the chunk size; 100 megabytes is far too much. In most WCF scenarios this goes over a network, where chunk sizes are typically only about 1500 bytes. A buffer size of 4096 bytes is about right for I/O in most cases.
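The workaround could look like the following sketch. It assumes SsClient.SendChunk() keeps its original signature but is fed a MemoryStream wrapping only the bytes actually read; the "last" naming convention is carried over from the question. Note the loop also checks the return value of Read(), which the original code ignored.

```csharp
const int BufferLen = 4096;
var buffer = new byte[BufferLen];   // allocated once, outside the loop

int chunkNo = 0;
int bytesRead;
while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
    chunkNo++;
    // Works for seekable streams; a non-seekable stream would need
    // another way to detect the final chunk.
    bool isLast = stream.Position >= stream.Length;
    var chunkId = isLast ? chunkNo + "last" : chunkNo.ToString();

    // Wrap only the bytes actually read; the buffer itself is re-used,
    // so nothing ever lands on the Large Object Heap.
    using (var chunk = new MemoryStream(buffer, 0, bytesRead))
        SsClient.SendChunk(chunkId, SsSession, chunk);
}
```

Because the one small buffer is recycled across iterations, total managed allocation stays a few kilobytes regardless of the file size.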

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow