Question

The destructor should only release unmanaged resources that your object holds on to, and it should not reference other objects. If you have only managed references you do not need to (and should not) implement a destructor. You want this only for handling unmanaged resources. Because there is some cost to having a destructor, you ought to implement this only on methods that consume valuable, unmanaged resources.

-- Top Ten Traps in C# for C++ Programmers

The article doesn't go into this in more depth, but what sorts of costs are involved with using a destructor in C#?

Note: I know about the GC and the fact the destructor isn't called at reliable times, that all aside, is there anything else?


Solution

Any object that has a finalizer (I prefer that term over destructor, to emphasize the difference from C++ destructors) is added to the finalizer queue. This is a list of references to objects whose finalizers have to be called before they can be removed.

When the object is up for garbage collection, the GC will find that it's in the finalizer queue and move the reference to the freachable (f-reachable) queue. This is the list that the finalizer background thread goes through to call the finalizer method of each object in turn.

Once the finalizer of the object has been called, the object is no longer in the finalizer queue so it's just a regular managed object that the GC can remove.

This all means that if an object has a finalizer, it will survive at least one garbage collection before it can be removed. This usually means that the object will be moved to the next heap generation, which involves actually moving the data in memory from one heap to another.
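You can observe that extra survival directly with a long weak reference (the class names here are hypothetical, and the exact output depends on the JIT and GC mode, so treat this as a sketch):

```csharp
using System;

class Finalizable
{
    // An empty finalizer is enough to enroll the object in the finalizer queue.
    ~Finalizable() { }
}

class Program
{
    // trackResurrection: true keeps the weak reference valid while the
    // object sits on the f-reachable queue awaiting its finalizer.
    static WeakReference MakeGarbage() =>
        new WeakReference(new Finalizable(), trackResurrection: true);

    static void Main()
    {
        WeakReference wr = MakeGarbage();

        GC.Collect();
        // The pending finalizer kept the object alive through the first
        // collection, so this typically prints True.
        Console.WriteLine(wr.IsAlive);

        GC.WaitForPendingFinalizers();
        GC.Collect();
        // The finalizer has now run, so a second collection can reclaim
        // the object; this typically prints False.
        Console.WriteLine(wr.IsAlive);
    }
}
```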

OTHER TIPS

The most extensive discussion I've seen on how this all works was done by Joe Duffy. It has more detail than you might imagine.

Following that up, I put together a practical approach to doing this on a day-to-day basis - less about the cost and more about the implementation.

Guffa and JaredPar cover the details pretty well, so I'll just add a somewhat esoteric note on finalizers (or destructors, as the C# language specification unfortunately calls them).

One thing to keep in mind is that since the finalizer thread runs all finalizers in sequence, a deadlock in a finalizer will prevent all remaining (and future) finalizers from running. Since these instances are not collected until their finalizers complete, a deadlocked finalizer will also cause a memory leak.
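A minimal sketch of the hazard (the `SharedState` and `Risky` names are made up for illustration; this program completes only because nothing here holds the lock when the finalizer runs):

```csharp
using System;
using System.Threading;

static class SharedState
{
    public static readonly object Lock = new object();
}

class Risky
{
    // Hazard: the runtime runs all finalizers on a single thread. If some
    // other thread held SharedState.Lock forever, this finalizer would
    // block that thread, every finalizer queued behind it would never run,
    // and those f-reachable objects would never be reclaimed.
    ~Risky()
    {
        lock (SharedState.Lock)
        {
            // cleanup that touches shared state
        }
    }
}

class Program
{
    static void Main()
    {
        new Risky();                    // becomes garbage immediately
        GC.Collect();
        GC.WaitForPendingFinalizers();  // would hang if Lock were contended
        Console.WriteLine("finalizers drained");
    }
}
```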

This article covers the problem in detail. It's really hard to sum up in a simple SO post: http://msdn.microsoft.com/en-us/magazine/bb985010.aspx

Guffa has summed up the factors in the finalizer cost quite well. There was a recent article about the cost of finalizers in Java which also gives some insight.

Part of the cost in .net can be avoided by removing the object from the finalizer queue with GC.SuppressFinalize. I ran some quick tests in .net based on that article and posted it here (although the focus is far more on the Java side).


Below is a graph of the results - it doesn't really have the best labels ;-). "Debug=true/false" refers to the empty vs simple finalizer:

class ConditionalFinalizer
{
    // DEBUG here stands in for a compile-time or configuration flag;
    // the benchmark compared this "simple" body against an empty one.
    const bool DEBUG = true;
    bool resourceClosed;

    ~ConditionalFinalizer()
    {
        if (DEBUG)
        {
            if (!resourceClosed)
            {
                Console.Error.WriteLine("Object not disposed");
            }
            resourceClosed = true;
        }
    }
}

"Suppress=true" refers to whether GC.SuppressFinalize was called in the Dispose method.

Summary

For .net, removing the object from the finalizer queue by calling GC.SuppressFinalize is half the cost of leaving the object on the queue.
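The usual way to get that saving is the standard dispose pattern: Dispose performs the cleanup and calls GC.SuppressFinalize so the finalizer (the safety net) is skipped. A minimal sketch, with a hypothetical `Resource` class standing in for a type that would wrap a real unmanaged handle:

```csharp
using System;

class Resource : IDisposable
{
    bool disposed;

    public void Dispose()
    {
        Dispose(true);
        // Removes this instance from the finalizer queue, so a later
        // collection can reclaim it in a single pass.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        // Release unmanaged resources here; when disposing is true,
        // managed resources may be released as well.
        disposed = true;
    }

    // Safety net: runs only if the caller forgot to call Dispose.
    ~Resource() { Dispose(false); }
}

class Program
{
    static void Main()
    {
        using (var r = new Resource())
        {
            // use the resource
        }
        Console.WriteLine("disposed without finalization cost");
    }
}
```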

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow