Question

Update: I wrote a program to test the memory implications of each of the techniques I mention below. Not too surprisingly, I found that the conventional approach using .NET events does in fact create garbage, whereas the other two strategies seem not to create any garbage at all.

Really, I should have stressed all along that I was more interested in the memory overhead of TEventArgs arguments in .NET events than in the cost in terms of speed. Ultimately I have to concede that, for all practical purposes, the cost in both memory and speed is negligible. Still, I thought it was interesting to see that raising a lot of events the "conventional" way does cost something, and that in extreme cases it can even lead to gen 1 garbage collections, which may or may not matter depending on the situation. (In my experience, the more "real-time" a system needs to be, the more important it is to be mindful of where garbage is being created and how to minimize it where appropriate.)


This might seem like a dumb question. I realize that Windows Forms, for instance, could easily be considered a "high-performance" scenario, with hundreds or even thousands of events being raised in very rapid succession (e.g., the Control.MouseMove event) all the time. But still I wonder if it's really reasonable to design a class with .NET events when it is expected that the class will be used in high-performance, time-critical code.

The main concern I have is with the convention that one use something like EventHandler<TEventArgs> for all events, where TEventArgs derives from EventArgs and is in all likelihood a class that must be instantiated every single time the event is raised/handled. (If it's just plain EventArgs, obviously, this is not the case, since EventArgs.Empty can be used; but assuming any meaningful and non-constant information is contained in the TEventArgs type, instantiation will probably be needed.) It seems like this results in greater GC pressure than I would expect a high-performance library to create.
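
For concreteness, here's roughly the pattern I mean, with a fresh args instance allocated on every raise (all names below are made up purely for illustration):

    using System;

    public class PriceChangedEventArgs : EventArgs
    {
        public PriceChangedEventArgs(double price) { Price = price; }
        public double Price { get; }
    }

    public class Ticker
    {
        public event EventHandler<PriceChangedEventArgs> PriceChanged;

        protected virtual void OnPriceChanged(double price)
        {
            // A new PriceChangedEventArgs is created on every single
            // raise -- this allocation is the GC pressure in question.
            PriceChanged?.Invoke(this, new PriceChangedEventArgs(price));
        }
    }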

That said, the only alternatives I can think of are the following (both are sketched in code after the list):

  1. Using unconventional delegate types (i.e., not EventHandler<TEventArgs>) for events, taking only parameters that don't require object instantiation, such as int, double, etc. (even string, since references to existing string objects can be passed).
  2. Skipping events altogether and using virtual methods, letting client code override them as desired. This seems to have basically the same effect as the previous idea, but in a somewhat more controlled way.
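
Here is a rough sketch of what I mean by both alternatives (again, all names are hypothetical):

    using System;

    // Alternative 1: an unconventional delegate whose parameters are all
    // values or existing references, so raising the event allocates nothing.
    public delegate void PriceChangedHandler(object sender, double price);

    public class LeanTicker
    {
        public event PriceChangedHandler PriceChanged;

        protected virtual void OnPriceChanged(double price) =>
            PriceChanged?.Invoke(this, price); // no per-raise allocation
    }

    // Alternative 2: no event at all; clients subclass and override.
    public class OverridableTicker
    {
        protected virtual void OnPriceChanged(double price)
        {
            // Derived classes override this to react; nothing is allocated.
        }
    }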

Are my concerns about the GC pressure of .NET events unfounded to begin with? If so, what am I missing there? Or, is there some third alternative that is better than the two I've just listed?


Solution

Before you do anything, you should consider doing some profiling to make sure this is actually going to present a real problem. For cases like UI mouse movement, events occur at such a low frequency relative to the speed of the machine that they generally have a negligible impact on GC behavior. Keep in mind that the .NET GC is quite good at collecting short-lived objects with few references to them; it's medium- and long-lived objects referenced in many places that can create problems.
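
For instance, assuming .NET Core 3.0 or later (where GC.GetAllocatedBytesForCurrentThread is available), a quick allocation sanity check might look something like this; the types here are invented for the sketch:

    using System;

    class AllocationCheck
    {
        // Minimal event source just for the measurement.
        sealed class Source
        {
            public event EventHandler<EventArgs> Fired;

            // Allocates a fresh EventArgs per raise, like a typical TEventArgs.
            public void Raise() => Fired?.Invoke(this, new EventArgs());
        }

        static void Main()
        {
            var source = new Source();
            source.Fired += (s, e) => { };

            long before = GC.GetAllocatedBytesForCurrentThread();
            for (int i = 0; i < 1_000_000; i++)
                source.Raise();
            long after = GC.GetAllocatedBytesForCurrentThread();

            Console.WriteLine($"Approx. bytes allocated: {after - before:N0}");
        }
    }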

However, if (for some reason) this proves to indeed be an issue, there are a couple of possible approaches to mitigate the cost.

  1. Use a non-standard event delegate signature. You have already identified this as a possible alternative; however, you could use a generic signature that constrains the type of the event arguments to be a struct. Passing a struct as a parameter avoids allocating the arguments on the heap, at the cost of creating copies of that data (see the first sketch after this list).

  2. Use a TEventArgs flyweight implementation that recycles instances of the event arguments after use. This can be a tricky proposition, as you need to make sure that event handlers never store an instance of the parameter for use elsewhere. When properly implemented, however, this pattern can significantly reduce the number of instances of these lightweight types that need to be created and collected (see the second sketch after this list).
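
Here is a minimal sketch of option 1. It assumes a custom delegate type, since the standard EventHandler<TEventArgs> carries no struct constraint; all names are illustrative:

    using System;

    // A handler delegate whose args type is constrained to be a struct.
    public delegate void StructEventHandler<TArgs>(object sender, TArgs args)
        where TArgs : struct;

    public readonly struct PriceChangedArgs
    {
        public PriceChangedArgs(double price) { Price = price; }
        public double Price { get; }
    }

    public class StructTicker
    {
        public event StructEventHandler<PriceChangedArgs> PriceChanged;

        protected virtual void OnPriceChanged(double price)
        {
            // The args never touch the heap; they are copied into each
            // handler invocation instead.
            PriceChanged?.Invoke(this, new PriceChangedArgs(price));
        }
    }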
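
And a minimal sketch of option 2, again with illustrative names. Note that this version is not thread-safe as written:

    using System;

    public class RecycledPriceArgs : EventArgs
    {
        public double Price { get; set; } // mutable so the instance can be reused
    }

    public class PooledTicker
    {
        private readonly RecycledPriceArgs _args = new RecycledPriceArgs();

        public event EventHandler<RecycledPriceArgs> PriceChanged;

        protected virtual void OnPriceChanged(double price)
        {
            _args.Price = price;               // reuse, don't allocate
            PriceChanged?.Invoke(this, _args); // handlers must copy data out
        }
    }

The key design constraint is that a handler must never hold on to the args instance; any stored reference would silently observe the values of the next raise.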

OTHER TIPS

Nah, Winforms events happen in human time, not CPU time. Nobody can move the mouse fast enough to put any kind of serious pressure on a modern machine. There is no message for each individual traversed pixel.

More to the point, the delegate argument objects are always gen #0 objects. They don't stick around long enough to ever get promoted. Allocating and garbage-collecting them are both dirt cheap. This is just not a real problem; don't chase that ghost.

I suspect your concerns are well-founded, but it's really necessary to look at a concrete case.

For example, in any particular case, stackshots would tell you what is costing enough to be concerned about. It may turn out that the cost lies not so much in the creation and destruction of event objects as in the processing they trigger.

These days, the performance problem I often see is "runaway notifications". The event might just set a property of an object. No big deal, right? But then that property setting might get intercepted by a superclass, which might look for collections the object is in and send notifications to add/remove/reposition the object in those collections, and that can cause windows to be invalidated, or menu items to be created/deleted, or tree-control items to be expanded/closed, and so on ...
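
As a hedged illustration of that kind of chain (every name here is invented), one innocuous-looking property set can fan out like this:

    using System;
    using System.Collections.Generic;

    class Item
    {
        public event EventHandler Changed;

        private double _value;
        public double Value
        {
            get => _value;
            set
            {
                _value = value;
                Changed?.Invoke(this, EventArgs.Empty); // one set starts the chain
            }
        }
    }

    class SortedRegistry
    {
        private readonly List<Item> _items = new List<Item>();

        public void Add(Item item)
        {
            _items.Add(item);
            item.Changed += (s, e) => Reposition((Item)s);
        }

        private void Reposition(Item item)
        {
            // Re-sorts the whole collection on every change; a real UI
            // might also invalidate windows, rebuild menus, expand tree
            // nodes, and so on.
            _items.Sort((a, b) => a.Value.CompareTo(b.Value));
            Console.WriteLine($"Repositioned after change to {item.Value}");
        }
    }

    class Demo
    {
        static void Main()
        {
            var registry = new SortedRegistry();
            var item = new Item();
            registry.Add(item);
            item.Value = 42; // triggers notification, re-sort, logging
        }
    }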
