Question

I am trying to reconcile the Law of Demeter with programming environments where events are involved. I tagged this javascript and obj-c (Cocoa's NSNotificationCenter) because both allow for events.

In such an environment, you can arbitrarily decouple any two objects just by having them throw and bind/subscribe to events. In obj-c it can be much easier to do this than to pass in a reference to the object whose method you need to invoke. I suspect this isn't good to use everywhere: from a performance standpoint you miss out on optimizations for method dispatch (probably negligible unless it's a huge app), and for readability a programmer may want to make it explicit that one object is a dependency of another, something that isn't obvious when an object just throws events around.
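To make the contrast concrete, here's a minimal sketch of the two styles in Objective-C; the Logger/Speaker types and the notification name are made up for illustration:

```objc
#import <Foundation/Foundation.h>

// Hypothetical types; nothing here is real Cocoa API beyond Foundation.
@interface Logger : NSObject
- (void)log:(NSString *)message;
@end

@implementation Logger
- (void)log:(NSString *)message { NSLog(@"logged: %@", message); }
@end

// Direct invocation: the dependency is explicit in the interface.
@interface Speaker : NSObject
@property (nonatomic, strong) Logger *logger;  // visible dependency
- (void)speak;
@end

@implementation Speaker
- (void)speak { [self.logger log:@"hello"]; }  // direct method call
@end

static NSString * const SpokeNotification = @"SpokeNotification";

int main(void) {
    @autoreleasepool {
        Speaker *speaker = [Speaker new];
        speaker.logger = [Logger new];
        [speaker speak];

        // Event-based: the subscriber names a notification, not a sender.
        id token = [[NSNotificationCenter defaultCenter]
            addObserverForName:SpokeNotification
                        object:nil
                         queue:nil
                    usingBlock:^(NSNotification *note) {
                        NSLog(@"heard: %@", note.userInfo[@"text"]);
                    }];

        // The poster has no reference to, or knowledge of, any observer.
        [[NSNotificationCenter defaultCenter]
            postNotificationName:SpokeNotification
                          object:nil
                        userInfo:@{ @"text": @"hello" }];

        [[NSNotificationCenter defaultCenter] removeObserver:token];
    }
    return 0;
}
```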

I'd like some thoughts on the role of events in software architecture: how do you like to balance event binding and direct method invocation?


Solution

Be careful with your terminology. In a GUI context, "events" often means user-generated events such as mouse clicks, taps, and key presses, and those are usually not handled with the observer pattern you seem to be referring to. In Cocoa and Cocoa Touch, user events are handled with the chain of responsibility pattern (the responder chain).
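As a rough sketch of that pattern (not UIKit's actual API; the Handler classes and event names here are invented), each link either consumes an event or forwards it to its neighbor, much like UIResponder forwards unhandled events to its next responder:

```objc
#import <Foundation/Foundation.h>

@interface Handler : NSObject
@property (nonatomic, strong) Handler *next;  // the chain is explicit
- (void)handleEvent:(NSString *)event;
@end

@implementation Handler
- (void)handleEvent:(NSString *)event {
    [self.next handleEvent:event];  // default: decline and pass it along
}
@end

@interface KeyHandler : Handler
@end

@implementation KeyHandler
- (void)handleEvent:(NSString *)event {
    if ([event isEqualToString:@"keypress"]) {
        NSLog(@"KeyHandler consumed %@", event);
    } else {
        [super handleEvent:event];  // not ours; keep climbing the chain
    }
}
@end

int main(void) {
    @autoreleasepool {
        Handler *root = [Handler new];
        KeyHandler *leaf = [KeyHandler new];
        leaf.next = root;
        [leaf handleEvent:@"keypress"];   // consumed by KeyHandler
        [leaf handleEvent:@"mousedown"];  // climbs to root, then is dropped
    }
    return 0;
}
```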

Both patterns promote loose coupling between objects, but the coupling in observer is arguably looser. Objects that participate in a chain of responsibility generally all inherit from a common base class or otherwise conform to some common interface, and each object in the chain is generally aware of its neighbors. With observer, the object sending a message (such as a notification in Cocoa) doesn't know what objects might receive it, and the objects receiving the message usually don't know where it came from. With NSNotificationCenter in particular, there isn't even a common interface: each "observer" object simply registers a notification handler when it signs up to receive a notification.
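For instance, here is a minimal sketch of that registration; OrderWatcher and the notification name are hypothetical:

```objc
#import <Foundation/Foundation.h>

static NSString * const OrderPlacedNotification = @"OrderPlacedNotification";

@interface OrderWatcher : NSObject
@end

@implementation OrderWatcher
- (instancetype)init {
    if ((self = [super init])) {
        // No protocol to conform to: the observer just names a handler.
        [[NSNotificationCenter defaultCenter]
            addObserver:self
               selector:@selector(orderPlaced:)
                   name:OrderPlacedNotification
                 object:nil];  // nil sender = notifications from anyone
    }
    return self;
}

- (void)orderPlaced:(NSNotification *)note {
    // The watcher never learns who posted; only the payload matters.
    NSLog(@"order placed: %@", note.userInfo[@"orderID"]);
}

- (void)dealloc {
    [[NSNotificationCenter defaultCenter] removeObserver:self];
}
@end

int main(void) {
    @autoreleasepool {
        OrderWatcher *watcher = [OrderWatcher new];
        [[NSNotificationCenter defaultCenter]
            postNotificationName:OrderPlacedNotification
                          object:nil
                        userInfo:@{ @"orderID": @42 }];
        (void)watcher;  // keep the observer alive through the post
    }
    return 0;
}
```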

You can create quite a mess with the observer pattern if you overuse it. Debugging a tangle of messages can be very difficult. Far worse, if messages are delivered to observers synchronously (and they usually are), the object sending a message has no way to know how expensive sending it might be, and the objects receiving it have no sense of how their own performance affects the rest of the application. Because the number of observers for any given message is usually unbounded, it is easy to accidentally create real performance problems.

Law of Demeter advocates might click their tongues and say "I told you so," but that doesn't mean you should avoid observers. Used appropriately, and with the understanding that observers shouldn't do a lot of heavy lifting in response to a message, the pattern is a powerful way to convey information to the places where it's needed, and it can simplify code by eliminating the need for many relationships between objects.
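That synchronous delivery is easy to observe on the default notification center. In this contrived sketch (the notification name and the sleep are made up), the post doesn't return until the observer's handler has:

```objc
#import <Foundation/Foundation.h>

static NSString * const ExpensiveNotification = @"ExpensiveNotification";

int main(void) {
    @autoreleasepool {
        NSNotificationCenter *center = [NSNotificationCenter defaultCenter];

        // Passing queue:nil means the block runs synchronously on the
        // posting thread, which is the usual delivery mode.
        [center addObserverForName:ExpensiveNotification
                            object:nil
                             queue:nil
                        usingBlock:^(NSNotification *note) {
                            [NSThread sleepForTimeInterval:2.0]; // fake heavy lifting
                        }];

        NSDate *start = [NSDate date];
        [center postNotificationName:ExpensiveNotification object:nil];
        // The poster paid the observer's cost without knowing it existed.
        NSLog(@"post took %.1fs", -[start timeIntervalSinceNow]); // ~2.0s
    }
    return 0;
}
```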

Licensed under: CC-BY-SA with attribution