Question

I have ideas for solving this, but I have a feeling this problem has been solved many times over.

I have implemented an observer pattern, similar to this:

struct IObserver {
  virtual void notify(Event &event) = 0;
};

struct Notifier {
  void registerObserver(IObserver* observer, EventRange range) {
    lock_guard<mutex> lock(_mutex);
    _observers[observer] = range;
  }

  void deregisterObserver(IObserver* observer) {
    lock_guard<mutex> lock(_mutex);
    _observers.erase(observer);
  }

  void handleEvent() { /* pushes event onto queue */ }

  void run();

  mutex _mutex;
  queue<Event> _eventQueue;
  map<IObserver*, EventRange> _observers;
};

The run method is called from a thread I create (it is actually owned by the notifier). The method looks something like...

void Notifier::run() {
  while (true) {
    waitForEvent();
    Event event = _eventQueue.front();
    _eventQueue.pop();

    // now we have an event, acquire the lock and notify listeners
    lock_guard<mutex> lock(_mutex);
    typedef map<IObserver*, EventRange>::value_type ObserverEntry;
    BOOST_FOREACH(ObserverEntry &value, _observers) {
      value.first->notify(event);
    }
  }
}

This works perfectly until a notify() call creates an object that in turn attempts to register itself as an observer. The registration tries to acquire the mutex that run() already holds, and we end up in a deadlock. The deadlock can be avoided with a recursive mutex, but now consider the situation where a notification triggers the removal of an observer: the map iterator the loop is standing on is invalidated.

My question is, is there a pattern available that prevents this deadlock situation?

Solution

I think the real problem here is that an event handler is manipulating the list of observers while you are iterating over that same list. Executing a notify(...) operation means you are iterating over the list, and if you iterate over the original list (rather than a copy), any registration or deregistration alters the container mid-iteration. I don't believe the iterators of a std::map handle that well; in particular, erasing the element the loop is currently on invalidates its iterator.

I have had this problem as well (albeit only in a single-threaded context) and found the only way to deal with it was to create a temporary copy of the observer list and iterate over that.

I also cached off removed observers during iteration, so that if I had observers A, B, and C, and notifying A led to C being removed, the copy still contained C but C was skipped.
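
A minimal single-threaded sketch of that idea might look like the following. The Subject class, the _recentlyRemoved set, and the other names here are my own illustrations of the technique, not the actual implementation linked below:

#include <algorithm>
#include <set>
#include <vector>

struct Event { /* event payload */ };

struct IObserver {
  virtual ~IObserver() {}
  virtual void notify(Event &event) = 0;
};

class Subject {
public:
  void attach(IObserver *observer) {
    _observers.push_back(observer);
  }

  void detach(IObserver *observer) {
    _observers.erase(
        std::remove(_observers.begin(), _observers.end(), observer),
        _observers.end());
    // Remember the observer in case a notification is currently running.
    _recentlyRemoved.insert(observer);
  }

  void notify(Event &event) {
    _recentlyRemoved.clear();

    // Iterate over a copy so that attach()/detach() calls made from
    // inside an observer's notify() cannot invalidate this loop.
    std::vector<IObserver *> snapshot(_observers);
    for (std::vector<IObserver *>::iterator it = snapshot.begin();
         it != snapshot.end(); ++it) {
      // Skip observers that were detached earlier in this notification.
      if (_recentlyRemoved.count(*it) == 0)
        (*it)->notify(event);
    }
  }

private:
  std::vector<IObserver *> _observers;
  std::set<IObserver *> _recentlyRemoved;
};

The snapshot also gives you the behaviour described in the last edit below: an observer attached from inside notify() goes into _observers but not into the snapshot, so it is not called until the next notification.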

I have an implementation of this for single-threaded applications; you could convert it to a threaded approach with a little work.

EDIT: I think the points of vulnerability for a multi-threaded application are the creation of the copy of the observer list (which you do when you enter notify(...)) and the addition of observers to the "recently removed" list when observers detach. Don't place mutexes around these functions; place mutexes around the creation/update of the lists inside those functions, or create functions for just that purpose and place mutexes around them.
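
As a rough illustration of that placement, here is the hypothetical Subject from the sketch above rewritten for a threaded context. It assumes Subject gains a std::mutex _listMutex member (a boost::mutex with a scoped_lock works the same way); only the list manipulations are locked, never the notification loop itself:

// Subject now additionally has:  std::mutex _listMutex;  (requires <mutex>)

void Subject::attach(IObserver *observer) {
  // Lock only while the list is being updated.
  std::lock_guard<std::mutex> guard(_listMutex);
  _observers.push_back(observer);
}

void Subject::detach(IObserver *observer) {
  std::lock_guard<std::mutex> guard(_listMutex);
  _observers.erase(
      std::remove(_observers.begin(), _observers.end(), observer),
      _observers.end());
  _recentlyRemoved.insert(observer);
}

void Subject::notify(Event &event) {
  std::vector<IObserver *> snapshot;
  {
    // Lock only while the copy is created.
    std::lock_guard<std::mutex> guard(_listMutex);
    _recentlyRemoved.clear();
    snapshot = _observers;
  }

  // No lock is held while observers run, so a re-entrant
  // attach()/detach() from inside notify() cannot deadlock.
  for (std::vector<IObserver *>::iterator it = snapshot.begin();
       it != snapshot.end(); ++it) {
    bool removed;
    {
      std::lock_guard<std::mutex> guard(_listMutex);
      removed = _recentlyRemoved.count(*it) != 0;
    }
    if (!removed)
      (*it)->notify(event);
  }
}

The important property is that the mutex is only ever held for the brief list operations, so observers are free to attach or detach from inside their notify() without deadlocking.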

EDIT: I also strongly suggest creating some unit test cases (e.g. with CppUnit) to hammer the attach/detach/multi-detach scenarios from multiple threads. I had to do this in order to find one of the subtler problems when I was working on it.
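
For example, a stress test along these lines (using plain C++11 std::thread rather than any particular test framework; the CountingObserver type is invented for illustration) will shake out most races and deadlocks, especially when run under a thread sanitizer:

#include <atomic>
#include <thread>
#include <vector>

struct CountingObserver : IObserver {
  std::atomic<int> calls;
  CountingObserver() : calls(0) {}
  void notify(Event &) { ++calls; }
};

int main() {
  Subject subject;
  CountingObserver observers[8];
  Event event;

  // One thread hammers notifications...
  std::thread notifier([&] {
    for (int i = 0; i < 100000; ++i)
      subject.notify(event);
  });

  // ...while several others repeatedly attach and detach observers.
  std::vector<std::thread> churners;
  for (int i = 0; i < 8; ++i) {
    churners.push_back(std::thread([&subject, &observers, i] {
      for (int j = 0; j < 100000; ++j) {
        subject.attach(&observers[i]);
        subject.detach(&observers[i]);
      }
    }));
  }

  notifier.join();
  for (std::thread &t : churners)
    t.join();
  return 0;
}

Intermittent hangs or crashes in a test like this usually point straight at the lock placement.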

EDIT: I specifically don't try to handle the case of new observers added as a consequence of a notify(...) call. That is to say, there is a list of recently removed observers but not a list of recently added ones. This prevents a "notify->add->notify->add->etc." cascade, which can happen if somebody sticks a notify in a constructor.

The general approach is sketched out here.

The code is available on github here.

I have used this approach in several example solutions, which you can find on this site (and code for many of them on github as well).


Licensed under: CC-BY-SA with attribution