Question

I'm currently in the design phase of a class library and stumbled upon a question similar to "Managing diverse classes with a central manager without RTTI" or "pattern to avoid dynamic_cast".

Imagine there is a class hierarchy with a base class Base and two classes DerivedA and DerivedB that are subclasses of Base. Somewhere in my library there will be a class that needs to hold lists of objects of both types DerivedA and DerivedB. Further suppose that this class will need to perform actions on both types, depending on the type. Obviously I will use virtual functions to implement this behavior. But what if I need the managing class to give me all objects of type DerivedA?
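
To make it concrete, the structure I have in mind looks roughly like this (a simplified sketch; all names are placeholders):

    #include <memory>
    #include <vector>

    class Base {
    public:
        virtual ~Base() = default;
        virtual void DoWork() = 0;   // behavior differs per subclass
    };

    class DerivedA : public Base {
    public:
        void DoWork() override { /* A-specific work */ }
    };

    class DerivedB : public Base {
    public:
        void DoWork() override { /* B-specific work */ }
    };

    class Manager {
    public:
        void DoWorkOnAll() {
            for (auto & obj : objects)
                obj->DoWork();       // virtual dispatch covers the common case
        }
        // But how should this class hand out all objects of type DerivedA?
    private:
        std::vector<std::unique_ptr<Base>> objects;
    };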

Is this an indicator of bad class design because I need to perform actions on only a subset of the class hierarchy?

Or does it just mean that my managing class should not use a single list of Base but two lists instead - one for DerivedA and one for DerivedB? In that case, whenever I need to perform an action on both types, I would have to iterate over two lists. In my case the probability that new subclasses will need to be added to the hierarchy is quite low, and the current number is around 3 or 4 subclasses.


Solution

But what if I need the managing class to give me all objects of type DerivedA?

Is this an indicator of bad class design because I need to perform actions on only a subset of the class hierarchy?

More likely yes than no. If you often need to do this, it is worth questioning whether the hierarchy is the right model. In that case you should separate the objects into two unrelated lists.

Another possible approach is to handle it through virtual methods as well, where e.g. DerivedB gets a no-op implementation for methods that don't apply to it. It is hard to tell without more information.
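
A minimal sketch of that idea, assuming a hypothetical Reconfigure() operation that only DerivedA actually needs:

    class Base {
    public:
        virtual ~Base() = default;
        virtual void Reconfigure() {}   // no-op by default
    };

    class DerivedA : public Base {
    public:
        void Reconfigure() override { /* do the real work */ }
    };

    class DerivedB : public Base {
        // inherits the no-op; nothing to implement here
    };

The manager can then call Reconfigure() on every element of a single list of Base pointers without asking for concrete types.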

OTHER TIPS

It certainly is a sign of bad design if you store (pointers to) objects together that have to be handled differently.

You could, however, just implement this differing behaviour as an empty function in the base class, or use the visitor pattern.
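
A visitor-based sketch could look like this (the CollectA visitor is a made-up example that gathers all DerivedA objects):

    #include <vector>

    class DerivedA;
    class DerivedB;

    class Visitor {
    public:
        virtual ~Visitor() = default;
        virtual void Visit(DerivedA &) = 0;
        virtual void Visit(DerivedB &) = 0;
    };

    class Base {
    public:
        virtual ~Base() = default;
        virtual void Accept(Visitor &) = 0;
    };

    class DerivedA : public Base {
    public:
        void Accept(Visitor & v) override { v.Visit(*this); }
    };

    class DerivedB : public Base {
    public:
        void Accept(Visitor & v) override { v.Visit(*this); }
    };

    // A visitor that reacts only to DerivedA, e.g. to collect all A objects.
    class CollectA : public Visitor {
    public:
        void Visit(DerivedA & a) override { found.push_back(&a); }
        void Visit(DerivedB &)   override {}   // ignore B
        std::vector<DerivedA *> found;
    };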

You can do it in several ways.

  • Try to dynamic_cast to the specific class (this is a brute-force solution; I'd use it only for interfaces, since using it for concrete classes is a kind of code smell, although it will work).
  • Do something like:

    class BaseRequest {};
    // Request type that only DerivedA knows how to handle.
    class DerivedASupportedRequest : public BaseRequest {};
    

    Then modify your classes to support the method:

    // (...)
    // Each subclass handles the requests it supports and ignores the rest.
    void ProcessRequest(const BaseRequest & request);
    
  • Create a virtual method bool TryDoSth() in a base class; DerivedB will always return false, while DerivedA will implement the required functionality.

  • Alternative to the above: create a method Supports(Action action), where Action is an enum defining possible actions or groups of actions; in that case, calling DoSth() on a class that does not support the given feature should result in an exception being thrown (see the sketch after this list).
  • The base class may have a method ActionXController * GetControllerForX(); DerivedA will return the actual controller, while DerivedB will return nullptr.
  • Similarly, the base class can provide a method BaseController * GetController(Action a).
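
To illustrate the TryDoSth() and Supports(Action) variants from the list above, here is a rough sketch; the Action values and the Export naming are made up for the example:

    #include <stdexcept>

    enum class Action { Print, Export };

    class Base {
    public:
        virtual ~Base() = default;

        // Variant: the caller asks first, then acts.
        virtual bool Supports(Action) const { return false; }

        // Variant: the object reports whether it did anything.
        virtual bool TryExport() { return false; }

        // Calling an unsupported action throws.
        virtual void Export() { throw std::logic_error("Export not supported"); }
    };

    class DerivedA : public Base {
    public:
        bool Supports(Action a) const override { return a == Action::Export; }
        bool TryExport() override { /* perform the export */ return true; }
        void Export() override { /* perform the export */ }
    };

    // DerivedB simply keeps the base-class defaults.
    class DerivedB : public Base {};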

You asked if it is a bad design. I believe that it depends on how much functionality is common and how much is different. If you have 100 common methods and only one that differs, it would be weird to hold the data in separate lists. However, if the number of differing methods is noticeable, consider changing the design of your application. This may serve as a general rule, but there are exceptions; it's hard to tell without knowing the context.

Licensed under: CC-BY-SA with attribution