I have recently been seeing more and more problems similar to the ones described in this article on feature intersections. A related term would be product lines, though I tend to associate that with genuinely different products, whereas I usually encounter these problems in the form of possible product configurations.

The basic idea of this type of problem is simple: you add a feature to a product, but somehow things get complicated due to a combination of other, existing features. Eventually, QA finds a problem with a rare combination of features that no one thought of before, and what should have been a simple bugfix may even turn into major design changes.

The dimensions of this feature intersection problem are of a mind-blowing complexity. Let's say the current software version has N features and you add one new feature. Let's also simplify things by saying that each feature can only be turned on or off; then you already have 2^(N+1) possible feature combinations to consider. For lack of better wording / search terms, I refer to the existence of these combinations as the feature intersection problem. (Bonus points for an answer including reference(s) for a more established term.)
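
As a purely illustrative sketch (the feature names and the 20-feature figure are made up for the example), enumerating every on/off assignment makes the 2^(N+1) growth tangible:

```python
from itertools import product

# Hypothetical feature toggles: N = 3 existing features plus 1 new one.
features = ["search", "export", "offline_mode", "new_feature"]

# Every assignment of on/off to each feature is one configuration.
configurations = list(product([False, True], repeat=len(features)))

print(len(configurations))  # 2^(N+1) = 16 for just 4 features
print(2 ** 20)              # with 20 features: 1,048,576 configurations
```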

Now the question I struggle with is how to deal with this complexity problem at each level of the development process. For obvious cost reasons, it is impractical, to the point of being utopian, to address each combination individually. After all, we try to stay away from exponential-complexity algorithms for good reason, but turning the development process itself into an exponentially sized monster is bound to lead to utter failure.

So how do you get the best result in a systematic fashion that does not explode the budget and is complete in a decent, useful, and professionally acceptable way?

  • Specification: When you specify a new feature - how do you ensure that it plays well with all the other children?

    I can see that one could systematically examine each existing feature in combination with the new feature - but that would be in isolation from the other features. Given the complex nature of some features, this isolated view is often already so involved that it needs a structured approach all by itself, let alone the 2^(N-1) factor caused by the other features that one willingly ignored (a small numeric sketch of this gap follows the list).

  • Implementation: When you implement a feature - how do you ensure that your code interacts / intersects properly in all cases?

    Again, I am wondering about the sheer complexity. I know of various techniques to reduce the error potential of two intersecting features, but none that would scale in any reasonable fashion. I do assume, though, that a good strategy during specification keeps the problem at bay during implementation.

  • Verification: When you test a feature - how do you deal with the fact that you can only test a fraction of this feature intersection space?

    It is tough enough to know that testing a single feature in isolation guarantees nothing anywhere near error-free code, but when your coverage shrinks to a 2^-N fraction of the combination space, it seems like hundreds of tests do not even cover a single drop of water in all the oceans combined. Worse still, the most problematic errors are those that stem from an intersection of features which one would not expect to cause any problems - but how do you test for these if you do not expect such a strong interaction?
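
Here is a minimal sketch (feature names are invented) that puts numbers on the gap described above: checking the new feature against each existing feature in isolation, as described under Specification, grows only linearly, while the full configuration space grows exponentially.

```python
from itertools import product

# Hypothetical existing features plus the one being added.
existing = ["search", "export", "offline_mode", "billing", "sso"]
new_feature = "dark_mode"

# Full configuration space: every on/off assignment to all N+1 features.
full_space = 2 ** (len(existing) + 1)

# Pairwise examination: the new feature against each existing feature,
# in isolation, with all four on/off settings per pair.
pairwise_cases = [
    (new_feature, new_on, other, other_on)
    for other in existing
    for new_on, other_on in product([False, True], repeat=2)
]

print(f"full configurations: {full_space}")           # 64 for N = 5
print(f"pairwise cases     : {len(pairwise_cases)}")  # 4 * N = 20

# Each pairwise case silently stands in for 2^(N-1) = 16 settings of the
# remaining features - the factor the question calls "willingly ignored".
```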

While I would like to hear how others deal with this problem, I am primarily interested in literature or articles analyzing the topic in greater depth. So if you personally follow a certain strategy, it would be nice to include corresponding sources in your answer.
