Question

There has been a lot of discussion on the subject of the "Open Closed Principle" on Stack Overflow. It seems, however, that a more relaxed interpretation of the principle is generally prevalent, so that, for example, Eclipse is considered open for modification through plug-ins.

According to strict OCP, you should modify the original code only to fix bugs, not to add new behaviour.

Are there any good examples of a strict interpretation of OCP in public or open-source libraries, where you can observe the evolution of a feature through OCP: there is a class Foo with a method bar(), and then in the next version of the library there is a FooDoingAlsoX with a foo2() method, where the original class has been extended and the original code was not modified?

EDIT: According to Robert C. Martin: "The binary executable version of the module, whether a linkable library, a DLL, or a Java .jar remains untouched"*. I have never seen libraries kept closed; in practice, new behaviour is added to a library and a new version is published. According to OCP, new behaviour belongs in a new binary module.

*Agile Software Development, Principles, Patterns, and Practices by Robert C. Martin

Was it helpful?

Solution

The OCP principle says that a class shall be open for extension but closed for modification. The key to achieving this is abstraction. If you also read the DIP principle you'll find that abstractions should not depend upon details; details should depend upon abstractions. In your example you have details in your interface (two specific methods, bar() and foo2()). To fully implement OCP you should try to avoid such details, for example by moving them behind the abstraction and having one general foo method with different implementations.
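
As a minimal sketch of that refactoring (the names Foo, bar() and foo2() come from the question; the FooOperation abstraction and its client are hypothetical), each behaviour becomes an implementation of one general abstraction instead of a new method on the original class:

    // Hypothetical sketch: one general abstraction replaces the detailed
    // methods bar() and foo2() from the question.
    interface FooOperation {
        void execute();
    }

    // The original behaviour, formerly Foo.bar().
    class BarOperation implements FooOperation {
        public void execute() { /* original behaviour */ }
    }

    // The new behaviour, formerly FooDoingAlsoX.foo2(); added as a new class
    // without touching BarOperation or any client of FooOperation.
    class Foo2Operation implements FooOperation {
        public void execute() { /* new behaviour */ }
    }

    // Clients depend only on the abstraction (this is where DIP comes in).
    class FooClient {
        private final FooOperation operation;
        FooClient(FooOperation operation) { this.operation = operation; }
        void run() { operation.execute(); }
    }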

For example, take a look at this interface in SolrNet: https://github.com/mausch/SolrNet/blob/master/SolrNet/ISolrCommand.cs It is a general command that only tells that a command can be executed; it doesn't give more details than that.

The details instead lie in the implementations of the interface: https://github.com/mausch/SolrNet/tree/master/SolrNet/Commands

As you can see, you can add as many commands as you wish without changing the implementation of any other class. The specific implementations can hereby be considered closed for modification, while the interface allows us to extend the functionality with new commands and is hereby open for extension.
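
To make the pattern concrete, here is a rough Java sketch in the same spirit (the names are hypothetical; the real ISolrCommand is C# and its signature differs):

    // Command-style extension point: every new command is a new class.
    interface Command {
        String execute(Connection connection);
    }

    interface Connection {
        String post(String path, String body);
    }

    class DeleteCommand implements Command {
        public String execute(Connection connection) {
            return connection.post("/update", "<delete>...</delete>");
        }
    }

    // A brand-new command added in a later version: no existing class changes.
    class OptimizeCommand implements Command {
        public String execute(Connection connection) {
            return connection.post("/update", "<optimize/>");
        }
    }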

(SolrNet isn't extraordinary in any way; I just used examples from this project because I happened to have it in my browser when I read this post. Almost all well-coded OO projects make use of the OCP principle in one way or another.)

EDIT: If you want examples of this on the binary level you can, for example, take a look at nopCommerce (http://nopcommerce.codeplex.com/releases/view/69081), where you can add your own shipping providers, payment providers or exchange rate providers without even touching the original DLL, by implementing a set of interfaces. And again, there is nothing extraordinary about nopCommerce; it was just the first project that came to mind because I used it a couple of days ago ;)
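
nopCommerce is .NET, but to stay with Java for the sketches, the same binary-level idea can be illustrated with java.util.ServiceLoader: the host binary only knows an interface, and new providers ship as separate JARs (the ShippingProvider interface below is hypothetical, not nopCommerce's actual API):

    import java.math.BigDecimal;
    import java.util.ServiceLoader;

    // Hypothetical host-side extension point.
    interface ShippingProvider {
        String name();
        BigDecimal quote(double weightKg);
    }

    class ShippingQuoteService {
        // Providers are discovered at runtime from any JAR on the classpath
        // that declares an implementation under META-INF/services/ (keyed by
        // the interface's fully qualified name); the host is never rebuilt.
        void printQuotes(double weightKg) {
            for (ShippingProvider provider : ServiceLoader.load(ShippingProvider.class)) {
                System.out.println(provider.name() + ": " + provider.quote(weightKg));
            }
        }
    }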

OCP is not a principle that shall only be used at the binary level, though; good OOD uses OCP, not everywhere, but at all levels where it is suitable ;) "Strict" OCP at the binary level is not always suitable and would add an extra level of complexity if you used it in every single situation. It is mostly interesting when you want to change an implementation at runtime or when you want to let external developers extend your interfaces. You should always keep the OCP principle in mind when you design your interfaces, but you should not see it as a law, rather as a principle to be applied in the right situations.

I guess you refer to Agile Software Development, Principles, Patterns, and Practices when you quote Robert C. Martin. If so, also read the conclusion in the same chapter, where he says about the same thing as I did above. If you, for example, read his book Clean Code, he gives a more nuanced explanation of the OCP principle, and I would say the quote above is a bit unfortunate since it can lead people to think that you should always put new code in new DLLs, JARs or libs, when the truth is that you should always consider the context.

I think you should rather take a look at Martin's more up-to-date whitepaper about OCP, http://objectmentor.com/resources/articles/ocp.pdf (which he also refers to in his later book Clean Code). There he never refers to separate binaries, but rather to "classes, modules, functions". I think this shows that Martin does not mean just binary extension when he speaks about OCP, but also extension of classes and functions, so binary extension is not more "strict" than the class extension in my first example.

Other tips

I am not aware of really good examples, but I think there might be a reason for the more "relaxed interpretation" (for example here on SO):

To fully realize the OCP principle in a real-world project you need to do the coupling via lean interfaces (see ISP and DIP for this) and dependency injection (either property- or constructor-based)... otherwise you will very quickly either get stuck or need to resort to the "relaxed interpretation"...
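
As a small sketch of what that coupling looks like (hypothetical names), the high-level class receives its collaborator through the constructor and never needs to change when new implementations are added:

    // Lean interface (ISP): clients see only what they need.
    interface PriceCalculator {
        double priceFor(int quantity);
    }

    // Depends on the abstraction and receives it via constructor injection
    // (DIP), so new pricing rules never force a change here (OCP).
    class Checkout {
        private final PriceCalculator calculator;

        Checkout(PriceCalculator calculator) {
            this.calculator = calculator;
        }

        double total(int quantity) {
            return calculator.priceFor(quantity);
        }
    }

    // A new pricing rule is a new class, not an edit to an existing one.
    class BulkDiscountCalculator implements PriceCalculator {
        public double priceFor(int quantity) {
            return quantity * (quantity >= 100 ? 0.9 : 1.0);
        }
    }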

some interesting links in this regard:

Background

On page 100 of PPP, Robert Martin says

"Closed for modification"
Extending the behavior of a module does not result in changes to the source or binary code of the module. The binary executable version of the module, whether a linkable library, a DLL, or a Java .jar, remains untouched.

Also on page 103 he discusses an example, written in C, where a non-OCP design results in recompiling the existing classes:

So, not only must we change the source code of all switch/case statements or if/else chains, but we also must alter the binary files (via recompilation) of all the modules that use any of the Shape data structures. Changing the binary files means that any DLLs, shared libraries, or other kinds of binary components must be redeployed.

It's good to remember that this book was published in 2003 and many of its examples use C++, a language notorious for long compile times unless header file dependencies are handled well (developers from Remedy mentioned in one presentation that Alan Wake's full build takes only about 2 minutes).

So when discussing binary compatibility on a small scale (i.e. within one project), one benefit of OCP (and DIP) is faster compile times, which is less of an issue with modern languages and machines. But on a large scale, when a library is used by many other projects, especially if their code is not in our control, the benefit of not having to release new versions of the software still applies.

Example

As an example of an open source library which follows OCP in binary compatibility, look at JUnit. There are dozens of testing frameworks that rely on JUnit's @RunWith annotation and Runner abstraction so that they can be run with the JUnit test runner, without having to change JUnit, Maven, IDEs etc.
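
To give an idea of that extension point, here is a deliberately minimal custom runner (a sketch; real framework runners such as Spring's or Mockito's do far more). JUnit instantiates it for any test class annotated with @RunWith(MinimalRunner.class), without any change to JUnit itself:

    import org.junit.runner.Description;
    import org.junit.runner.Runner;
    import org.junit.runner.notification.RunNotifier;

    public class MinimalRunner extends Runner {
        private final Class<?> testClass;

        // JUnit calls this constructor, passing the annotated test class.
        public MinimalRunner(Class<?> testClass) {
            this.testClass = testClass;
        }

        @Override
        public Description getDescription() {
            return Description.createSuiteDescription(testClass);
        }

        @Override
        public void run(RunNotifier notifier) {
            Description test = Description.createTestDescription(testClass, "minimalTest");
            notifier.fireTestStarted(test);
            // ...a real runner would discover and invoke the test methods here...
            notifier.fireTestFinished(test);
        }
    }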

JUnit's more recently added @Rule annotation likewise allows test writers to plug custom behavior into standard JUnit tests, which previously would have required a custom test runner. Once more, an example of library-level OCP.
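
For example, a tiny logging rule (again just a sketch) is enough to wrap every test with custom behaviour, without modifying JUnit:

    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.rules.TestRule;
    import org.junit.runner.Description;
    import org.junit.runners.model.Statement;

    public class LoggingRuleTest {

        // A custom rule that wraps each test method with extra behaviour.
        static class LogRule implements TestRule {
            public Statement apply(final Statement base, final Description description) {
                return new Statement() {
                    @Override
                    public void evaluate() throws Throwable {
                        System.out.println("Starting: " + description.getDisplayName());
                        try {
                            base.evaluate(); // run the actual test
                        } finally {
                            System.out.println("Finished: " + description.getDisplayName());
                        }
                    }
                };
            }
        }

        @Rule
        public final TestRule logRule = new LogRule();

        @Test
        public void example() {
            // The rule wraps this test without a custom runner.
        }
    }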

By contrast, TestNG does not follow OCP but contains JUnit-specific checks to execute TestNG and JUnit tests differently. A representative snippet can be found in the TestRunner.run() method:

  if(test.isJUnit()) {
    privateRunJUnit(test);
  }
  else {
    privateRun(test);
  }

As a result, even though the TestNG test runner has in some respects more features (for example, it supports running tests in parallel), other testing frameworks do not use it, because it's not extensible to support other testing frameworks without modifying TestNG. (TestNG has a way to plug in custom test runners using the -testrunfactory argument, but AFAIK it allows only one type of runner per suite. So it would not be possible to use many different testing frameworks in one project, unlike with JUnit.)

Conclusion

However, in most situations OCP is used within an application or library, in which case both the base module and its extensions are packaged inside the same binary. In that situation OCP is used to improve the maintainability of the source code, not to avoid redeploys and new releases. The possible benefit of not having to recompile an unchanged file is still there, but since compile times are so low with most modern languages, that's not very important.

The thing to always keep in mind is that following OCP is expensive, as it makes the system more complex. Robert Martin talks about this in PPP on page 105 and in the conclusion of the chapter. OCP should be applied carefully, and only for the most probable changes. You should not preemptively put in the hooks to follow OCP; you should put them in only after a change happens that needs them. Thus it is unlikely to find a project where all new features have been added without changing existing classes - unless somebody does it as an academic exercise (my intuition says that it would be very hard and the resulting code would not be clean).

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow