Question

This is basically the same problem as coding to interfaces, but played out in the real world, where engineering complexities such as the immutability of published interfaces and implementations come into play.


Consider the following situation:

An OS-level object-oriented library with a set of published interfaces and an implementation provided by the OS.

Some extension points are provided so that third parties can extend specific behaviors by implementing a subset of the published interfaces and registering them with the OS.

A programmer looks at the whole set of published interfaces and says to himself, "It seems I can re-implement most of the functionality (with better algorithms), including the areas that aren't designated as extensible." And so the programmer worketh.

Only when the programmer's own implementation is mashed up (COM-style) with the OS implementation and fails miserably does the programmer realize that the suite of objects implemented by the OS relies heavily on communication through non-published interfaces, because the published interface was minimalistic and blocked many opportunities for optimization(*).

(*) In encoding/decoding/transcoding tasks (data/image/sound/video compression), it is well known that certain pipeline stages can cancel each other out if they perform exactly inverse operations, e.g. G(F(x)) == x.
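To make the cancellation idea concrete, here is a minimal sketch. The stage names and the delta-coding operation are invented for illustration; any pair of exact inverses would do. Since G(F(x)) == x, a pipeline that feeds F directly into G can drop both stages entirely:

```java
import java.util.function.UnaryOperator;

// Hypothetical pipeline stages: F encodes, G decodes, and G(F(x)) == x,
// so a pipeline containing "... -> F -> G -> ..." can skip both stages.
public class CancelDemo {
    // F: delta-encode (each element becomes the difference from its predecessor)
    static UnaryOperator<int[]> encode = data -> {
        int[] out = data.clone();
        for (int i = out.length - 1; i > 0; i--) out[i] -= out[i - 1];
        return out;
    };
    // G: delta-decode -- the exact inverse of F
    static UnaryOperator<int[]> decode = data -> {
        int[] out = data.clone();
        for (int i = 1; i < out.length; i++) out[i] += out[i - 1];
        return out;
    };

    public static void main(String[] args) {
        int[] x = {5, 7, 12, 12, 20};
        int[] roundTrip = decode.apply(encode.apply(x));        // G(F(x))
        System.out.println(java.util.Arrays.equals(roundTrip, x)); // true
    }
}
```

The optimization the question describes is detecting this situation at pipeline-assembly time and wiring the stages around both F and G, rather than paying for two passes that produce the identity.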

A typical interaction is like:

  1. Consumer asks Producer whether it implements a special interface X that only vendor V knows.
  2. If yes, Consumer talks to Producer through interface X to see whether the two stages are exact inverses (#1).
  3. If yes, Consumer asks Producer for its "upper stage" (i.e. bypassing Producer) (#2), so that both stages are effectively cancelled out.
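The three steps above might look like the following sketch. `InternalStage`, `isInverseOf` (#1), and `upstream` (#2) are invented names standing in for the interfaces only vendor V knows; only `Stage` plays the role of the published contract:

```java
// The minimal *published* interface third parties can implement.
interface Stage {
    byte[] process(byte[] input);
}

// Unpublished, vendor-internal interface "X" -- hypothetical names.
interface InternalStage extends Stage {
    boolean isInverseOf(InternalStage other); // capability (#1)
    Stage upstream();                         // capability (#2)
}

class Consumer {
    // Decide which stage this consumer should actually read from.
    static Stage resolveProducer(Stage producer, InternalStage self) {
        if (producer instanceof InternalStage p   // step 1: does it speak X?
                && p.isInverseOf(self)) {         // step 2: exact inverses?
            return p.upstream();                  // step 3: bypass both stages
        }
        return producer;                          // baseline path for outsiders
    }
}
```

A third-party `Stage` implementation never enters the `if` branch, which is exactly why it is stuck with the baseline path.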

(#1) and (#2) are capabilities missing from the minimalistic published interface. At first glance it seems the vendor could have provided them as well, but chose not to.

It might also be the case that providing those performance-oriented interfaces would severely pollute the namespace.


The end result is that whenever a programmer provides his or her own implementation of a certain class, it will either (i) fail miserably or (ii) perform very slowly, because it cannot interact with the rest of the suite through the performance-enhancing internal interfaces known only to that suite.

Is this problem more frequent with some flavors of object-oriented technology? Or is it more common with some flavors of component-based engineering?

Advocates of OOP will argue that refactoring and publishing those interfaces would solve the problem. This assumes it is possible to distribute a new version of the library along with a new set of interfaces. For some technologies this is not possible.


The way COM matters here is that its QueryInterface method allows run-time querying (and run-time response) of additional interfaces implemented by a class. Its Java equivalent would be a library that makes heavy use of instanceof against interfaces internal to the package.
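A rough sketch of that Java analogue, assuming a hypothetical package-private `FastPath` interface (the names and the bulk-copy capability are invented). Because the interface is package-private, third-party classes compiled outside the vendor's package can never implement it, so the `instanceof` check plays the same gatekeeping role as QueryInterface:

```java
// Package-private interface: visible only inside the vendor's package.
interface FastPath {                 // hypothetical internal interface
    long bulkCopy(long[] dst);       // copy everything in one shot
}

public class VendorConsumer {
    // The published contract only promises element-by-element iteration;
    // the vendor's own classes secretly also implement FastPath.
    public static long drain(Iterable<Long> src, long[] dst) {
        if (src instanceof FastPath fp) {   // run-time capability query,
            return fp.bulkCopy(dst);        // analogous to QueryInterface
        }
        long n = 0;                         // baseline path for outsiders
        for (long v : src) dst[(int) n++] = v;
        return n;
    }
}
```

A third-party `Iterable<Long>` always takes the slow loop, which mirrors the "baseline performance" outcome described above.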

To Reviewers: I'm open to suggestions on trimming down the question to its essence.

The correct term seems to be Mixin, thanks to this question.


This answer is partly relevant, but my example focuses on the lost opportunities for optimization within a single library. Basically, unless a third-party implementer does both of the following:

  1. Implement both the encoder and the decoder
  2. Implement its own proprietary interface on both the encoder and the decoder, in order to detect and bypass operations that cancel each other out

then the implementation will only get "baseline performance" when interoperating with encoders/decoders from other vendors.


Licensed under: CC-BY-SA with attribution