Question

First let me say that I'm not that experienced with OO patterns, practices, clean code, etc. I'm actually still learning all these techniques.

The most loosely coupled way would be to use primitive types for constructing new objects or executing methods, but I think that isn't practical. Is this correct? It seems more prone to errors: I could pass in, let's say, integers that represent IDs that simply don't exist. If I used an object instead, I would actually know that it holds valid data; otherwise the object would not have been created (exception) or would be in an invalid state that I have to check for.

This article says that it is evil to use concrete objects for this, and that instead I should hand over their interfaces (as you all know, I guess). Changes to the concrete type (not the interface) would cause the dependent type to "break down". Is that so in all cases? Is this also true for a closed single-project environment? I would only understand that if interfaces, once written, were untouchable and never modified/refactored again.


Solution

Use whatever fits the situation. Use a primitive only if it makes sense to; the same goes for concrete types. It's true that it's better to be 'coupled' to an abstraction; however, in many cases you won't have one and you don't need one. You can extract interfaces from all the objects you're using, but it would be pointless if you don't have a real reason to do so (polymorphism).

You have to think. You can start with a concrete type and then observe that you really need an abstraction rather than a concrete type (for example, abstracting the storage is a very common occurrence when you know the storage can change). Instead of depending on a MySQL database object, an object can depend on a Database abstraction (abstract class or interface), which allows you to switch to any implementation (MySQL, MS SQL, or even a NoSQL store). However, if you're coding a small app where you'll only ever use MySQL, just use the concrete type directly.
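For example, a minimal sketch in Java (the `Database`, `MySqlDatabase`, and `ReportGenerator` names are invented for illustration, not taken from any particular library):

```java
import java.util.List;

// The abstraction: callers only care that *some* storage can run a query.
interface Database {
    List<String> query(String sql);
}

// One concrete implementation; an MS SQL or NoSQL version could be added later
// without touching the code that depends only on Database.
class MySqlDatabase implements Database {
    @Override
    public List<String> query(String sql) {
        // talk to MySQL here; canned data keeps the sketch self-contained
        return List.of("row 1", "row 2");
    }
}

// Depends on the abstraction, not on MySqlDatabase directly.
class ReportGenerator {
    private final Database db;

    ReportGenerator(Database db) {
        this.db = db;
    }

    String buildReport() {
        return String.join(System.lineSeparator(), db.query("SELECT * FROM sales"));
    }
}
```

In a small app that will only ever talk to MySQL, `ReportGenerator` could just take a `MySqlDatabase` directly and the interface would be pure ceremony.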

There is one more reason you might want to extract interfaces, and that is testing. Of course, extract an interface only if the concrete type will be used as a dependency and it has complex enough behavior. If a dependency is just a simple DTO (Data Transfer Object), you don't need to abstract it.
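Continuing the hypothetical `Database`/`ReportGenerator` sketch above, a test can then substitute a trivial hand-rolled fake without any mocking framework:

```java
import java.util.List;

// A hand-rolled fake of the Database interface, used only in tests.
class FakeDatabase implements Database {
    @Override
    public List<String> query(String sql) {
        return List.of("fixed test row");   // canned data, no real storage involved
    }
}

class ReportGeneratorTest {
    void buildsReportFromQueryResults() {
        ReportGenerator generator = new ReportGenerator(new FakeDatabase());
        if (!generator.buildReport().contains("fixed test row")) {
            throw new AssertionError("report should contain the canned row");
        }
    }
}
```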

Most of this stuff comes with experience, but a rule of thumb is to start with concrete types, then abstract them if needed. If an object already implements an interface that includes the functionality you want, use that interface directly.

OTHER TIPS

I guess that this question raises many issues. I'll try to keep things as short as possible:

I could pass in, let's say, integers that represent IDs that simply don't exist.

From my point of view, programs are (computational) models that represent your problem domain (an analogy can be made with physicists or astronomers writing equations to represent a phenomenon). When you model with objects, what you are doing is creating a representation of that domain using some particular rules. So, going back to your question, you could represent what is conceptually an ID with an integer, but then you would have a concept in your problem domain that is not properly represented (because, for example, there are integers that are not valid IDs).

Also, besides the conceptual issue, the problem is that you can't add (and thus delegate) new behavior to an integer, and even if you could (e.g. in Smalltalk everything is an object and you can extend any class), it would also be wrong from the modeling point of view. As a general rule of thumb, I consider a model to be lacking an abstraction when I have to write behavior in an object that shouldn't have that responsibility. In this case it would be something like having a Util class with a class method isValidId.
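A minimal sketch of that idea in Java (the `CustomerId` type and its "must be positive" rule are invented for illustration):

```java
// Instead of passing a bare int and checking Util.isValidId(id) somewhere else,
// the validity rule lives inside the concept it belongs to.
final class CustomerId {
    private final int value;

    CustomerId(int value) {
        if (value <= 0) {                 // invalid data can never produce an instance
            throw new IllegalArgumentException("not a valid customer id: " + value);
        }
        this.value = value;
    }

    int value() {
        return value;
    }
}
```

Any code that receives a `CustomerId` can then rely on it being well-formed, which is exactly the point of the next quote.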

If I used an object instead, I would actually know that it holds valid data; otherwise the object would not have been created (exception) or would be in an invalid state that I have to check for.

Agree 100%. I've written a couple of articles about this that you may find useful (disclaimer: I work at Quanbit Research).

This article says that it is evil to use concrete objects for this, and that instead I should hand over their interfaces (as you all know, I guess). Changes to the concrete type (not the interface) would cause the dependent type to "break down". Is that so in all cases?

The story involving objects, types, and interfaces is quite long. To sum up: ideally you should program against interfaces and not concrete classes, since (in theory) you should only care that a given object (e.g. a parameter) implements a set of messages with predefined semantics. However, if you go down this road, in practice you will see that one class generally implements more than one interface, and the bookkeeping of keeping all the interfaces in sync with the classes is prohibitive. I usually work with dynamically typed languages, so in most cases this is not an issue for me, but if I had to work in a statically typed language, I would use interfaces where the system has to interface with code from outside the project, or in APIs between modules. In other words, I would try to lower the coupling at "the boundaries" of the system.
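As a toy illustration of keeping coupling low at the boundaries (all names here are made up): one concrete class inside a module can implement several small interfaces, and only the interfaces that other modules actually see need to stay stable:

```java
import java.util.HashMap;
import java.util.Map;

// Small interfaces exposed at the module boundary.
interface OrderReader {
    String findOrder(int id);
}

interface OrderWriter {
    void saveOrder(int id, String payload);
}

// Inside the module, one concrete class happens to implement both;
// code in other modules only ever sees OrderReader or OrderWriter.
class InMemoryOrderStore implements OrderReader, OrderWriter {
    private final Map<Integer, String> orders = new HashMap<>();

    @Override
    public String findOrder(int id) {
        return orders.get(id);
    }

    @Override
    public void saveOrder(int id, String payload) {
        orders.put(id, payload);
    }
}
```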

Is this also true for a closed single-project environment? I would only understand that if interfaces, once written, were untouchable and never modified/refactored again.

I have to disagree here. A program, being a computational model, reflects what we know about our problem domain at a given point in time. As such, the more we work on it, the more we know about it. Programming involves learning, and as we learn we understand things better; thus our models change. And as our models change, the elements we use to represent them (like classes or interfaces) also change. As time goes by you will see that your model becomes more robust, conceptual changes will slow down, and at some point you will have a stable one. But changes and refactorings are things that you should expect :).

HTH

I think the answer (as it often is) is 'it depends'. Decisions are affected by the following:

  1. What types of objects are you working with? Domain objects (representing concepts in the business domain) or Services (providing find or save operations, for example)?

  2. How big / complex is your application?

  3. Do you 'own' all the objects you're working with? Is it likely that an object you're using will be changed by someone else outside of your control?

On projects complex enough to justify a Domain Model I like to use the following setup:

  1. A Data Access Layer which contains 'finder' Service objects which take ids and return domain objects, and 'save' Service objects which take domain objects.

  2. A Domain Model which contains domain objects which only take other domain objects in their methods.

  3. A Service Layer which takes ids, uses the Data Access Layer to retrieve domain objects, triggers domain operations in the Domain Model, then uses the Data Access Layer to save the changes.

  4. One or more UI layers which use the Service Layer, providing ids.

I put Service objects in the Data Access Layer and Service Layer behind interfaces because I want the option to easily mock those components out in my unit tests. I don't generally bother making interfaces for my domain objects unless I find a specific benefit in doing so.
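A compressed sketch of that layering in Java (the `Order`, `OrderFinder`, `OrderSaver`, and `OrderService` names are placeholders of mine, not part of the original design):

```java
// Domain Model: domain objects only deal with other domain objects.
class Order {
    private boolean shipped;

    void ship() {
        shipped = true;
    }

    boolean isShipped() {
        return shipped;
    }
}

// Data Access Layer: 'finder' and 'save' services behind interfaces
// so unit tests can mock them out.
interface OrderFinder {
    Order findById(int id);
}

interface OrderSaver {
    void save(Order order);
}

// Service Layer: takes ids from the UI, loads domain objects,
// triggers the domain operation, then saves the changes.
class OrderService {
    private final OrderFinder finder;
    private final OrderSaver saver;

    OrderService(OrderFinder finder, OrderSaver saver) {
        this.finder = finder;
        this.saver = saver;
    }

    void shipOrder(int orderId) {
        Order order = finder.findById(orderId);
        order.ship();
        saver.save(order);
    }
}
```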

Finally, where class A holds a concrete reference to class B, I don't see how a change to a part of class B that has nothing to do with the part of B's interface that class A actually uses would break class A.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow