Question

I've read a lot about the dependency inversion principle, but I still can't apply it to my case. I just don't know when I should apply it and when not. I'm writing a simple application in Java to generate invoices. For now, I have the basic classes Client, Product, InvoiceLine and Invoice. Should these classes communicate through interfaces? For instance, I have a method in Product for getting the name of the product:

public String getName() {
    return name;
}

And I use this method in the Invoice class:

public void addLineToInvoice(Product product, int quantity) {
    rows.add(new InvoiceLine(rows.size(), product.getName(), quantity, product.getPrice()));
}

Now, should I create an interface for Product? Or is it unnecessary?


Solution

(Disclaimer: I understand this question as "applying the Dependency Inversion Principle by injecting objects through interfaces into other objects' methods", a.k.a. "Dependency Injection", DI for short.)

Programs were written in the past with no Dependency Injection or DIP at all, so the literal answer to your question is obviously "no, using DI or the DIP is not necessary".

So first you need to understand why you would use DI: what is your goal with it? A standard "use case" for applying DI is "simpler unit testing". Referring to your example, DI could make sense under the following conditions:

  • you want to unit test addLineToInvoice, and

  • creating a valid Product object is a very complex process, which you do not want to become part of the unit test (imagine the only way to get a valid Product object is to pull it from a database, for example)

In such a situation, making addLineToInvoice accept an object of type IProduct and providing a MockProduct implementation that can be instantiated more simply than a Product object could be a viable solution. But if a Product can easily be created in memory by some standard constructor, this would be heavily overdesigned.
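To make that concrete, here is a minimal sketch of what this could look like. IProduct and MockProduct are hypothetical names, and the BigDecimal price type is my assumption, since the original code does not show how Product stores its price:

import java.math.BigDecimal;

// Hypothetical abstraction over Product (only the members Invoice actually needs)
public interface IProduct {
    String getName();
    BigDecimal getPrice();
}

// Trivial in-memory implementation, cheap to create inside a unit test
public class MockProduct implements IProduct {
    private final String name;
    private final BigDecimal price;

    public MockProduct(String name, BigDecimal price) {
        this.name = name;
        this.price = price;
    }

    @Override
    public String getName() { return name; }

    @Override
    public BigDecimal getPrice() { return price; }
}

Invoice would then depend on the abstraction instead of the concrete class:

public void addLineToInvoice(IProduct product, int quantity) {
    rows.add(new InvoiceLine(rows.size(), product.getName(), quantity, product.getPrice()));
}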

DI and the DIP are not ends in themselves; they are a means to an end. Use them accordingly.

OTHER TIPS

At its core, DI means streamlining your dependencies. Rather than making A, B and C aware of each other (knowing each other's types), you introduce another thing, D. You then make A, B and C know D but not each other. This kind of decoupling gives you more flexibility. You can easily rework A, B or C without the need to revisit any of the others. But it comes at a price: some effort and housekeeping. As your system grows, the benefits will eventually outweigh the effort.
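As a minimal sketch of that idea (the names A, B and D are placeholders, not anything from the question), A depends only on the interface D and receives a concrete implementation such as B from the outside:

// The shared abstraction D
interface D {
    void handle(String request);
}

// One concrete implementation; C could be another
class B implements D {
    @Override
    public void handle(String request) {
        // ...
    }
}

// A knows only D, never B or C
class A {
    private final D collaborator;

    A(D collaborator) {          // the dependency is injected
        this.collaborator = collaborator;
    }

    void doWork() {
        collaborator.handle("work");
    }
}

Reworking B, or swapping in C, now never forces a change to A as long as D stays stable.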

It is like building a tool rack. If you only have a screwdriver and a hammer, it is more convenient to just leave them lying on the bench. As you get more tools, it gets harder to find what you need and you will need some ordering system. It is up to you to choose the moment to implement that ordering system. You may never need it, but if you know more tools are coming you might as well start with it.

The best way to learn about these things is to run into problems once for not using them. As long as you have no picture of the benefits and you do not see how it is going to help you with what you are doing, it may not be worth the trouble yet.

In my field, the DIP is just too impractical in many cases. The cost of virtual dispatch for the CPU in our lowest-level modules (memory allocators, core data structures, etc.), even ignoring the extra programmer overhead of creating abstract interfaces and testing them with mock objects, is just too high to maintain a competitive performance advantage in an industry where users always want more. This isn't an opinion formed in the absence of measurements. It's not a hunch.

Low-level concretes are usually performance-critical concretes in our cases (computer graphics, including things like image processing), and we usually can't afford to abstract them in any way that imposes runtime costs. It would certainly be extremely convenient if I could abstract away the low-level details of an image, like its pixel format, in favor of dynamic dispatch just to do things like set or get a specific pixel, but we simply can't afford it from a runtime perspective... or even the programmer overhead if we tried to abstract all these things with static polymorphism and used elaborate code generation techniques with something like C++ templates. In the zealous pursuit of eliminating logical redundancy in the latter case, we'd skyrocket build times and the expertise required to maintain code full of recursive template metaprogramming combined with SIMD intrinsics, even in the absence of runtime costs. If an image uses 32-bit single-precision floating point for its channels, we can't abstract away such details without major costs. It would certainly be so much simpler if we could, but we simply can't without our competition leapfrogging ahead of us in terms of user interactivity and responsiveness.

I used to be a C++ template metaprogramming zealot, so eager to keep using abstractions without runtime costs while touting the idea that these abstractions were "cost-free", back in the 90s when this stuff was just starting to get really popular. All I did was cause tremendous grief for my team by imposing a cost that I was oblivious to until it was inflicted upon me by others later on.

There are no such things as "cost-free" abstractions in my experience, if "cost-free" covers programmer and runtime overheads combined, where a net positive in one is not allowed to produce a net negative in the other. There are cheap abstractions. There are ones where the savings in one area more than compensate for the costs in the other. But there aren't any free ones in my experience, or at least not among the ones we have to maintain ourselves.

We have a tendency to want to future-proof our software, but a future-proofing mindset often yields code that is even more costly to change when it fails to meet future design requirements. YAGNI might be the most important of the software principles, because following it, even when we discover something we do need that we didn't build, tends to be much less costly than discovering we built all sorts of things, especially abstractions, that we didn't actually need or that were too generalized and insufficiently tailored to the problem at hand.

So my answer, in my opinion at least (and keep in mind my bias given my field and domain, because I'm never speaking for everybody in any of my opinions), is "no". The DIP is actually one of the most useless principles, in my blunt opinion, although that's speaking only with respect to the design requirements I work with. We simply can't afford to always sandwich abstract interfaces between high-level modules and low-level ones. We can't even afford to do it most of the time. We can usually afford it at the mid- to high-level end of the spectrum, and obviously we can abstract things like file I/O with trivial cost, since the cost of the functions involved makes things like virtual dispatch trivial in comparison.

DI is the way you keep your code at least minimally testable. Being dependent upon an interface rather than a concrete implementation gives you a chance to mock some dependencies while keeping others as they are in production, and to make sure they match expectations.
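As a rough sketch of the kind of test this enables (assuming JUnit 5, the IProduct/MockProduct types sketched earlier, and a hypothetical getRows() accessor on Invoice):

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.math.BigDecimal;
import org.junit.jupiter.api.Test;

class InvoiceTest {

    @Test
    void addLineToInvoice_addsOneRow() {
        Invoice invoice = new Invoice();
        // MockProduct stands in for whatever it takes to build a real Product
        IProduct product = new MockProduct("Widget", new BigDecimal("9.99"));

        invoice.addLineToInvoice(product, 3);

        // getRows() is assumed here purely for the sake of the example
        assertEquals(1, invoice.getRows().size());
    }
}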

That's all it does, though. It doesn't make your code any better by itself. You may still depend upon terribly bad interfaces with awful contracts. You may still rely on global state somewhere under the hood, or build upon god objects that implement tens of different interfaces in one place and horrify your project.

So, neither overestimate it nor underestimate it.

Licensed under: CC-BY-SA with attribution