Question

Dependency injection frameworks like Google Guice give the following motivation for their usage (source):

To construct an object, you first build its dependencies. But to build each dependency, you need its dependencies, and so on. So when you build an object, you really need to build an object graph.

Building object graphs by hand is labour intensive (...) and makes testing difficult.

But I don't buy this argument: Even without a dependency injection framework, I can write classes which are both easy to instantiate and convenient to test. E.g. the example from the Guice motivation page could be rewritten in the following way:

class BillingService
{
    private final CreditCardProcessor processor;
    private final TransactionLog transactionLog;

    // constructor for tests, taking all collaborators as parameters
    BillingService(CreditCardProcessor processor, TransactionLog transactionLog)
    {
        this.processor = processor;
        this.transactionLog = transactionLog;
    }

    // constructor for production, calling the production constructors of the collaborators
    public BillingService()
    {
        this(new PaypalCreditCardProcessor(), new DatabaseTransactionLog());
    }

    public Receipt chargeOrder(PizzaOrder order, CreditCard creditCard)
    {
        ...
    }
}

So there may be other arguments for dependency injection frameworks (which are out of scope for this question!), but easy creation of testable object graphs is not one of them, is it?


Solution 3

The "instantiate my own collaborators" approach may work for dependency trees, but it certainly won't work well for dependency graphs that are general directed acyclic graphs (DAGs). In a dependency DAG, multiple nodes can point to the same node, which means that two objects use the same object as a collaborator. This case cannot, in fact, be constructed with the approach described in the question.

If some of my collaborators (or my collaborators' collaborators) should share a certain object, I'd need to instantiate this object myself and pass it down to my collaborators. So I would in fact need to know more than my direct collaborators, and this obviously doesn't scale.
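To make the sharing problem concrete, here is a minimal sketch (all class names are invented for illustration, not taken from the Guice docs): two services must write to the same transaction log, so the code that builds them is forced to construct that shared instance itself and thread it through, even though the log is not one of its direct concerns.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical names for illustration.
interface Log { void log(String entry); }

class InMemoryLog implements Log {
    final List<String> entries = new ArrayList<>();
    public void log(String entry) { entries.add(entry); }
}

class ChargeService {
    private final Log log;
    ChargeService(Log log) { this.log = log; }
    void charge() { log.log("charge"); }
}

class RefundService {
    private final Log log;
    RefundService(Log log) { this.log = log; }
    void refund() { log.log("refund"); }
}

public class SharedCollaboratorDemo {
    public static void main(String[] args) {
        // The root cannot let each service build its own log: it must
        // construct the shared node itself and pass it to both, so it
        // now knows about an indirect dependency of its collaborators.
        InMemoryLog shared = new InMemoryLog();
        ChargeService charge = new ChargeService(shared);
        RefundService refund = new RefundService(shared);
        charge.charge();
        refund.refund();
        System.out.println(shared.entries);  // [charge, refund]
    }
}
```

If each service instead called `new InMemoryLog()` in its own no-arg constructor, as in the question's pattern, the two services would end up with two different logs and the sharing requirement would be silently violated.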

OTHER TIPS

There is an old, ongoing debate about the best way to do dependency injection.

  • The original cut of Spring instantiated a plain object and then injected dependencies through setter methods.

  • But then a large contingent of folks insisted that injecting dependencies through constructor parameters was the correct way to do it.

  • Then, lately, as using reflection became more common, setting the values of private members directly, without setters or constructor args, became the rage.

So your first constructor is consistent with the second approach to dependency injection. It allows you to do nice things like inject mocks for testing.

But the no-argument constructor has this problem. Since it's instantiating the implementation classes for PaypalCreditCardProcessor and DatabaseTransactionLog, it creates a hard, compile-time dependency on PayPal and the Database. It takes responsibility for building and configuring that entire dependency tree correctly.

  • Imagine that the PayPal processor is a really complicated subsystem, and additionally pulls in a lot of support libraries. By creating a compile-time dependency on that implementation class, you are creating an unbreakable link to that entire dependency tree. The complexity of your object graph has just jumped up by an order of magnitude, maybe two.

  • A lot of those items in the dependency tree will be transparent, but a lot of them will also need to be instantiated. Odds are, you won't be able to just instantiate a PaypalCreditCardProcessor.

  • In addition to instantiation, each of the objects will need properties applied from configuration.

If you only have a dependency on the interface, and allow an external factory to build and inject the dependency, then you chop off the entire PayPal dependency tree, and the complexity of your code stops at the interface.
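A minimal sketch of that idea, with invented names (the stand-in `PaypalCreditCardProcessor` here is trivial, unlike a real PayPal integration): the service compiles against the interface only, and a separate factory is the single place that knows the concrete class.

```java
// Invented names for illustration.
interface CreditCardProcessor { String charge(int cents); }

// Stand-in for the heavy PayPal implementation and its dependency tree.
class PaypalCreditCardProcessor implements CreditCardProcessor {
    public String charge(int cents) { return "paypal:" + cents; }
}

class BillingService {
    private final CreditCardProcessor processor;
    BillingService(CreditCardProcessor processor) { this.processor = processor; }
    String chargeOrder(int cents) { return processor.charge(cents); }
}

class ProcessorFactory {
    // Swapping providers means editing this one method, not BillingService.
    static CreditCardProcessor create() { return new PaypalCreditCardProcessor(); }
}

public class FactoryDemo {
    public static void main(String[] args) {
        BillingService service = new BillingService(ProcessorFactory.create());
        System.out.println(service.chargeOrder(500));  // paypal:500
    }
}
```

Note that `BillingService` itself never names the concrete class, so its compile-time dependency tree ends at the interface.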

There are other benefits, such as being able to specify implementation classes in configuration (i.e., at runtime rather than compile time), or having a more dynamic dependency specification that varies, say, by environment (test, integration, production).

For example, let's say that the PayPal processor had three dependent objects, and each of those dependencies had two more. And all those objects have to pull in properties from configuration. The code as-is would assume the responsibility of building all that out, setting properties from configuration, and so on: all concerns that a DI framework would take care of.

It may not seem obvious at first what you're shielding yourself from by using a DI framework, but it adds up and becomes painfully obvious over time. (I speak from the experience of having tried to do it the hard way.)

...

In practice, even for a really tiny program, I find I end up writing in a DI style, and break up the classes into implementation / factory pairs. That is, if I'm not using a DI framework like Spring, I just throw together some simple factory classes.

That provides the separation of concerns so that my class can just do its thing, and the factory class takes on the responsibility of building and configuring stuff.

Not a required approach, but FWIW

...

More generally, the DI / interface pattern reduces the complexity of your code by doing two things:

  • abstracting downstream dependencies into interfaces

  • "lifting" upstream dependencies out of your code and into some sort of container

On top of that, since object instantiation and configuration is a pretty familiar task, the DI framework can achieve a lot of economies of scale through standardized notation & using tricks like reflection. Scattering those same concerns around classes ends up adding a lot more clutter than one would think.

  1. When you swim at the shallow end of the pool, everything is "easy and convenient". Once you get past a dozen or so objects, it is no longer convenient.
  2. In your example, you have bound your billing process forever and a day to PayPal. Suppose you want to use a different credit card processor? Suppose you want to create a specialty credit card processor that is constrained on the network? Or you need to test credit card number handling? You have created non-portable code: "write once, use only once because it depends on the specific object graph for which it was designed."

By binding your object graph early in the process, i.e., hardwiring it into the code, you require both the contract and the implementation to be present. If someone else (maybe even you) wants to use that code for a slightly different purpose, they have to recalculate the entire object graph and reimplement it.

DI frameworks allow you to take a bunch of components and wire them together at runtime. This makes the system "modular", composed of a number of modules that work to each others' interfaces instead of to each others' implementations.

I've not used Google Guice, but I've spent a great deal of time migrating old legacy N-tier applications in .Net to IoC architectures like Onion Architecture that depend on Dependency Injection to decouple things.

Why Dependency Injection?

The purpose of Dependency Injection isn't actually testability; it's to take tightly coupled applications and loosen the coupling as much as possible. (Which has the desirable by-product of making your code a lot easier to adapt for proper unit testing.)

Why should I worry about coupling at all?

Coupling, or tight dependencies, can be a very dangerous thing, especially in compiled languages. You could have a library, DLL, etc. that is very rarely used, yet has an issue that effectively takes the entire application offline. (Your entire application dies because an unimportant piece has an issue... this is bad. REALLY bad.) When you decouple things, you can actually set up your application so it can run even if that DLL or library is missing entirely! Sure, the one piece that needs that library or DLL won't work, but the rest of the application chugs on, happy as can be.

Why do I need Dependency Injection for proper testing

Really, you just want loosely coupled code; Dependency Injection just enables that. You can loosely couple things without IoC, but typically it's more work and less adaptable. (I'm sure someone out there has an exception.)

In the case you've given, I think it would be a great deal easier to just set up dependency injection so I can mock out the code I'm not interested in counting as part of this test. You essentially tell your method: "Hey, I know I told you to call the repository, but instead here's the data it 'should' return," *wink*. Because that data never changes, you know you're only testing the part that uses that data, not the actual retrieval of the data.
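As a sketch of that technique (with invented names; this is hand-rolled fake-object injection rather than any particular mocking library): a fake repository returns canned data, so the test exercises only the logic that uses the data, never the real retrieval.

```java
import java.util.Arrays;
import java.util.List;

// Invented names for illustration. A functional interface, so a test
// can supply a fake with a lambda.
interface OrderRepository { List<Integer> amountsFor(String customer); }

class BillingReport {
    private final OrderRepository repo;
    BillingReport(OrderRepository repo) { this.repo = repo; }

    // The logic under test: sums whatever the repository returns.
    int total(String customer) {
        int sum = 0;
        for (int amount : repo.amountsFor(customer)) sum += amount;
        return sum;
    }
}

public class FakeRepoDemo {
    public static void main(String[] args) {
        // "Here's the data it should return" -- no database involved.
        OrderRepository fake = customer -> Arrays.asList(100, 250, 50);
        BillingReport report = new BillingReport(fake);
        System.out.println(report.total("alice"));  // 400
    }
}
```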

Basically, when testing, you want integration (functionality) tests, which test a piece of functionality from start to finish, and full unit tests, which test each and every piece of code (typically at the method or function level) independently.

The idea is you want to make sure the entire functionality is working, if not you want to know the exact piece of the code that isn't working.

This CAN be done without Dependency Injection, but as your project grows it usually becomes more and more cumbersome to do so without it in place. (ALWAYS assume your project will grow! It's better to have needless practice of useful skills than to find a project ramping up fast and requiring serious refactoring and reengineering after things are already taking off.)

As I mention in another answer, the issue here is that you want class A to depend on some class B without hard-coding which class B is used into the source code of A. This is impossible in Java and C# because the only way to import a class is to refer to it by a globally-unique name.

Using an interface, you can work around the hard-coded class dependency, but you still need to get your hands on an instance of the interface, and you can't call constructors or you're right back at square one. So code that could otherwise create its dependencies pushes that responsibility off to somebody else. And its dependencies are doing the same thing. Now, every time you need an instance of a class, you end up building the entire dependency tree manually, whereas in the case where class A depends on B directly, you could just call new A() and have that constructor call new B(), and so on.

A dependency injection framework attempts to get around that by letting you specify the mappings between classes and building the dependency tree for you. The catch is that when you screw up the mappings, you'll find out at runtime, not at compile time like you would in languages that support mapping modules as a first-class concept.
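A toy, hand-rolled sketch of such a mapping (invented and vastly simpler than Guice, but showing the same shape): interfaces are bound to suppliers of implementations, and a missing or wrong binding only surfaces as a runtime error, invisible to the compiler.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// A toy container, not a real DI framework.
class ToyContainer {
    private final Map<Class<?>, Supplier<?>> bindings = new HashMap<>();

    <T> void bind(Class<T> type, Supplier<? extends T> supplier) {
        bindings.put(type, supplier);
    }

    @SuppressWarnings("unchecked")
    <T> T get(Class<T> type) {
        Supplier<?> s = bindings.get(type);
        if (s == null)  // a misconfigured mapping fails here, at runtime
            throw new IllegalStateException("No binding for " + type.getName());
        return (T) s.get();
    }
}

public class ToyContainerDemo {
    interface Processor { String name(); }
    static class PaypalProcessor implements Processor {
        public String name() { return "paypal"; }
    }

    public static void main(String[] args) {
        ToyContainer container = new ToyContainer();
        container.bind(Processor.class, PaypalProcessor::new);
        System.out.println(container.get(Processor.class).name());  // paypal
        // container.get(Runnable.class) would throw IllegalStateException:
        // the compiler cannot catch the missing binding.
    }
}
```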

I think this is a big misunderstanding here.

Guice is a dependency injection framework. It makes DI automatic. The point made in the excerpt you quoted is about Guice removing the need to hand-write the code that calls that "testable constructor" you presented in your example. It has absolutely nothing to do with dependency injection itself.

This constructor:

BillingService(CreditCardProcessor processor, TransactionLog transactionLog)
{
    this.processor = processor;
    this.transactionLog = transactionLog;
}

already uses dependency injection. You basically just said that using DI is easy.

The problem Guice solves is that, to use that constructor, you must now have object-graph construction code somewhere, manually passing the already instantiated objects as the arguments of that constructor. Guice allows you to have a single place where you configure which real implementation classes correspond to the CreditCardProcessor and TransactionLog interfaces. After that configuration, every time you create a BillingService using Guice, those classes will be passed to the constructor automatically.

This is what a dependency injection framework does. But the constructor you presented is itself already an implementation of the dependency injection principle. IoC containers and DI frameworks are means to automate the corresponding principles, but there's nothing stopping you from doing everything by hand; that was the whole point.
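Doing it by hand typically looks like a "composition root" (a sketch with invented names, mirroring in plain Java what Guice's configuration automates): one place in the program that knows the concrete classes and wires the graph.

```java
// Invented names for illustration.
interface Processor { String charge(int cents); }
interface TxLog { void record(String entry); }

class PaypalProcessor implements Processor {
    public String charge(int cents) { return "ok:" + cents; }
}
class ConsoleTxLog implements TxLog {
    public void record(String entry) { System.out.println(entry); }
}

class Billing {
    private final Processor processor;
    private final TxLog log;
    Billing(Processor processor, TxLog log) {
        this.processor = processor;
        this.log = log;
    }
    String charge(int cents) {
        String receipt = processor.charge(cents);
        log.record(receipt);
        return receipt;
    }
}

public class CompositionRoot {
    // The only place in the program that mentions concrete classes;
    // a DI framework replaces this method with configured bindings.
    static Billing wireBilling() {
        return new Billing(new PaypalProcessor(), new ConsoleTxLog());
    }

    public static void main(String[] args) {
        System.out.println(wireBilling().charge(500));  // ok:500
    }
}
```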

Licensed under: CC-BY-SA with attribution