Question

In regards to unit testing, I was taught that production code shouldn't have test-related code in it.

Well, I feel like I'm breaking that rule every time I try to unit test.

I have a class internal to my assembly, Xyzzy. I want to dependency-inject it into another class and then stub it so I can test that other class in isolation, so I make an interface, IXyzzy. Oops, now I have code in production that's really only there for tests. Even worse, I've kind of gone against what an interface is (it describes what an implementer can do, not what it is): Xyzzy's public interface and IXyzzy are exactly the same, and no one else (except the stubs) implements IXyzzy.
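For concreteness, a minimal sketch of the situation being described (the consumer class ReportBuilder and the Compute member are hypothetical names, not from the original question):

    internal interface IXyzzy
    {
        int Compute(int input);
    }

    // The production implementation; its public surface mirrors the interface.
    internal class Xyzzy : IXyzzy
    {
        public int Compute(int input) => input * 2; // stand-in for real logic
    }

    // The class under test depends on the abstraction, not the concrete type.
    internal class ReportBuilder
    {
        private readonly IXyzzy _xyzzy;

        public ReportBuilder(IXyzzy xyzzy) => _xyzzy = xyzzy;

        public string Build(int input) => $"Result: {_xyzzy.Compute(input)}";
    }

    // In a test, a hand-written stub replaces the real Xyzzy.
    internal class StubXyzzy : IXyzzy
    {
        public int Compute(int input) => 42; // canned value for the test
    }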

That seems like a bad thing to me.

I could create an abstract base class, or make all the public methods I want to test on Xyzzy Overridable/virtual, but that feels wrong too: Xyzzy isn't designed for inheritance and, from a YAGNI perspective, will never be inherited from.
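For comparison, the virtual-method alternative described above would look roughly like this (again with a hypothetical Compute member):

    internal class Xyzzy
    {
        // Made virtual solely so a test subclass can replace the behavior.
        public virtual int Compute(int input) => input * 2;
    }

    // Test-only subclass that overrides the production behavior.
    internal class TestableXyzzy : Xyzzy
    {
        public override int Compute(int input) => 42;
    }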

Is creating single-implementer interfaces solely for the purpose of testing an anti-pattern? Are there better alternatives?


Solution

It is not wrong to have code just for tests. This is actually normal, just as production code contains features that exist only for debugging and production monitoring. There is no clear reason this should be disallowed: code should support all aspects of the lifecycle of the application, and testing is just another part of that lifecycle.

In that sense your approach using interfaces is correct. If you make the rest of the production application also depend on the interface (and not on the concrete class, even though there is only one), the design is architecturally sound, as sketched below.
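As a sketch of what that production wiring can look like, assuming a container such as Microsoft.Extensions.DependencyInjection and the hypothetical ReportBuilder consumer from the question's sketch above:

    using Microsoft.Extensions.DependencyInjection;

    var services = new ServiceCollection();

    // Production wiring: the single concrete implementation sits behind the abstraction.
    services.AddSingleton<IXyzzy, Xyzzy>();
    services.AddTransient<ReportBuilder>();

    var provider = services.BuildServiceProvider();

    // Consumers resolve against IXyzzy and never see the concrete Xyzzy directly.
    var builder = provider.GetRequiredService<ReportBuilder>();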

I've kind of gone against what an interface is (describes what an implementer can do, not what it is)

I don't follow your point here, because the interface does describe what the object can do. The fact that there is only one concrete (production) implementation does not destroy this property.

If you think about it, every class has an "interface" in a looser sense of the word: the public signature of all methods exposes an interface which the class supports to the outside. Whether a .NET interface is implemented or not is just a detail. The class still makes the same promises to the outside.

Other tips

In my experience, this is pretty typical of .NET development, stemming from the fact that method overriding is on an opt-in basis; if you want to mock a dependency, you need either an interface or an object whose methods are all virtual.
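For example, with a mocking library such as Moq (one option among several) and the xUnit test framework, the interface makes the dependency trivially replaceable; note that mocking internal types typically also requires InternalsVisibleTo attributes for the proxy and test assemblies:

    using Moq;
    using Xunit;

    public class ReportBuilderTests
    {
        [Fact]
        public void Build_FormatsTheComputedValue()
        {
            // Arrange: stub IXyzzy without ever touching the real Xyzzy.
            var xyzzy = new Mock<IXyzzy>();
            xyzzy.Setup(x => x.Compute(21)).Returns(42);

            var builder = new ReportBuilder(xyzzy.Object);

            // Act
            var result = builder.Build(21);

            // Assert
            Assert.Equal("Result: 42", result);
            xyzzy.Verify(x => x.Compute(21), Times.Once());
        }
    }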

In a language like Java, where every method is overridable by default, single-implementation interfaces are indeed an anti-pattern, and good devs will call it out.

Keep doing what you're doing - whatever sin you're committing is, in my view, handily outweighed by the benefits of your unit testing!

Yes, it is an anti-pattern. A pattern would be "a solution to a common problem in a certain context". But in this case, what we have is a work-around, not a solution.

The problem in question is the need to isolate a unit to be tested from (some of) its dependencies, so that the implementation of those dependencies doesn't have to be considered when writing the unit tests. The general and true solution to this problem is called "mocking", where the test writer can specify whatever behavior is needed from the mocked dependencies.

In contrast, forcing the developer to create unnecessary separate interfaces, or declare methods as virtual, is only a work-around for the technical inability to cleanly isolate a unit from others.

For .NET, there are several mocking tools that provide this isolation ability, namely TypeMock Isolator, JustMock, and MS Fakes. Other languages/platforms (including Java, Ruby, and Python) have their own tools of similar expressive power.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow