Question

I've seen multiple answers regarding 'how to stub your classes so you can control what happens within the SUT'.

They say one thing:

Create an interface, inject that interface using dependency injection, and create a stub based on that same interface that you then inject into the SUT.

However, this is what I learned at my previous workplaces:

If you unit test, you test all classes/functionality.

Does that mean that for every class with a specific set of functions you have to create an interface?

That would mean the number of classes/files would just about double.

As seen in the example below, is this 'the way to go' or am I missing something in my unit testing process?

As a note: I am using VS2012 Express. That means no 'Faker' framework. I am using the 'standard' VS2012 unit testing framework.

Below is a very, very simple example, which allows me to stub each interface passed down to a SUT.

IFoo.cs

public interface IFoo
{
    string GetName();
}

Foo.cs

public class Foo : IFoo
{
    public string GetName()
    {
        return "logic goes here";
    }
}

IBar.cs:

public interface IBar : IFoo
{
    IFoo GetFoo();
}

Bar.cs:

public class Bar : IBar
{
    public string GetName()
    {
        return "logic goes here";
    }

    public IFoo GetFoo()
    {
        return null; // some instance of IFoo
    }
}

IBaz.cs:

public interface IBaz
{
    IBar GetBar();
}

Baz.cs:

public class Baz : IBaz
{
    public IBar GetBar()
    {
        return null; // some instance of IBar
    }
}
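
For illustration, here is a minimal sketch of how such an interface can be stubbed by hand with the standard VS2012 (MSTest) framework, with no mocking framework at all. StubBar and NameReporter are hypothetical types added only for this sketch; only IFoo and IBar come from the example above.

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hand-rolled stub: just implement the interface.
public class StubBar : IBar
{
    public string GetName() { return "stubbed name"; }

    public IFoo GetFoo() { return null; }
}

// Hypothetical SUT that receives its dependency through the constructor.
public class NameReporter
{
    private readonly IBar _bar;

    public NameReporter(IBar bar) { _bar = bar; }

    public string Report() { return "Name: " + _bar.GetName(); }
}

[TestClass]
public class NameReporterTests
{
    [TestMethod]
    public void Report_PrefixesTheNameFromTheInjectedBar()
    {
        var sut = new NameReporter(new StubBar());
        Assert.AreEqual("Name: stubbed name", sut.Report());
    }
}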

Solution 2

Yes and no. In order to stub a dependency you need some sort of abstraction, but that is mostly because of how mocking frameworks work (not all of them, naturally).

Consider a simple example. You test class A, which takes dependencies on classes B and C. For unit tests of A to work, you need to mock B and C, so you'll need IB and IC (or base classes with virtual members). Do you need IA? No, at least not for this test. And unless A becomes a dependency of some other class, abstracting it behind an interface/base class is not required.
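
A minimal sketch of that situation (the names IB, IC, A, StubB and SpyC are made up purely for illustration):

public interface IB { int Calculate(); }

public interface IC { void Log(string message); }

// A needs abstractions for its dependencies (IB, IC) so they can be stubbed,
// but A itself needs no IA unless something else depends on A.
public class A
{
    private readonly IB _b;
    private readonly IC _c;

    public A(IB b, IC c) { _b = b; _c = c; }

    public int DoWork()
    {
        var result = _b.Calculate();
        _c.Log("result: " + result);
        return result;
    }
}

// Hand-rolled test doubles: no IA is required to test A.
public class StubB : IB { public int Calculate() { return 42; } }

public class SpyC : IC
{
    public string LastMessage;
    public void Log(string message) { LastMessage = message; }
}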

Abstraction is great as it helps you build loosely coupled code. You should abstract your dependencies. However, in practice some classes need not be abstracted, as they serve top-level/end-of-hierarchy/root roles and are not used elsewhere.

Other tips

In my opinion, you should not create interfaces just for the purpose of unit testing. If you start adding code abstractions to please the tools, then they are not helping you to be more productive. The code you write should ideally serve a specific business purpose/need - either directly, or indirectly by making the code base easier to maintain or evolve.

Interfaces sometimes do this, but certainly not always. I find that providing interfaces for components is usually a good thing, but try to avoid using interfaces for internal classes (that is, code only used inside of the given project, regardless of whether the types are declared public or not). This is because a component (as in, a set of classes working together to solve some specific problem) represents a larger concept (such as a logger or a scheduler), which is something that I may feasibly want to replace or stub out when testing.

The solution (hat tip to Robert for being first in the comments) is to use a mocking framework to generate a compatible substitute type at run-time. Mocking frameworks also allow you to verify that the class being tested interacted correctly with the substituted dummy. Moq, as mentioned, is a snazzy choice. Rhino.Mocks and NMock are two other popular frameworks. Typemock Isolator hooks into the profiler and is among the more powerful options (it allows you to substitute even non-virtual private members), but it is a commercial tool.
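
As a rough sketch of what that looks like with Moq (IFoo is the interface from the question; the test class and its contents are illustrative only):

using Moq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class MoqSketch
{
    [TestMethod]
    public void GeneratesAStubAtRunTimeAndVerifiesTheInteraction()
    {
        // Moq builds a run-time implementation of IFoo; no stub class is written by hand.
        var fooMock = new Mock<IFoo>();
        fooMock.Setup(f => f.GetName()).Returns("stubbed");

        // Normally fooMock.Object would be passed to the class under test.
        var name = fooMock.Object.GetName();

        // Verify the value and that the expected interaction took place.
        Assert.AreEqual("stubbed", name);
        fooMock.Verify(f => f.GetName(), Times.Once());
    }
}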

It's no good making up rules for how much you should unit test. It depends on what you're developing and what your goals are - if correctness always trumps time-to-market and cost is not a factor then unit testing everything is great. Most people are not so lucky and will have to compromise to achieve a reasonable level of test coverage. How much you should test may also depend on overall skill level of the team, expected lifetime and reuse of the code being written, etc.

Maybe from a purist perspective that is the right way to go, but the really important thing is to make sure that external dependencies (e.g. database or network access), anything that is computationally expensive or time consuming, and anything that isn't fully deterministic are abstracted away and easy to replace in your unit tests.
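
For example, even something as small as the system clock can be abstracted so tests stay deterministic. A sketch, where IClock, SystemClock, FixedClock and GreetingService are invented names:

using System;

public interface IClock
{
    DateTime Now { get; }
}

// Production implementation: non-deterministic.
public class SystemClock : IClock
{
    public DateTime Now { get { return DateTime.Now; } }
}

// Test double: fully deterministic.
public class FixedClock : IClock
{
    private readonly DateTime _now;
    public FixedClock(DateTime now) { _now = now; }
    public DateTime Now { get { return _now; } }
}

public class GreetingService
{
    private readonly IClock _clock;
    public GreetingService(IClock clock) { _clock = clock; }

    public string Greet()
    {
        return _clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
    }
}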

From a testing perspective, there is no need to make an interface for every class in your code. You make an interface to hide the concrete execution of external dependencies behind a layer of abstraction. So instead of having a class that mixes a direct HTTP connection in with your logic, you would isolate the connection code in its own class, have it implement an interface that your class depends on, and inject a mock in place of that interface. That way, you can test your logic in isolation, free of the dependency, and the only "untested" code is boilerplate HTTP connection code that can be tested through other means.
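
A minimal sketch of that idea; IHttpClientWrapper, PriceChecker and StubHttpClient are hypothetical names used only for illustration:

public interface IHttpClientWrapper
{
    string Get(string url);
}

// Boilerplate connection code isolated in one class (tested through other means).
public class HttpClientWrapper : IHttpClientWrapper
{
    public string Get(string url)
    {
        using (var client = new System.Net.WebClient())
        {
            return client.DownloadString(url);
        }
    }
}

// The logic under test depends only on the abstraction.
public class PriceChecker
{
    private readonly IHttpClientWrapper _http;

    public PriceChecker(IHttpClientWrapper http) { _http = http; }

    public bool IsFree(string url)
    {
        return _http.Get(url).Contains("\"price\":0");
    }
}

// Hand-rolled stub used in unit tests instead of a real connection.
public class StubHttpClient : IHttpClientWrapper
{
    private readonly string _response;
    public StubHttpClient(string response) { _response = response; }
    public string Get(string url) { return _response; }
}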

I'd go the virtual method route. Creating interfaces for every class you need to test gets really burdensome, especially when you need tools like ReSharper for "go to implementation" every time you'd like to see the definition of a method. And there's the overhead of managing and modifying both files any time a method signature is changed or a new property or method is added.
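
A minimal sketch of the virtual method route: instead of an interface, the dependency-facing member is virtual and overridden in a test subclass. OrderProcessor and TestableOrderProcessor are illustrative names, not types from the question.

public class OrderProcessor
{
    public decimal ProcessOrder(decimal amount)
    {
        var rate = GetTaxRate(); // virtual seam
        return amount * (1 + rate);
    }

    // Virtual so a test can override it; no separate interface file is needed.
    protected virtual decimal GetTaxRate()
    {
        return 0.21m; // imagine this hitting a database or web service
    }
}

// Test subclass overriding the seam with a fixed, deterministic value.
public class TestableOrderProcessor : OrderProcessor
{
    protected override decimal GetTaxRate() { return 0.10m; }
}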

License: CC-BY-SA with attribution