There's a debate going on in our team at the moment as to whether modifying code design to allow unit testing is a code smell, or to what extent it can be done without being a code smell. This has come about because we're only just starting to put practices in place that are present in just about every other software dev company.

Specifically, we will have a Web API service that will be very thin. Its main responsibility will be marshalling web requests/responses and calling an underlying API that contains the business logic.

One example is that we plan on creating a factory that will return an authentication method type. We have no need for it to inherit an interface as we don't anticipate it ever being anything other than the concrete type it will be. However, to unit test the Web API service we will need to mock this factory.

This essentially means we either design the Web API controller class to accept DI (through its constructor or setter), which means we're designing part of the controller just to allow DI and implementing an interface we don't otherwise need, or we use a third party framework like Ninject to avoid having to design the controller in this way, but we'll still have to create an interface.

Some on the team seem reluctant to design code just for the sake of testing. It seems to me that there has to be some compromise if you hope to unit test, but I'm unsure how to allay their concerns.

Just to be clear, this is a brand new project, so it's not really about modifying code to enable unit testing; it's about designing the code we're going to write to be unit testable.


Solution

Reluctance to modify code for the sake of testing shows that a developer hasn't understood the role of tests, and by implication, their own role in the organization.

The software business revolves around delivering a code base that creates business value. We have found, through long and bitter experience, that we cannot create such code bases of nontrivial size without testing. Therefore, test suites are an integral part of the business.

Many coders pay lip service to this principle but subconsciously never accept it. It is easy to understand why this is; the awareness that our own mental capability is not infinite, and is in fact, surprisingly limited when confronted with the enormous complexity of a modern code base, is unwelcome and easily suppressed or rationalized away. The fact that test code is not delivered to the customer makes it easy to believe that it is a second-class citizen and non-essential compared to the "essential" business code. And the idea of adding testing code to the business code seems doubly offensive to many.

The trouble with justifying this practice has to do with the fact that the entire picture of how value is created in a software business is often only understood by higher-ups in the company hierarchy, but these people don't have the detailed technical understanding of the coding workflow that is required to understand why testing can't be gotten rid of. Therefore they are too often pacified by practitioners who assure them that testing may be a good idea in general, but "we are elite programmers who don't need crutches like that", or that "we don't have time for that right now", etc. etc. The fact that business success is a numbers game and that avoiding technical debt, assuring quality etc. shows its value only in the longer term means that they are often quite sincere in that belief.

Long story short: making code testable is an essential part of the development process, no different than in other fields (many microchips are designed with a substantial proportion of elements only for testing purposes), but it's very easy to overlook the very good reasons for that. Don't fall into that trap.

Other Tips

It's not as simple as you might think. Let's break it down.

  • Writing unit tests is definitely a good thing.

BUT!

  • Any change to your code can introduce a bug. So changing the code without a good business reason is not a good idea.

  • Your 'very thin' Web API doesn't seem like the greatest case for unit testing.

  • Changing code and tests at the same time is a bad thing.

I would suggest the following approach:

  1. Write integration tests. This should not require any code changes. It will give you your basic test cases and enable you to check that any further code changes you make don't introduce any bugs.

  2. Make sure new code is testable and has unit and integration tests.

  3. Make sure your CI chain runs tests after builds and deployments.

When you have those things set up, only then start thinking about refactoring legacy projects for testability.

Hopefully everyone will have learnt lessons from the process and have a good idea of where testing is most needed, how you want to structure it and the value it brings to the business.

EDIT: Since I wrote this answer, the OP has clarified the question to show that they are talking about new code, not modifications to existing code. I perhaps naively thought the "Is unit testing good?" argument was settled some years ago.

It's hard to imagine what code changes would be required by unit tests but not be general good practice you would want in any case. It would probably be wise to examine the actual objections, possibly it's the style of unit testing which is being objected to.

Designing code to be inherently testable is not a code smell; on the contrary, it is the sign of a good design. There are several well-known and widely-used design patterns based on this (e.g., Model-View-Presenter) that offer easy (easier) testing as a big advantage.

So, if you need to write an interface for your concrete class in order to more easily test it, that is a good thing. If you already have the concrete class, most IDEs can extract an interface from it, making the effort required minimal. It is a little bit more work to keep the two in sync, but an interface shouldn't change much anyway, and the benefits from testing may outweigh that extra effort.

On the other hand, as @MatthieuM. mentioned in a comment, if you're adding specific entry points into your code that shouldn't ever be used in production, solely for the sake of testing, that might be a problem.

It is IMHO very simple to understand that for creating unit tests, the code to be tested must have at least certain properties. For example, if the code does not consist of individual units which can be tested in isolation, the word "unit testing" does not even start to make sense. If the code does not have these properties, it must be changed first, that is pretty obvious.

That said, in theory one can try to write a testable code unit first, applying all the SOLID principles, and then write a test for it afterwards without further modifying the original code. Unfortunately, writing code which is really unit testable is not always dead simple, so it is quite likely some changes will be necessary that one only detects when trying to create the tests. This is true even for code that was written with unit testing in mind, and it is all the more true for code written when "unit testability" was not on the agenda at the beginning.

There is a well-known approach which tries to solve the problem by writing the unit tests first - it is called Test Driven Development (TDD), and it can surely help to make code more unit testable right from the start.

Of course, reluctance to change code after the fact to make it testable often arises when the code was manually tested first and/or works fine in production, so changing it could actually introduce new bugs; that's true. The best approach to mitigate this is to create a regression test suite first (which can often be implemented with only very minimal changes to the code base), along with other accompanying measures like code reviews or new manual test sessions. That should give you enough confidence to make sure redesigning some internals does not break anything important.

I take issue with the (unsubstantiated) assertion you make:

to unit test the Web API service we will need to mock this factory

That's not necessarily true. There are lots of ways to write tests, and there are ways to write unit tests that don't involve mocks. More importantly, there are other kinds of tests, such as functional or integration tests. Many times it is possible to find a "test seam" at an "interface" that is not an OOP programming language interface.

Some questions to help you find an alternative test seam, which might be more natural:

  • Will I ever want to write a thin Web API over a different API?
  • Can I reduce code duplication between the Web API and the underlying API? Can one be generated in terms of the other?
  • Can I treat the whole Web API and underlying API as a single "black box" unit and meaningfully make assertions about how the whole thing behaves?
  • If the Web API had to be replaced with a new implementation in the future, how would we go about doing that?
  • If the Web API was replaced with a new implementation in the future, would clients of the Web API be able to notice? If so, how?

Another unsubstantiated assertion you make is about DI:

we either design the Web API controller class to accept DI (through its constructor or setter), which means we're designing part of the controller just to allow DI and implementing an interface we don't otherwise need, or we use a third party framework like Ninject to avoid having to design the controller in this way, but we'll still have to create an interface.

Dependency injection does not necessarily mean creating a new interface. For example, in the case of an authentication token: can you simply create a real authentication token programmatically? Then the test can create such tokens and inject them. Does the process for validating a token depend on a cryptographic secret of some kind? I hope you haven't hardcoded a secret -- I would expect you can read it from storage somehow, and in that case you can simply use a different (well-known) secret in your test cases.
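As a sketch of that idea (all names here are hypothetical): a validator whose secret is a constructor argument can be exercised with a well-known test secret, with no interface and no mock.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical validator: the secret is a constructor argument rather than
// a hardcoded value, so a test can supply a well-known secret -- no
// interface or mocking framework required.
public class TokenValidator
{
    private readonly byte[] _secret;

    public TokenValidator(string secret) => _secret = Encoding.UTF8.GetBytes(secret);

    public string Sign(string payload)
    {
        using var hmac = new HMACSHA256(_secret);
        return Convert.ToHexString(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
    }

    public bool IsValid(string payload, string signature) => Sign(payload) == signature;
}

public static class Program
{
    public static void Main()
    {
        // In production the secret would be read from configuration or
        // secure storage; the test just constructs the class directly.
        var validator = new TokenValidator("test-secret");
        var token = validator.Sign("user=alice");

        Console.WriteLine(validator.IsValid("user=alice", token));   // True
        Console.WriteLine(validator.IsValid("user=mallory", token)); // False
    }
}
```

The production wiring and the test differ only in where the secret comes from, which is exactly the kind of seam that makes a new interface unnecessary.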

This is not to say that you should never create a new interface. But don't get fixated on there only being one way to write a test, or one way to fake a behavior. If you think outside the box you can usually find a solution that will require a minimum of contortions of your code and yet still give you the effect you want.

You are in luck as this is a new project. I've found that Test Driven Design works very well for writing good code (which is why we do it in the first place).

By figuring out up front how to invoke a given piece of code with realistic input data, and then get realistic output data which you can check is as intended, you do the API design very early in the process and have a good chance of getting a useful design, because you are not hindered by existing code that has to be rewritten to accommodate it. The design is also easier for your peers to understand, so you can have good discussions, again early in the process.

Note that "useful" in the above sentence means not only that the resulting methods are easy to invoke, but also that you tend to get clean interfaces that are easy to rig up in integration tests and to write mocks for.

Consider it. Especially with peer review. In my experience the investment of time and effort will very quickly be returned.

If you need to modify the code to make it testable, the code itself is the code smell.

From personal experience, if my code is difficult to write tests for, it's bad code. It's not bad code because it doesn't run or work as designed, it's bad because I can't quickly understand why it is working. If I encounter a bug, I know it's going to be a long painful job to fix it. The code is also difficult / impossible to reuse.

Good (Clean) code breaks down tasks into smaller sections that are easily understood at a glance (or at least a good look). Testing these smaller sections is easy. I can also write tests that only test a chunk of the codebase with similar ease if I'm fairly confident about the subsections (reuse also helps here as it has already been tested).

Keep the code easy to test, easy to refactor, and easy to reuse from the start and you won't be killing yourself whenever you need to make changes.

I'm typing this while completely rebuilding a project that should have been a throwaway prototype into cleaner code. It's much better to get it right from the start and refactor bad code as soon as possible rather than staring at a screen for hours on end being afraid to touch anything for fear of breaking something that partially works.

I would argue that writing code that cannot be unit tested is a code smell. In general, if your code cannot be unit tested, then it is not modular, which makes it difficult to understand, maintain, or enhance. Maybe if the code is glue code that really only makes sense in terms of integration testing you can substitute integration testing for unit testing, but even then when the integration fails you are going to have to isolate the problem and unit testing is a great way to do it.

You say

We plan on creating a factory that will return an authentication method type. We have no need for it to inherit an interface as we don't anticipate it ever being anything other than the concrete type it will be. However, to unit test the Web API service we will need to mock this factory.

I do not really follow this. The reason to have a factory that creates something is to allow you to change factories or change what the factory creates easily, so other parts of the code do not need to change. If your authentication method is never going to change, then the factory is useless code bloat. However, if you want to have a different authentication method in test than in production, having a factory that returns a different authentication method in test than in production is a great solution.

You do not need DI or Mocks for this. You just need your factory to support the different authentication types and for it to be configurable somehow, such as from a configuration file or environment variable.
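A minimal sketch of such a configurable factory (class names and the variable name are hypothetical), switching on an environment variable; a configuration file would work the same way:

```csharp
using System;

// Hypothetical authentication methods -- concrete classes sharing only the
// base type the factory returns.
public abstract class AuthenticationMethod
{
    public abstract string Name { get; }
}

public class OAuthMethod : AuthenticationMethod
{
    public override string Name => "oauth";
}

public class FakeAuthMethod : AuthenticationMethod
{
    public override string Name => "fake";
}

// The factory consults an environment variable, so tests can switch the
// authentication method without mocks or constructor injection.
public static class AuthMethodFactory
{
    public static AuthenticationMethod Create()
    {
        var mode = Environment.GetEnvironmentVariable("AUTH_MODE");
        return mode == "fake" ? new FakeAuthMethod() : new OAuthMethod();
    }
}

public static class Program
{
    public static void Main()
    {
        Environment.SetEnvironmentVariable("AUTH_MODE", "fake");
        Console.WriteLine(AuthMethodFactory.Create().Name); // fake

        Environment.SetEnvironmentVariable("AUTH_MODE", null);
        Console.WriteLine(AuthMethodFactory.Create().Name); // oauth
    }
}
```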

In every engineering discipline I can think of, there is only one way to achieve decent or higher levels of quality:

To account for inspection/testing in the design.

This holds true in construction, chip design, software development, and manufacturing. Now, this doesn't mean that testing is the pillar that every design needs to be built around, not at all. But with every design decision, the designers must be clear about its impact on testing costs and make a conscious decision about the trade-off.

In some cases, manual or automated (e.g. Selenium) testing will be more convenient than unit tests, while also providing acceptable test coverage on its own. In rare cases, throwing something out there that's almost entirely untested can also be acceptable. But these have to be conscious, case-by-case decisions. Calling a design that accounts for testing a "code smell" indicates a serious lack of experience.

I've found that unit testing (and other types of automated testing) have a tendency to reduce code smells, and I can't think of a single example where they introduce code smells. Unit tests usually force you to write better code. If you can't use a method easily under test, why should it be any easier in your code?

Well written unit tests show you how the code is intended to be used. They are a form of executable documentation. I've seen hideously written, overly long unit tests that simply couldn't be understood. Don't write those! If you need to write long tests to set up your classes, your classes need refactoring.

Unit tests will highlight where some of your code smells are. I would advise reading Michael C. Feathers' Working Effectively with Legacy Code. Even though your project is new, if it doesn't already have any (or many) unit tests, you might need some non-obvious techniques to get your code to test nicely.

In a nutshell:

Testable code is (usually) maintainable code - or rather, code that is hard to test is usually hard to maintain. Designing code that is not testable is akin to designing a machine that is not repairable - pity the poor shmuck who will be assigned to repair it eventually (it might be you).

One example is that we plan on creating a factory that will return an authentication method type. We have no need for it to inherit an interface as we don't anticipate it ever being anything other than the concrete type it will be.

You know that you will need five different authentication method types in three years' time, now that you've said that, right? Requirements change, and while you should avoid overengineering your design, having a testable design means that your design has (just) enough seams to be altered without (too much) pain - and that the module tests will provide you with an automated means to see that your changes don't break anything.

Designing around dependency injection isn't a code smell - it's best practice. Using DI isn't just for testability. Building your components around DI aids modularity and reusability, more easily allows for major components to be swapped out (such as a database interface layer). While it adds a degree of complexity, done right it allows for better separation of layers and isolation of functionality which makes the complexity easier to manage and navigate. This makes it easier to properly validate the behavior of each component, reducing bugs, and can also make it easier to track down bugs.

This essentially means we either design the Web API controller class to accept DI (through its constructor or setter), which means we're designing part of the controller just to allow DI and implementing an interface we don't otherwise need, or we use a third party framework like Ninject to avoid having to design the controller in this way, but we'll still have to create an interface.

Let's look at the difference between a testable:

public class MyController : Controller
{
    private readonly IMyDependency _thing;

    public MyController(IMyDependency thing)
    {
        _thing = thing;
    }
}

and non-testable controller:

public class MyController : Controller
{
}

The former option has literally 5 extra lines of code, two of which can be autogenerated by Visual Studio. Once you've set up your dependency injection framework to substitute a concrete type for IMyDependency at runtime - which for any decent DI framework is another single line of code - everything Just Works, except now you can mock and thus test your controller to your heart's content.

6 extra lines of code to allow testability... and your colleagues are arguing that's "too much work"? That argument doesn't fly with me, and it shouldn't fly with you.

And you don't have to create and implement an interface for testing: Moq, for example, allows you to simulate the behaviour of a concrete type for unit-testing purposes. Of course, that won't be of much use to you if you can't inject those types into the classes you're testing.
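To illustrate the injection point, here is the same pattern with a hand-written fake instead of a Moq mock, stripped of the ASP.NET base class so the sketch compiles stand-alone (the GetGreeting method on IMyDependency is hypothetical):

```csharp
using System;

// Same shape as the testable controller above, minus the ASP.NET base
// class. IMyDependency and its GetGreeting method are hypothetical.
public interface IMyDependency
{
    string GetGreeting(string name);
}

public class MyController
{
    private readonly IMyDependency _thing;

    public MyController(IMyDependency thing) => _thing = thing;

    // Some controller logic worth testing in isolation.
    public string Greet(string name) => _thing.GetGreeting(name).ToUpperInvariant();
}

// A hand-written fake does the same job here that a Moq mock would.
public class FakeDependency : IMyDependency
{
    public string GetGreeting(string name) => $"hello, {name}";
}

public static class Program
{
    public static void Main()
    {
        // The test constructs the controller directly, injecting the fake.
        var controller = new MyController(new FakeDependency());
        Console.WriteLine(controller.Greet("world")); // HELLO, WORLD
    }
}
```

Whether the dependency is a Moq mock or a fake like this, the constructor parameter is the only seam the design needs.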

Dependency injection is one of those things that once you understand it, you wonder "how did I work without this?". It's simple, it's effective, and it just Makes Sense. Please, don't allow your colleagues' lack of understanding of new things to get in the way of making your project testable.

When I write unit tests, I begin to think of what could go wrong inside my code. It helps me to improve code design and apply the single responsibility principle (SRP). Also, when I come back to modify the same code a few months later, it helps me to confirm that existing functionality isn't broken.

There is a trend to use pure functions as much as you can (serverless apps). Unit testing helps me to isolate state and write pure functions.
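As a tiny illustration (names are hypothetical), a pure function needs no setup, mocks, or shared state to test -- its output depends only on its input:

```csharp
using System;
using System.Linq;

// A pure function: no shared state read or written, so a unit test is
// just "call it, check the result".
public static class Pricing
{
    public static decimal Total(decimal[] prices, decimal taxRate) =>
        prices.Sum() * (1 + taxRate);
}

public static class Program
{
    public static void Main()
    {
        Console.WriteLine(Pricing.Total(new[] { 10m, 20m }, 0.1m)); // 33.0
    }
}
```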

Specifically, we will have a Web API service that will be very thin. Its main responsibility will be marshalling web requests/responses and calling an underlying API that contains the business logic.

Write unit tests for the underlying API first; then, if you have sufficient development time, write tests for the thin Web API service as well.

TL;DR: unit testing helps improve the quality of code and makes future changes to it far less risky. It also improves the readability of the code. Use tests instead of comments to make your point.

The bottom line, and what your argument to the reluctant lot should be, is that there is no conflict. The big mistake seems to have been that someone pitched the idea of "designing for testing" to people who hate testing. They should have kept quiet, or worded it differently: "let's take the time to do this right".

The idea that "you have to implement an interface" to make something testable is wrong. The interface is already implemented; it just isn't declared in the class declaration yet. It is a matter of recognizing the existing public methods, copying their signatures into an interface, and declaring that interface in the class's declaration. No programming, no changes to existing logic.
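A sketch of that purely mechanical extraction, with hypothetical names -- the class body is untouched:

```csharp
using System;

// Step 1: the interface, containing nothing but the signatures of the
// class's existing public methods. (All names here are hypothetical.)
public interface IAuthMethodFactory
{
    string CreateMethodName();
}

// Step 2: the existing class, unchanged except for ": IAuthMethodFactory"
// in its declaration.
public class AuthMethodFactory : IAuthMethodFactory
{
    // Existing public method, logic unchanged.
    public string CreateMethodName() => "oauth";
}

public static class Program
{
    public static void Main()
    {
        // Callers and tests can now depend on the interface.
        IAuthMethodFactory factory = new AuthMethodFactory();
        Console.WriteLine(factory.CreateMethodName()); // oauth
    }
}
```

Most IDEs automate exactly this refactoring ("Extract Interface"), which is why the cost is so low.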

Apparently some people have a different idea about this. I suggest you try to fix this first.

Licensed under: CC-BY-SA with attribution