Question

I have been using TDD when developing some of my side projects and have been loving it.

The issue, however, is that stubbing classes for unit tests is a pain and makes you afraid of refactoring.

I started researching and I see that there is a group of people that advocates for TDD without mocking--the classicists, if I am not mistaken.

However, how would I go about writing unit tests for a piece of code that uses one or more dependencies? For instance, if I am testing a UserService class that needs UserRepository (talks to the database) and UserValidator (validates the user), then the only way would be... to stub them?

Otherwise, if I use a real UserRepository and UserValidator, wouldn't that be an integration test and also defeat the purpose of testing only the behavior of UserService?

Should I be writing only integration tests when there is dependency, and unit tests for pieces of code without any dependency?

And if so, how would I test the behavior of UserService? ("If UserRepository returns null, then UserService should return false", etc.)

Thank you.


Solution

This answer consists of two separate views on the same issue, as this isn't a "right vs wrong" question, but rather a broad spectrum where you can pick the approach that is most appropriate for your situation.

Also note that I'm not focusing on the distinction between a fake, mock and stub. That's a test implementation detail unrelated to the purpose of your testing strategy.


My company's view

Otherwise, if I use a real UserRepository and UserValidator, wouldn't that be an integration test and also defeat the purpose of testing only the behavior of UserService?

I want to answer this from the point of view of the company I currently work at. This isn't actually something I agree with, but I understand their reasoning.

They don't unit test single classes, instead they test single layers. I call that an integration test, but to be honest it's somewhere in the middle, since it still mocks/stubs classes, just not all of a class' dependencies.

For example, if UserService (BLL) has a GetUsers method, which:

  • Checks with the UserAuthorizationService (BLL) if the current user is allowed to fetch lists of users.
    • The UserAuthorizationService (BLL) in turn depends on the AuthorizationRepository (DAL) to find the configured rights for this user.
  • Fetches the users from the UserRepository (DAL)
  • Checks with the UserPrivacyService (BLL) if some of these users have asked to not be included in search results - if they have, they will be filtered out
    • The UserPrivacyService (BLL) in turn depends on the PrivacyRepository (DAL) to find out if a user asked for privacy

This is just a basic example. When unit testing the BLL, my company builds its tests in a way that all (BLL) objects are real and all others (DAL in this case) are mocked/stubbed. During a test, they set up particular data states as mocks, and then expect the entirety of the BLL (all referenced/dependent BLL classes, at least) to work together in returning the correct result.
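
As a rough sketch of what such a layer test might look like (all class and method names here are hypothetical, and Node's built-in test runner is assumed for the assertions): the DAL is replaced by in-memory stubs holding the prepared data state, while every BLL class is the real thing, wired together as in production.

import { test } from 'node:test';
import assert from 'node:assert/strict';
// Hypothetical module containing the real BLL classes.
import { UserService, UserAuthorizationService, UserPrivacyService } from './bll';

test('getUsers omits users who asked not to appear in search results', () => {
  // Stubbed DAL: plain in-memory objects standing in for the repositories.
  const authorizationRepository = { rightsFor: (_userId: string) => ['users:list'] };
  const userRepository = { getAll: () => [{ id: '1', name: 'Alice' }, { id: '2', name: 'Bob' }] };
  const privacyRepository = { wantsPrivacy: (userId: string) => userId === '2' };

  // Real BLL objects, composed exactly as in production.
  const authorization = new UserAuthorizationService(authorizationRepository);
  const privacy = new UserPrivacyService(privacyRepository);
  const service = new UserService(authorization, privacy, userRepository);

  const users = service.getUsers({ currentUserId: '1' });

  assert.deepEqual(users.map(u => u.id), ['1']); // Bob ('2') asked for privacy and is filtered out
});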

I didn't quite agree with this, so I asked around to figure out how they came to that conclusion. There were a few understandable bullet points to that decision:

  • The problem domain of the application is liable to constant business refactoring, where the business layer itself may subdivide into more niche classes without changing the public contract. By not testing every BLL class individually, tests need to be rewritten much less often since a test doesn't need to know the exact dependency graph of the class it's testing.
  • Access logic is very pervasive across the domain, but its implementation and structure evolve with the times. By not having to rewrite tests whenever the access logic changes, the company intends to lower the threshold for developers to innovate on the access logic. No one wants to take on a rewrite of >25,000 tests.
  • Setting up a mocked situation is quite complex (cognitively), and it's easier for developers to understand how to set up the data state (which is just an event store) than to mock all manner of complex BLL dependencies that essentially just extract information from that data store in their own unique way.
  • Since the interface between the BLL classes is so specific, you often don't need to know exactly which BLL class failed, since the odds are reasonably high that the contract between the failed class and its dependency (or vice versa) is part of the problem that needs to be adjusted. Almost always, the BLL call stack needs to be investigated in its entirety, as some responsibilities may shift due to uncovered bugs (cf. the first bullet point).

I wanted to add this viewpoint because this company is quite large, and in my opinion is one of the healthiest development environments I've encountered (and as a consultant, I've encountered many).

While I still dislike the lack of true unit testing, I do also see that there are few to no problems arising from doing this kind of "layer integration" test for the business logic.

I can't delve into the specifics of what kind of software this company writes, but suffice it to say that they work in a field that is rife with arbitrarily decided business logic (from customers who are unwilling to change their arbitrary rules even when proven wrong). My company's codebase accommodates a shared code library between tenanted endpoints with wildly different business rules.

In other words, this is a high pressure, high stakes environment, and the test suite holds up as well as any "true unit test" suite that I've encountered.


One thing to mention though: the testing fixture of the mocked data store is quite big and bulky. It's actually quite comfortable to use but it's custom built so it took some time to get it up and running.
This complicated fixture only started paying dividends when the domain grew large enough that custom-defining stubs/mocks for each individual class unit test would cost more effort than having one admittedly giant but reusable fixture with all mocked data stores in it.
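
To give a flavour of the idea (a heavily simplified, hypothetical sketch, not their actual fixture): tests describe a data state in one shared in-memory store instead of configuring individual mocks per dependency.

// Hypothetical sketch: one reusable in-memory "data state" instead of per-test mocks.
interface DomainEvent { streamId: string; type: string; data: unknown }

class InMemoryEventStore {
  private events: DomainEvent[] = [];
  append(event: DomainEvent): void { this.events.push(event); }
  eventsFor(streamId: string): DomainEvent[] {
    return this.events.filter(e => e.streamId === streamId);
  }
}

// A test describes the state it needs; every stubbed repository reads from this one store.
const store = new InMemoryEventStore();
store.append({ streamId: 'user-1', type: 'UserRegistered', data: { name: 'Alice' } });
store.append({ streamId: 'user-2', type: 'PrivacyRequested', data: {} });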


My view

Should I be writing only integration tests when there is dependency, and unit tests for pieces of code without any dependency?

That's not what separates unit tests from integration tests. A simple example is this:

  • Can Timmy throw a ball when he has one?
  • Can Tommy catch a ball when it approaches him?

These are unit tests. They test a single class' ability to perform a task in the way you expect it to be performed.

  • Can Timmy throw a ball to Tommy and have him catch it?

This is an integration test. It focuses on the interaction between several classes and catches any issues that happen between these classes (in the interaction), not in them.
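
To make the distinction concrete, here is a minimal sketch (hypothetical classes, Node's built-in test runner assumed):

import { test } from 'node:test';
import assert from 'node:assert/strict';

// A tiny hypothetical domain, just enough to illustrate the difference.
class Ball { constructor(public inflated = true) {} }
interface Flight { arc: boolean; ball: Ball }

class Thrower {
  // Throwing only produces a flight if the ball can travel in a throwing arc.
  throw(ball: Ball): Flight | null {
    return ball.inflated ? { arc: true, ball } : null;
  }
}

class Catcher {
  catch(flight: Flight): boolean { return flight.arc; }
}

// Unit tests: one class at a time.
test('Timmy can throw a ball when he has one', () => {
  assert.notEqual(new Thrower().throw(new Ball()), null);
});

test('Tommy can catch a ball when it approaches him', () => {
  assert.equal(new Catcher().catch({ arc: true, ball: new Ball() }), true);
});

// Integration test: the interaction between them.
test('Timmy can throw a ball to Tommy and have him catch it', () => {
  const flight = new Thrower().throw(new Ball());
  assert.equal(flight !== null && new Catcher().catch(flight), true);
});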

So why would we do both? Let's look at the alternatives:

If you only do integration tests, then a test failure doesn't really tell you much. Suppose our test tells us that Timmy can't throw a ball to Tommy and have him catch it. There are many possible reasons for that:

  • Timmy's arms are broken. (= Timmy is defective)
  • Tommy's arms are broken. (= Tommy is defective)
  • The ball cannot travel in a throwing arc, e.g. because it is not inflated. (= Timmy and Tommy are fine but a third dependency is broken)

But the test doesn't help you narrow your search down. Therefore, you're still going to have to go on a bug hunt in multiple classes, and you need to keep track of the interaction between them to understand what is going on and what might be going wrong.

This is still better than not having any tests, but it's not as helpful as it could be.

Suppose we only had unit tests; then these defective classes would have been pointed out to us. For each of the listed reasons, a unit test of the defective class would have raised a flag during your test run, giving you precise information on which class is failing to do its job properly.

This narrows down your bug hunt significantly. You only have to look in one class, and you don't even care about their interaction with other classes since the faulty class already can't satisfy its own public contract.

However, I've been a bit sneaky here. I've only mentioned ways in which the integration test can fail that can be answered better by a unit test. There are also other possible failures that a unit test could never catch:

  • Timmy refuses to throw a ball at Tommy because he (quote) "hates his stupid face". Timmy can (and is willing to) throw balls at anyone else.
  • Timmy is in Australia, Tommy is in Canada (= Timmy and Tommy and the ball are fine, but their relative distance is the problem).
  • We're in the middle of a hurricane (= temporary environmental "outage" similar to a network failure)

In all of these situations, Timmy, Tommy and the ball are all individually operational. Timmy could be the best pitcher in the world, Tommy could be the best catcher.

But the environment they find themselves in is causing issues. If we didn't have an integration test, we would never catch these issues until we encountered them in production, which is the antithesis of TDD.
But without unit tests, we wouldn't be able to distinguish individual component failures from environmental failures, which leaves us guessing as to what is actually going wrong.

So we come to the final conclusion:

  • Unit tests uncover issues that render a specific component defective.
  • Integration tests uncover issues with individually operational components that fail to work together in a particular composition.
  • Integration tests can usually catch all of the unit test failures, but they cannot accurately pinpoint the failure, which significantly detracts from the developer's quality of life.
  • When an integration test fails but all dependent unit tests pass, you know that it's an environmental issue.

And if so, how would I test the behavior of UserService? ("If UserRepository returns null, then UserService should return false")

Be very careful of being overly specific. "returning null" is an implementation detail. Suppose your repository were a networked microservice, then you'd be getting a 404 response, not null.

What matters is that the user doesn't exist in the repository. How the repository communicates that non-existence to you (null, exception, 404, result class) is irrelevant to describing the purpose of your test.

Of course, when you mock your repository, you're going to have to implement its mocked behavior, which requires you to know exactly how to do it (null, exception, 404, result class) but that doesn't mean that the test's purpose needs to contain that implementation detail as well.

In general, you really need to separate the contract from the implementation, and the same principle applies to describing your test versus implementing it.
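
For example (a sketch; the UserRepository interface and the module path are assumptions), the test's name states the contract, and only the stub's body knows that non-existence happens to be communicated as null:

import { test } from 'node:test';
import assert from 'node:assert/strict';
import { UserService, UserRepository } from './user-service'; // hypothetical module

// The test is named after the contract, not the mechanism.
test('UserService reports failure when the user does not exist', () => {
  // The implementation detail lives only here: this particular repository
  // happens to signal "not found" by returning null.
  const emptyRepository: UserRepository = { findById: (_id: string) => null };

  const service = new UserService(emptyRepository);

  assert.equal(service.verifyUser('some-id'), false);
});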

OTHER TIPS

How do I really write tests without mocking/stubbing?

You design your code such that it can be tested without mocking and stubbing.

That's one of the important, if perhaps subtle, ideas behind TDD: that testing is a first class concern. In other words, our designs not only have functional requirements (does our code tell the machine to do the right thing), but also testing requirements (can we measure what our code is doing).

Cory Benfield's talk on Building Protocol Libraries describes an excellent example of such a design for parsing HTTP messages. The key idea in the design is that there is an in-memory state machine that accepts input data and emits events, and all of the complexity in the design is within that finite state machine. Because the state machine is "just" an isolated data structure and some methods to mutate it, it's really easy to throw all kinds of data examples at it and check that it does the right thing.

Expressing the idea more generally: he is advocating a design where all of the complicated logic is located in code that is easy to test.
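
Very roughly, the shape of that idea looks like this (an illustrative sketch of the general pattern, not the actual API from the talk): all of the tricky logic lives in a plain in-memory object that you feed data and that hands back events.

// Illustrative sketch of a "sans I/O" parser: data in, events out, no sockets anywhere.
type ParserEvent =
  | { kind: 'request-line'; method: string; target: string }
  | { kind: 'header'; name: string; value: string };

class RequestParser {
  private buffer = '';

  receive(data: string): ParserEvent[] {
    this.buffer += data;
    const events: ParserEvent[] = [];
    let newline = this.buffer.indexOf('\r\n');
    while (newline >= 0) {
      const line = this.buffer.slice(0, newline);
      this.buffer = this.buffer.slice(newline + 2);
      const colon = line.indexOf(':');
      if (colon >= 0) {
        events.push({ kind: 'header', name: line.slice(0, colon).trim(), value: line.slice(colon + 1).trim() });
      } else if (line.length > 0) {
        const [method, target] = line.split(' ');
        events.push({ kind: 'request-line', method, target });
      }
      newline = this.buffer.indexOf('\r\n');
    }
    return events;
  }
}

// Because it is just data in and events out, even awkward cases (input split
// across arbitrary chunk boundaries) are trivial to exercise in memory:
const parser = new RequestParser();
console.log(parser.receive('GET /users HT'));          // []  (incomplete line, no events yet)
console.log(parser.receive('TP/1.1\r\nHost: x\r\n'));  // request-line event, then a header event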

Done well, you end up with a design where each piece of your code has one of two characters:

  • Complicated, but also easy to test
  • Difficult to test, but also so simple there are obviously no deficiencies

I'm a self-proclaimed classicist myself, so let me clear things up a little.

First, the unit vs. integration tests. For me, a 'unit' test is one that is independent of other tests and doesn't require any external service. It is not relevant how much code this 'unit' test covers. An 'integration' test is one that either is not isolated from other tests (maybe it requires tests to run in a particular order) or needs an external service to be set up.

Going by my above definition, my 'unit' tests always include all the classes necessary to represent a useful business scenario. And whenever there is an external service, I create a fake implementation that tries to mimic the external service as closely as possible, but in a way that works only in memory and in isolation.

So in your scenario, you would have a 'unit' test that includes all of the classes UserService, UserValidator and FakeUserRepository. Then, your business case would not be "If UserRepository returns null, then UserService should return false", but rather "If (Fake)UserRepository doesn't contain the user, then UserService should return false."

After that, I would create an 'integration' test that verifies that FakeUserRepository behaves the same way as UserRepository does when talking to a real database.
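
Sketched in code (class names, the repository interface and createRealUserRepository are all assumptions, and Node's built-in test runner is used):

import { test } from 'node:test';
import assert from 'node:assert/strict';
// Hypothetical module containing the real classes, the repository interface,
// and a factory for the database-backed implementation.
import { UserService, UserValidator, UserRepository, User, createRealUserRepository } from './users';

// The fake: mimics the real repository as closely as possible, but purely in memory.
class FakeUserRepository implements UserRepository {
  private users = new Map<string, User>();
  add(user: User): void { this.users.set(user.id, user); }
  findById(id: string): User | null { return this.users.get(id) ?? null; }
}

// 'Unit' test in the classicist sense: real UserService and UserValidator, fake repository.
test('UserService returns false when the repository does not contain the user', () => {
  const service = new UserService(new FakeUserRepository(), new UserValidator());
  assert.equal(service.verifyUser('unknown-id'), false);
});

// 'Integration' test that keeps the fake honest: both implementations must agree.
// (This one needs a real database to run.)
test('FakeUserRepository behaves the same way as the real UserRepository', async () => {
  for (const repository of [new FakeUserRepository(), await createRealUserRepository()]) {
    assert.equal(repository.findById('unknown-id'), null);
  }
});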

Let's get rid of labels such as mocks and stubs for a moment, and focus purely on the TDD process. You're starting to write the first test for UserService (I'm going to use your own example):

If UserRepository returns null, then UserService should return false

You've just discovered a new interface called UserRepository that UserService depends on. Now you need to inject an instance of the repository into the service, but you don't have any implementations yet. So you do the simplest thing required for your test: create an implementation that returns null. Then you continue the red-green-refactor cycle until UserService is done.

By now, you might have written quite a few lines of code in your repository implementation. It might even be starting to look like a very basic in-memory database. Many people would call this a stub or a fake, but there's no reason you couldn't use this in production if it does everything you need it to do. In one of his talks, Uncle Bob describes this exact situation where they ended up not needing a real database after all. And when you decide that you do need a real database, you simply go create a new implementation of the interface that has been carved out by your test.

Bottom line: don't think of it as "stubbing classes for unit tests", think of it as "creating the very first implementation".

Update in response to comment:

wouldn't it be an integration test, though? since you would be testing 2 (real) implementations. is that what classicists define as a unit?

A "unit" can be any meaningful piece of functionality, typically a class, but could be bigger or smaller than this. Unit testing simply means that you are asserting on the functionality of a single unit at a time, it doesn't matter if you are using a real or a fake dependency as long as your assertions are focused on the unit under test. An integration test usually exercises the interaction between your code and an external dependency (such as a real database or a web service).

Classicists are more likely to write unit tests that exercise a couple layers at a time, since they typically use "real" dependencies such as hand-rolled stubs and fakes. Mockists tend to be more strict about mocking the immediate boundary of a unit. In practice, almost nobody is exclusively a classicist or mockist, and I personally find both techniques to be useful in different scenarios.

This is possibly going to be controversial, but it needs to be said:

How much testing of that kind of code do you really need?

Think about it like this: most of us would agree that in a well-architected system with good separation of concerns, the business logic is factored out from incidental concerns like I/O.

I would contend that in such a system (you have it set up that way already, right?) the amount of unit testing you need to do of the I/O and the like is zero. I mean sure, have a test that wires everything up to test the boundaries, but as you yourself point out, you obviously don't need (or want) to mock/stub for that.

So for your UserService, what does it do?

Maybe it does things like this:

  • Create new user
  • Verify existing user
  • Delete existing user

So let's take creating a new user. It:

  • Gets user data from a UI
  • Validates the user data
  • Inserts the new user in the database

The first action is triggered by the UI and the test for it belongs there; as far as UserService is concerned, the user data is essentially just passed in as arguments. Assuming you're using dependency injection, the third is a super straightforward mock, and if it isn't, that's a good sign that something is wrong with your design. The second is just a stateless function that takes in some arguments and returns a boolean, no mocks needed, and again, if this isn't simple, it means something is wrong.

The problem with testing something like this comes when you combine 2 or more of those things in the same function/method, because at that point you really do start to have mocking problems. So consider the following pseudo code:

class UserService {
  public constructor(private db: DatabaseConnection) {}

  public getUserById(userId: UserID): User {
    return this.db.getUserById(userId);
  }

  public verifyUser(userId: UserID): boolean {
    return this.verify(this.getUserById(userId));
  }

  private verify(user: User | UnverifiedUser): boolean {
    /* logic that verifies a user */
  }

  public createUser(newUser: UnverifiedUser): number {
    try {
      const valid = this.verify(newUser);
      let value;
      if (valid) {
        value = this.db.addUser(newUser);
      } else {
        throw new InvalidUserDataError();
      }
      return value.userId;
    } catch (error) {
      if (error instanceof InsertionError) {
        return 0; // could not insert the user
      }
      throw error; // InvalidUserDataError propagates to the caller
    }
  }
}
 

The only method with any real logic is the private verify method. Everything else is just glue. The others will have only a couple of tests around error conditions, and if the language isn't statically typed, a few more just to verify arguments and return values, but no real unit tests. The only things that need to be mocked are whatever pipes data in and whatever pipes data out; for unit testing we only really care about the pipeline itself.
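
As a sketch of those few tests (assuming the pseudo code above were real TypeScript exported from a './user-service' module, and assuming verify() accepts the example user, since its body is elided):

import { test } from 'node:test';
import assert from 'node:assert/strict';
import { UserService, InsertionError } from './user-service'; // hypothetical module

const aNewUser = { name: 'Alice', email: 'alice@example.com' };

// The only things faked are the pipes in and out: here, the database connection.
const workingDb = {
  getUserById: (id: string) => ({ id, name: 'Alice' }),
  addUser: (_user: unknown) => ({ userId: 42 }),
};
const failingDb = {
  ...workingDb,
  addUser: (_user: unknown) => { throw new InsertionError(); },
};

test('getUserById is just glue: it returns whatever the database returns', () => {
  assert.deepEqual(new UserService(workingDb).getUserById('1'), { id: '1', name: 'Alice' });
});

test('createUser returns 0 when the insert fails', () => {
  assert.equal(new UserService(failingDb).createUser(aNewUser), 0);
});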

Now you could nitpick the above: maybe the verify method should throw on failure instead of returning a boolean, maybe this is too thin of a wrapper around the database interface, maybe you should split out verifying a new user from verifying an existing one. But none of that changes the underlying point: that you split the concerns appropriately and let the compiler do as much of the work as reasonably possible.

Edit per OP comment below

Let's go back to the code above, but in light of the conversation below:

Every single method except the private verify method is in the imperative shell.

Note that I didn't split it into two classes the way he did in the talk, but the conceptual boundary is still there. The verify method has zero dependencies, performs some logic, and returns a value. Everything else depends on something external like the database and makes no decisions: the only 'branch' is to throw an exception, and that could be moved into the verify method, but throwing exceptions isn't very functional.

This ratio of shell to core may seem kind of counter-intuitive from what he was proposing in the talk, but remember that a User class isn't going to do much. There aren't a lot of decisions to make, it's mostly just plumbing data to/from the database/client, which means it's mostly about I/O. And indeed, if you are simply writing CRUD apps (and a lot of us are, it pays the bills) then your code may well be 70% glue and plumbing with only 30% business logic instead of the other way around.

But the business logic (i.e. functional core) is the part where the unit tests really matter, and where it really matters that they're isolated and isolate-able.

So in the code you linked in pastebin, the part you have labelled core in a comment is, as you've pointed out, superfluous; the example is too contrived. IRL you'd use a database uniqueness constraint to enforce that, with no need to do anything at the app level except plumb the error back up. So let's think about something more interesting (with apologies to Rich Hickey): luggage.

We work in an airport, and we want our luggage handlers to break down pallets of luggage, mark bags that are too heavy, throw away any bags that smell like food, and if any bags are ticking go home for the day, they're done.

So we have to process each bag, and we see we can avoid some duplication of effort by controlling the order. Assuming that a pallet is an array of bags, and we have an array of pallets, in very naive JavaScript:

const bags = pallets.flat(); // unpack the pallets
if (bags.some(bag => bag.isTicking())) throw new Error('go home');
return bags
  .filter((bag) => !bag.isFood())
  .map((bag) => {
    if (bag.weight > 75) bag.isHeavy = true;
    return bag;
  });

Do we care where the bags come from? No. Do we care where they go? No. This is a pure (mostly, we do mutate heavy bags) function of its inputs encapsulating the domain logic. So far so good. How easy is it to test?

Um. Er. Not especially.

But what if we pull all of those anonymous callbacks out into named functions (or methods) that can be tested? Now we're getting somewhere:

const isHeavy = (bag) => bag.weight > 75;
const notFood = (bag) => !bag.isFood();
const labelBag = (bag) => {
  bag.isHeavy = true;
  return bag;
};

const throwIfTicking = (bags) => {
  if (bags.some(bag => bag.isTicking())) throw new Error('go home!');
  return bags
};

const processPallets = (pallets) => {
  return throwIfTicking(pallets.flat())
    .filter(notFood)
    // Note the lambda here. You could pull this out too.
    // it's a bit of a judgement call how far you go with this.
    .map(bag => isHeavy(bag) ? labelBag(bag) : bag);
};

Notice that there's no cumbersome indirection going on here; everything is still very straightforward. You just have to have the discipline not to use anonymous callbacks excessively and to split things into small single-purpose functions. And since you've tested all the easily-testable individual pieces, how much effort do you have to spend testing the fairly simple composition of them that is processPallets? Almost none. How much time are you going to spend testing the HTTP request that gives you the bags (or wherever they come from), or the RabbitMQ queue that you put them on to after you process them (or wherever they might go)? Almost none.
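
For instance, the tests for the named pieces might look like this (a sketch that assumes the functions above are exported from a './luggage' module, using Node's built-in test runner), with nothing to mock in sight:

import { test } from 'node:test';
import assert from 'node:assert/strict';
import { isHeavy, notFood, throwIfTicking } from './luggage'; // hypothetical module for the functions above

// Tiny builder for test bags; each test overrides only what it cares about.
type Bag = { weight: number; isFood: () => boolean; isTicking: () => boolean; isHeavy: boolean };
const bag = (overrides: Partial<Bag> = {}): Bag => ({
  weight: 20,
  isFood: () => false,
  isTicking: () => false,
  isHeavy: false,
  ...overrides,
});

test('bags over 75 are heavy', () => {
  assert.equal(isHeavy(bag({ weight: 80 })), true);
  assert.equal(isHeavy(bag({ weight: 20 })), false);
});

test('food-smelling bags are filtered out', () => {
  assert.equal(notFood(bag({ isFood: () => true })), false);
  assert.equal(notFood(bag()), true);
});

test('a ticking bag sends everyone home', () => {
  assert.throws(() => throwIfTicking([bag(), bag({ isTicking: () => true })]));
});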

I think this subject suffers from conflated and co-opted terminology, which causes people to talk past each other. (I've written about this before).

For example, take the following:

Should I be writing only integration tests when there is dependency, and unit tests for pieces of code without any dependency?

I think most people would answer this question by saying that (ideally, modulo common sense, etc.):

"When there is no dependency, unit tests are sufficient and mocks aren't needed; when there is dependency, unit tests may need mocks and there should also be integration tests."

Let's call this answer A, and I'm going to assume that it's a relatively uncontroversial thing to say.

However, two people might both give answer A, but mean very different things when they say it!

When a "classicist" says answer A, they might mean the following (answer B):

"Functionality that is internal to the application (e.g. a calculation which performs no I/O) doesn't need integration tests, and its unit tests don't need mocks. Functionality with some external dependency (e.g. a separate application like an RDBMS, or a third-party Web service) should have integration tests, and if it has unit tests they may need the external interactions to be mocked."

When others ("mockists"?) say answer A, they might mean the following (answer C):

"A class which doesn't call methods of another class doesn't need integration tests, and its unit tests don't need mocks. Classes which call methods of other classes should mock those out during their unit tests, and they should probably have integration tests too."

These testing strategies are objectively very different, but they both correspond to answer A. This is due to the different meanings they are using for words. We can caricature someone who says answer A, but means answer B, as saying the following:

  • A "dependency" is a different application, Web service, etc. Possibly maintained by a third-party. Unchangeable, at least within the scope of our project. For example, our application might have MySQL as a dependency.
  • A "unit" is a piece of functionality which makes some sort of sense on its own. For example "adding a contact" may be a unit of functionality.
  • A "unit test" checks some aspect of a unit of functionality. For example, "if we add a contact with email address X, looking up that contact's email address should give back X".
  • An "interface" is the protocol our application should follow to interact with a dependency, or how our application should behave when used as a dependency by something else. For example, SQL with a certain schema when talking to a database; JSON with a certain schema, sent over HTTP, when talking to a ReST API.
  • An "integration test" checks that the interface our application is using with a dependency will actually have the desired effect. For example "There will always be exactly one matching row after running an UPSERT query".
  • A "mock" is a simplified, in-memory alternative to a dependency. For example, MockRedisConnection may follow the same interface as RedisConnection, but just contains a HashMap. Mocks can sometimes be useful, e.g. if some of our unit tests are annoyingly slow, or if our monthly bill from a third-party Web service is too high due to all of the calls made by our tests.

We can caricature someone who says answer A, but means answer C, as saying the following:

  • A "dependency" is a different class to the one we're looking at. For example, if we're looking at the "Invoice" class, then the "Product" class might be a dependency.
  • A "unit" is a chunk of code, usually a method or class. For example "User::addContact" may be a unit.
  • A "unit test" checks only the code inside a single unit (e.g. one class). For example "Calling User::addContact with a contact with email address X will ask to DBConnection to insert a contacts row containing email address X".
  • An "interface" is like a class but only has the method names and types; the implementations are provided by each class extending that interface.
  • An "integration test" checks that code involving multiple classes gives the correct result. For example "Adding Discounts to a ShoppingCart affects the Invoice produced by the Checkout".
  • A "mock" is an object which records the method calls made on it, so we can check what the unit of code we're testing tried to do in a unit test. They are essential if we want to isolate the unit under test from every other class.

These are very different meanings, but the relationships between B's meanings and between C's meanings are similar, which is why both groups of people seem to agree with each other about answer A (e.g. their definitions of "dependency" and "integration test" differ, but both have the relationship "dependencies should have integration tests").

For the record, I would personally count myself as what you call a "classicist" (although I've not come across that term before); hence why the above caricatures are clearly biased!

In any case, I think this problem of conflated meanings needs to be addressed before we can have constructive debates about the merits of one approach versus another. Unfortunately every time someone tries to introduce some new, more specialised vocabulary to avoid the existing conflations, those terms start getting mis-used until they're just as conflated as before.

For example, "Thought Leader X" might want to talk about physical humans clicking on a UI or typing in a CLI, so they say "it's important to describe how users can interact with the system; we'll call these 'behaviours'". Their terminology spreads around, and soon enough "Though Leader Y" (either through misunderstanding, or thinking they're improving the situation), will say something like "I agree with X, that when we design a system like the WidgetFactory class, we should use behaviours to describe how it interacts with its users, like the ValidationFactory class". This co-opted usage spreads around, obscuring the original meaning. Those reading old books and blog posts from X may get confused about the original message, and start applying their advice to the newer meanings (after all, this is a highly regarded book by that influential luminary X!).

We've now reached the situation where "module" means class, "entity" means class, "unit" means class, "collaborator" means class, "dependency" means class, "user" means class, "consumer" means class, "client" means class, "system under test" means class, "service" means class. Where "boundary" means "class boundary", "external" means "class boundary", "interface" means "class boundary", "protocol" means "class boundary". Where "behaviour" means "method call", where "functionality" means "method call", where "message send" means "method call".


Hopefully that gives some context to the following answer, for your specific question:

However, how would I go about writing unit tests for a piece of code that uses one or more dependencies? For instance, if I am testing a UserService class that needs UserRepository (talks to the database) and UserValidator (validates the user), then the only way would be... to stub them?

Otherwise, if I use a real UserRepository and UserValidator, wouldn't that be an integration test and also defeat the purpose of testing only the behavior of UserService?

A 'classicist' like me would say that UserService, UserRepository and UserValidator are not dependencies; they're part of your project. The database is a dependency.

Your unit tests should check the functionality of your application/library, whatever that entails. Anything else would mean your test suite is lying to you; for example, mocking out calls to the DB could make your test suite lie about the application working, when in fact there happens to be a DB outage right now.

Some lies are more acceptable than others (e.g. mocking the business logic is worse than mocking the DB).

Some lies are more beneficial than others (e.g. mocking the DB means we don't need to clean up test data).

Some lies require more effort to pull-off than others (e.g. using a library to mock a config file is easier than manually creating bespoke mocks for a whole bunch of intricately-related classes).

There is no universal right answer here; these are tradeoffs that depend on the application. For example, if your tests are running on a machine that may not have a DB or a reliable network connection (e.g. a developer's laptop), and where left over cruft will accumulate, and where there's an off-the-shelf library that makes DB mocking easy, then maybe it's a good idea to mock the DB calls. On the other hand, if the tests are running in some provisioned environment (e.g. a container, or cloud service, etc.) which gets immediately discarded, and which it's trivial to add a DB to, then maybe it's better to just set 'DB=true' in the provisioner and not do any mocking.

The point of integration tests, to a classicist, is to perform experiments that test the theories we've used to write our application. For example, we might assume that "if I say X to the DB, the result will be Y", and our application relies on this assumption in the way it uses the DB:

  • If our tests are run with a real DB, this assumption will be tested implicitly: if our test suite passes, then our assumption is either correct or irrelevant. If our assumption is wrong in a relevant way, then our tests will fail. There is no need to check this with separate integration tests (although we might want to do it anyway).

  • If we're mocking things in our tests, then our assumptions will always be true for those mocks, since they're created according to our assumptions (that's how we think DBs work!). In this case, if the unit tests pass it doesn't tell us if our assumptions are correct (only that they're self-consistent). We do need separate integration tests in this case, to check whether the real DB actually works in the way we think it does.

Choosing Collaborators is Hard

It's just as difficult as working out the communication protocol and interface between them, because it boils down to the same problem: making a boundary.

If you are writing your unit tests and stubbing out actual collaborators, then you are doing it right, because changes in the protocol/interface necessitate changes in the collaborator, and as such in your mock/stub.

If you are writing unit tests and stubbing out internal implementation details, then you are doing the wrong thing, because the tests will break simply due to a refactor within the unit.


Exploratory Unit Tests serve the purpose of shortening the feedback loop

If unit tests no longer serve this purpose, then they (in their exploratory capacity) are without value.

There are many ways to provide that feedback loop. If you are early in a hypothetical design, it may pay to ditch (or not write) unit tests and instead favour other methods of obtaining fast feedback. It's not like every script you have ever written has a body of extensive tests.

That being said, once the design is settled, it will pay to write unit tests for the code to improve quality and cross-check the features actually desired.

Licensed under: CC-BY-SA with attribution