Question

A short introduction to this question: I have been using TDD, and lately BDD, for over a year now, and I use techniques like mocking to make writing my tests more efficient. Recently I started a personal project to write a little money management program for myself. Since I had no legacy code, it was the perfect project to start with TDD. Unfortunately I did not experience the joy of TDD very much. It even spoiled my fun so much that I gave up on the project.

What was the problem? Well, I used a TDD-like approach to let the tests / requirements evolve the design of the program. The problem was that more than half of the development time went into writing and refactoring tests. So in the end I did not want to implement any more features, because I would have needed to refactor and write too many tests.

At work I have a lot of legacy code. There I write more and more integration and acceptance tests and fewer unit tests. This does not seem to be a bad approach, since bugs are mostly detected by the acceptance and integration tests.

My idea was that, in the end, I could write more integration and acceptance tests than unit tests. As I said, for detecting bugs the unit tests are not better than integration / acceptance tests. Unit tests are also good for the design: since I used to write a lot of them, my classes are always designed to be easily testable. Additionally, the approach of letting the tests / requirements guide the design leads in most cases to a better design. The last advantage of unit tests is that they are faster. I have written enough integration tests to know that they can be nearly as fast as unit tests.

After looking around the web I found very similar ideas to mine mentioned here and there. What do you think of this idea?

Edit

Responding to the questions, here is one example where the design was good, but the next requirement required a huge refactoring:

At first there were some requirements to execute certain commands. I wrote an extendable command parser, which parsed commands from some kind of command prompt and called the correct one on the model. The results were represented in a view model class: First design

There was nothing wrong here. All classes were independent from each other, and I could easily add new commands and show new data.

The next requirement was that every command should have its own view representation, some kind of preview of the result of the command. I redesigned the program to achieve a better design for the new requirement: Second design

This was also good because now every command has its own view model and therefore its own preview.

The thing is that the command parser was changed to use token-based parsing of the commands and was stripped of its ability to execute them. Every command got its own view model, and the data view model only knows the current command view model, which then knows the data that has to be shown.

All I wanted to know at this point was whether the new design broke any existing requirement. I did not have to change ANY of my acceptance tests. I had to refactor or delete nearly EVERY unit test, which was a huge pile of work.

What I wanted to show here is a common situation which happened often during development. There was no problem with the old or the new design; they just changed naturally with the requirements. As I understood it, this is one advantage of TDD: the design evolves.

Conclusion

Thanks for all the answers and discussions. Summarizing this discussion, I have thought of an approach which I will try out in my next project.

  • First of all, I write all tests before implementing anything, like I always did.
  • For requirements I first write some acceptance tests that exercise the whole program. Then I write some integration tests for the components where I need to implement the requirement. If a component works closely together with another component to implement the requirement, I also write some integration tests where both components are tested together. Last but not least, if I have to write an algorithm or any other class with many input permutations - e.g. a serializer - I write unit tests for those particular classes (see the sketch after this list). All other classes are not covered by any unit tests.
  • For bugs the process can be simplified. Normally a bug is caused by one or two components. In that case I write one integration test for those components which reproduces the bug. If it is related to an algorithm, I only write a unit test. If it is not easy to detect the component where the bug occurs, I write an acceptance test to locate the bug - this should be the exception.
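
For example, for a serializer-like class with many input permutations, the unit tests could look roughly like the following minimal sketch (using NUnit; MySerializer and its Serialize method are only illustrative names, not real code from the project):

    using NUnit.Framework;

    [TestFixture]
    public class MySerializerTests
    {
        // data-driven cases cover the many input permutations cheaply
        [TestCase(42, "42")]
        [TestCase(0, "0")]
        [TestCase(-7, "-7")]
        public void Serialize_FormatsIntegers(int input, string expected)
        {
            var serializer = new MySerializer();   // hypothetical class under test
            Assert.AreEqual(expected, serializer.Serialize(input));
        }
    }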

Solution

It's comparing apples and oranges.

Integration tests, acceptance tests, unit tests, behaviour tests - they are all tests and they will all help you improve your code but they are also quite different.

I'm going to go over each of the different kinds of test as I see them, and hopefully explain why you need a blend of all of them:

Integration tests:

Simply put: test that the different component parts of your system integrate correctly - for example, maybe you simulate a web service request and check that the result comes back. I would generally use real(ish) static data and mocked dependencies to ensure that the result can be consistently verified.
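
A minimal sketch of what such a test might look like, assuming an NUnit-style test with a hand-rolled fake for the dependency (ICustomerService, FakeCustomerService and OrderPage are purely illustrative names, not from any particular codebase):

    using NUnit.Framework;

    // the dependency of the component under test
    public interface ICustomerService
    {
        string GetCustomerName(int orderId);
    }

    // hand-rolled fake returning static data, so the result can be verified consistently
    public class FakeCustomerService : ICustomerService
    {
        public string GetCustomerName(int orderId) => "Alice";
    }

    [TestFixture]
    public class OrderPageIntegrationTests
    {
        [Test]
        public void RenderedPage_ContainsCustomerName()
        {
            // OrderPage is the (illustrative) component whose wiring we verify
            var page = new OrderPage(new FakeCustomerService());

            var html = page.Render(orderId: 1);

            StringAssert.Contains("Alice", html);
        }
    }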

Acceptance tests:

An acceptance test should directly correlate to a business use case. It can be huge ("trades are submitted correctly") or tiny ("the filter successfully filters a list") - it doesn't matter; what matters is that it is explicitly tied to a specific user requirement. I like to focus on these for test-driven development because it means we have a good reference from tests to user stories for dev and QA to verify against.

Unit tests:

These are for small, discrete units of functionality that may or may not make up an individual user story by themselves. For example, a user story which says that we retrieve all customers when we access a specific web page can be covered by an acceptance test (simulate hitting the web page and check the response), but may also be backed by several unit tests (verify that security permissions are checked, verify that the database query is correct, verify that any code limiting the number of results is executed correctly) - these are all "unit tests" that on their own don't make up a complete acceptance test.

Behaviour tests:

These define what the flow of the application should be for a specific input. For example: "when the connection cannot be established, verify that the system retries the connection." Again, this is unlikely to be a full acceptance test, but it still allows you to verify something useful.
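
As a minimal sketch, again assuming NUnit and a hand-rolled counting fake (IConnection, FlakyConnection and Uploader are illustrative names only):

    using System;
    using NUnit.Framework;

    public interface IConnection
    {
        void Open();
    }

    // counting fake: fails on the first attempt and records how often Open was called
    public class FlakyConnection : IConnection
    {
        public int Attempts { get; private set; }

        public void Open()
        {
            Attempts++;
            if (Attempts == 1)
                throw new InvalidOperationException("connection refused");
        }
    }

    [TestFixture]
    public class RetryBehaviourTests
    {
        [Test]
        public void Connect_RetriesWhenFirstAttemptFails()
        {
            var connection = new FlakyConnection();

            // Uploader is the (illustrative) component whose retry behaviour we verify
            new Uploader(connection).Connect();

            Assert.AreEqual(2, connection.Attempts);
        }
    }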

These definitions are all my own, formed through much experience of writing tests; I don't like to focus on the textbook approaches - rather, focus on what gives your tests value.

OTHER TIPS

TL;DR: As long as it meets your needs, yes.

I've been doing Acceptance Test Driven Development (ATDD) for many years now. It can be very successful. There are a few things to be aware of.

  • Unit tests really do help enforce IoC (inversion of control). Without unit tests, the onus is on the developers to make sure they meet the requirements of well-written code themselves (in so far as unit tests drive well-written code).
  • Acceptance tests can be slower and can fail spuriously if you actually use resources that would typically be mocked.
  • These tests do not pinpoint the specific problem the way unit tests would. You need to do more investigation to fix test failures.

Now the benefits

  • Much better test coverage, covers integration points.
  • Ensures the system as a whole meets the acceptance criteria, which is the whole point of software development.
  • Makes large refactors much easier, faster, and cheaper.

As always, it's up to you to do the analysis and figure out whether this practice is appropriate for your situation. Unlike many people, I don't think there is an idealized right answer; it will depend on your needs and requirements.

Well, I used a TDD-like approach to let the tests / requirements evolve the design of the program. The problem was that more than half of the development time went into writing and refactoring tests.

Unit tests work best when the public interface of the components they are used for does not change too often. This means: when the components are already designed well (for example, following the SOLID principles).

So the belief that a good design just "evolves" from "throwing" a lot of unit tests at a component is a fallacy. TDD is no "teacher" of good design; it can only help a little to verify that certain aspects of the design are good (especially testability).

When your requirements change and you have to change the internals of a component, and this breaks 90% of your unit tests so that you have to refactor them very often, then the design most probably was not so good.

So my advice is: think about the design of the components you have created, and how you can make them follow the open/closed principle more closely. The idea of the latter is to make sure the functionality of your components can be extended later without changing them (and thus without breaking the components' APIs used by your unit tests). Such components can (and should) be covered by unit tests, and the experience should not be as painful as you have described.
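
To illustrate the idea with a minimal sketch (reusing the command-parser scenario from the question, but with names made up purely for illustration): the parser works only against an abstraction, so new commands can be added without touching the parser or the unit tests of its existing behaviour.

    using System.Collections.Generic;
    using System.Linq;

    public interface ICommand
    {
        string Name { get; }
        void Execute();
    }

    // Closed for modification: CommandParser never changes when commands are added,
    // so its unit tests keep passing untouched.
    public class CommandParser
    {
        private readonly IEnumerable<ICommand> commands;

        public CommandParser(IEnumerable<ICommand> commands)
        {
            this.commands = commands;
        }

        public ICommand Parse(string input)
        {
            var name = input.Split(' ')[0];
            return commands.First(c => c.Name == name);
        }
    }

    // Open for extension: a new feature means a new ICommand implementation.
    public class ShowBalanceCommand : ICommand
    {
        public string Name => "balance";
        public void Execute() { /* show the balance ... */ }
    }

Adding a new requirement then means adding a new command class plus new tests for it, while the existing parser tests stay untouched.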

When you cannot come up with such a design immediately, acceptance and integration tests may be indeed a better start.

EDIT: Sometimes the design of your components can be fine, but the design of your unit tests may cause issues. A simple example: you want to test the method "MyMethod" of the class X and write

    var x = new X();
    Assert.AreEqual("expected value 1", x.MyMethod("value 1"));
    Assert.AreEqual("expected value 2", x.MyMethod("value 2"));
    // ...
    Assert.AreEqual("expected value 500", x.MyMethod("value 500"));

(assume the values have some kind of meaning).

Assume further that in production code there is just one call to X.MyMethod. Now, for a new requirement, the method "MyMethod" needs an additional parameter (for example, something like a context), which cannot be omitted. Without unit tests, one would have to refactor the calling code in just one place. With unit tests, one has to refactor 500 places.

But the cause here is not the unit tests themselves; it is the fact that the same call to "X.MyMethod" is repeated again and again, not strictly following the "Don't Repeat Yourself" (DRY) principle. So the solution here is to put the test data and the related expected values in a list and run the calls to "MyMethod" in a loop (or, if the testing tool supports so-called "data-driven tests", to use that feature). This reduces the number of places to change in the unit tests when the method signature changes to one (as opposed to 500).
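
A minimal sketch of the loop-based variant (the class X, its method MyMethod and the values are still the hypothetical ones from the example above):

    // requires using System.Collections.Generic
    var cases = new Dictionary<string, string>
    {
        { "value 1", "expected value 1" },
        { "value 2", "expected value 2" },
        // ...
        { "value 500", "expected value 500" }
    };

    var x = new X();
    foreach (var c in cases)
    {
        // only one call site to adapt if MyMethod's signature changes
        Assert.AreEqual(c.Value, x.MyMethod(c.Key));
    }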

In your real-world case the situation might be more complex, but I hope you get the idea: when your unit tests use a component's API for which you don't know whether it may become subject to change, make sure you reduce the number of calls to that API to a minimum.

Yes, of course it is.

Consider this:

  • a unit test is a small, targeted piece of testing that exercises a small piece of code. You write lots of them to achieve decent code coverage, so that all of the code (or at least the majority of the awkward bits) is tested.
  • an integration test is a large, broad piece of testing that exercises a large surface of your code. You write a few of them to achieve decent code coverage, so that all of the code (or at least the majority of the awkward bits) is tested.

See the overall difference?

The issue is one of code coverage: if you can achieve a full test of all your code using integration/acceptance testing, then there's no problem. Your code is tested. That's the goal.

I think you may need to mix them up, as every TDD-based project will require some integration testing just to make sure that all the units actually work well together (I know from experience that a codebase with 100% passing unit tests does not necessarily work when you put it all together!)

The problem really comes down to the ease of testing, debugging the failures, and fixing them. Some people find their unit tests are very good at this: they are small and simple, and failures are easy to see; the disadvantage is that you have to reorganise your code to suit the unit test tools, and write very many of them. An integration test is more difficult to write so that it covers a lot of code, and you will probably have to use techniques like logging to debug any failures (though I'd say you have to do this anyway - you can't unit test failures when on-site!).

Either way, you still get tested code; you just need to decide which mechanism suits you better. (I'd go with a bit of a mix: unit test the complex algorithms, and integration test the rest.)

I think it's a horrible idea.

Since acceptance tests and integration tests touch broader portions of your code to test a specific target, they're going to need more refactoring over time, not less. Worse yet, since they cover broad sections of the code, they increase the time you spend tracking down the root cause of a failure, because you have a broader area to search through.

No, you should usually write more unit tests, unless you have an odd app that is 90% UI or something else that's awkward to unit test. The pain you're running into isn't from unit tests, but from doing test-first development. Generally, you should spend at most 1/3 of your time writing tests. After all, they're there to serve you, not vice versa.

The "win" with TDD, is that once the tests have been written, they can be automated. The flip side is that it can consume a significant chunk of the development time. Whether this actually slows the whole process down is moot. The argument being that the upfront testing reduces the number of errors to be fixed at the end of the development cycle.

This is where BDD comes in, as behaviours can be included within the unit testing, so the process is by definition less abstract and more tangible.

Clearly, if an infinite amount of time were available, you'd do as many tests of as many varieties as possible. However, time is generally limited, and continual testing is only cost-effective up to a point.

This all leads to the conclusion that the tests that provide the most value should be at the front of the process. This in itself doesn't automatically favour one type of testing over another - more that each case has to be taken on its merits.

If you're writing a command line widget for personal use, you'd primarily be interested in unit tests. A web service, say, would require a substantial amount of integration/behavioural testing.

Whilst most types of test concentrate on what could be called the "racing line", i.e. testing what is required by the business today, unit testing is excellent at weeding out subtle bugs that could surface in later development phases. Since this is a benefit that can't readily be measured, it is often overlooked.

The last advantage of unit tests is that they are faster. I have written enough integration tests to know that they can be nearly as fast as unit tests.

This is the key point, and not only "the last advantage". As the project gets bigger and bigger, your integration and acceptance tests become slower and slower. And here I mean so slow that you are going to stop executing them.

Of course, unit tests become slower as well, but they are still more than an order of magnitude faster. For example, in my previous project (C++, some 600 kLOC, 4000 unit tests and 200 integration tests), it took about one minute to execute all the unit tests and more than 15 minutes to execute the integration tests. Building and executing the unit tests for the part being changed would take less than 30 seconds on average. When you can do it that fast, you'll want to do it all the time.

Just to make it clear: I am not saying you should not add integration and acceptance tests, but it looks like you did TDD/BDD the wrong way.

Unit tests are also good for the design.

Yes, designing with testability in mind will make the design better.

The problem was that more than half of the development time went into writing and refactoring tests. So in the end I did not want to implement any more features because I would have needed to refactor and write too many tests.

Well, when requirements change, you do have to change the code. I would say you haven't finished your work if you haven't written the unit tests. But this doesn't mean you should aim for 100% coverage with unit tests - that is not the goal. Some things (like the GUI, or accessing a file, ...) are not even meant to be unit tested.

The result of this is better code quality, and another layer of testing. I would say it is worth it.


We also had several thousand acceptance tests, and it would take a whole week to execute them all.

Licensed under: CC-BY-SA with attribution