Question

I know some people are massive proponents of test-driven development. I have used unit tests in the past, but only to test operations that can be tested easily or that I believe will quite possibly be correct. Complete or near-complete code coverage sounds like it would take a lot of time.

  1. What projects do you use test-driven development for? Do you only use it for projects above a certain size?
  2. Should I be using it or not? Convince me!

Solution

OK, some advantages of TDD:

  1. It means you end up with more tests. Everyone likes having tests, but few people like writing them. Building test-writing into your development flow means you end up with more tests.
  2. Writing to a test forces you to think about the testability of your design, and testable design is almost always better design. It's not entirely clear to me why this happens to be the case, but my experience and that of most TDD evangelists seems to bear it out.
  3. There's a study suggesting that although TDD takes a bit longer up front, it gives a good return on investment, because you end up with higher-quality code and therefore fewer bugs to fix.
  4. It gives you confidence in refactoring. It's a great feeling to be able to change one system without worrying about breaking everything else because it's pretty well covered by unit tests.
  5. You almost never get a repeat bug, since every one you find should get a test before it gets a fix.

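To make point 5 concrete, here's a minimal sketch using Python's unittest (the function and the bug are hypothetical, not from the original answer): the failing test that reproduces the bug is written first, and the fix only follows once that test is red. The test then stays in the suite forever, so that particular bug can't quietly come back.

    import unittest

    # Hypothetical example: average() used to crash on an empty list.
    # Under TDD, the failing test below is written first, then the fix.
    def average(values):
        if not values:  # the fix: guard the empty-list case
            raise ValueError("average() requires at least one value")
        return sum(values) / len(values)

    class TestAverage(unittest.TestCase):
        def test_average_of_values(self):
            self.assertEqual(average([2, 4, 6]), 4)

        def test_empty_list_raises(self):
            # The regression test: written to reproduce the bug report,
            # and kept forever so the bug can never silently return.
            with self.assertRaises(ValueError):
                average([])

    if __name__ == "__main__":
        unittest.main()
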
You asked to be convinced, so I've focused on the benefits; see this question for a more balanced view.

OTHER TIPS

Robert C. Martin originally made these points - I can back them up from my own experience:

  • You will automatically build a regression test suite of unit tests as you go.
  • You will hardly ever need to spend time debugging: if you code yourself into a hole, it's easier to undo your code back to the point where the last test passed than to crack open a debugger.
  • Every few minutes you verify that your code works - all of it (or at least all of the behaviour covered by the tests, which, if you're doing TDD, is a very high percentage of it).

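As a minimal sketch of that rhythm (a hypothetical example, not Martin's own): the test is written first and fails ("red"), then just enough code is added to make it pass ("green"), and the whole suite is re-run after every small step.

    import unittest

    # Step 1 (red): these tests are written before fizzbuzz() exists,
    # so the suite fails immediately - exactly what TDD expects.
    # Step 2 (green): the simplest implementation that passes follows.
    def fizzbuzz(n):
        if n % 15 == 0:
            return "FizzBuzz"
        if n % 3 == 0:
            return "Fizz"
        if n % 5 == 0:
            return "Buzz"
        return str(n)

    class TestFizzBuzz(unittest.TestCase):
        def test_multiples_of_three(self):
            self.assertEqual(fizzbuzz(3), "Fizz")

        def test_multiples_of_five(self):
            self.assertEqual(fizzbuzz(5), "Buzz")

        def test_multiples_of_both(self):
            self.assertEqual(fizzbuzz(15), "FizzBuzz")

        def test_other_numbers(self):
            self.assertEqual(fizzbuzz(7), "7")

    if __name__ == "__main__":
        unittest.main()
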
I pretty much do TDD all of the time, whether I'm working on production or play code; I find it difficult to code any other way these days.

(Disclaimer: I do hardly any UI stuff, so I can't discuss TDD for UIs.)

I use TDD in pretty much everything I do, from trivial apps to entire SIP stacks.

I don't use TDD in a legacy PHP website I took over. I find it painful not having tests. And I find it intensely annoying accidentally breaking parts of the site because I don't have a regression test suite telling me I broke something. The client doesn't have the budget for me to (a) write tests for the codebase and (b) in the process make the code testable in the first place, so I just put up with it.

Use TDD:

  • Whenever it lets you serve your client more effectively (clients may well relate to tests, and tests will at least cut down on end-of-project discussion)
  • Whenever it would take longer to keep your co-developers informed about EVERYTHING in the code than to put the effort into building the tests - and that point comes sooner than you may think

What? No negative answer!?

Disclaimer: I am not anti-unit-testing. When people say TDD, I assume they mean the disease-sounding version: writing tests before the code for 80-100% of everything they write.

I would argue:

  • It's an enabler. If catching regression issues is such a huge problem for you that full-auto TDD from the start seems worthwhile, then writing tests for every last piece of code you write might actually be helping you ignore the real problem.

  • It helps people ignore the real problem. When fixing one bug turns into a game of whack-a-mole where two more pop up, the architecture blows. Focus. Focus on the real problem. Seeing the moles before they must be whacked is neat-o, but you shouldn't be there in the first place.

  • It eats a lot of time. I hit occasional bugs. I do not hit so many that it seems worthwhile to prefix every new thing I write with a test for it. Catch issues where they're likely to happen. Handle errors so that they are easy to diagnose. Validate. Test at key points of overlap/bottleneck (see the sketch after this list). But for crying out loud, don't test every last getter and setter in something that probably shouldn't have had those in the first place.

  • Design Focus: There is absolutely no way even a good developer is going to write the best code they can when they're also focusing on the test. If that seems like the only way you can get a decent design, I'd recommend seeing the above about "focusing on the real problem."

  • Macro-Design Fail: The codebase at my current job is riddled with interfaces that never get used more than once and massive violations of the basic DRY principle, which I only finally started to understand when I realized people were writing for the test frameworks and for testing in general. Testing should not lead to stupid architecture. No really, there is nothing somehow more scalable or enterprise-worthy about copying and pasting 20 files and then only making significant changes to two of them. The idea is to separate concerns, not split them down the middle. Cruft and pointless abstraction will cost you more than not having 95% coverage ever will.

  • It's really popular and lots of people really, really like it. If that's not reason enough to at least second-guess and/or vet the crap out of any technology before adoption, learn you some paranoia.
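As a sketch of the "validate and test at key points" alternative mentioned above (all names hypothetical): do the validation once at the system boundary, make the error message name the bad value so failures are easy to diagnose, and put the few tests there rather than on every trivial accessor.

    import unittest

    # Hypothetical boundary function: validate here, once, with a message
    # that makes failures easy to diagnose, instead of unit-testing every
    # trivial getter downstream of it.
    def parse_port(raw):
        try:
            port = int(raw)
        except (TypeError, ValueError):
            raise ValueError(f"port must be an integer, got {raw!r}")
        if not 1 <= port <= 65535:
            raise ValueError(f"port must be in 1..65535, got {port}")
        return port

    class TestParsePort(unittest.TestCase):
        # A couple of focused tests at the boundary cover every caller.
        def test_valid_port(self):
            self.assertEqual(parse_port("8080"), 8080)

        def test_out_of_range_port_names_the_value(self):
            with self.assertRaises(ValueError) as ctx:
                parse_port("70000")
            self.assertIn("70000", str(ctx.exception))

    if __name__ == "__main__":
        unittest.main()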
