Are there areas where TDD provides a high ROI and other areas where the ROI is so low that it is not worth following? [closed]

softwareengineering.stackexchange https://softwareengineering.stackexchange.com/questions/206

  •  16-10-2019

Question

Test driven development. I get it, like it.

But writing tests does require overhead. So should TDD be used universally throughout the code base, or are there areas where it provides a high ROI and others where the ROI is so low that it is not worth following?

Solution

I'd say avoid TDD in places where the code is likely to change structurally a lot. That is, it's great to have a pile of tests for a method whose signature rarely changes but whose internals get refactored frequently, but it sucks to have to fix your tests every time a highly volatile interface changes dramatically.
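To illustrate the idea, here's a minimal sketch (the `calculate_total` function and its helper are hypothetical, not from any particular codebase): the tests pin down only the stable public signature, so the internals can be rewritten freely without breaking them.

```python
def calculate_total(prices, tax_rate=0.0):
    """Public entry point -- its signature rarely changes, so tests target it."""
    return _apply_tax(sum(prices), tax_rate)

def _apply_tax(subtotal, tax_rate):
    """Internal helper -- free to refactor; no test depends on it directly."""
    return round(subtotal * (1 + tax_rate), 2)

# Runnable with pytest. These tests survive any internal refactor of
# _apply_tax, because they only exercise the stable public interface.
def test_total_without_tax():
    assert calculate_total([10.0, 20.0]) == 30.0

def test_total_with_tax():
    assert calculate_total([100.0], tax_rate=0.1) == 110.0
```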

The apps I've been working on recently have been data-driven webapps built on a GUI → Presenter → Business Logic → Data Access Layer architecture. My data access layer is tested like nobody's business. The business logic layer is pretty well tested. The presenters are only tested in the more stable areas, and the GUI, which changes hourly, has almost no tests.
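A sketch of why that layering keeps the heavily tested layers cheap to test (all names here are hypothetical): the business logic takes its data-access object as a parameter, so it can be tested thoroughly against a fake while the volatile GUI above it stays out of the picture.

```python
class FakeOrderDao:
    """Hypothetical stand-in for the real data access layer."""
    def __init__(self, orders):
        self._orders = orders

    def orders_for(self, customer_id):
        return [o for o in self._orders if o["customer"] == customer_id]

def total_spent(dao, customer_id):
    """Business-logic function under test; knows nothing about the GUI."""
    return sum(o["amount"] for o in dao.orders_for(customer_id))

def test_total_spent_sums_only_that_customers_orders():
    dao = FakeOrderDao([
        {"customer": 1, "amount": 25.0},
        {"customer": 2, "amount": 99.0},
        {"customer": 1, "amount": 5.0},
    ])
    assert total_spent(dao, 1) == 30.0
```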

OTHER TIPS

I suggest writing a full test suite in areas where it's sensible and practical to do. In less practical areas, write sanity checks.

In my experience, the overhead of a full set of test cases is worth it in most cases, but code coverage realistically has diminishing returns. At some point, writing more tests purely to increase coverage no longer makes sense.
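For a concrete picture of the "sanity check" mentioned above, here's a sketch assuming a hypothetical report generator: a single cheap test proves the happy path works at all, with no attempt at exhaustive coverage.

```python
def generate_report(rows):
    """Hypothetical function under test."""
    return "\n".join(",".join(map(str, row)) for row in rows)

# One sanity check, runnable with pytest: the happy path returns
# something usable and doesn't blow up. That's the whole ambition.
def test_report_sanity():
    result = generate_report([[1, 2], [3, 4]])
    assert isinstance(result, str)
    assert result  # non-empty
```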

Depending on your language and technology, for example, testing the UI may not be practical or even feasible. Many checks rely on what a user sees and can't be automated: how would you verify that a method for generating a captcha produces an image a human can actually read?

If a complete set of tests is going to take you three days to write, the function itself only takes half an hour to write, and the likelihood of a bug being introduced in that component down the track is very low, you should think hard about whether that time is worth it. Maybe a basic sanity check for that function would provide enough value?

My general advice is to test components fully where tests can be written relatively easily. If an area is very hard to test, though, draw a line in the sand and write tests that exercise it at a higher level rather than trying to cover it exhaustively.

In the captcha example above, you might write tests that check that an image of the correct size and format is returned and that no exceptions are thrown. That gives you some level of assurance without going overboard.
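A sketch of those higher-level checks, assuming a hypothetical `generate_captcha()` that returns PNG bytes and using the Pillow library to inspect the result. We can't assert that a human can read it, but we can assert size, format, and that nothing blows up.

```python
import io
from PIL import Image

def generate_captcha():
    """Hypothetical function under test; imagine it draws distorted text."""
    img = Image.new("RGB", (200, 80), "white")
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return buf.getvalue()

def test_captcha_is_a_valid_png_of_expected_size():
    data = generate_captcha()            # must not raise
    img = Image.open(io.BytesIO(data))   # must be decodable as an image
    assert img.format == "PNG"
    assert img.size == (200, 80)
```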

To me, TDD is not overhead. It's just the way I write code. Why do you say that writing tests is "overhead"? It's just part of the process. My point of view is that debugging is overhead, and that's an activity I essentially stopped doing when I started TDD-ing. Before TDD, debugging was an integral part of my software writing process.

I think that giving up debugging for test-writing is a very good bargain.

One place where TDD really sucks hard is when testing views in an MVC app.

Since you are testing a function that returns a fat HTML string, you're stuck parsing HTML just to see whether anything worked. Moreover, it can become a maintainability nightmare: one day you move a checkbox around and kaboom, your test is broken.
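Here's a sketch of the kind of test I mean, assuming a hypothetical `render_signup_form()` view that returns an HTML string. The test has to parse markup just to confirm a checkbox exists, and it will break the moment that markup is reshuffled, even if nothing is functionally wrong.

```python
from html.parser import HTMLParser

def render_signup_form():
    """Hypothetical view function returning a fat HTML string."""
    return '<form><input type="checkbox" name="newsletter"></form>'

class CheckboxFinder(HTMLParser):
    """Collects the names of checkbox inputs found in the markup."""
    def __init__(self):
        super().__init__()
        self.names = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") == "checkbox":
            self.names.append(a.get("name"))

def test_newsletter_checkbox_is_rendered():
    finder = CheckboxFinder()
    finder.feed(render_signup_form())
    assert "newsletter" in finder.names
```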

I like TDD for a lot of my testing, but it is not the only tool in a programmer's belt.

Licensed under: CC-BY-SA with attribution