Question

I'm a solo developer in a fairly time-constrained work environment where development time usually ranges from 1-4 weeks per project, depending on requirements, urgency, or both. At any given time I handle around 3-4 projects, some with timelines that overlap.

Expectedly, code quality suffers. I also have no formal testing; it usually comes down to walking through the system until something breaks. As a result, a considerable number of bugs escape to production, which I then have to fix, and that in turn sets back my other projects.

This is where unit testing comes in. When done right, it should keep bugs, especially those that escape to production, to a minimum. On the other hand, writing tests can take a considerable amount of time, which doesn't sound good for time-constrained projects such as mine.

The question is: how much of a time difference does writing unit-tested code make compared to untested code, and how does that difference scale as project scope widens?


Solution

The later you test, the more it costs to write tests.

The longer a bug lives, the more expensive it is to fix.

The law of diminishing returns ensures you can test yourself into oblivion trying to ensure there are no bugs.

Buddha taught the wisdom of the middle path. Tests are good. There is such a thing as too much of a good thing. The key is being able to tell when you are out of balance.

Every line of code you write without tests will cost significantly more to add tests to later than if you had written the tests before writing the code.

Every line of code without tests will be significantly more difficult to debug or rewrite.

Every test you write will take time.

Every bug will take time to fix.

The faithful will tell you not to write a single line of code without first writing a failing test. The test ensures you're getting the behavior you expect. It allows you to change the code quickly without worrying about affecting the rest of the system since the test proves the behavior is the same.
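To make that concrete, here is a minimal sketch of the red-green cycle in Python with pytest; the module, function name, and behavior are invented for illustration. The test is written first and fails, then just enough code is written to make it pass:

```python
# test_pricing.py -- written first; it fails until apply_discount exists and behaves.
from pricing import apply_discount

def test_discount_is_applied():
    assert apply_discount(price=100.0, percent=10) == 90.0

def test_discount_never_goes_below_zero():
    assert apply_discount(price=10.0, percent=150) == 0.0
```

```python
# pricing.py -- just enough code to turn the failing tests green.
def apply_discount(price: float, percent: float) -> float:
    discounted = price * (1 - percent / 100)
    return max(discounted, 0.0)
```

From here on, any change to the pricing logic can be made quickly, because rerunning these tests immediately shows whether the behavior is still the same.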

You must weigh all that against the fact that tests don't add features. Production code adds features. And features are what pay the bills.

Pragmatically speaking, I add all the tests I can get away with. I ignore comments in favor of watching tests. I don't even trust code to do what I think it does. I trust tests. But I've been known to throw the occasional hail mary and get lucky.

However, many successful coders don't do TDD. That doesn't mean they don't test. They just don't obsessively insist that every line of code have an automated test against it. Even Uncle Bob admits he doesn't test his UI. He also insists you move all logic out of the UI.

As a football metaphor (that's American football): TDD is a good ground game. Manual-only testing, where you write a pile of code and hope it works, is a passing game. You can be good at either. Your career isn't going to make the playoffs unless you can do both. It won't make the Super Bowl until you learn when to pick each one. But if you need a nudge in a particular direction: the officials' calls go against me more often when I'm passing.

If you want to give TDD a try, I highly recommend you practice before trying to do it at work. TDD done halfway, half-hearted, and half-assed is a big reason some don't respect it. It's like pouring one glass of water into another: if you don't commit and do it quickly and completely, you end up dribbling water all over the table.

Other tips

I agree with the rest of the answers, but let me answer the "what is the time difference" question directly.

Roy Osherove, in his book The Art of Unit Testing, Second Edition (page 200), describes a case study of similarly sized projects implemented by teams of similar skill for two different clients, where one team wrote tests and the other did not.

His results were as follows:

[Figure: team progress and output measured with and without tests]

So at the end of the project you get both less total time spent and fewer bugs. How much this helps of course depends on how big the project is.

There is only one study I know of which studied this in a "real-world setting": Realizing quality improvement through test driven development: results and experiences of four industrial teams. It is expensive to do this in a sensible way, since it basically means you need to develop the same software twice (or ideally even more often) with similar teams, and then throw all but one away.

The results of the study were an increase in development time of 15%–35% (which is nowhere near the 2x figure often quoted by TDD critics) and a decrease in pre-release defect density of 40%–90%(!). Note that none of the teams had prior experience with TDD, so one could assume that the increase in time can at least partially be attributed to learning, and would therefore go down further over time, but this was not assessed by the study.

Note that this study is about TDD, and your question is about unit testing, which are very different things, but it is the closest I could find.

Done well, developing with unit tests can be faster even before considering the benefit of the extra bugs being caught.

The fact is, I'm not a good enough coder for my code to simply work as soon as it compiles. When I write or modify code, I have to run it to make sure it does what I think it does. On one project, this tended to end up looking like:

  1. Modify code
  2. Compile application
  3. Run application
  4. Log into application
  5. Open a window
  6. Select an item from that window to open another window
  7. Set some controls in that window and click a button

And of course, after all that, it usually took a few round trips to actually get it right.

Now, what if I'm using unit tests? Then the process looks more like:

  1. Write a test
  2. Run tests, make sure it fails in the expected way
  3. Write code
  4. Run tests again, see that it passes

This is easier and faster than manually testing the application. I still have to run the application manually (so I don't look silly when I turn in work that doesn't actually work at all), but for the most part I've already worked out the kinks, and I'm just verifying at that point. I typically make this loop even tighter by using a program that automatically reruns my tests when I save.

However, this depends on working in a test-friendly code base. Many projects, even those with many tests, make writing tests difficult. But if you work at it, you can have a code base that's easier to test via automated tests than with manual testing. As a bonus, you can keep the automated tests around, and keep running them to prevent regressions.
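As a hedged illustration of what "test-friendly" tends to mean in practice (all names here are invented), the usual trick is to pull the logic out of the UI or I/O layer into a plain function that a test can call directly, instead of reaching it through a login screen and several windows:

```python
# Hard to test: this rule used to live inside a button-click handler.
# Test-friendly: the same rule as a plain function with explicit inputs.

def can_submit_order(items: list[dict], credit_limit: float) -> bool:
    """Business rule with no GUI, database, or login involved."""
    total = sum(item["price"] * item["qty"] for item in items)
    return bool(items) and total <= credit_limit


# test_orders.py
def test_empty_cart_cannot_be_submitted():
    assert can_submit_order([], credit_limit=500.0) is False

def test_order_within_credit_limit_is_accepted():
    items = [{"price": 100.0, "qty": 2}]
    assert can_submit_order(items, credit_limit=500.0) is True
```

With the logic shaped like this, a watcher such as pytest-watch can rerun the tests on every save, which gives the tight loop described above.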

Despite there being a lot of answers already, they are somewhat repetitive, and I would like to take a different tack. Unit tests are valuable if, and only if, they increase business value. Testing for testing's sake (trivial or tautological tests), or to hit some arbitrary metric (like code coverage), is cargo-cult programming.

Tests are costly, not only in the time it takes to write them but also in maintenance. They have to be kept in sync with the code they test or they're worthless, not to mention the time cost of running them on every change. That's not a deal-breaker (or an excuse for skipping the truly necessary ones), but it needs to be factored into the cost-benefit analysis.

So when deciding whether to test a function/method (and with what kind of test), ask yourself: "What end-user value am I creating or safeguarding with this test?" If you can't answer that question off the top of your head, that test is likely not worth the cost of writing and maintaining it (or you don't understand the problem domain, which is a far bigger problem than a lack of tests).
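As an illustration of that question in code (a sketch with invented names, not a prescription): the first test below merely restates the implementation and safeguards no end-user value, while the second encodes a rule a user actually relies on.

```python
import pytest

class InsufficientFunds(Exception):
    pass

class Account:
    def __init__(self, balance: float = 0.0, name: str = ""):
        self.balance = balance
        self.name = name

    def withdraw(self, amount: float) -> None:
        if amount > self.balance:
            raise InsufficientFunds(amount)
        self.balance -= amount


# Tautological: restates the implementation and safeguards no user-visible value.
def test_set_name_sets_name():
    account = Account()
    account.name = "Alice"
    assert account.name == "Alice"

# Value-bearing: encodes a rule the end user (and the business) relies on.
def test_withdrawal_cannot_exceed_balance():
    account = Account(balance=50.0)
    with pytest.raises(InsufficientFunds):
        account.withdraw(75.0)
```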

http://rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf

It depends on the person, as well as the complexity and shape of the code you're working with.

For me, on most projects, writing unit tests means I get the work done about 25% faster. Yes, even including the time to write the tests.

Because the fact of the matter is that software isn't done when you write the code. It is done when you ship it to the customer and they're happy with it. Unit tests are by far the most efficient way I know of to catch most bugs, isolate them for debugging, and gain confidence that the code is good. You have to do those things anyway, so do them well.

Some aspects to consider, not mentioned in the other answers.

  • The extra benefit/extra cost depends on experience with writing unit tests.
    • On my first unit-test project the extra cost tripled, because I had to learn a lot and made a lot of mistakes.
    • After 10 years of experience with TDD, I need about 25% more coding time to write the tests in advance.
  • Even with more TDD modules, there is still a need for manual GUI testing and integration testing.
  • TDD only works well when done from the beginning.
    • Applying TDD to an existing, grown project is expensive and difficult, but you can add regression tests instead.
  • Automated tests (unit tests and other kinds of tests) carry a maintenance cost to keep them working.
    • Test code created through copy & paste can make test-code maintenance expensive.
    • With growing experience, test code becomes more modular and easier to maintain.
  • With growing experience you get a feeling for when it is worth creating automated tests and when it is not.
    • For example, there is no big benefit in unit-testing simple getters/setters/wrappers.
    • I do not write automated tests via the GUI.
    • I take care that the business layer can be tested.
Summary

When starting with TDD it is difficult to reach the "more benefit than cost" state as long as you are in a "time-constrained work environment", especially if there are "clever managers" who tell you to "get rid of the expensive, useless testing stuff".

Note: with "unit testing" i mean "testing moduls in isolation".

Note: with "regression testing" i mean

  • Write some code that produces some output text.
  • Write some "regression testing" code that verifies that the result of the generation is still the same.
  • The regression test lets you know whenever the result changes (which might be OK, or might indicate a new bug).
  • The idea of "regression testing" is similar to approval tests:
    • ... taking a snapshot of the results, and confirming that they have not changed.
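A minimal sketch of that snapshot idea in Python (the module under test and the file paths are hypothetical):

```python
# regression_test.py -- a "golden file" / approval-style regression test.
from pathlib import Path

from report import generate_report  # hypothetical module under test

GOLDEN = Path("approved_output/report.txt")

def sample_input() -> dict:
    return {"customer": "ACME", "items": 3}

def test_report_output_has_not_changed():
    actual = generate_report(sample_input())
    if not GOLDEN.exists():
        # First run: record the current output as the approved snapshot.
        GOLDEN.parent.mkdir(parents=True, exist_ok=True)
        GOLDEN.write_text(actual)
    # Any later change fails the test -- which may signal a new bug,
    # or an intended change that requires re-approving the snapshot.
    assert actual == GOLDEN.read_text()
```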

Programmers, like people tackling most tasks, underestimate how long it will actually take to complete them. With that in mind, spending 10 minutes writing a test can be seen as time that could have been spent writing piles of code, when in reality you would have spent that time coming up with the same function name and parameters you did while writing the test. This is a TDD scenario.

Not writing tests is a lot like having a credit card: we tend to spend more, or write more code. More code has more bugs.

Instead of deciding on total code coverage or none at all, I suggest focusing on the critical and complicated parts of your application and having tests there. In a banking app, that might be the interest calculation. An engine diagnostic tool may have complex calibration protocols. If you've been working on a project, you probably know what those parts are and where the bugs tend to be.
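Picking up the banking example, here is a hedged sketch (the formula and names are purely illustrative) of concentrating tests on the one calculation everyone depends on rather than spreading them thinly over boilerplate:

```python
# interest.py -- the "critical and complicated" part worth covering first.
def monthly_interest(balance: float, annual_rate: float) -> float:
    """Monthly interest on a balance; annual_rate is e.g. 0.12 for 12% per year."""
    if balance < 0 or annual_rate < 0:
        raise ValueError("balance and rate must be non-negative")
    return round(balance * annual_rate / 12, 2)


# test_interest.py
import pytest
from interest import monthly_interest

def test_typical_balance():
    assert monthly_interest(1200.0, 0.12) == 12.0

def test_zero_balance_earns_nothing():
    assert monthly_interest(0.0, 0.12) == 0.0

def test_negative_rate_is_rejected():
    with pytest.raises(ValueError):
        monthly_interest(100.0, -0.05)
```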

Start slowly. Build some fluency before you judge. You can always stop.

The question is: how much of a time difference does writing unit-tested code make compared to untested code, and how does that difference scale as project scope widens?

The problem gets worse as the project ages: whenever you add new functionality and/or refactor the existing implementation, you ought to retest what was previously tested to ensure it still works. So, for a long-lived (multi-year) project, you might need not only to test functionality but to re-test it 100 times or more. For this reason you might benefit from having automated tests. However, IMO it's good enough (or even better) if these are automated system tests rather than automated unit tests.

A second problem is that bugs can be harder to find and fix if they're not caught early. For example if there's a bug in the system and I know it was working perfectly before you made your latest change, then I'll concentrate my attention on your latest change to see how it might have introduced the bug. But if I don't know that the system was working before you made your latest change (because the system wasn't properly tested before your latest change), then the bug could be anywhere.

The above applies especially to deep code, and less to shallow code, e.g. adding new web pages, where new pages are unlikely to affect existing ones.

As a result, a considerable number of bugs escape to production, which I then have to fix, and that in turn sets back my other projects.

In my experience that would be unacceptable, and so you're asking the wrong question. Instead of asking whether tests would make development faster you ought to ask what would make development more bug-free.

A better question might be:

  • Is unit-testing the right kind of testing, which you need to avoid the "considerable amount of bugs" you've been producing?
  • Are there other quality control/improvement mechanisms (apart from unit-testing) to recommend as well or instead?

Learning is a two-stage process: learn to do it well enough, then learn to do that more quickly.

There has been a long history on the Programmers board of promoting TDD and other testing methodologies. I won't repeat their arguments, and I agree with them, but here are additional things to consider that should add some nuance:

  • Testing isn't equally convenient and efficient in every context. I develop web software; tell me if you have a program to test the whole UI... Right now I'm writing Excel macros; should I really develop a test module in VBA?
  • Writing and maintaining the test software is real work that counts in the short run (it pays off in the longer run). Writing relevant tests is also an expertise you have to acquire.
  • Working in a team and working alone don't have the same testing requirements, because in a team you need to validate, understand, and communicate code you did not write.

I'd say testing is good, but make sure you test early and test where the gain is.

I can relate to your experience: our code base had almost no tests and was mostly untestable. It took literally ages to develop anything, and fixing production bugs took precious time away from new features.

For a partial rewrite, I vowed to write tests for all core functionality. At the beginning, it took considerably longer and my productivity suffered noticeably, but afterwards my productivity was better than ever before.

Part of that improvement was that I had fewer production bugs, which in turn led to fewer interruptions, so I had better focus at any given time.

Furthermore, the ability to test AND debug code in isolation really pays off: a suite of tests is vastly superior to a system that cannot be debugged except with manual setup, e.g. launching your app, navigating to the screen, and doing something... perhaps a few dozen times.

But notice that there is a drop in productivity at the beginning, so start learning testing on a project where the time pressure is not already insane. Also, try to start on a greenfield project; unit testing legacy code is very hard, and it helps when you know what a good test suite looks like.

An oft-overlooked benefit of TDD is that the tests act as a safeguard, making sure you aren't introducing new bugs when you make a change.

The TDD approach is undoubtedly more time-consuming initially, but the takeaway is that you'll write less code, which means fewer things can go wrong. All those bells and whistles you often include as a matter of course won't make it into the code base.

There's a scene in the film Swordfish where, if memory serves, a hacker has to work with a gun to his head while being, erm... otherwise distracted. The point is that it is a lot easier to work when your headspace is in the code and you have time on your side, rather than months down the line with a customer screaming at you and other priorities getting squeezed.

Developers understand that fixing bugs later is more costly, but flip that on its head. If you could be paid $500 a day to code the way you code now, or $1000 if you wrote in a TDD way, you'd bite the hand off the person making the second offer. The sooner you stop seeing testing as a chore and start seeing it as a money saver, the better off you'll be.

Just to complement the previous answers: remember that testing is not an end in itself. The purpose of writing tests is to ensure your application behaves as expected as it evolves, in unexpected contexts, and so on.

Therefore, writing tests does not mean proving every behavior of every endpoint of an entity. This is a common error: a lot of developers think they need to test all functions/objects/methods/properties/etc. That leads to a high workload and a bunch of irrelevant code and tests. This approach is common in big projects, where most developers are not aware of the holistic behavior and can only see their own domain of interaction.

The right approach when dealing with scarce resources is pretty obvious and common sense, but not commonly formalized: invest testing effort first in high-level functionality, and gradually descend into specifics. This means that at some point, as a lone developer, you would not focus only on unit testing, but on functional/integration/etc. testing and, depending on your time resources, gradually move into the main unit-level functions as you plan and see fit. High-level testing provides the information needed to address low-level/unit testing and to plan your testing strategy according to the resources you have.

For example, you might test a processing chain first as a black box. If you find that some member of the chain fails because its behavior didn't account for some extreme condition, you write tests that guarantee that functionality, not only for that member but also for the others. Then you deliver. In the next cycle, you detect that the network sometimes fails, so you write tests that address that issue in the modules that could be vulnerable. And so on.
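A sketch of that outside-in idea under invented names: exercise the whole chain as a black box first, and add a more specific test only where the black-box test exposed a weak member (here, empty input).

```python
# A toy processing chain: parse -> normalize -> summarize.
def parse(raw: str) -> list[float]:
    return [float(x) for x in raw.split(",") if x.strip()]

def normalize(values: list[float]) -> list[float]:
    peak = max(values, default=1.0)
    return [v / peak for v in values] if peak else values

def summarize(values: list[float]) -> float:
    return sum(values) / len(values) if values else 0.0

def pipeline(raw: str) -> float:
    return summarize(normalize(parse(raw)))


# Black-box test of the whole chain first...
def test_pipeline_on_typical_input():
    assert pipeline("2, 4, 8") == (0.25 + 0.5 + 1.0) / 3

# ...then a focused case for the extreme condition that turned out to matter.
def test_pipeline_handles_empty_input():
    assert pipeline("") == 0.0
```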

Licensed under: CC-BY-SA with attribution