Question

I am developing a web application for an industry that desperately needs it. Working with a consultant in the industry, I’ve rapidly developed a prototype in 2 months that we will be testing with a company they consult with. The software will be used by the 4-8 employees who will be testing it.

The prototype has been constantly changing as I work with the consultant to translate industry requirements into usable software. I had a meeting last night that solidified 4 features/concerns that, had I done TDD for them, would have wasted approximately 8 hours, as both the tests and the code would have been thrown out.

My process so far has been:

  1. Examine what employees are doing with their current operating software
  2. Listen to the consultant on what needs to change, what is inefficient, etc. and take notes on potential features in the new software
  3. Rapidly prototype with no tests written (this has been very successful; the consultant and the beta employees are all excited to start using it, as it would save a ton of hours on their end)
  4. Get feedback from the consultant, and make adjustments as needed
  5. Repeat 1-4

This has worked, but I’m wondering if I should have been writing tests all along. I’ve done about 200 hours of development, and would probably need about 150 hours of testing (complex industry rules to test!), but I have barely been keeping up with the development work as it is.

It seems to me that testing before beta would have been a huge time sink, especially because requirements are changing. I’ve gotten most features right, but some minor things, especially related to industry rules, have changed. So spending time writing tests for things I didn’t get right to begin with would have been a huge waste of time... it seems.

Am I right in waiting to write tests until we’re done prototyping, and we get to beta where we have feedback not only from the consultant, but also the employees testing it? Or have I made a huge mistake?

Solution

I've worked on projects with no testing, with development-driven testing (DDT), and with actual red-green-refactor TDD, and what you describe is something I might have written before trying actual TDD.

I’ve rapidly developed a prototype in 2 months that we will be testing with a company they consult with. The software will be used by the 4-8 employees who will be testing it.

Is the source code for the prototype going to be thrown away before the beta? If so, you may have been right to avoid TDD, at least for the simple parts of the code base - the point of such a prototype is to learn or prove that something is doable. If not, then you have written legacy code, and you will find that the difficulty of making changes without breaking anything rises very fast with the complexity of the software. This is the case for every piece of non-TDD code I have ever seen, and for a lot of DDT code.

I had a meeting last night that solidified 4 features/concerns that, had I done TDD for them, would have wasted approximately 8 hours, as both the tests and the code would have been thrown out.

This is a common straw man argument. First, effort spent on non-TDD work isn't equivalent to TDD work, because you would have ended up with different code. You just have to trust me on this; I've never seen any non-TDD code for which it's easy to write automated tests. Second, my conclusion from actual TDD is that it is faster to write tests and features than to just write features.

I’ve done about 200 hours of development, and would probably need about 150 hours of testing (complex industry rules to test!), but I have barely been keeping up with the development work as it is.

Again, in my experience your velocity would be higher if you were doing TDD, even under changing requirements.

It seems to me that testing before beta would have been a huge time sink, especially because requirements are changing.

And again, this is the typical view if you simply add up the time to write the feature and the time to write the tests. But in my experience DDT looks something like this:

  1. Write code in 1 unit of time
  2. Write tests in 0.5 to 2 units of time

Your application is now 1 unit harder to modify when you implement the next feature. TDD, by contrast, looks something like this:

  1. Write tests in 1 unit of time
  2. Write code in 0.1-0.5 units of time because by now you should know exactly what needs to be changed and how
  3. Refactor in 0.1-0.2 units of time

Your application is now 0.2 units harder to modify. These numbers will be very different if you start testing a legacy application, but you should see a marked improvement in time to change within tested parts of your application.
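To make the red-green-refactor loop above concrete, here is a minimal sketch of one cycle; the overtime rule and every name in it are invented for illustration:

```python
# A minimal sketch of one red-green-refactor cycle, using an invented
# overtime-pay rule (all names and numbers are hypothetical).

# Red: write the failing test first. It pins down the behaviour you want.
def test_overtime_is_paid_at_time_and_a_half():
    # 45 hours at $10/h: 5 overtime hours * $15/h = $75 of overtime pay
    assert overtime_pay(hours=45, rate=10.0) == 75.0

# Green: write just enough code to make the test pass.
def overtime_pay(hours, rate, threshold=40):
    overtime_hours = max(0, hours - threshold)
    return overtime_hours * rate * 1.5

# Refactor: rename, extract constants, simplify -- while the test stays green.
```

Because the test came first, the "write code" step really is a matter of filling in behaviour you have already specified, which is where the 0.1-0.5 estimate comes from.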

On the flip side, actual TDD is hard. It takes a skillful mentor to demonstrate it, and it takes a long time to learn all the techniques necessary to avoid ending up with an unmaintainable set of tests. I think I can honestly say it's the hardest thing I've learned as a programmer, and I'm not done learning; avoiding large amounts of setup context in tests is especially hard.

OTHER TIPS

Testing early is important – but not everything has to be tested, and a lot of code is not served well by TDD-style unit tests.

  • For a prototype, it's completely fine to temporarily ignore best practices, as long as you will throw that code away later.
  • Some code is really plain yet so central to your application that everything will break if it contains a problem – any defect will surface immediately, so you don't need explicit tests for that either.
  • And some things are really difficult to test in isolation. Then don't waste your time writing unit tests for them; this is often the case for glue code, or GUIs.

So a big part of efficient testing is knowing what not to test.

Tests capture requirements

But automated test cases have an extremely important dual role: They are not just verification that your system meets its requirements, they are also a description of these requirements – a description that can be automatically verified. How are you currently tracking your requirements? How are you ensuring that your system satisfies these requirements? Having domain experts manually test everything is a waste of time.
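As a small illustration of this dual role, a requirement can be written so that its test name states the rule in plain language while the test body verifies it; the rule, function, and names below are all invented:

```python
# Hypothetical example of a requirement captured as an executable test.
# The domain rule and every identifier here are invented for illustration.

def shipment_is_hazardous(contents):
    """Assumed domain function: flags shipments containing hazardous goods."""
    HAZARDOUS_GOODS = {"lithium batteries", "aerosols"}
    return any(item in HAZARDOUS_GOODS for item in contents)

# The test names double as a plain-language statement of the requirement.
def test_shipments_containing_lithium_batteries_are_flagged_hazardous():
    assert shipment_is_hazardous(["books", "lithium batteries"])

def test_ordinary_shipments_are_not_flagged_hazardous():
    assert not shipment_is_hazardous(["books", "clothing"])
```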

Easy testing with BDD

Behaviour-Driven Development is the least bureaucratic and most effective method I know. BDD suggests a TDD-like workflow, but on the level of your problem domain rather than on the level of unit tests.

  1. You talk with domain experts to understand the required behaviour of the system. You write this down in a plain-text format that everyone involved can understand. Work through concrete examples. Suggest some inputs to the system, and ask the domain experts how the system should react.

  2. Organize these plain-text notes into a machine-readable format and write an interpreter for them. This interpreter runs each scenario and verifies the system's responses.

    Use whatever format is convenient but still readable by the domain experts. I've used ad-hoc formats, YAML files, Cucumber feature files and so on. A developer-oriented project I'm maintaining uses Makefiles to describe each scenario.

    It's OK if the test runner is unable to verify some requirements. You'll have to check those manually for now, but it's still good to keep all requirements in the same place.

  3. Pick a scenario and use a TDD-loop to implement it. Or don't. Use whatever workflow works for you.

The power of this approach is that these natural-language scenarios are both a test suite and a requirements document. And the test runner decouples the tests from your implementation choices: if you change how you satisfy these requirements, you don't have to throw away all your tests, because the requirements themselves have stayed the same. Instead, you only need to update the test runner.

And thanks to the test vs. test runner split, most test cases will be fairly compact.
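A minimal sketch of this scenario/runner split might look like the following; the discount rule and all names are invented, and a real project would keep the scenarios in YAML or feature files rather than inline data:

```python
# Sketch of the scenario vs. test-runner split (everything here is invented).
# Scenarios are plain data a domain expert could read and edit.
SCENARIOS = [
    {"name": "bulk orders get a 10% discount",
     "given": {"quantity": 100, "unit_price": 2.0},
     "expect": {"total": 180.0}},
    {"name": "small orders pay full price",
     "given": {"quantity": 5, "unit_price": 2.0},
     "expect": {"total": 10.0}},
]

def order_total(quantity, unit_price):
    """The system under test -- its implementation can change freely."""
    total = quantity * unit_price
    if quantity >= 50:
        total *= 0.9
    return total

def run_scenarios(scenarios):
    """The test runner: the only place that knows how to drive the system."""
    failures = []
    for scenario in scenarios:
        actual = order_total(**scenario["given"])
        if actual != scenario["expect"]["total"]:
            failures.append(scenario["name"])
    return failures  # empty list means every scenario passed
```

If the pricing code is later rewritten, only `order_total` and perhaps `run_scenarios` change; the scenarios themselves survive untouched.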

Early testing saves effort

The earlier you can find a problem, the cheaper it is to fix that problem:

  • Early in development your design is still in flux and can more easily accommodate the necessary changes.

  • You limit the possible damage from that defect, e.g. time and other resources wasted by users. Fresh bugs are also easier to debug because you're still immersed in the context of that code.

  • You make more efficient use of your subject matter experts. Getting next-day feedback from your expert: good. Getting same-minute feedback from your tests: much better.

    Of course, tests can never replace experts: whereas tests can verify the system (check for conformance with known requirements), humans can use their judgement to validate the system (check that it actually serves the business needs).

With a test-first approach, an implementation defect will be flagged as soon as the code is written. It's not possible to be quicker than that. This minimizes the cost of defects.

Paradoxically, writing tests together with the code also reduces the cost of writing either:

  • Writing tests involves design work. When you already have a clear idea of what to do and how to do it, writing that code is much easier.
  • Testability is an important but easily forgotten design constraint. When you are writing code together with the tests, writing those tests is much easier.

Early testing is faster than writing code first and tests much later. But of course, if time to market is more important than minimizing total costs, then deferring some of the testing effort (= piling up technical debt) can be a legitimate decision.

Prototypes should be small, quick, and discarded.

Prototypes (spikes) work best when they are just a quick feasibility study. The result of a prototype is not the software itself, but the knowledge of whether a particular approach works.

You have a multi-month project with complex requirements. That is no longer a prototype. Let's call it an alpha-quality project instead. What is your plan to transform this alpha software into a usable product?

  • If you restart from scratch, how will you make sure that all requirements are correctly satisfied by the rewrite? How will you carry over all the little details that you figured out? But a rewrite would imply “throwing away” multiple months of work, so this is unlikely to happen.

  • If you will incrementally refactor your code, it was a mistake to treat it as a prototype. Well, it's in the past. But how will you make sure that any refactoring is safe and keeps satisfying the requirements? Tests would help with that.

So under the assumptions that you will need tests anyway and that writing tests together with the code is the cheapest way to write tests – yes, you might have made a mistake.

The good news is that now is still a good time to start writing some tests. Don't go overboard with this. Don't write meticulous unit tests for every detail quite yet. Don't consider code coverage metrics at this phase of the project. But do start running high-level tests for any new feature or business rule change from now on.
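One such high-level test might look like this: it drives a whole use case through the application's entry point rather than poking at individual helpers (the entry point and the rule are invented for illustration):

```python
# A hypothetical high-level test: it exercises a complete use case through
# the application's entry point (all names here are invented).

def process_timesheet(entries):
    """Assumed entry point: entries are (employee, hours) pairs."""
    report = {}
    for employee, hours in entries:
        report[employee] = report.get(employee, 0) + hours
    return report

def test_weekly_timesheet_totals_hours_per_employee():
    entries = [("ana", 8), ("bo", 6), ("ana", 7)]
    assert process_timesheet(entries) == {"ana": 15, "bo": 6}
```

Tests at this level survive internal refactoring, which makes them a good fit for a code base whose implementation is still in flux.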

Licensed under: CC-BY-SA with attribution