Question

I regularly achieve 100% coverage of libraries using TDD, but not always, and there always seem to be parts of applications left over that are untested and uncovered.
Then there are the cases when you start with legacy code that has very few tests and very little coverage.

Please say what your situation is and what has worked that at least improved coverage.
I'm assuming that you are measuring coverage during unit testing, but say if you are using other techniques.

Solution

Delete code.

This isn't snarky; it's actually serious. Any time I saw even the smallest amount of code duplication, or code that I couldn't get to execute, I deleted it. This increased both coverage and maintainability.

I should note that this is more applicable to increasing the coverage of old code bases than of new ones.
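
To make that concrete, here is a hypothetical Java sketch (the class and method names are invented for illustration):

```java
public class PriceFormatter {

    public String format(double price) {
        if (price < 0) {
            // No caller can ever pass a negative price here, so this
            // branch never executes and shows up as uncovered.
            return "(" + String.format("%.2f", -price) + ")";
        }
        return String.format("%.2f", price);
    }

    // Near-duplicate of format() that nothing calls: untested, uncovered,
    // and a maintenance burden.
    public String formatWithSymbol(double price) {
        return "$" + String.format("%.2f", price);
    }
}
```

Deleting the unreachable branch and the unused duplicate removes uncovered lines, so coverage goes up without writing a single new test.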

OTHER TIPS

I assume you have read "Code covered vs. Code Tested"?

As stated in that question,

Even with 100% block coverage + 100% arc coverage + 100% error-free-for-at-least-one-path straight-line code, there will still be input data that executes paths/loops in ways that exhibit more bugs.
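
A classic illustration of covered-but-buggy code (a hypothetical Java sketch, not taken from that question):

```java
public class MathUtil {

    // A single test such as assertEquals(4, midpoint(2, 6)) gives this
    // method 100% statement and branch coverage, yet it still has a bug:
    // (low + high) overflows when the sum exceeds Integer.MAX_VALUE.
    static int midpoint(int low, int high) {
        return (low + high) / 2;
    }

    // The fixed version avoids the overflow; no coverage metric alone
    // would have distinguished it from the buggy one.
    static int midpointSafe(int low, int high) {
        return low + (high - low) / 2;
    }
}
```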

Now, I use EclEmma, based on EMMA, and that code-coverage tool explains why 100% coverage is not always possible: lines can be left only partially covered due to:

  • Implicit branches on the same line.
  • Shared constructor code.
  • Implicit branches due to finally blocks.
  • Implicit branches due to a hidden Class.forName().

All four of those cases may be good candidates for refactoring, leading to better code coverage.
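
For instance, here are minimal Java sketches of the first and third cases (hypothetical methods, for illustration only):

```java
import java.io.BufferedReader;
import java.io.IOException;

public class PartialCoverage {

    // Implicit branches on the same line: the ternary hides two branches
    // in one statement, so a test suite that only passes non-negative
    // values leaves this line partially covered ("yellow" in EclEmma).
    int abs(int x) {
        return x >= 0 ? x : -x;
    }

    // Implicit branches due to a finally block: the compiler emits the
    // finally body for both the normal and the exceptional path, and a
    // suite that never triggers an IOException covers only one of them.
    String firstLine(BufferedReader reader) throws IOException {
        try {
            return reader.readLine();
        } finally {
            reader.close();
        }
    }
}
```

Rewriting the ternary as an explicit if/else puts each branch on its own line, which makes the missed case visible and testable.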

Now, I agree with Frank Krueger's answer: non-covered code can also be an indication of refactoring to be done, including some code to actually delete ;)

The two things that had the greatest impact on projects I've worked on were:

  1. Periodically "reminding" the development team to actually implement unit tests, and reviewing how to write effective tests.
  2. Generating a report of overall test coverage, and circulating that among the development managers.

We use Perl, so Devel::Cover has been very useful for us. It shows per-statement coverage, branch coverage, and condition coverage during unit testing, as well as things like POD coverage. We use the HTML output, with easy-to-recognize green for 100%, shading through yellow and red for lower levels of coverage.

EDIT: To expand on things a little:

  • If condition coverage isn't complete, examine the conditions for interdependence. If it's there, refactor (see the sketch after this list). If it isn't, you should be able to extend your tests to hit all of the conditions.
  • If condition and branch coverage look complete but statement coverage isn't, you've either written the conditionals wrong (e.g. always returning early from a sub when you didn't mean to) or you've got extra code that can be safely removed.
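
The interdependence case is language-independent, so here is a minimal Java sketch of it (the discount rule is invented for illustration):

```java
public class DiscountPolicy {

    // Interdependent conditions: quantity >= 100 implies quantity >= 10,
    // so whenever the second condition is evaluated it is always true.
    // Condition coverage will report "quantity >= 10 never false", and
    // no test can ever change that.
    double discount(int quantity) {
        if (quantity >= 100 && quantity >= 10) {
            return 0.15;
        }
        return 0.0;
    }

    // After refactoring, the redundant condition is gone and every
    // remaining condition can take both truth values under test.
    double discountRefactored(int quantity) {
        return quantity >= 100 ? 0.15 : 0.0;
    }
}
```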

FIT testing has improved our code coverage. It has been great because it is an entirely different tack.

Background: we have a mix of legacy and new code. We try to unit/integration test the new stuff as much as possible, but because we are migrating to Hibernate/Postgres and away from an OODB, there isn't much point to testing the legacy code.

For those who don't know, FIT is a way to test software from the user's perspective. Essentially, you specify desired behaviour in HTML tables: the tables spell out actions against the software and the desired results. Our team writes 'glue code' (aka the FIT test) that maps the actions to calls against the code. Note that these tests operate at a 'view from space' level compared to unit tests.
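
A minimal sketch of such glue code, assuming the classic Java FIT library's fit.ColumnFixture (the Division example is the standard illustration, not one of our actual fixtures):

```java
import fit.ColumnFixture;

// An HTML table names this fixture class; FIT binds the table's input
// columns to the public fields and checks the column headed "quotient()"
// against this method's return value.
public class Division extends ColumnFixture {
    public double numerator;
    public double denominator;

    public double quotient() {
        return numerator / denominator;
    }
}
```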

Using this approach, we have increased our code coverage by several percentage points. An added bonus is that these tests bridge across versions: they test the legacy code now and, later, will test the new code. In a sense, they serve as regression tests.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow