Question

Many coverage tools evaluate an entire project, including the unit test code itself. In VS 2013, the Analyze Code Coverage/All Tests option includes test code in its report, and I believe OpenCover does as well. In Eclipse, for a Maven project with the typical src/main/java and src/test/java layout, EclEmma will report coverage for both main and test code.

This seems of minimal value to me, except possibly for ensuring that all tests are actually being executed. With test code included, the coverage percentage is artificially high: the tool will typically report close to 100% coverage for the test code itself, which can push the whole project above a benchmark level (say 80%) that it would not otherwise have reached.
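
To put rough (made-up) numbers on it: suppose the main code is 8,000 lines at 75% coverage (6,000 covered) and the test code is 4,000 lines at 98% (3,920 covered). Combined, that is 9,920 / 12,000 ≈ 82.7%, comfortably over an 80% gate, even though the production code alone sits at 75%.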

Are there legitimate reasons to include test code in coverage? Or should I continue filtering it out when automating our coverage reporting?
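
For reference, this is roughly how I filter test code out today with OpenCover, whose -filter option takes +/- patterns of the form [assembly]class (the MyApp and MyApp.Tests names here are placeholders for illustration):

    OpenCover.Console.exe ^
      -register:user ^
      -target:"vstest.console.exe" ^
      -targetargs:"MyApp.Tests.dll" ^
      -filter:"+[MyApp*]* -[MyApp.Tests]*" ^
      -output:coverage.xml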

Solution

I wouldn't say reasons exactly, but the figures do provide a good sense check. Assuming all tests are treated the same (none excluded via attributes such as Ignore), you'd expect test-code coverage to be 100% for a full run. Anything less suggests you have dead test code which could be removed.
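
As a contrived illustration (JUnit 4, with invented names), a full-run coverage report would flag the unused helper below as uncovered, i.e. dead test code:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class MathUtilTest {

        @Test
        public void addsTwoNumbers() {
            assertEquals(4, add(2, 2));
        }

        private static int add(int a, int b) {
            return a + b;
        }

        // Never invoked by any test method, so a full coverage
        // run reports it as uncovered: safe to delete.
        private static int legacyAdd(int a, int b) {
            return a + b;
        }
    }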

For CI builds it is quite common to ignore integration tests, so it is useful (for me as a build manager) to see this, as I then know the solution under test has an integration element or similar.
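
For example, a CI unit-test run might exclude integration tests with vstest.console's /TestCaseFilter option, assuming those tests are tagged with [TestCategory("Integration")] (the assembly name is a placeholder):

    vstest.console.exe MyApp.Tests.dll /TestCaseFilter:"TestCategory!=Integration"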

It can also be a smoking gun for developers who are seeking to hide their broken tests with the Ignore attribute. We had one cowboy who broke the tests and then simply added others to cover it up. The test count remained the same but the coverage went down over time. Needless to say he was soon shown the door!
