Question

If a project has 30% coverage by unit tests, 40% due to integration tests, is it fair to say the total is 70% as so moderately well covered?

Or are unit tests only ever used as the standard test type for code coverage metrics?

Coverage is tracked via a continuous integration server.


Solution

If a project has 30% coverage by unit tests, 40% due to integration tests, is it fair to say the total is 70% as so moderately well covered?

Only if there is zero overlap between the 40% covered by integration tests and 30% covered by unit tests. If there is some code that is covered by both tests, the total coverage will be less than 70%.

Hint: imagine the coverage numbers weren't 30% and 40% but rather 60% and 70%. Now you add them up and have 130% coverage, i.e. you are testing code that doesn't even exist. Does that sound sensible to you?
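The bound can be made concrete with inclusion-exclusion. A minimal sketch, using the percentages from the question (the overlap itself is unknown, so only a range can be computed):

```python
# Sketch: what the combined coverage *could* be, given only the two
# per-suite percentages from the question (the overlap is unknown).
unit = 0.30
integration = 0.40

# Inclusion-exclusion: combined = unit + integration - overlap.
# The overlap is unknown, but it is bounded:
max_overlap = min(unit, integration)              # unit-covered code fully inside integration coverage
min_overlap = max(0.0, unit + integration - 1.0)  # 0.0 here, since 30% + 40% <= 100%

best_case = unit + integration - min_overlap    # 0.70: no shared lines
worst_case = unit + integration - max_overlap   # 0.40: every unit-covered line also hit by integration tests

print(f"combined coverage is somewhere in [{worst_case:.0%}, {best_case:.0%}]")
```

So 70% is only the best case; with these numbers, the true combined coverage lies anywhere between 40% and 70%.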

Or are unit tests only ever used as the standard test type for code coverage metrics?

No. Coverage metrics simply tell you which code was executed. Period. They don't tell you *why* that code was executed (integration tests, unit tests, performance tests, etc.). You can even use coverage metrics completely outside of testing, e.g. collect code coverage information from users to see how much of your code is actually used and how much is dead weight. (Think of the realization that 90% of users of MS Office use 10% of its features, and the UI changes that brought about, such as automatically hiding lesser-used features to declutter the menus.)

If you want to measure unit test coverage, you run your unit tests. If you want to measure integration test coverage, you run your integration tests. If you want to measure total test coverage, you run all your tests and record coverage in one go, but you cannot just add up the numbers.
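The reason the numbers don't add is easiest to see if you model coverage the way the tools do: as sets of executed lines. An illustrative sketch with made-up data for a hypothetical 100-line module (not taken from any real coverage tool):

```python
# Illustrative only: per-suite coverage modeled as sets of executed line
# numbers in a hypothetical 100-line module (made-up data, no real tool).
TOTAL_LINES = 100
unit_lines = set(range(1, 31))          # lines 1-30  -> 30% unit coverage
integration_lines = set(range(21, 61))  # lines 21-60 -> 40% integration coverage

# Combined coverage is the size of the *union*, not the sum of the sizes.
combined = len(unit_lines | integration_lines) / TOTAL_LINES
# 30% + 40% = 70%, but lines 21-30 are counted by both suites,
# so the real combined coverage here is only 60%.
print(f"{combined:.0%}")
```

Running all tests in one coverage session computes exactly this union for you; adding the per-suite percentages double-counts the overlap.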

It also doesn't really make sense to combine the two: unit tests and integration tests serve very different purposes, give very different guarantees, and have very different performance characteristics.

Unit tests test a single piece of behavior in complete isolation. That makes them very fast (a full unit test suite should typically run in well under 10 seconds, ideally less than 1). It lets you precisely pinpoint any problem to a single unit of behavior, typically just a single line of code. It also lets you run your unit tests constantly while developing, so that you catch any bug you introduce within seconds of introducing it, while you still know exactly what you changed, how, and why. They do not, however, guarantee that the system works: units are tiny and isolated, and even if all units work 100%, they still need to be wired up correctly.
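A minimal sketch of what that looks like in practice (the function and test names are hypothetical, invented for illustration): one behavior, no collaborators, no I/O, so the test runs in microseconds and a failure points at exactly one function.

```python
import unittest

# Hypothetical unit under test: a single pure function, exercised in isolation.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent (0-100), rounded to cents."""
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # One behavior per test; no database, no network, no other units involved.
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()
```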

Integration tests test the interaction between units. They are typically slower, and a failure points to a larger area you have to search for the bug. Because of their slower running time, you cannot run them as often as unit tests; you'll typically only run them over a break, before a push, after a merge, before a release, etc. The feedback loop is therefore slower. Note: if you have very fast, very focused integration tests, you may be able to do away with (some of) your unit tests, since the integration tests can then serve the role of unit tests.
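By contrast, an integration test exercises the wiring between units rather than any single one. A sketch with hypothetical collaborators (all names invented for illustration): if this test fails, the bug could be in the service, the repository, or the way they are connected.

```python
import unittest

# Hypothetical collaborators, wired together and tested as a whole.
class InMemoryRepo:
    def __init__(self):
        self._items = {}
    def save(self, key, value):
        self._items[key] = value
    def load(self, key):
        return self._items[key]

class GreetingService:
    def __init__(self, repo):
        self._repo = repo
    def register(self, name):
        self._repo.save(name, f"Hello, {name}!")
    def greet(self, name):
        return self._repo.load(name)

class GreetingIntegrationTest(unittest.TestCase):
    # Exercises service + repository together: a failure here does not
    # tell you which of the two (or their wiring) is broken.
    def test_register_then_greet(self):
        service = GreetingService(InMemoryRepo())
        service.register("Ada")
        self.assertEqual(service.greet("Ada"), "Hello, Ada!")

if __name__ == "__main__":
    unittest.main()
```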

In addition to those, you still need functional tests, acceptance tests, performance tests (both micro and macro), fuzz tests, usability tests, …

Licensed under: CC-BY-SA with attribution
Not affiliated with softwareengineering.stackexchange