Question

We are currently developing a concept for our test standards. Up until now, we haven't had any standards; all we do is tell our developers to write tests. Now we have the following basic idea:

We will use the coverage metric described in the Developer Testing blog article "Selecting Developer Testing Metrics" to visualize the status quo.

We won't demand that a certain percentage of the code be covered. Nonetheless, we want some kind of control, so we will check whether coverage has decreased significantly over time. If it has, the whole team will be informed; there should be no personal blaming.
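
To make this concrete, here is a rough sketch of the kind of check we have in mind. The file names, the JSON key, and the five-point threshold are placeholders we haven't decided on yet; we only assume that our coverage tool can report the total percentage somewhere a script can read it:

    # Sketch of a CI step that warns the team when coverage drops noticeably.
    # Assumes the coverage tool has written the current total percentage to a
    # JSON file; names, keys, and the threshold below are placeholders.
    import json
    import sys

    BASELINE_FILE = "coverage_baseline.json"   # committed to the repo
    CURRENT_FILE = "coverage_current.json"     # produced by the build
    THRESHOLD = 5.0                            # allowed drop in percentage points

    def read_percent(path: str) -> float:
        with open(path) as f:
            return float(json.load(f)["total_percent"])

    def main() -> None:
        baseline = read_percent(BASELINE_FILE)
        current = read_percent(CURRENT_FILE)
        drop = baseline - current
        if drop > THRESHOLD:
            # Inform the whole team; no individual is singled out.
            print(f"Coverage dropped from {baseline:.1f}% to {current:.1f}% "
                  f"({drop:.1f} points). Please discuss as a team.")
            sys.exit(1)
        print(f"Coverage OK: {current:.1f}% (baseline {baseline:.1f}%).")

    if __name__ == "__main__":
        main()

The baseline would only be updated deliberately, for example after the team has discussed a drop, so a slow decline cannot slip through unnoticed.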

To encourage developers to write more tests, we will ask them to estimate features with the tests in mind, so that they have enough time to write them.

We don't have much experience with test standards. Is this a good approach? Is the metric useful? And what do you think about our strategy for monitoring the team members?

EDIT

Some additional information regarding Sklivvz's post: we have almost no code covered; overall it is about 25%, and some components are not covered at all. Our problem is that our developers - mostly people with 10+ years of experience - don't write tests at all. Therefore we invest a lot of time fixing bugs found by our testing team. What we want is more assurance that we don't break one thing while implementing another, and that we don't reintroduce the same bugs.

Additionally, it would be nice if the tests guided our team members toward a better architecture in their components. I have personally seen this in a component that I came into later and ended up refactoring completely, because it was not even remotely testable.
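
To illustrate what "not testable" looked like, here is a simplified, hypothetical sketch of that situation; the class and method names are made up for this example. The original code created its own dependencies, so nothing could be replaced in a test; passing the dependency in made it trivially testable:

    # Hypothetical before/after, simplified for illustration.

    class ProductionDatabase:
        """Placeholder for the real connection class; talks to a live DB."""
        def query(self, sql: str):
            raise RuntimeError("needs a real database")

    # Before: the builder constructs its own database connection,
    # so a test cannot run without a real database.
    class ReportBuilderBefore:
        def build(self) -> str:
            db = ProductionDatabase()          # hard-wired dependency
            rows = db.query("SELECT ...")
            return "\n".join(str(r) for r in rows)

    # After: the dependency is passed in, so a test can supply a fake.
    class ReportBuilder:
        def __init__(self, db) -> None:
            self.db = db

        def build(self) -> str:
            rows = self.db.query("SELECT ...")
            return "\n".join(str(r) for r in rows)

    class FakeDb:
        def query(self, sql: str):
            return [1, 2, 3]

    def test_build_joins_rows():
        assert ReportBuilder(FakeDb()).build() == "1\n2\n3"

Asking developers to test such components forces exactly this kind of separation of concerns.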

We have been developing a software platform for several years now, so we have a lot of code that is not covered by tests. As I said above, we want to motivate our team members to write tests for the components they are currently developing.
