Question

I've been getting requests lately from management to create reports of the number of assertions run by the tests for our software. They want this so they can tell if people are writing tests or not. My inclination is to just tell them "no you can't have that because you don't need it" but that doesn't seem to satisfy them.

Part of the problem is that our teams are writing long test cases with lots of assertions and they want to say they've tested some new feature because they've added more assertions to an existing test case.

So my question is: Does anyone have good, authoritative (as much as anything really can be) resources, articles, or even books that describe how testing should be split into test cases, or why counting assertions is a bad idea?

I mean, counting assertions or assertions per test as a measure of whether people are testing properly is about as useful as counting lines of code per test. But they just don't buy it. I tried searching with Google, but the problem is that nobody bothers to count assertions, so I can't really point anywhere and say "this is why it's a bad idea".


The solution

The imagination behind stupid decisions in software management really has no limits. Counting assertions?? The problem with testing is usually a quality problem, not a quantity problem.

If you want a respected reference, Gerard Meszaros' xUnit Test Patterns is perhaps one of the most respected; one of its recommendations is "Verify One Condition per Test" (http://books.google.es/books?id=-izOiCEIABQC&lpg=PT111&ots=YIeYejY-mx&dq=meszaros%20one%20assertion%20per%20test&hl=es&pg=PT110#v=onepage&q=condition&f=false)

But... if the problem is that people are adding "new test scenarios" by extending existing tests with "more assertions" instead of writing new tests, the best your company can do is buy a lot of copies of Meszaros' book (and Kent Beck's TDD by Example, and Growing Object-Oriented Software, Guided by Tests) and hire some experts to provide training and guidance before it's too late.
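To make the "one condition per test" recommendation concrete, here is a minimal sketch in Python's unittest. The function and test names are hypothetical, invented for illustration; the point is the contrast between one ever-growing test and several focused ones:

```python
import unittest

# Hypothetical function under test: parses "key=value" config lines.
def parse_entry(line):
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

# Anti-pattern: a single test that grows a new assertion per feature.
class TestParseEntryAllInOne(unittest.TestCase):
    def test_parse_entry(self):
        self.assertEqual(parse_entry("a=1"), ("a", "1"))
        self.assertEqual(parse_entry(" a = 1 "), ("a", "1"))
        self.assertEqual(parse_entry("a="), ("a", ""))

# Preferred: one verified condition per test, with descriptive names.
class TestParseEntry(unittest.TestCase):
    def test_splits_key_and_value(self):
        self.assertEqual(parse_entry("a=1"), ("a", "1"))

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(parse_entry(" a = 1 "), ("a", "1"))

    def test_missing_value_yields_empty_string(self):
        self.assertEqual(parse_entry("a="), ("a", ""))
```

When a focused test fails, its name tells you exactly which behaviour broke; when the all-in-one test fails on its first assertion, the later scenarios never even run.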

Other tips

Perhaps the Agile Manifesto says it best:

Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

If you try to run a project by metrics, you end up getting whatever you measure, e.g., lots of assertions that don't actually test the right things.

Or from a more general management perspective: http://hbr.org/2010/06/column-you-are-what-you-measure/ar/1

The book Code Complete by Steve McConnell covers code and test quality, including metrics relevant to your case.

Metrics should stimulate desirable behaviour. Desirable behaviour, in your case, is writing more good tests. So try to explain to your managers that counting assertions is not linked to desirable behaviour. It can actually cause undesirable behaviour, as mentioned in the other answer here.

I agree with the point from the Agile Manifesto. However, it can only be applied successfully in a healthy environment. I have observed cases where some engineers refused to write unit tests because they believed they had been "successful" without them for the last 20 years or so. In such cases, it does not matter how much you trust them to get the job done. Metrics change behaviour, and they generate bias-free data for better decision making. They are useful, but only if they are the right metrics.

Good luck!

Perhaps the root cause of this is the lack of visibility caused by long test methods. Really, there should be one logical assertion per test (this can be a group of assertions, but it should be as few as possible to verify the given scenario); anything more and the test becomes less readable, making it harder for someone to understand what it is actually testing. Focused tests are also easier to maintain and easier to change when the system under test changes. Long test methods tend to be very fragile because they cover too much of the system's behaviour and can require changing every time anything changes.
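One way to keep "a group of assertions" down to a single logical assertion is to compare whole values at once instead of checking field by field. A minimal sketch (the `User` class and `make_user` factory are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    email: str
    active: bool

# Hypothetical factory under test.
def make_user(name, email):
    return User(name=name, email=email, active=True)

def test_make_user_fieldwise():
    # Three physical assertions for one logical condition:
    user = make_user("ada", "ada@example.com")
    assert user.name == "ada"
    assert user.email == "ada@example.com"
    assert user.active

def test_make_user_single_logical_assertion():
    # One comparison verifies the same condition, and on failure the
    # whole expected and actual objects are reported side by side:
    expected = User(name="ada", email="ada@example.com", active=True)
    assert make_user("ada", "ada@example.com") == expected
```

The single-comparison form also shows all differences at once, whereas the field-by-field form stops at the first failing assertion.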

I can't find an example off hand, but a couple of good resources are Kent Beck and Mark Seemann; both are very relevant to this question.

Metrics themselves can always be gamed: you can get 100% code coverage with loads of assertions and still not actually test anything. You will likely get more value, at least initially, from cleaning up the tests.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow