Question

I'm reading J.B. Rainsberger's blog post on integrated tests and wonder in what way a microtest criticizes our design more harshly than an integrated test does:

We write more integrated tests, which are bigger and don’t criticize our design as harshly as microtests do


Solution

Microtests can help lead to good design. By writing good small tests, you are deliberately testing a small amount of code and filling in its gaps with mock objects. This leads to low coupling (things aren't reliant on each other) and high cohesion (things that belong together stay together). That way, when you go back and make changes, it's easy to find what is responsible for the behavior you're looking for, and you're less likely to break things in making the change. This won't solve all your design problems, but it can help.
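As a sketch of that idea (the `PriceCalculator` class and its collaborator are invented for illustration, not taken from the post): a microtest that fills a collaborator's gap with a mock exercises only the one small object under test, which in turn pushes the design toward injected, loosely coupled dependencies.

```python
import unittest
from unittest.mock import Mock

# Hypothetical class under test: it receives its collaborator through
# the constructor instead of building it itself, which is exactly the
# low-coupling shape that microtests tend to push you toward.
class PriceCalculator:
    def __init__(self, tax_rate_source):
        self.tax_rate_source = tax_rate_source  # injected dependency

    def total(self, amount):
        return amount * (1 + self.tax_rate_source.rate_for(amount))

class PriceCalculatorTest(unittest.TestCase):
    def test_applies_tax_rate_from_collaborator(self):
        # The mock fills the gap left by the real tax-rate service,
        # so this microtest exercises PriceCalculator and nothing else.
        rates = Mock()
        rates.rate_for.return_value = 0.10
        calc = PriceCalculator(rates)
        self.assertAlmostEqual(calc.total(100), 110.0)
        rates.rate_for.assert_called_once_with(100)
```

Run with `python -m unittest`. Note that if `PriceCalculator` instead constructed its tax-rate service internally, this small test would be hard to write at all, and that difficulty is the design feedback being described.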

In this context, J.B. Rainsberger is noting that if you're having a difficult time writing a unit test, you likely have an issue with your design that is causing the difficulty, and thus the tests are criticizing the design implicitly. He posits that this is a good thing, because without the small tests helping to keep your architecture in line, it is easy to stray from good design patterns - something integrated tests will not capture.

Update: as Rainsberger notes below, he did not intend microtests to be synonymous with unit tests. He's also provided a detailed answer that can give you deeper insight into exactly what he was communicating.

Other tips

The extremely short version: smaller tests, because they run smaller parts of the system, naturally constrain what the programmers can write, and so they create an opportunity for sharper (easier to notice, harder to ignore) feedback. Let me add that this doesn't necessarily lead to better design, but rather creates the opportunity to notice design risks sooner.

First, to clarify, when I say "microtest" I mean "a small test" and nothing more. I use this term because I don't mean "unit test": I don't want to become embroiled in debates about what constitutes a "unit". I don't care (at least not here/now). Two people will probably agree more easily on "small" than they would on "unit", so I gradually decided to adopt "microtest" as an emerging standard term for this idea.

Bigger tests, meaning tests that run bigger parts of the system in their "action" part, tend not to criticize the design as clearly nor as completely as smaller tests. Imagine the set of all code bases that could pass a given group of tests, meaning that I could reorganize the code and it would still pass those tests. For bigger tests, this set is bigger; for smaller tests, this set is smaller. Said differently, smaller tests constrain the design more, so fewer designs can make them pass. In this way, microtests can criticize the design more.

I say "more harshly" to conjure up the image of a friend who tells you directly what you don't want to hear, but need to hear, and who yells at you to convey urgency in a way that other people might not feel comfortable doing. Integrated tests, on the other hand, stay quiet, hinting at problems mostly when you no longer have the time or energy to address them. Integrated tests make it too easy to sweep design problems under the rug.

With bigger tests (like integrated tests), programmers tend mostly to get into trouble through sloppiness: they have enough freedom to write tangled code that somehow passes the tests, but their understanding of that code fades quickly the moment they move on to the next task, and others have undue difficulty reading the tangled design. Herein lies the risk in relying on integrated tests.

With smaller tests (like microtests), programmers tend mostly to get into trouble through over-specification: they over-constrain the tests by adding irrelevant details, usually by copy/paste from the previous test, and in so doing they relatively quickly paint themselves into a corner. Good news: I find it much easier and safer to remove extraneous details from tests several hours or days after I write them than to pull apart tangled production code months or years after I write it. As mistakes go, over-specifying does more damage, and more obvious damage, sooner, so the alert programmer sees earlier that they need to fix things. I consider this a strength: I notice problems earlier and fix them before those problems strangle our capacity to add features.
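To make the over-specification failure mode concrete, here is a hypothetical sketch (the `make_invoice` function is invented for illustration): the first test pins incidental details, usually copy/pasted from a neighboring test, and will break on harmless design changes; the trimmed second test constrains only the behavior it is actually about.

```python
import unittest

def make_invoice(amount):
    # Hypothetical function under test.
    return {"amount": amount, "currency": "EUR", "status": "open"}

class InvoiceTest(unittest.TestCase):
    def test_amount_overspecified(self):
        # Over-constrained: pins every field, including ones this test
        # doesn't care about. Any change to currency or status handling
        # breaks this test even though the behavior it targets is fine.
        self.assertEqual(
            make_invoice(50),
            {"amount": 50, "currency": "EUR", "status": "open"},
        )

    def test_amount_focused(self):
        # Focused: asserts only the detail this test is about, leaving
        # the design free to change unrelated fields.
        self.assertEqual(make_invoice(50)["amount"], 50)
```

Removing the extraneous details from the first test is the kind of cheap, early repair described above: it hurts far less than untangling production code later.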

He means that good software design is better informed by unit tests than integration tests.

Here's why. Writing unit tests forces you to write code that is unit-testable. Unit-testable code tends to have a better design than code written without unit tests.
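As a hedged illustration of that point (both functions are invented, not from the answer): code that reaches out to its own dependencies is hard to unit-test, and the change that makes it testable - passing the dependency in - is also a design improvement.

```python
import datetime

# Hard to unit-test: the function reaches out to the system clock
# itself, so a test cannot control what "today" is.
def is_weekend_untestable():
    return datetime.date.today().weekday() >= 5

# Unit-testable version: the date is passed in. This also makes the
# design more flexible, since any date source now works.
def is_weekend(day):
    return day.weekday() >= 5

# A unit test can now pin the behavior without touching the real clock:
assert is_weekend(datetime.date(2024, 1, 6)) is True   # a Saturday
assert is_weekend(datetime.date(2024, 1, 8)) is False  # a Monday
```

The pressure to write the second form instead of the first is the design feedback the answer is describing.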

Integration tests don't inform your code in the same way because you're just testing the outer layer of your software, not the inner interfaces that connect your software together.

Licensed under: CC-BY-SA with attribution