Question

There are some questions on Code Metrics here, especially this one on goal values. What I'm looking for, though, is what's "usual" on real-life production projects. Maybe it's just me, but no project I get put on ever seems to have these things in mind, so when I run ReSharper Code Issues or Visual Studio Code Metrics it feels like I'm the first one to do so, and the values always surprise me.

Examples from my current SharePoint assignment:

Maintainability index | Cyclomatic complexity | Depth of inheritance | Class coupling | Lines of code
67                    | 6,712                 | 7                    | 569            | 21,649
68                    | 3,192                 | 7                    | 442            | 11,873

So, the question is: what values do you usually see "in the wild"? Optimal values and best practices aside, what values are normally encountered?


Solution

I assume the values you've stated are at the assembly level. If so, note that Cyclomatic Complexity and Lines of Code are most helpful at the method level. Inheritance depth should be looked at primarily at the class level. Class coupling gives more useful feedback when you look first at the method level and then at the class level.

In addition to the guidelines provided in the Stack Overflow link you included, Code Complete, 2nd Edition has this to say about method-level Cyclomatic Complexity (page 458):

  • 0-5 The routine is probably fine.
  • 6-10 Start to think about ways to simplify the routine.
  • 10+ Break part of the routine into a second routine and call it from the first routine (see the sketch below).
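
For illustration, here is a minimal Python sketch of what the "10+" advice looks like in practice: checks that would otherwise sit in one branch-heavy routine are moved into two smaller routines that the first one calls. The order-validation scenario and every name in it are invented for this example; they are not from the question or from Code Complete.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Customer:
    email: str = ""
    blocked: bool = False


@dataclass
class Item:
    quantity: int = 1
    price: float = 0.0
    discontinued: bool = False
    backorder_allowed: bool = False


@dataclass
class Order:
    customer: Optional[Customer] = None
    items: List[Item] = field(default_factory=list)
    shipping_address: Optional[str] = None
    pickup_in_store: bool = False


def validate_order(order: Order) -> bool:
    # Written as one long chain of ifs, this would have roughly 11 decision
    # points, i.e. cyclomatic complexity around 12 ("decision points + 1").
    # After the split, the top-level routine just delegates: complexity ~2.
    return _customer_is_valid(order) and _items_are_valid(order)


def _customer_is_valid(order: Order) -> bool:
    # About 5 decision points (4 ifs plus one boolean operator): complexity ~6.
    if order.customer is None:
        return False
    if not order.customer.email:
        return False
    if order.customer.blocked:
        return False
    if order.shipping_address is None and not order.pickup_in_store:
        return False
    return True


def _items_are_valid(order: Order) -> bool:
    # About 6 decision points (4 ifs, 1 loop, 1 boolean operator): complexity ~7.
    if not order.items:
        return False
    for item in order.items:
        if item.quantity <= 0:
            return False
        if item.price < 0:
            return False
        if item.discontinued and not item.backorder_allowed:
            return False
    return True


if __name__ == "__main__":
    order = Order(customer=Customer(email="a@example.com"),
                  items=[Item(quantity=2, price=9.99)],
                  pickup_in_store=True)
    print(validate_order(order))  # True
```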

In "real life" projects, what is acceptable will probably depend on the type of development process you are using. If the team is practicing TDD (test-driven development) and strives to write SOLID code, then these metrics should be near their optimal values.

With TAD (test-after development), or, even more so, code written without unit tests, expect all the metrics to be higher than optimal, since the likelihood of more coupling, more complex methods and classes, and perhaps more prolific inheritance is elevated. Still, the goal should be to limit the cases of "bad" metrics, regardless of how the code was developed.

OTHER TIPS

The fundamental misconception about software metrics is that they're useful when put into a pretty report.

Most people use the following flawed process:

  • Gather whatever metrics their tooling supports
  • Compile a report
  • Compare it against recommended values
  • Start hunting for a question that their newfound answer might address

This is wrong, backwards, and counterproductive on so many levels that it's not even funny. The proper approach to any metrics gathering is to first figure out why: what's your reason for measuring? With that answered, you can figure out what to measure, and once you know your why and your what, you can work out how to get information that might guide further inquiry.

I've seen a wide range of values for the metrics you've listed, and to be honest, comparing them across projects or environments doesn't make a whole lot of sense.

You can be fairly certain that the same team will produce stuff that looks like the stuff they've done previously. But you don't need metrics to figure that out.

You can use the metrics to find "hot spots" to investigate, but if you have quality problems, bugs will cluster in the problematic modules anyway, so going hunting for them with metrics is mostly redundant.

Now don't get me wrong: I love metrics. I've written multiple scripts and tools to extract, visualize, and do all sorts of fancy stuff with them. It's all good fun and might even have been beneficial, though I'm not all that certain of the latter.
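
For what it's worth, here is the kind of throwaway script I mean: a minimal Python sketch that ranks methods by cyclomatic complexity to surface hot spots worth a look. The input file name and column names are assumptions made for this illustration (a hypothetical per-method CSV export), not the output format of any particular tool.

```python
import csv
from pathlib import Path

# Hypothetical input: a per-method metrics export with columns named
# "Member", "CyclomaticComplexity" and "LinesOfCode". File name and column
# names are assumptions for this sketch only.
METRICS_FILE = Path("method_metrics.csv")
COMPLEXITY_THRESHOLD = 10  # Code Complete's "break it up" boundary


def load_rows(path):
    """Yield one dict per method from the metrics CSV."""
    with path.open(newline="") as f:
        for row in csv.DictReader(f):
            yield {
                "member": row["Member"],
                "complexity": int(row["CyclomaticComplexity"]),
                "loc": int(row["LinesOfCode"]),
            }


def hot_spots(rows, threshold=COMPLEXITY_THRESHOLD):
    """Keep only methods over the threshold, worst first."""
    flagged = [r for r in rows if r["complexity"] > threshold]
    return sorted(flagged, key=lambda r: r["complexity"], reverse=True)


if __name__ == "__main__":
    for r in hot_spots(load_rows(METRICS_FILE)):
        print(f'{r["complexity"]:>4}  {r["loc"]:>5}  {r["member"]}')
```

Whether the ranked list tells you anything the bug reports wouldn't have told you already is, as I said, another matter.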

Licensed under: CC-BY-SA with attribution