Question

There are various types of quality that can be measured in software products, e.g. fitness for purpose (end use), maintainability, efficiency. Some of these are somewhat subjective or domain-specific (e.g. good GUI design principles may differ across cultures or depend on the usage context; think military versus consumer usage).

What I'm interested in is a deeper form of quality related to the network (or graph) of types and their inter-relatedness: which types does each type refer to? Are there clearly identifiable clusters of interconnectivity corresponding to a properly tiered architecture, or conversely is there one big 'ball' of type references ('monolithic' code)? Also, the size of each type and/or method (e.g. measured in quantity of Java bytecode or .NET IL) should give some indication of where large, complex algorithms have been implemented as monolithic blocks of code instead of being decomposed into more manageable/maintainable chunks.
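
To make this concrete, here is a minimal sketch (in Java, since the question mentions Java bytecode) of the kind of analysis I have in mind: the type-reference graph is held as an adjacency map and simple fan-in/fan-out (afferent/efferent coupling) numbers are derived from it. The type names are hypothetical and the graph is hand-written purely for illustration; in practice it would be extracted from compiled classes, e.g. with a bytecode library such as ASM.

```java
import java.util.*;

/**
 * Sketch: model the type-reference graph as an adjacency map and derive
 * simple coupling numbers from it. All type names below are hypothetical.
 */
public class TypeGraphMetrics {

    public static void main(String[] args) {
        // key = type, value = the set of types it refers to (efferent references)
        Map<String, Set<String>> refs = new LinkedHashMap<>();
        refs.put("ui.OrderScreen",           Set.of("service.OrderService"));
        refs.put("service.OrderService",     Set.of("domain.Order", "persistence.OrderRepository"));
        refs.put("persistence.OrderRepository", Set.of("domain.Order"));
        refs.put("domain.Order",             Set.of());

        // Fan-out (efferent coupling): how many types each type depends on.
        // Fan-in (afferent coupling): how many types depend on it.
        Map<String, Integer> fanIn = new HashMap<>();
        refs.keySet().forEach(t -> fanIn.put(t, 0));
        for (Set<String> targets : refs.values()) {
            for (String target : targets) {
                fanIn.merge(target, 1, Integer::sum);
            }
        }

        for (String type : refs.keySet()) {
            int out = refs.get(type).size();
            int in  = fanIn.getOrDefault(type, 0);
            System.out.printf("%-35s fan-out=%d  fan-in=%d%n", type, out, in);
        }
        // A well-tiered design tends to show directed, layered clusters here;
        // a "big ball" shows high fan-in AND high fan-out on the same types.
    }
}
```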

An analysis based on such ideas may be able to calculate metrics that are at least a proxy for quality. The exact thresholds/decision points between high and low quality would, I suspect, be subjective, e.g. because by maintainability we mean maintainability by human programmers, so the functional decomposition must be compatible with how human minds work. As such, I wonder if there can ever be a mathematically pure definition of software quality that transcends all possible software in all possible scenarios.
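
One existing family of proxy metrics along these lines is Robert C. Martin's package metrics: instability, abstractness, and distance from the "main sequence". The sketch below shows how they are computed for a single hypothetical package; the input numbers are made up, and, as argued above, where to draw the threshold on the resulting values remains a subjective call.

```java
/**
 * Robert C. Martin's package metrics, computed for one hypothetical package.
 * The inputs (Ca, Ce, abstract/total type counts) would normally come from
 * a type-reference graph like the one sketched earlier.
 */
public class MartinMetrics {

    /** I = Ce / (Ca + Ce): 0 = maximally stable, 1 = maximally unstable. */
    static double instability(int afferent, int efferent) {
        return (afferent + efferent) == 0 ? 0.0 : (double) efferent / (afferent + efferent);
    }

    /** A = abstract types / total types in the package. */
    static double abstractness(int abstractTypes, int totalTypes) {
        return totalTypes == 0 ? 0.0 : (double) abstractTypes / totalTypes;
    }

    /** D = |A + I - 1|: distance from the "main sequence"; lower is better. */
    static double distanceFromMainSequence(double abstractness, double instability) {
        return Math.abs(abstractness + instability - 1.0);
    }

    public static void main(String[] args) {
        // Hypothetical numbers for a "service" package.
        double i = instability(4, 7);    // Ca = 4, Ce = 7
        double a = abstractness(2, 10);  // 2 interfaces out of 10 types
        System.out.printf("I=%.2f  A=%.2f  D=%.2f%n", i, a, distanceFromMainSequence(a, i));
    }
}
```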

I also wonder if this is a dangerous idea: if objective proxies for quality become popular, business pressures will cause developers to pursue these metrics at the expense of overall quality (those aspects of quality not measured by the proxies).

ADDENDUM: Another way of thinking about quality is from the point of view of entropy, the tendency of systems to revert from ordered to disordered states. Anyone who has ever worked on a real-world, medium-to-large-scale software project will appreciate the degree to which the quality of the code base tends to degrade over time. Business pressures generally result in changes that focus on new functionality (except where quality itself is the principal selling point, e.g. in avionics software), eroding quality through regression issues and the 'shoe-horning' of functionality where it does not fit well from a quality and maintenance perspective. So, can we measure the entropy of software? And if so, how?
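
As a very rough sketch of what such a measurement might look like, assume we interpret "software entropy" as the Shannon entropy of how references (or, just as plausibly, changes/commits) are distributed across types. The counts below are hypothetical; the point is only that references concentrated along a few clear paths give a lower entropy than references smeared evenly over the whole code base.

```java
import java.util.*;

/**
 * One possible interpretation of "software entropy": Shannon entropy,
 * H = -sum(p_i * log2(p_i)), of the distribution of incoming references
 * (or changes) over types. The counts here are hypothetical.
 */
public class SoftwareEntropy {

    static double shannonEntropy(Collection<Integer> counts) {
        double total = counts.stream().mapToInt(Integer::intValue).sum();
        double h = 0.0;
        for (int c : counts) {
            if (c == 0) continue;
            double p = c / total;
            h -= p * (Math.log(p) / Math.log(2));   // log base 2
        }
        return h;
    }

    public static void main(String[] args) {
        // Incoming-reference counts per type (hypothetical).
        List<Integer> layered = List.of(12, 3, 1, 0, 0, 0);  // concentrated along clear paths
        List<Integer> tangled = List.of(3, 3, 3, 3, 2, 2);   // spread evenly across the code base

        System.out.printf("layered H = %.3f bits%n", shannonEntropy(layered));
        System.out.printf("tangled H = %.3f bits%n", shannonEntropy(tangled));
    }
}
```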


Solution

For .NET code, NDepend provides the best software quality metrics we have to date; it offers 82 different code metrics. Is this what you are looking for? If you are a .NET programmer, you may find this blog post about an NDepend analysis of a very popular/large open-source project interesting.

Licensed under: CC-BY-SA with attribution