Question

I would appreciate it if someone could explain the difference between the following two pieces of code in terms of Visual Studio's Code Metrics rules. Why does the Maintainability Index increase slightly if I don't encapsulate everything within using ( )?

Sample 1 (MI score of 71)

public static String Sha1(String plainText)
{
    using (SHA1Managed sha1 = new SHA1Managed())
    {
        Byte[] text = Encoding.Unicode.GetBytes(plainText);
        Byte[] hashBytes = sha1.ComputeHash(text);
        return Convert.ToBase64String(hashBytes);    
    }
}

Sample 2 (MI score of 73)

public static String Sha1(String plainText)
{
    Byte[] text, hashBytes;
    using (SHA1Managed sha1 = new SHA1Managed())
    {
        text = Encoding.Unicode.GetBytes(plainText);
        hashBytes = sha1.ComputeHash(text);
    }
    return Convert.ToBase64String(hashBytes);   
}

I understand that metrics are meaningless outside of a broader context and understanding, and that programmers should exercise discretion. While I could boost the score up to 76 with return Convert.ToBase64String(sha1.ComputeHash(Encoding.Unicode.GetBytes(plainText))), I shouldn't: I would clearly just be playing with numbers, and the code isn't truly any more readable or maintainable at that point. I am curious, though, what the logic might be behind the increase in this case. It's obviously not line count.

Solution

Having your variables all laid out at the top, so you can see at a glance what the function works with, is considered more "maintainable"; at least, that's what whoever decides the rules for the code metrics thinks.

Is that actually true? That depends entirely on the team working on the code. It sounds from the tone of your question like you already know this, but take almost all code metrics with a grain of salt: they encode what someone thinks is best, and that may not hold for teams outside of Microsoft. Do what's best for your team, not what some calculator tells you.

I wouldn't make changes that you consider less readable, or that are otherwise detrimental to your team's work, just to gain a few points on the metrics board (unless there's a real payoff, such as actual performance or improved error handling).

All that being said, if a method gets a very low maintainability score, there probably is something worth looking at or breaking down into smaller chunks; a very low score is unlikely to be acceptable for pretty much any team.

OTHER TIPS

This is an old question, but I just thought I'd add that the MI is partially based on Halstead volume, which is derived from a count of 'operators' and 'operands'. If declaring a variable with its type counts as an 'operator', Sample 2 would have fewer operator occurrences, which changes the score. In general, because the MI is a statistical measurement, it is of limited usefulness when dealing with small sample sizes (such as a single short method).
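
To make that concrete, here is a minimal sketch of how those two ingredients combine. The Halstead volume formula itself is standard; the 0-100 rescaling below is the one commonly attributed to Visual Studio's code metrics, so treat the exact constants as an assumption rather than the tool's verified internals.

using System;

static class MaintainabilityIndexSketch
{
    // Halstead volume: V = N * log2(n), where N is the total number of
    // operator/operand occurrences and n is the number of distinct ones.
    public static double HalsteadVolume(int totalOccurrences, int distinctSymbols)
    {
        return totalOccurrences * Math.Log(distinctSymbols, 2);
    }

    // Commonly cited rescaling of the classic MI onto a 0-100 range
    // (assumed here, not taken from Visual Studio's source).
    public static double MaintainabilityIndex(double halsteadVolume, int cyclomaticComplexity, int linesOfCode)
    {
        double raw = 171
                     - 5.2 * Math.Log(halsteadVolume)   // natural log
                     - 0.23 * cyclomaticComplexity
                     - 16.2 * Math.Log(linesOfCode);
        return Math.Max(0, raw * 100 / 171);
    }
}

Because the counts enter the formula through logarithms and the method is tiny, small changes in the operator/operand and line counts are enough to nudge the final score by a point or two, which is in the spirit of the 71 vs 73 difference above.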

Because of the increased distance between the declaration of your variables and where they are used.

The rule is to reduce the variable span as much as possible; the span is the distance between where a variable is declared and where it is used. As this distance grows, so does the risk that code introduced later will affect the variable without the programmer realising the impact further down in the code.
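
To illustrate with a variation on the question's own method (purely made up to show the idea, not a recommendation), every unrelated statement between the assignment of hashBytes and the place it is read adds to its span:

public static String Sha1WithWideSpan(String plainText)
{
    using (SHA1Managed sha1 = new SHA1Managed())
    {
        Byte[] hashBytes = sha1.ComputeHash(Encoding.Unicode.GetBytes(plainText)); // declared and assigned here
        Console.WriteLine("hashing request");     // span of 1: unrelated statement in between
        Console.WriteLine(plainText.Length);      // span of 2: more room for later edits to interfere
        return Convert.ToBase64String(hashBytes); // hashBytes finally read again here
    }
}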

Here is a link to a good book that covers this and many other topics on code quality. http://www.amazon.com/Code-Complete-Practical-Handbook-Construction/dp/0735619670/ref=dp_ob_title_bk

Myself, I'd rather see return Convert.ToBase64String(sha1.ComputeHash(Encoding.Unicode.GetBytes(plainText))); to me it's a "should" rather than a "shouldn't". This form has the advantage of concisely expressing the actual data flow; if you add a bunch of temporary variables and assignments, I have to read the variable names and match up their occurrences to see what's actually happening.
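
Spelled out as a complete method (a sketch of what that inlined expression implies, using the same SHA1Managed pattern as the question's samples):

public static String Sha1(String plainText)
{
    using (SHA1Managed sha1 = new SHA1Managed())
    {
        // Encode the text, hash it, and Base64-encode the hash in one expression,
        // so the data flow reads straight through.
        return Convert.ToBase64String(sha1.ComputeHash(Encoding.Unicode.GetBytes(plainText)));
    }
}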

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow