Is it feasible to track or measure the cause of bugs or is this just asking for unintended consequences?

StackOverflow https://stackoverflow.com/questions/3249919

  •  15-09-2020

Question

Is there a method for tracking or measuring the cause of bugs which won't result in unintended consequences from development team members? We recently added the ability to assign the cause of a bug in our tracking system. Examples of causes include: bad code, missed code, incomplete requirements, missing requirements, incomplete testing, etc. I was not a proponent of this, as I could see it leading to unintended behaviors from the dev team. To date this field has been hidden from team members and not actively used.

Now we are in the middle of a project with a larger than normal number of bugs, and this type of information would be good to have in order to better understand where we went wrong and where we can make improvements in the future (or adjustments now). To get good data on the cause of the bugs we would need to open this field up for input by dev and QA team members, and I'm worried that will drive bad behaviors. For example, people may not want to fix a defect they didn't create because they'll feel it reflects poorly on their performance, or people might waste time arguing over the classification of a defect for similar reasons.

Has anyone found a mechanism to do this type of tracking without driving bad behaviors? Is it possible to expect useful data from team members if we explain to the team the reasoning behind the data (not to drive individual performance metrics, but project success metrics)? Is there another, better way to do this type of thing (a more ad hoc post-mortem or open discussion on the issues, perhaps)?


Solution

A lot of version control packages have things like svn blame. This is not a direct metric for tracking a bug, but it can tell you who checked in changes to a release that has a major bug in it.
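
As a rough illustration (assuming a Git checkout rather than Subversion, and a made-up file path), a short Python script can turn blame output into a per-author line count for a file implicated in the bad release; with Subversion the equivalent would be parsing svn blame <file> instead:

    import subprocess
    from collections import Counter

    def blame_authors(path: str) -> Counter:
        """Count how many lines of `path` each author last touched."""
        out = subprocess.run(
            ["git", "blame", "--line-porcelain", path],
            capture_output=True, text=True, check=True,
        ).stdout
        # --line-porcelain emits an "author <name>" header for every source line.
        return Counter(
            line[len("author "):]
            for line in out.splitlines()
            if line.startswith("author ")
        )

    if __name__ == "__main__":
        # "src/billing.py" is a hypothetical path; point this at a file from the bad release.
        for author, count in blame_authors("src/billing.py").most_common():
            print(f"{author}: {count} lines")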

There are also programs like Bugzilla (http://www.bugzilla.org/) that help track defects over time.
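
As a hedged sketch of what "tracking over time" can look like, assuming a Bugzilla 5.x instance with its REST API enabled, a few lines of Python (using the requests library) can pull bugs created since a given date and count them by component; the instance URL, product name, and date below are placeholders:

    from collections import Counter
    import requests

    BUGZILLA_URL = "https://bugzilla.example.com"   # hypothetical instance
    PRODUCT = "MyProduct"                           # hypothetical product name

    resp = requests.get(
        f"{BUGZILLA_URL}/rest/bug",
        params={
            "product": PRODUCT,
            "creation_time": "2020-01-01",          # bugs created on/after this date
            "include_fields": "id,component",
            "limit": 0,                             # ask for all matches (the server may cap this)
        },
        timeout=30,
    )
    resp.raise_for_status()

    # Tally bugs per component to see where defects cluster.
    by_component = Counter(bug["component"] for bug in resp.json()["bugs"])
    for component, count in by_component.most_common():
        print(f"{component}: {count}")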

As for really digging into why bugs exist: yes, it is definitely worth doing, though I can't give a standard metric for collecting that information. There are a number of reasons why a system might be very buggy:

  • Poorly written specs
  • Rushed timelines
  • Low-skill programming
  • Bad morale
  • Lack of beta or QA testing
  • Software not prepared in a way that makes beta or QA testing feasible
  • Poor ratio of time spent cleaning up bugs vs getting new functionality out
  • Poor ratio of time spent making bug-free enhancements vs getting functionality out
  • An exceedingly complex system that is easy to break
  • A changing environment outside the code base, such as machine administration
  • Blame for mistakes affecting programmer compensation or promotion

That's just to name a few... If too many bugs is a big problem, then management, lead programmers, and any other stakeholders in the whole process need to sit down and discuss the issue.

OTHER TIPS

High bug rates can be a symptom of a schedule that is too rushed or inflexible. Switching to a zero-defect approach may help: fix all bugs before working on new code.

Assigning reasons is a good technique to see whether you have a problem area. Typical distributions I have encountered are a roughly even split between:

  • Specification errors (missing, incorrect, etc.)
  • Application bugs (incorrect code, missing code, bad data, etc.)
  • Incorrect tests / no error (generally incorrect expectations, or specifications not yet implemented)

Reviewing and verifying the defect causes can be useful.
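
As a sketch of how that review can be checked against the numbers, assuming the tracker can export defects to CSV with a hypothetical "cause" column (values like "missing requirement", "incorrect code", "incorrect test"), a few lines of Python turn the cause field into a percentage breakdown:

    import csv
    from collections import Counter

    def cause_breakdown(csv_path: str) -> None:
        """Print each defect cause with its count and share of the total."""
        with open(csv_path, newline="", encoding="utf-8") as f:
            causes = Counter(row["cause"].strip().lower() for row in csv.DictReader(f))
        total = sum(causes.values())
        for cause, count in causes.most_common():
            print(f"{cause:25s} {count:4d}  ({count / total:.0%})")

    if __name__ == "__main__":
        cause_breakdown("defects_export.csv")   # hypothetical export file name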

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow