Question

This question is a little abstract but I'm hoping someone can point me in the right direction.

My question is: what amount of time can one expect to devote to a software project's bugs, relative to the original development time? I realize a huge number of determining factors go into it, but I was hoping for a typical or average breakdown.

For example, if Project A takes 40 hours to complete and an additional 10 fixing bugs, then this project would have a 4:1 ratio.

If another project (B) takes 10 hours to complete but another 8 on bugs, then it would have a 5:4 ratio.

Is this a documented/researched concept?

UPDATE

Thanks for all the informative answers. I understand that it's impossible to put a standard to this kind of metric due to all the variables and environmental factors involved. Before I assign an answer I'd like to know if this metric has an agreed-upon name so I can do further research. I would like to get to a point where I can understand the measurements necessary to generate the metrics myself and eventually come up with a baseline standard for my project.

Solution

The equilibrium percentage of total capacity allocated to defect-fixing is equal to the defect injection rate.

Many factors can affect this rate, among them, of course: what kind of product the team is developing, what technologies and technical practices they use, the team's skill level, the company culture, etc.

Considering Team B: if they create on average 8 units of rework for every 10 units of work they complete, then working those 8 units will create 6.4 new units of rework. We can estimate the total effort they will eventually have to expend as the sum of a geometric progression:

10 + 8 + 6.4 + 5.12 + ...

The number of bugs decreases exponentially with time, but Team B's coefficient in the exponent is such that it goes to zero very slowly. Indeed, the sum of the first three terms of the above series is only 24.4; of the first five, 33.6; of the first ten, 45; of the entire series, 10/(1 - 0.8) = 50. Team B's summary: defect injection rate, 0.8; feature development, 10/50 = 20%; defect-fixing, 80%. A 20/80 split is their sustainable capacity allocation.

By contrast, Team A is in much better shape. Their progression looks like this:

40 + 10 + 2.5 + 0.625 + ...

The sum of this series is 40/(1 - 0.25) = 53 1/3, so Team A's feature-development allocation is 40/(53 1/3) = 75% and their defect-fixing allocation is 25%, which matches their defect injection rate of 10/40 = 0.25.
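
As a rough illustration (a minimal sketch of the simplified model above, with no transaction costs), the closed-form sum of the geometric series makes the equivalence explicit:

    # Minimal sketch of the simplified model above: total effort and
    # sustainable capacity allocation implied by a defect injection rate.

    def total_effort(initial_work: float, injection_rate: float) -> float:
        # Sum of the geometric series: work + rework + rework-on-rework + ...
        return initial_work / (1.0 - injection_rate)

    for team, work, rate in [("A", 40, 0.25), ("B", 10, 0.8)]:
        total = total_effort(work, rate)
        feature_share = work / total          # equals 1 - injection rate
        defect_share = 1.0 - feature_share    # equals the injection rate
        print(f"Team {team}: total {total:.1f}, features {feature_share:.0%}, "
              f"defect-fixing {defect_share:.0%}")

    # Team A: total 53.3, features 75%, defect-fixing 25%
    # Team B: total 50.0, features 20%, defect-fixing 80%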

Actually, all terms in Team A's series after the first three are negligibly small. In practical terms, this means Team A can probably squash all their bugs with a couple of maintenance releases, the second one pretty small in scope. It also creates the illusion that any team can do the same. Team B cannot.

I thought about this equivalence while reading David Anderson's new book, "Kanban". (The book is on a different subject, but it addresses quality concerns too.) When discussing software quality, Anderson quotes the book by Capers Jones, "Software Assessments, Benchmarks, and Best Practices":

"...in 2000... measured software quality for North American teams... ranged from 6 defects per function point down to less than 3 per 100 function points, a range of 200 to 1. The midpoint is approximately 1 defect per 0.6 to 1.0 function points. This implies that it is common for teams to spend more than 90 percent of their effort fixing defects." He cites an example provided by one of his colleagues of a company that spends 90% of the time fixing their bugs.

The fluency with which Anderson goes from the defect injection rate to the defect-fixing capacity allocation (failure demand is the term for it) suggests that the equivalence of the two is well known to software quality researchers and has probably been known for some time.

The key words in the line of reasoning I'm trying to present here are "equilibrium" and "sustainable". If we take away sustainability, there's an obvious way to cheat these numbers: do the initial coding, then move on to code somewhere else and leave the maintenance to others. Or run up the technical debt and unload it on a new owner.

Obviously, no particular allocation will suit all teams. If we decreed that 20% must be spent on bugs, then a team with an ultra-low defect injection rate would simply not have enough bugs to fill the time, while a team with a very high rate would see its bugs continue to accumulate.

The math I used here is way simplified. I neglected things like transaction costs (planning and estimation meetings, post-mortems, etc.), which would affect the percentages somewhat. I also omitted equations simulating sustaining one product and developing another one concurrently. But the conclusion still stands. Do what you can, in terms of technical practices, like unit-testing, continuous integration, code reviews, etc., to reduce your defect injection rate and, consequently, your failure demand. If you can create only one bug for every 10 features, you will have a lot of free time to develop new features and satisfy your customers.

OTHER TIPS

Unfortunately, I believe this ratio is highly variable for any given project. It will be drastically affected by your environment, language, tools, team size, and experience.

You should spend time on a bug only if what you gain from the fix is greater than what you invest.

Use a matrix like the following (horizontal: time required to fix the bug; vertical: type of bug, i.e. its impact on users):

              | Few hours | Many hours
--------------+-----------+-------------------------
Minor problem | Might fix | Fix only if time permits
--------------+-----------+-------------------------
Major problem | Fix       | Fix

Examples of problems:

              | Few hours                             | Many hours
--------------+---------------------------------------+-------------------------------------
              | Window moves 1px every 10th time      | Window is painted incorrectly
Minor problem | you open the application.             | every 100th time the app is opened.
              | Fix: handle the window resize event.  | Fix: change the graphical engine.
--------------+---------------------------------------+-------------------------------------
Major problem | Application crashes when opening      | Poor performance when >100 users
              | the SQL connection.                   | are connected (unusable app).
              | Fix: fix the invalid query and add a  | Fix: change the architecture + DB.
              | friendly error message.               |

The matrix can be more complex, with different levels of severity, effort, risk, etc.
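
As a rough sketch (the categories and wording are simply those of the matrix above, not a standard), such a matrix boils down to a simple lookup:

    # Hypothetical sketch: the fix/defer matrix above as a simple lookup table.
    DECISION = {
        ("minor", "few hours"):  "might fix",
        ("minor", "many hours"): "fix only if time permits",
        ("major", "few hours"):  "fix",
        ("major", "many hours"): "fix",
    }

    def triage(severity: str, effort: str) -> str:
        return DECISION[(severity, effort)]

    print(triage("minor", "many hours"))  # fix only if time permits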

You can even create a rank for each bug and fix them based on ranking. Something like:

Bug priority = Risk x Severity x Effort

*Might be (1-x) for some operands, depending on which scale you choose :)
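
For illustration only (the 1-5 scales and the inverted effort term are assumptions, not a prescribed scheme), the ranking could look something like this:

    # Illustrative sketch of "Bug priority = Risk x Severity x Effort".
    # Assumed scales: risk and severity on 1-5 (higher = worse); effort is a
    # 0-1 fraction, inverted via (1 - x) so that cheaper fixes rank higher.

    def priority(risk: int, severity: int, effort_fraction: float) -> float:
        return risk * severity * (1.0 - effort_fraction)

    bugs = {
        "crash when opening SQL connection": priority(5, 5, 0.2),
        "window moves 1px on startup":       priority(1, 1, 0.1),
        "unusable with >100 users":          priority(4, 5, 0.9),
    }

    for name, score in sorted(bugs.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{score:5.1f}  {name}")   # highest-priority bugs first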

So, to answer your question: it depends on the type of bugs, the available time/budget, etc.

It's highly variable, depending not only (of course) on the experience and quality of the team and on the difficulty of the project (making yet another standard web application is not the same as making a new OS kernel), but also on the management approach you use.

For example, with a waterfall model you can pinpoint precisely when the first bug appears, at the first testing phase; but in an agile environment it can be difficult to draw a line saying "from here on, we are correcting bugs", since the features can change (and to me it's not fair to count a feature change as a bug).

From experience, I'd say this is something that is ALWAYS underestimated, and it can very easily consume the same number of hours as the "original project".

The truly correct answer would be zero hours on bug fixes because your code is perfect. :-)

Realistically, I can't say that I've ever heard someone ask for or offer that type of ratio. That's not to say some companies don't track the time for both development and maintenance. But the development of an application takes such a short time compared to its maintenance that most companies never go back and calculate that ratio. They're probably more concerned with learning why an app requires maintenance and applying those findings to new applications.

Having too broad a criterion for what counts as a bug can almost double your time. An over-zealous manager may decide that a client's request to make a button larger (they have mouse issues) is a great way to increase the number of bugs we fixed. It will "only take a few seconds to fix" because there's no need to consider testing, recompiling, and distributing a patch. Oh, and it gets double-counted as a new feature.

The biggest determining factor here is whether you're working with a new technology or an existing one. If you're working with something new and developing something that hasn't been done, or has been done only a few times in different circumstances, you're going to spend a lot of time on bug fixes and on getting your project to work the way you want. Frequently, bugs will be the result of working yourself into a corner, and you'll need to do a significant amount of work to restructure what you've done. Additionally, many bugs will result from an incomplete understanding of user expectations and from developers' unawareness of edge cases.

If you're working on an established technology, most problems will have been dealt with by the libraries or by practices in the community, and you should be able to google, buy, or ask your way out of any bugs you run into.

For critical software, a 1:1 ratio is not unusual. For unit testing alone, I've seen indicators that suggest 1 day of unit testing for every 10 lines of code.

I think this question is biased: it starts from the presupposition that correcting bugs is a phase similar to developing new functionality. This is not the case.

A good developer will not spend a lot of time debugging code as his code will be bug-free from the start. A bad developer will spend a lot of time debugging his code because he can't create suitable abstractions to solve real problems.

Note that developers should unit test their own code themselves. It's their responsibility to deliver bug-free code. So it's hard to separate coding from debugging.

It's also a matter of priority. When developing, the time necessary to correct a bug grows exponentially with the time that has passed since the moment you introduced the bug into the code. So correcting bugs should take priority over developing new functionality.

So instead of talking about "time spent on bugs", you should talk about "time spent on tests" (integration tests, user acceptance tests...)
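
To make "time spent on tests" concrete, here is a minimal, purely hypothetical example of a function delivered together with its unit tests (the function and its tests are invented for illustration):

    # Hypothetical illustration: a small function shipped with its unit tests,
    # so testing time is part of development rather than a later "bug phase".
    import unittest

    def parse_price(text: str) -> float:
        # Parse a price string such as "$1,234.50" into a float.
        return float(text.replace("$", "").replace(",", ""))

    class ParsePriceTest(unittest.TestCase):
        def test_plain_number(self):
            self.assertEqual(parse_price("42"), 42.0)

        def test_currency_symbol_and_thousands_separator(self):
            self.assertEqual(parse_price("$1,234.50"), 1234.50)

    if __name__ == "__main__":
        unittest.main()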

I think you're right: you're not going to get any meaningful metric, due to the sheer number of influencing factors.

If it helps I can tell you projects I work on (enterprise space, large complex systems, lots of integration to other systems) have a ratio of about 3:2. Most of these are not faults with the code - more usually faults with the interfaces. For example, system A and B talk to each other through interface X. The developers of system A interpret interface X slightly differently than the developers of system B. Comedy ensues.
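
As a purely hypothetical illustration of that kind of interface mismatch (the field name and units here are invented):

    # Hypothetical example: system A emits a Unix timestamp in seconds,
    # while system B assumes the same field is in milliseconds.
    from datetime import datetime, timezone

    def system_a_payload() -> dict:
        return {"timestamp": 1_700_000_000}   # seconds since the epoch

    def system_b_parse(payload: dict) -> datetime:
        # Wrong assumption: divides by 1000 as if the value were milliseconds.
        return datetime.fromtimestamp(payload["timestamp"] / 1000, tz=timezone.utc)

    print(system_b_parse(system_a_payload()))  # 1970-01-20 ... -- comedy ensues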

One observation to make is that development of code and testing/bug fixing of that code shouldn't be two distinct phases. If you test as you develop, the "cost" of bug fixing is lower.

I take a purely practical point of view: What's impeding the practical usefulness of the project more? If it's bugs in existing functionality, you should fix bugs. If it's missing features, you should do original development, then go back and fix the bugs once the most severe missing features are implemented. This requires familiarity with your use cases. A bug that crashes the program in some odd corner case may be a lower priority than minor usability enhancements that affect everyone. A small nuisance bug in the most commonly used functionality may be more important than a feature that only benefits people who are pushing your software to the extremes.

Licensed under: CC-BY-SA with attribution