Question

I have heard from a former colleague that not all bugs need to be fixed, because as you go down the priority list of bugs, the use case that triggers each bug becomes more obscure, or the customer satisfaction gained from fixing it gets lower, yet you still have to spend considerable time on the fix.

In an effort to convince our product owner of this concept, I could not find any good resources. All I could find were discussions about whether or not there is a marginal cost in software development.

Is there really marginal benefit in fixing bugs? Is there a different term that explains this concept?

Solution

From a business perspective, a bug fix is no different than a feature request. It has a certain cost in development time, and it has a certain value for customers. If a bug is non-critical, it can totally make good business sense to prioritize a valuable feature above the bugfix.

But from a technical perspective, bugs may be more critical, because they indicate an error in a foundation which other code might use or build on, in which case the error is "contagious" and adds cost to future maintenance. So not fixing a bug incurs a technical debt which requires management, while not implementing a feature does not really have an ongoing cost. But the level of technical debt incurred by a bug very much depends on the nature of the bug.

All these factors should be taken into consideration when prioritizing.

As for whether there is a marginal benefit to fixing bugs: this is a given. Since not all bugs are equal in severity, you naturally prioritize the most important bugs first, so the more bugs you fix, the lower the marginal value of fixing the next one. But whether that value ever drops to the point where fixing the bug is not worth the effort is a business decision rather than a technical one.
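
To make that diminishing marginal value a bit more concrete, here is a minimal sketch (every bug name, value and cost below is invented purely for illustration) that sorts a backlog by estimated value per hour of fixing effort. Once sorted, each successive fix returns less per hour than the one before it, which is exactly the marginal benefit falling off:

    #include <stdio.h>
    #include <stdlib.h>

    /* A minimal sketch of diminishing marginal benefit: all figures
     * are invented purely for illustration. */
    struct bug {
        const char *name;
        double value;      /* estimated value to customers (arbitrary units) */
        double cost_hours; /* estimated effort to fix */
    };

    static int by_value_per_hour(const void *a, const void *b)
    {
        const struct bug *x = a, *y = b;
        double rx = x->value / x->cost_hours;
        double ry = y->value / y->cost_hours;
        return (rx < ry) - (rx > ry); /* sort descending */
    }

    int main(void)
    {
        struct bug backlog[] = {
            { "crash on save",            90.0,  8.0 },
            { "wrong total in report",    40.0,  6.0 },
            { "typo on settings page",     5.0,  1.0 },
            { "glitch in legacy browser",  3.0, 12.0 },
        };
        size_t n = sizeof backlog / sizeof backlog[0];

        qsort(backlog, n, sizeof backlog[0], by_value_per_hour);

        /* Once sorted, each successive fix returns less value per hour
         * than the previous one: the marginal benefit keeps shrinking. */
        for (size_t i = 0; i < n; i++)
            printf("%-26s %6.2f value/hour\n", backlog[i].name,
                   backlog[i].value / backlog[i].cost_hours);
        return 0;
    }

Where exactly you stop working down that sorted list is then the business decision described above.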

OTHER TIPS

Here is a good reference

http://www.joelonsoftware.com/articles/fog0000000043.html

Do you fix bugs before writing new code? The very first version of Microsoft Word for Windows was considered a “death march” project. [...] because the bug fixing phase was not a part of the formal schedule [...]

Microsoft universally adopted something [...] the highest priority is to eliminate bugs before writing any new code [...] In general, the longer you wait before fixing a bug, the costlier (in time and money) it is to fix.

You can be sure that the longer those bugs stay around, the longer they will take to fix once they become the priority. So instead of getting a raw benefit right now, you are avoiding a costlier loss in the future.

A good way to manage that would be to define an amount of time allocated to handling backlog issues. This wouldn't push as hard as Microsoft did, but it would ensure that a steady amount of future problems get resolved, even if the client doesn't really care about them yet.

In an effort to convince our product owner about this concept, I could not find any good resources.

Assuming you're working for a commercial organisation, there will surely be someone there who is aware of Cost-Benefit Analysis.

Your organisation has a finite amount of developer resource, and an infinite list of beneficial things to do. Those beneficial things include both adding new features, and removing existing bugs - removing a bug improves the software, just as adding a new feature does.

So obviously there are decisions to be made about how to allocate this finite resource against this infinite list, and it's not particularly surprising that the result is that some bugs don't get fixed right now, or next week, or next year, or in fact ever.

If you're looking for a more structured approach here, you could try the PEF/REV system that assigns numbers to the User and Programmer views of a bug, as a starting point for deciding what gets fixed - and what doesn't.
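
In case a concrete sketch helps: the factor names below (Pain / Effort-to-work-around / Frequency for the user's view, Risk / Effort-to-fix / Value for the programmer's view) and the simple "fix it when the user score outweighs the programmer score" rule are assumptions about how such a scoring system is typically applied, not an authoritative description of PEF/REV itself; consult the original description of the system for the real definitions.

    #include <stdio.h>

    /* Rough sketch of a PEF/REV-style comparison. The factor names and
     * the decision rule are assumptions made for illustration only.
     * All scores are on a 1..9 scale. */
    struct bug_score {
        const char *bug;
        int pain, workaround_effort, frequency; /* user's view (PEF)       */
        int risk, fix_effort, fix_value;        /* programmer's view (REV) */
    };

    int main(void)
    {
        struct bug_score bugs[] = {
            /* invented example scores */
            { "crash when exporting a report", 8, 7, 6,  3, 4, 8 },
            { "misaligned icon on login page", 2, 1, 5,  2, 2, 2 },
        };

        for (size_t i = 0; i < sizeof bugs / sizeof bugs[0]; i++) {
            int pef = bugs[i].pain + bugs[i].workaround_effort + bugs[i].frequency;
            int rev = bugs[i].risk + bugs[i].fix_effort + bugs[i].fix_value;
            printf("%-32s PEF=%2d REV=%2d -> %s\n", bugs[i].bug, pef, rev,
                   pef > rev ? "fix now" : "leave in the backlog");
        }
        return 0;
    }

The point of any such scheme is simply to make the "fix or defer" call visible and repeatable rather than ad hoc.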

See also these two posts here on Software Engineering:

Solving which bugs will give greatest cost benefit

Almost every reported bug is a high-priority bug

Not all unintentional or undesirable aspects of software behavior are bugs. What is important is to ensure that the software has a useful and documented range of conditions in which it can be relied upon to operate in a useful fashion.

Consider, for example, a program which is supposed to accept two numbers, multiply them, and output the result, but which outputs a bogus number if the result is more than 9.95 but less than 10.00, more than 99.95 but less than 100.00, etc. If the program was written for the purpose of processing numbers whose product is between 3 and 7, and will never be called upon to process any others, fixing its behavior around 9.95 wouldn't make it any more useful for its intended purpose. It might, however, make the program more suitable for other purposes.

In a situation like the above, there would be two reasonable courses of action:

  1. Fix the problem, if doing so is practical.

  2. Specify ranges in which the program's output would be reliable and state that the program is only suitable for use on data which is known to produce values within valid ranges.

Approach #1 would eliminate the bug. Approach #2 might make the progress less suitable for some purposes than it otherwise might be, but if there is no need for programs to handle the problematic values that might not be a problem.

Even if the inability to handle values 99.95 to 100.0 correctly is a result of a programming mistake [e.g. deciding to output two digits to the left of the decimal point before rounding to one place after, thus yielding 00.00], it should only be considered a bug if the program would otherwise be specified as producing meaningful output in such cases. [Incidentally, the aforementioned problem occurred in the Turbo C 2.00 printf code; in that context, it's clearly a bug, but code which calls the faulty printf would only be buggy if it might produce outputs in the problematic ranges.]
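
To make that failure mode concrete, here is a small reconstruction of the same class of mistake (a sketch, not the actual Turbo C 2.00 printf code): the integer part is committed to the output before the fraction is rounded to one place, so the carry produced by rounding values such as 9.97 or 99.98 is silently lost.

    #include <stdio.h>

    /* Illustrative reconstruction of the described class of bug, not the
     * real Turbo C 2.00 code: the integer part is emitted before the
     * fractional part is rounded to one decimal place, so the carry from
     * rounding (9.97 -> 10.0) never reaches the digits already printed. */
    static void buggy_print(double x)
    {
        int whole  = (int)x;                           /* committed before rounding     */
        int tenths = (int)((x - whole) * 10.0 + 0.5);  /* 0.95+ rounds up to 10 tenths  */
        if (tenths == 10)
            tenths = 0;                                /* carry into 'whole' is dropped */
        printf("%d.%d\n", whole, tenths);
    }

    int main(void)
    {
        buggy_print(4.2);        /* 4.2   - fine                  */
        buggy_print(9.97);       /* 9.0   - should be 10.0        */
        buggy_print(99.98);      /* 99.0  - should be 100.0       */
        printf("%.1f\n", 99.98); /* 100.0 - correct, for contrast */
        return 0;
    }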

In a loose sense, yes, not all bugs need to be fixed. It's all about analyzing the risk/benefit ratio.

What generally happens is the business will have a meeting with technical leads and stakeholders to discuss bugs that are not obviously in the 'need to fix' pile. They will decide whether the time(=money) invested into fixing the bug will be worth it for the business.

For example, a 'minor bug' could be a slight spelling/grammar error in the Terms and Conditions section of a website. The individual who raised it may think it too minor to change, but the business would recognise the potential harm it could cause to the brand, and the relative ease of fixing a few characters.

On the other hand, you could have a bug that seems important but is hard to fix and only affects a negligible number of users, e.g. a minor button link that is broken for users who are on a legacy version of Google Chrome and also happen to have JavaScript disabled.

Other reasons for the business NOT fixing a bug could be that the time invested would set the project back by an unexpected amount, or that the developers' time would be better spent on other fixes or coding work. It could also be that the bug is minor enough to go live and be fixed at a later date.

Hope that explains the concept a bit better! I would certainly steer away from thinking about this in general terms - every bug is unique and should be treated as such.

As you go down the priority list of bugs, the use case that triggers each bug becomes more obscure, or the customer satisfaction gained from fixing it gets lower.

So their "argument" is actually

If you ignore the bug long enough, the User will forget what the problem was or find some way to work around it.

Bugs should be prioritised and dealt with "in order" just like new Feature Requests (but, arguably, over and above all of the latter).

Licensed under: CC-BY-SA with attribution