Question

We are developing an application that goes through many testers before reaching our client.

Finally, when it reaches the client, they find some more bugs and report them to us, and this has become a tedious process. There are some bugs which I personally can't fix, because fixing them would require me to modify most of the inner code, and I am not even sure the result would work.

Questions:

  • Why do bugs get reported even after going through so much testing? Is it an issue with our requirements?

  • Our client doesn't seem happy with anything we provide. Are we doing something incorrect?

  • Has anyone developed an application that was totally bug-free? What is the process? Why can't we deploy the application with minor bugs? Are we supposed to be perfectionists?

  • Is the current scenario the correct process of development and testing? If not, what is an efficient way for developers, testers and the client to get the maximum benefit together?

Solution

The closer you get to a bug-free application, the more expensive it gets. It's like targeting 100% code coverage: you spend the same amount of time and money getting from 0% to 95% as you do getting from 95% to 99%, and again from 99% to 99.9%.

Do you need this extra 0.1% of code coverage or quality? Probably yes, if you're working on a software product which controls the cooling system of a nuclear reactor. Probably not if you're working on a business application.

Also, making high quality software requires a very different approach. You can't just ask a team of developers who spent their life writing business apps to create a nearly bug-free application. High quality software requires different techniques, such as formal proof, something you certainly don't want to use in a business app, because of the extremely high cost it represents.

As I explained in one of my articles:

  • Business apps shouldn't target the quality required for life-critical software, because if those business apps fail from time to time, it just doesn't matter. I've seen bugs and downtime in the websites of probably every large corporation, Amazon being the only exception. This downtime and those bugs are annoying and maybe cost the company a few thousand dollars per month, but fixing them would be much more expensive.

  • Cost should be the primary focus, and should be studied pragmatically. Let's imagine a bug which affects 5,000 customers and is so serious that those customers will leave forever. Is this important? Yes? Think more. What if I say that each of those customers is paying $10 per year and that it will cost almost $100,000 to fix the bug? Bug fixing now looks much less interesting; a quick back-of-the-envelope calculation is sketched below.
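
To make that trade-off concrete, here is a back-of-the-envelope sketch using the hypothetical figures above (5,000 customers, $10 per year, a $100,000 fix); the numbers are illustrative, not real data:

```haskell
-- Hypothetical figures from the example above; not real data.
affectedCustomers :: Int
affectedCustomers = 5000

revenuePerCustomerPerYear :: Double
revenuePerCustomerPerYear = 10

fixCost :: Double
fixCost = 100000

-- Annual revenue at risk if every affected customer leaves.
revenueAtRisk :: Double
revenueAtRisk = fromIntegral affectedCustomers * revenuePerCustomerPerYear

-- Years of retained revenue needed to pay back the cost of the fix.
paybackYears :: Double
paybackYears = fixCost / revenueAtRisk

main :: IO ()
main = do
  putStrLn ("Revenue at risk per year: $" ++ show revenueAtRisk)  -- $50000.0
  putStrLn ("Payback period in years:  " ++ show paybackYears)    -- 2.0
```

Losing every one of those customers costs $50,000 a year, so the $100,000 fix takes two full years of their revenue to pay for itself, before even counting the risk that the fix introduces new bugs.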

Now to answer your questions specifically:

Why do bugs get reported even after going through so much testing? Is it an issue with our requirements? Our client doesn't seem happy with anything we provide. Are we doing something incorrect?

Lots of things can go wrong. By testing, do you mean actual automated testing? If not, this is a huge problem in itself. Do testers understand the requirements? Do you communicate with the customer on a regular basis, at least once per iteration, and ideally with a customer representative immediately reachable on-site by any member of your team? Are your iterations short enough? Are developers testing their own code?

In the spirit of the "They Write the Right Stuff" article linked above, take a bug report and study why this bug appeared in the first place and why it was missed by each tester. This may give you some ideas about the gaps in your team's process.

An important point to consider: is your customer paying for bug fixes? If not, he may be encouraged to consider lots of things to be bugs. Making him pay for the time you spend on bugs will considerably reduce the number of bug reports.

Has anyone developed an application that was totally bug-free? What is the process? Why can't we deploy the application with minor bugs? Are we supposed to be perfectionists?

Me. I wrote an app for myself last weekend and haven't found any bugs so far.

Bugs are only bugs when they are reported. So in theory, having a bug-free application is totally possible: if it's not used by anyone, there will be nobody to report bugs.

Now, writing a large-scale application which perfectly matches the specification and is proven to be correct (see formal proof mentioned above) is a different story. If this is a life-critical project, this should be your goal (which doesn't mean your application will be bug-free).

Is the current scenario the correct process of development and testing? If not, what is an efficient way for developers, testers and the client to get the maximum benefit together?

  1. In order to understand each other, they should communicate. This is not what happens in most companies I've seen. In most of them, the project manager is the only one who talks to the customer (sometimes only to a representative). Then he shares (sometimes partially) his understanding of the requirements with developers, interaction designers, architects, DBAs and testers.

    This is why it is essential either for the customer (or the customer's representative) to be reachable by anyone on the team (the Agile approach), or to have formal communication channels which allow a person to communicate with only a few other people on the team, but in a way that lets the information be shared with the whole team, ensuring that everyone works from the same information.

  2. There are many processes for development and testing. Without knowing the company and the team precisely, there is no way to determine which one should be applied in your case. Consider hiring a consultant, or a project manager who is skilled enough.

OTHER TIPS

Not all bugs are created equal, so you need to sort out the wheat from the chaff.

Expectations

Many bugs are raised simply because of a gap between what the software does and what the end user expects. This expectation comes from many places: using other software, incorrect documentation, over-zealous sales staff, how the software used to work, and so on.

Scope creep

It goes without saying that the more you deliver, the greater the potential for bugs. Many bugs are raised simply on the back of new features: you deliver X and Y, but the customer says that on the back of this it should now also do Z.

Understand the problem domain

Many bugs come about for the simple reason that the problem domain was poorly understood. Every client has their own business rules, jargon and ways of doing things. Much of this won't be documented anywhere; it will just be in people's heads. With the best will in the world, you can't hope to capture all of this in one pass.


So...what to do about it.

Automated unit tests

Many bugs are introduced as an unexpected side effect of some code change or other. If you have automated unit tests, you can head off many of these issues and produce better code from the outset.

Tests are only as good as the data supplied - so make sure you fully understand the problem domain.
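
As a minimal sketch of what such a test can look like, here is an HUnit example; the applyDiscount function is a made-up stand-in for illustration, not something from the original answer:

```haskell
import Test.HUnit

-- Hypothetical function under test: apply a fractional discount to a price.
applyDiscount :: Double -> Double -> Double
applyDiscount rate price = price * (1 - rate)

-- Rates are chosen to be exactly representable so the equality checks are safe.
tests :: Test
tests = TestList
  [ "no discount leaves the price unchanged" ~: applyDiscount 0.0  100 ~?= 100
  , "25% off 200 is 150"                     ~: applyDiscount 0.25 200 ~?= 150
  , "full discount gives zero"               ~: applyDiscount 1.0  100 ~?= 0
  ]

main :: IO ()
main = runTestTT tests >>= print
```

If a later change accidentally flips the formula to `price * rate`, the suite fails immediately, instead of the regression reaching a tester or, worse, the client.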

Code coverage

This goes hand in hand with automated unit testing. You should ensure that as much code is tested as is practical.

Learn the lessons

Madness is doing the same thing again and again and again and expecting different results

Do you understand the causes of the last failure? Do you? Really? You may have stopped the problem from occurring, but what was the true root cause? Bad data? User error? Disk corruption? Network outage?

Nothing annoys clients more than encountering the same problems again and again without progress towards some form of resolution.

Defects have existed since the beginning of software development. It's hard to tell from your question to what extent, and with what severity, the defects affect usability or functionality.

Defect-free programs exist, but just about any non-trivial system will have defects.

You will have to decide upon some sort of prioritization, and you will likely have to study the cause of the defects and where they were introduced. There is far too much to discuss about such things in a simple Q&A post.

Entire books have been written about causal analysis and fixing process for an organization that has quality problems.

So my recommendations are (in no particular order):

  • Implement a defect tracking system if you do not have one already
  • Determine a way to classify the severity of defects
  • Figure out why you are not meeting customer expectations (is it the developers, the QA, the customer, etc.)
  • Learn about exercises like the 'Five Whys' and investigate some of the causes of your defects in a similar way.

It depends on what you call an application.

If you mean an interactive program where you need to be certain that the real-time behaviour is exactly such-and-such under any given circumstances, then it's basically impossible to prove there aren't any bugs in it. I suppose it would be possible if you could solve the halting problem, but you can't.

However, if you restrict yourself to statements of the form "such-and-such input will eventually yield such-and-such final state", then your chances of a "bug-free proof" are better, because you can use invariants. That, and only that, allows a correctness proof to be broken down into subproblems, each of which can relatively easily be proven to work correctly under all circumstances of the remaining program (though you generally can't be very precise about how much time and memory it might take).
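
As a small illustration of how an invariant localizes a proof (plain Haskell, not from the original answer): if you can show that `insert` preserves "the list is sorted", then the correctness of the whole sort follows by induction, one small sub-proof at a time.

```haskell
-- Invariant: the second argument is sorted.
-- The single, local fact to prove is that insert preserves this invariant;
-- induction over the input list then gives correctness of the whole sort.
insert :: Ord a => a -> [a] -> [a]
insert x [] = [x]
insert x (y:ys)
  | x <= y    = x : y : ys
  | otherwise = y : insert x ys

-- The sort re-establishes the invariant at every step, so proving
-- "the output is sorted" reduces to the small proof about insert.
insertionSort :: Ord a => [a] -> [a]
insertionSort = foldr insert []

main :: IO ()
main = print (insertionSort [3, 1, 2 :: Int])  -- prints [1,2,3]
```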

Such techniques are basically possible in any programming language (though some esoteric ones like Malbolge try to disprove that!), but in all imperative languages it gets messy very quickly, because you have to meticulously keep track of a lot of implicit program state. In functional languages¹, the proofs tend to look much nicer (in pure languages, or the purely functional subset of a language). Still, particularly with dynamic types, you will need to write out a lot of requirements about what inputs are permitted. That's of course one of the main benefits of strong static type systems: the requirements are right there in the code!
Well, ideally, that is. In practice, OCaml or even Haskell programs tend to contain nontotal functions, i.e. functions that will crash, hang or throw for certain inputs, despite having the correct type². Even though these languages have very flexible type systems, it's sometimes still not feasible to use them to fully restrict something.

Enter dependently-typed languages! These can "calculate" types precisely as needed, so everything you define can have exactly the type signature that proves all you need. And indeed, dependently-typed languages are mostly taught as proof environments. Unfortunately, I think none of them is really up to writing production software. For practical applications, I think the closest you can get to completely bug-proof is writing Haskell with functions that are as total as possible. That gets you pretty close to bug-proof, albeit, again, only with regard to the functional description. Haskell's unique way of handling IO with monads also gives some very useful proofs, but it generally doesn't tell you anything about how long something will take to finish. Quite possibly, something might take exponential time in particular circumstances; from the user's point of view, that would likely be as severe a bug as if the program hung completely.
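
To make the point about nontotal functions concrete, here is a tiny sketch in plain Haskell (the names are made up for illustration): the partial version type-checks but can still crash, while the total version pushes the empty-list case into the type, so the compiler forces every caller to handle it.

```haskell
-- Partial (nontotal): type-checks, but crashes at runtime on an empty list.
unsafeFirst :: [a] -> a
unsafeFirst (x:_) = x
unsafeFirst []    = error "unsafeFirst: empty list"

-- Total: "there may be no first element" is part of the type,
-- so every caller is forced to handle the Nothing case.
safeFirst :: [a] -> Maybe a
safeFirst (x:_) = Just x
safeFirst []    = Nothing

main :: IO ()
main = do
  print (safeFirst [1, 2, 3 :: Int])   -- Just 1
  print (safeFirst ([] :: [Int]))      -- Nothing
  -- print (unsafeFirst ([] :: [Int])) -- would compile, then crash at runtime
```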


¹ Or more generally, descriptive languages. I haven't much experience with logic languages, but I suppose they can be similarly nice with regard to proofs.

² If it's not the correct type, the compiler will never allow it in those languages; that already eliminates a lot of bugs. (And, thanks to Hindley-Milner type inference, it actually makes the programs more concise as well!)

Licensed under: CC-BY-SA with attribution