Question

I used to work for a software development company, and there they had the following way of solving bugs and doing the corresponding testing:

  • The bug ticket is analysed by the developer.
  • The developer solves the bug.
  • The developer describes the testing procedure.
  • QA (Quality Assurance) automates the testing procedure and adds it to a big testing plan for all impacted versions (there are more than 100 versions of just one product).

On several occasions, I've said that I don't agree with this way of working; in my opinion, the correct procedure would be:

  • The bug ticket is analysed by the developer and by QA.
  • QA describes the testing procedure.
  • The developer solves the bug.
  • The developer does module testing of the bug, based on the testing procedure delivered by QA.
  • QA automates the general testing procedure (this step stays the same).

Every time I proposed this, I received an answer like: "Yes, you're right, this would be better from a quality point of view, but given the extra QA personnel needed to implement it, I'm afraid management will never agree to this."

So, as far as I can judge, my idea was correct, but as I'm about to apply for a job as a software tester, I'd like a well-founded answer: is my idea indeed correct (but not implemented because of local personnel/money reasons), or is my idea so expensive in general that it's implemented nowhere, or are there other things I'm not thinking of?

Thanks in advance


Solution

Tricky. When you have a bug and need to fix it, you proceed through the following steps:

  1. QA describes desired behaviour and actual (different) behaviour.
  2. If the behaviour is not always different, QA or the developer finds a way to reproduce it; if you can't reproduce the bug, you can't really say you fixed it.
  3. Developer figures out the cause of the bug.
  4. Developer fixes the source of the bug.
  5. QA verifies that the bug is gone.

To verify that the bug is gone, the obvious step is to repeat the previous tests, especially if they have been made reproducible, and make sure the bug no longer occurs in those cases. But you want to do better than that, which is why you create a test plan: you make a list of situations where you think the bug might also have occurred, and test those as well.
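To make that concrete, the reproduced case from step 2 usually becomes a permanent automated regression test. Here is a minimal sketch in Python with pytest, using a hypothetical parse_price function (not from the question) that used to crash on prices written with a decimal comma:

    import pytest

    def parse_price(text: str) -> float:
        """Parse a price string, accepting '.' or ',' as the decimal separator."""
        # The (hypothetical) fix: normalise the decimal comma before parsing.
        return float(text.replace(",", "."))

    def test_parse_price_accepts_decimal_comma():
        # Reproduces the reported case: it fails before the fix, passes after it,
        # and guards against the bug coming back later.
        assert parse_price("1,99") == pytest.approx(1.99)

    def test_parse_price_still_accepts_decimal_point():
        # Part of the test plan: a nearby situation the fix must not break.
        assert parse_price("1.99") == pytest.approx(1.99)

Running pytest on this file gives a repeatable pass/fail answer for step 5 without anyone having to re-read the bug ticket.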

And here we have a problem: sometimes it's obvious in which other situations the bug would have occurred, but sometimes it's not. Normally it would be better if QA decides what to test, because they are probably better at finding test cases that will happen in real life. But sometimes the developer finds that a fault in the code would have caused misbehaviour in a very specific set of situations that would have been impossible to identify without analysing the code, and therefore impossible for QA to find.
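A hypothetical illustration of that last case (my example, not from the question): suppose the developer discovers that a pagination helper used plain integer division, so the last, partially filled page silently disappears whenever the item count is not an exact multiple of the page size. QA could run many realistic inputs without ever spotting that pattern, while the developer can derive the exact boundary cases straight from the code:

    import math

    def page_count(total_items: int, page_size: int) -> int:
        """Return how many pages are needed to show total_items items."""
        # Buggy version was: return total_items // page_size
        # which dropped the final partial page. Ceiling division fixes it.
        return math.ceil(total_items / page_size)

    def test_boundaries_derived_from_the_code():
        # Values around exact multiples of the page size: the cases the
        # faulty integer division got wrong, plus the neighbours it got right.
        assert page_count(99, 10) == 10   # was 9 before the fix
        assert page_count(100, 10) == 10  # already correct
        assert page_count(101, 10) == 11  # was 10 before the fix
        assert page_count(0, 10) == 0     # empty list still needs no pages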

The pragmatic approach: If the developer knows what should be tested to verify that the code is fixed properly, then use that as the basis for creating test cases. If not, let QA do it.

OTHER TIPS

The answer to your question depends entirely on the way each company runs their business, how much money they're willing to devote to QA efforts, the kind of software being written, and other factors. There is no "correct way."

Your method adds substantial overhead, requiring close coordination between the QA and development teams. We don't do it this way where I work, and I daresay we are doing just fine, but we work very closely with our stakeholders to make sure the software behaves the way they expect it to.

The way software testing works should be tailored to the nature of the software itself. Not every testing procedure for every software project is going to be the same. The testing and QA process for a line-of-business application is going to be very different from the testing and QA process for spacecraft software, financial calculations, or medical devices.

As the person who understands the actual code our team has written, I'm probably in a better position to craft tests for a specific bug than the QA team is. Teams that are concerned with Quality Assurance and stakeholder acceptance generally focus more on "is the software meeting its stated requirements" than "has this specific bug been fixed."

But if I were part of a QA team that reported a bug, I would certainly make it my business to see that the bug I reported was fixed, whether the developer had written tests for it or not.

Licensed under: CC-BY-SA with attribution