Question

While trying to advocate for more developer testing, I find the argument "Isn't that QA's job?" used a lot. In my mind, it doesn't make sense to give the QA team all testing responsibilities, but at the same time Spolsky and others say you shouldn't be using the $100/hr developers to do something a $30/hr tester could be doing. What are the experiences of others in a company with a dedicated QA team? Where should the division of work be drawn?

Clarification: I meant QA as a validation and verification team. Devs should not be doing the validation (customer-focused testing), but where is the verification (functional testing) division point?


Solution

It's the difference between "black box" testing (where you know what the code is supposed to do, but not how it works), and "white box" testing (where knowing how it works drives how you test it). "Black box" testing is what most people think of when you mention Quality Assurance.

I work for a company where the QA team are also software developers. (That narrows the field a lot if you care to guess the company.) I know Joel's opinion, and my experience leads me to partially disagree: for the same reason that a "white hat" hacker is more effective finding security holes, certain kinds of errors are more effectively found by white box testers who know how to write code (and therefore what the common mistakes are - for example, resource management issues like memory leaks).
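
For instance (a made-up Java snippet, not from the original answer), this is the kind of resource-management mistake a code-literate tester is primed to spot: a reader that leaks when an exception interrupts the read, next to the try-with-resources fix:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    class ConfigLoader {
        // Leaky version: if readLine() throws, the reader is never closed.
        static String firstLineLeaky(Path p) throws IOException {
            BufferedReader r = Files.newBufferedReader(p);
            String line = r.readLine();
            r.close();
            return line;
        }

        // Fixed version: try-with-resources closes the reader on all paths.
        static String firstLine(Path p) throws IOException {
            try (BufferedReader r = Files.newBufferedReader(p)) {
                return r.readLine();
            }
        }
    }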

Also, since QA-oriented developers are part of the process from the initial design phase, they can theoretically help to drive higher-quality code throughout the process. Ideally, for each developer working on the project with a mental focus on functionality, you have an opposing developer with a mental focus on breaking the code (and thus making it better).

Seen in that light, it's less a matter of using developers as testers than it is a kind of disconnected pair programming, where one developer has an emphasis on controlling quality.

On the other hand, a lot of testing (such as basic UI functionality) frankly doesn't need that kind of skill. That's where Joel has a point.

For many businesses, I could see a system where programming teams trade off code review and testing duties for each other's code. Members of the Business Logic team, for example, could spend an occasional tour testing and reviewing code for the UI team, and vice versa. That way you're not "wasting" developer talent on full-time testing, but you are gaining the advantages of exposing the code to (hopefully) expert scrutiny and punishment. Then, a more traditional QA team can take up the "black box" testing.

OTHER TIPS

When appropriate, Quality Control teams, not developers, should conduct Security, Regression, Usability, Performance, Stress, and Installation/Upgrade testing.

Developers should do unit testing, with code coverage of the code being written as a minimal goal.
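
As a rough sketch of that minimal goal (the class under test and its discount rule are invented, and JUnit 5 is just one possible tool), a developer-written unit test might look like this; a coverage tool such as JaCoCo can then report which branches the tests actually exercised:

    import static org.junit.jupiter.api.Assertions.*;

    import org.junit.jupiter.api.Test;

    // Hypothetical class under test: a simple price calculator.
    class PriceCalculator {
        // Applies a percentage discount; rejects out-of-range input.
        static double applyDiscount(double price, int percent) {
            if (percent < 0 || percent > 100) {
                throw new IllegalArgumentException("percent must be 0-100");
            }
            return price * (100 - percent) / 100.0;
        }
    }

    class PriceCalculatorTest {
        @Test
        void appliesDiscount() {
            assertEquals(90.0, PriceCalculator.applyDiscount(100.0, 10), 0.001);
        }

        @Test
        void rejectsInvalidPercent() {
            assertThrows(IllegalArgumentException.class,
                    () -> PriceCalculator.applyDiscount(100.0, 150));
        }
    }

Note that covering both branches of the if is what pushes the coverage number up; without the second test, the error path would show as unexercised.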

In between, there is still quite a bit of testing to be done:

  • Full code-path testing
  • Component testing
  • Integration testing (of components)
  • System (integration) testing
  • etc.

Responsibility for these is divided between QA and Development based on a mutual agreement about what makes the most sense. Some component testing can only be done via unit tests; other components are 'sufficiently' covered during integration testing, and so on.

Talk to each other, find out what everyone is most comfortable doing. It will take some time, but it's well worth it.

There should always be some developer testing. A developer who produces too many bugs wastes time later fixing those bugs. It is important that developers don't fall into the attitude of "oh well, if I leave a bug, it will be caught and I will get a chance to fix it later."

We try to keep a threshold for bugs produced. If this threshold is crossed during testing then the developer is answerable for it. It is up to you to decide what this threshold is (for us it can vary from project to project).

Also, all unit testing is done by the developers.

I have only been in the industry for a year, but in my experience devs are responsible for unit testing their features, while QA is responsible for testing scenarios. QA would also be expected to test any boundary conditions.

I'm pasting my answer to a question on our internal forum. If you have an hour or so, watch Mary Poppendieck's "Competing on the Basis of Speed" video. Recommended.

(Note: by "testers" I refer to the QA team.)

    Developer / Unit tests       |  Usability testing & Exploratory testing
    -----------------------------+-------------------------------------------
    Acceptance / Customer tests  |  Property testing

Imagine that to be a square with four quadrants. :)

The left half should be automated.

  • Developer tests verify that the code works as the coder wanted it to. Tools: NUnit / xUnit / whatever home-made tool.
  • Customer tests verify that the code works as the customer wanted it to. The tests should be very easy to write and shouldn't require the customer to learn .NET/Java, or else the customer won't write them (although he may need some help from a developer). FIT, for example, uses HTML tables that can be written in Word. Tools: FIT. Regression tools (record-replay) also lie here. (See the sketch after this list.)
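
As a hedged sketch of that table-driven idea (this imitates FIT's row-per-example style using JUnit 5 parameterized tests rather than FIT itself; the shipping rule and all names are invented), each row reads like a line the customer could have written in a table:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.CsvSource;

    // Hypothetical business rule under test.
    class ShippingRules {
        static double costFor(double total, String dest) {
            if (dest.equals("international")) return 14.99;
            return total >= 50.00 ? 0.00 : 4.99; // assumed: free domestic shipping over $50
        }
    }

    class ShippingAcceptanceTest {
        // Each row mirrors a line from the customer's table:
        // order total, destination, expected shipping cost.
        @ParameterizedTest
        @CsvSource({
            "25.00,  domestic,      4.99",
            "100.00, domestic,      0.00",
            "25.00,  international, 14.99",
        })
        void shippingCostMatchesCustomerTable(double total, String dest, double expected) {
            assertEquals(expected, ShippingRules.costFor(total, dest), 0.001);
        }
    }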

The right half better utilizes the time and effort of good testers; e.g., no automated test can tell you whether the X dialog is usable. Humans are better at this than machines.

  • Usability and exploratory testing. Try to break the system (catch unhandled failure scenarios, enter null values). Basically, catch the things that the developer missed.
  • Property testing again requires humans. Here you check the customer-mandated properties required of your system. E.g., performance: does your search dialog meet the 2-second response time? Security: can someone hack into the system? Availability: is your system online 99.99% of the time? Etc.

Testers shouldn't be spending time executing test plans on the left half. It is the developers' responsibility to ensure that the code works as the customer and the developer intended it to. The testers can, in fact, help the customer formulate the acceptance tests.

Testing should be as automated as possible, which turns it back into dev work if the testers are writing code that gets added to the automated test suite.

Also, I've found that we get a lot of QA done in code review, as people will suggest extra edge and corner cases they want to see added to the unit tests that are being reviewed (along with the code they test of course).
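
For instance (a made-up example, not from the original answer), a reviewer might ask for the empty and single-element boundary cases to be added alongside the happy-path test:

    import static org.junit.jupiter.api.Assertions.*;

    import java.util.List;
    import org.junit.jupiter.api.Test;

    class AverageTest {
        // Hypothetical helper under review.
        static double average(List<Integer> xs) {
            if (xs.isEmpty()) {
                throw new IllegalArgumentException("empty input");
            }
            return xs.stream().mapToInt(Integer::intValue).average().orElseThrow();
        }

        @Test
        void happyPath() {
            assertEquals(2.0, average(List.of(1, 2, 3)), 0.001);
        }

        // Boundary cases a reviewer might ask to see covered:
        @Test
        void singleElement() {
            assertEquals(7.0, average(List.of(7)), 0.001);
        }

        @Test
        void emptyListIsRejected() {
            assertThrows(IllegalArgumentException.class, () -> average(List.of()));
        }
    }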

My general stance is that testers should never find unit level bugs (including boundary cases). The bugs testers find should be at the component, integration, or system level. Of course, at first testers may find "happy path" bugs and other simple bugs, but these anomalies should be used to help developers improve.

Part of your problem could be using $100-per-hour developers and $30-per-hour testers :}. But regardless of the cost, I think that knowing bugs found earlier in the development cycle are inevitably cheaper to fix, you'd probably still save money by having the developers own more of the testing. If you have a highly paid dev team and hack testers, you will probably find the big obvious issues, but you'll miss a lot of the more obscure bugs that will come back to haunt you later.

So, I suppose the answer to your question is that testers should test as much as you want them to. You can fire all of your testers and have the developers do all of the testing, or you can hire an army of testers and let the developers check in whatever they want.

There are two types of QA groups. The first wants to maintain the status quo ("we have always done it this way"). They naturally resent, and get rid of, anyone who tries to make things more efficient and thus pushes them beyond their comfort zone; that happened to me more than once. Unfortunately, QA managers can be as incompetent as their QA teams, so a QA manager who has been managing for the last six years will kill any automation and introduce lots of process just to justify their existence. It is an upper-management responsibility to recognize that. The second type is the somewhat technical QA people who know tools. Unfortunately, a programming language is not a tool but a vision, so working with those people really depends on how much they are willing to learn and how much management is willing to take the risk of changing things.

Tests should be written the same way the main code is written: object-oriented, well structured, and easy to maintain. I do think developers should review QA tests; most of the time I have found the automation not actually testing anything. Unfortunately, QA work is considered lower-class, so developers don't bother. I get lucky when I have the support of an influential developer in the group who is willing to explain my efforts to a manager, but that works only half of the time, unfortunately. Testers, IMHO, should report to the dev manager, and the whole team should take responsibility for what the QA tests actually test.

Here are some ways that developer testing is the most efficient / highest payoff:

  • Developer modifies a shared library while working on a feature - the dev has insight into possible side effects that QA / validation lack
  • Developer is unsure of the performance of a library call and writes a unit test (see the sketch after this list)
  • Developer discovers a path of the use case not considered in the spec that the code must support, writes the code, updates the spec, and writes a test
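
For the second bullet, here is a hedged sketch of such a performance probe (the time budget and workload are invented, and a real measurement would use a benchmarking harness like JMH rather than a bare unit test):

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    class SortPerformanceProbeTest {
        // Quick-and-dirty check that the library call stays within an assumed
        // budget on a typical input; not a substitute for a real benchmark.
        @Test
        void sortOfMillionIntsStaysUnderBudget() {
            int[] data = new java.util.Random(42).ints(1_000_000).toArray();

            long start = System.nanoTime();
            java.util.Arrays.sort(data);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            assertTrue(elapsedMs < 2_000,
                    "Arrays.sort took " + elapsedMs + " ms, expected < 2000 ms");
        }
    }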

It's arguable how much test duty should be carried out by the dev in the third example, but I argue that it's most efficient for the dev because all of the related minutiae from many layers of documentation and code are already in her short-term memory. This perfect storm may not be attainable by a tester after the fact.

Are we talking about QA or validation? I think of QA along the lines of inspection checklists, code standards enforcement, UI guidelines, etc. If we're talking validation, it doesn't make sense for devs to spend a lot of time authoring and executing formal test cases, but devs must provide all of the rationale and design documentation needed to author good tests.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow