Question

What ratio of [senior] developers to testers do people think is best?

Obviously this will depend partly on development/maintenance throughput, but is there a rule-of-thumb that a new company/project might work from?

Also, would you use 'pure' testers, or would you combine testing with other roles (e.g. documentation, user training, etc)?

Obviously, answers may depend on company strategy / development models used, so please specify if you're answering in general, or if for a specific style of product/release, and so on.

Solution

First of all, a developers-to-testers ratio is a handy rule of thumb, but it's a bad rule.

What you need to consider is how many use cases your application has. Applications that users will interact with in an uncontrolled manner (e.g., web applications or desktop applications) will require more testers than a similar console application.

An application that takes a single file and detects regex patterns in it will require fewer testers than a new OS.

While those are general maxims, the practical advice would be to use some sort of approximate formula based on the following factors:

1) How many (compartmentalized) use cases are there?

I say compartmentalized use cases because if you include state changes and persistent variables, then seemingly unrelated parts of a program can turn out to be related. For example, 2 + 2 = 4 is one use case and 2 * 2 = 4 is a second one: two simple operators, so two classes of use cases. However, if the user can add and then multiply, you can't check ONLY add and multiply individually; you must check them in all their possible permutations.

When examining the number of use cases, make sure you include the use cases that involve chaining of commands.
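
To make the permutation point concrete, here is a minimal Python sketch (the calculator operations and the chain length are hypothetical, chosen only to illustrate the counting) showing how quickly chained use cases outgrow the isolated ones:

```python
from itertools import product

# Hypothetical calculator operations -- stand-ins for any set of
# compartmentalized use cases in your application.
operations = ["add", "subtract", "multiply", "divide"]

# Testing each operation in isolation gives only len(operations) cases.
isolated_cases = len(operations)

# Allowing chains of up to 3 operations multiplies the space: every
# ordered sequence is a distinct use case to consider.
chained_cases = sum(len(list(product(operations, repeat=n))) for n in range(1, 4))

print(f"Isolated use cases: {isolated_cases}")              # 4
print(f"Chained use cases (length 1-3): {chained_cases}")   # 4 + 16 + 64 = 84
```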

2) How long does it take to test each one?

This doesn't mean (to extend the calculator metaphor) only adding 2 + 2 and looking at the answer. You must include the time it takes to recover from a crash. If the answer is incorrect, you would expect the tester to log the bug with a screenshot and specific instructions on how to recreate it. If you don't give them time for this kind of administrative work, then you are baking into your plan the assumption that you have no bugs. And if we're assuming that, why have testers at all? ;)

A complex project will have both a high number of use cases and a high number of developers, but the two are not guaranteed to correlate. You are better off examining your use cases thoroughly and making an educated decision about how many testers will be required.
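
As a rough sketch of what that "educated decision" might look like in practice (every number below is an assumption to be replaced with figures from your own analysis, not a recommendation):

```python
import math

# Assumed inputs from a use-case analysis.
use_cases = 400               # compartmentalized use cases, chains included
minutes_per_case = 20         # run one case, check the result, recover if it crashes
expected_defect_rate = 0.15   # fraction of cases expected to surface a bug
minutes_per_bug_report = 30   # screenshot, repro steps, filing the bug

# Total effort for one full test pass, in hours, including the
# administrative work of reporting bugs.
execution_hours = use_cases * minutes_per_case / 60
reporting_hours = use_cases * expected_defect_rate * minutes_per_bug_report / 60
total_hours = execution_hours + reporting_hours

# Headcount needed to finish a pass within one test cycle.
cycle_hours = 40              # e.g., a one-week cycle per tester
testers_needed = math.ceil(total_hours / cycle_hours)

print(f"~{total_hours:.0f} hours per pass -> {testers_needed} testers per cycle")
```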

Added bonus: once you've broken the application down so thoroughly, you might find some use cases that were never considered in the design phase, and fix them before a tester finds them.

OTHER TIPS

Joel makes a good argument for 1 tester for every 2 engineers, as well as covering the excuses people use for not having those testers.

I blogged about this once here. The most relevant excerpt is below.

"I've seen high quality products produced on a 10:1 dev:test ratio, and horrible products created with a 1:1 ratio. The difference is in attention and care for quality. If everyone (including management) on the team deeply cares about the product quality, it has a good chance of happening regardless of the ratio. But if quality is something that is supposed to be tested into the product, by all means have at least 1 tester for every developer - more if you can get them."

There was a recent, relevant article on InfoQ that you might find interesting.

This thread is apparently quite old, but it seems to me that the answers all missed the point.

1) The question of the ratio of developers to testers is a valid one: the more complex the requirements, the more developers are needed, and therefore the more testers are needed. Quite a few of the replies seemed to dismiss this.

2) Regardless of application domain, a good ratio that works out in the real world for 'high quality' software is 2:1. You may live with 4:1, but that's really stretching it. Of course, there are many variables in this estimate: not only the complexity of the requirements and the systems/environments to deploy to, but also how productive the developers are and how tight the delivery schedule is.

HTH

In my opinion, a good metric to use in determining the number of testers needed is the complexity of the requirements, not the number of developers. If I were hiring testers, I would take a look at the list of requirements (or break the design document into a list of requirements if necessary), and think about how much testing time each requirement would need to verify that it was working correctly. I'd use that initial analysis to hire a base of testers, and then add testers later if the workload turned out to be too high for my initial base.

If you're putting together a budget and hiring testers later isn't an option, you might want to budget in slightly more testing resources than your analysis indicates. Too much is always better than not enough.
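
A minimal sketch of that approach, assuming a hypothetical requirements list with made-up per-requirement estimates and an arbitrary 25% budget buffer:

```python
import math

# Hypothetical per-requirement test-time estimates, in hours, taken from
# the requirements list (or a design document broken into requirements).
requirement_hours = {
    "user login": 20,
    "report export": 36,
    "search": 40,
    "admin console": 44,
}

hours_per_tester_per_cycle = 40   # assumed capacity for one test cycle
budget_buffer = 0.25              # pad the estimate if hiring later isn't an option

base_hours = sum(requirement_hours.values())
base_testers = math.ceil(base_hours / hours_per_tester_per_cycle)
budgeted_testers = math.ceil(base_hours * (1 + budget_buffer) / hours_per_tester_per_cycle)

print(f"Initial base: {base_testers} testers; budgeted up front: {budgeted_testers}")
```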

Whether to use "pure" testers is another question that's really dependent on how many testing resources you need. I've found that a good compromise is to hire testers who are capable of other jobs, and use them in other capacities at times when the testing load is light.

Edit: If you're lucky enough to have a set of acceptance tests early on, please substitute "acceptance tests" for "requirements list" above. :-)

There is no generalized "good" ratio.

Clearly, the time required to test something is contextual - it depends on factors that may have little or nothing to do with how long it took to develop that feature.

Also consider:

  • what counts as Development?
  • what counts as Testing?
  • If we were going to perform regression testing anyway, does that count as "zero" additional testing hours?

see: http://www.sqablogs.com/jstrazzere/150/What+is+the+%22Correct%22+Ratio+of+Development+Time+to+Test+Time%3F.html

I would say (depending on how quickly you need things tested) that with automation you could have 1 or 2 testers for every 5 developers.

Why:

  • With automation they just need to worry about testing the new modules
  • Regression tests will take care of the older ones.
  • 1 or 2 testers can easily cover the work 5 developers produce in, say, a typical week.
  • A good ratio I've been taught is that for every 10 hours of development, the quality assurance team will take around 3 or 4 hours to track most of the defects those 10 hours generated (a quick worked example follows this list).
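
Here is that rule worked through (the team size and weekly hours are assumptions for illustration only):

```python
import math

# Assumptions: 5 developers producing 40 hours of work per week each,
# with QA effort at roughly 3-4 hours per 10 developer hours.
developers = 5
dev_hours_per_week = 40
qa_hours_per_10_dev_hours = 3.5    # midpoint of the 3-4 hour rule above
tester_hours_per_week = 40

total_dev_hours = developers * dev_hours_per_week
qa_hours = total_dev_hours * qa_hours_per_10_dev_hours / 10
testers = math.ceil(qa_hours / tester_hours_per_week)

print(f"{qa_hours:.0f} QA hours/week -> about {testers} testers for {developers} developers")
```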

Hope it helps :)

Also, would you use 'pure' testers, or would you combine testing with other roles (e.g. documentation, user training, etc)?

It depends on the type of testing, but I would not burden testers with other roles. Competent test engineers are worth their weight in gold (the same as competent software engineers). If you give them tasks outside of their domain of expertise, you're going to slow them down and p*ss them off. Do software engineers like doing documentation or user training? Usually not. Neither do testers.

However, there's nothing wrong with supplementing your test team with people from other areas, especially for usability testing, acceptance testing, quick reviews, etc.

One thing is for certain. The number of testers should be greater than the number of developers. For every feature created by a developer, a tester must exercise the feature under various types of tests: functionality, usability, boundary, stress, etc. Although the exact ratio will depend more on the number of test cases and how long a test cycle should be (1 week, 3 days, 1 day or half a day), a single developer will generate enough testing activity for multiple testers. In addition, there may be scenarios that require multiple testers to simulate two or more users working concurrently on the system.
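
To put rough numbers on that argument (the hours below are invented for the example, not measurements), summing the different types of tests for a single feature can easily exceed the time it took to develop it:

```python
# Hypothetical effort breakdown for one feature, in hours.
dev_hours = 30

test_hours_by_type = {
    "functionality": 12,
    "usability": 8,
    "boundary": 6,
    "stress": 10,
    "multi-user / concurrency": 8,
}

total_test_hours = sum(test_hours_by_type.values())
ratio = total_test_hours / dev_hours

print(f"{total_test_hours} test hours vs. {dev_hours} development hours "
      f"(roughly {ratio:.1f} testers per developer)")
```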

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow