Question

We have a ton of developers and only a few QA folks. The developers have been getting more involved in QA throughout the development process by writing automated tests, but our QA practices are still mostly manual.

What I'd love is if our development practices were BDD and TDD and we grew a robust test suite. The question is: while building such a suite, how do we decide what we can trust to automated tests, and what we should continue to test manually?


Solution

The first dividing line is -- what is substantially easier to test manually, and what is substantially easier to test in an automated fashion?

Those are, of course, pretty easy to sort out, and you'll probably be left with a big pile of guck in the middle.

My next sieve would be -- user interface issues are among the hardest to test in an automated fashion, although some projects are making it easier. So I'd leave those to the QA folks for a while, and focus your automated tests on small units of back-end code, slowly expanding to larger integration tests across multiple units and/or multiple tiers of your application.
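
To make that concrete, here's a minimal sketch of a small back-end unit test written with pytest; the discount function and its rules are invented, stand-ins for a small unit of your own code:

```python
# test_discounts.py -- a minimal back-end unit test written with pytest.
# The discount function and its rules are hypothetical stand-ins for a
# small unit of your own back-end code.
import pytest


def apply_discount(total, percent):
    """Reduce a total by a percentage; reject nonsensical percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)


def test_apply_discount_basic():
    assert apply_discount(100.0, 10) == 90.0


def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

Tests at this level are cheap to write and fast to run, which makes them a good foundation before moving up to the slower integration tests.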

OTHER TIPS

My advice is, automate everything you can possibly automate. Let humans do what they are good at, such as answering the question "Does this look right?" or "Is this usable?". For everything else, automate.

Take a look at Mike Cohn's article on the Test Automation Pyramid. Specifically, consider which parts of the UI really need to be tested that way. Corner cases, for example, are often better tested through the service layer.
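
For instance, a corner case like "empty order" doesn't need a browser at all; a sketch with a hypothetical service function might look like this:

```python
# A hypothetical service-layer test: the corner case (an empty order) is
# exercised directly against the service function, with no UI in the loop.
import pytest


class EmptyOrderError(Exception):
    """Raised when an order contains no items."""


def submit_order(items):
    """Stand-in for your real service-layer entry point."""
    if not items:
        raise EmptyOrderError("an order must contain at least one item")
    return {"status": "accepted", "item_count": len(items)}


def test_empty_order_is_rejected():
    with pytest.raises(EmptyOrderError):
        submit_order([])


def test_valid_order_is_accepted():
    assert submit_order(["widget"])["status"] == "accepted"
```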

+1 to Jim for recommending manual testing of UI elements; it's relatively easy to use a UI automation tool to create tests, but it takes a lot of thought and anticipation to design a test framework that's robust and comprehensive enough to minimize maintenance of the tests.
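
One common way to keep such a framework maintainable is the Page Object pattern: each page's selectors live in one class, so a layout change means one fix instead of dozens. Here's a rough Selenium sketch; the URL, selectors, and page structure are all invented:

```python
# Page Object sketch with Selenium; the URL, element IDs, and page
# structure below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    URL = "https://example.test/login"  # placeholder URL

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def log_in(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()


def test_login_reaches_dashboard():
    driver = webdriver.Chrome()
    try:
        LoginPage(driver).open().log_in("alice", "s3cret")
        assert "Dashboard" in driver.title  # assertion against the invented page
    finally:
        driver.quit()
```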

If you need to prioritize, a couple of techniques I've used to identify non-UI areas that would benefit most from additional testing are:

  1. Look at the bug reports for previous releases, especially the bugs reported by customers if you have access to them. A few specific functional areas will often account for a majority of the bugs.
  2. Use a code coverage tool when you run your existing automated tests and take note of areas with little or no coverage (see the sketch after this list).
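
If your stack is Python, one way to get that picture is coverage.py; here's a rough sketch using its API, where the package name and test path are placeholders (in practice you'd more likely just run pytest with the pytest-cov plugin):

```python
# Sketch: measure coverage of the existing automated tests with coverage.py.
# "yourpackage" and "tests/" are placeholders; running
# "pytest --cov=yourpackage" via the pytest-cov plugin does the same job.
import coverage
import pytest

cov = coverage.Coverage(source=["yourpackage"])
cov.start()
pytest.main(["tests/"])            # run the existing automated test suite
cov.stop()
cov.save()
cov.report(show_missing=True)      # lines never executed point at untested areas
```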

Unlike automated testing, manual testing can do the following:

  • GUI testing
  • Usability testing
  • Exploratory testing
  • Vary the steps and data from run to run
  • Find new bugs, not just regressions
  • Notice problems it wasn't explicitly looking for: the human eye can catch many kinds of issues at once, while an automated test verifies only the few things it was written to check

Unlike manual testing, automated testing can do the following:

  • Stress/Load testing
  • Performance testing (you can even reuse an automated test suite for this)
  • Configuration testing (IMHO this is the biggest benefit). Once written, you can run the same test in different environments with different settings and uncover hidden dependencies you never thought about.
  • Data-driven testing: you can run the same test against thousands of inputs (see the sketch after this list). With manual testing, you have to pare the inputs down to a minimal set using various selection techniques.
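
As a sketch of that last point, pytest's parametrize decorator runs one test body against many inputs; the parsing function and the data here are invented:

```python
# Data-driven testing sketch: one test body, many inputs (pytest.mark.parametrize).
# The parse_quantity function and its rules are hypothetical.
import pytest


def parse_quantity(text):
    """Stand-in for code under test: parse a strictly positive integer."""
    value = int(text)
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value


@pytest.mark.parametrize("text, expected", [(str(n), n) for n in range(1, 1001)])
def test_accepts_positive_integers(text, expected):
    assert parse_quantity(text) == expected


@pytest.mark.parametrize("bad_input", ["0", "-3", "abc", ""])
def test_rejects_invalid_input(bad_input):
    with pytest.raises(ValueError):
        parse_quantity(bad_input)
```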

Also, it's easier and more likely to make a mistake in an auto-test than during manual testing. I recommend automating the most valuable functionality, but nevertheless running the tests (at least a sanity pass) manually before an important release.

It won't hurt to test any new functionality manually to make sure it works to the requirements, and then add it to the automation suite for regression coverage. (Or is that too traditional?)

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow