Question

Does anyone know how many users are needed to validate the prototype of the software we have built? I did research using the book "Software Engineering: A Practitioner's Approach" by Roger S. Pressman as a reference, but it does not mention how many users are required for software testing.


Solution

In usability testing there is a rule of thumb, based on an article by Jakob Nielsen, that 5 users are usually enough to find most issues in a given system. Beyond 5, each additional user turns up almost nothing new, and you need around 15 users in total to find all problems.
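For reference, this rule of thumb comes from Nielsen and Landauer's model, in which each tester uncovers a roughly fixed fraction of the remaining problems (Nielsen reports about 31% per user as a typical average). A minimal sketch of that curve, assuming the 31% figure (which is an average, not a universal constant):

```python
# Sketch of the Nielsen/Landauer model behind the "5 users" rule of thumb:
# fraction of problems found with n users = 1 - (1 - L)**n,
# where L is the share of problems a single user uncovers (~31% on average).

def proportion_found(n_users, l_single_user=0.31):
    """Expected fraction of usability problems found with n_users testers."""
    return 1 - (1 - l_single_user) ** n_users

for n in (1, 3, 5, 10, 15):
    print(f"{n:>2} users -> {proportion_found(n):.0%} of problems found")

# Output:
#  1 users -> 31% of problems found
#  3 users -> 67% of problems found
#  5 users -> 84% of problems found
# 10 users -> 98% of problems found
# 15 users -> 100% of problems found
```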

Granted, this is about usability rather than functional testing, but since functional testing also checks the system against a fixed set of requirements and acceptance criteria (where usability testing relies on general heuristics), you can extrapolate that you may not need many users to find most of the issues in a system.

Of course, if you are testing a gigantic application, using more people could help you find bugs faster, but that doesn't necessarily mean they'll find more of them.

OTHER TIPS

There is no fixed number of valid users for a prototype, because it's not about the number of users but about their expectations.

Using prototype-grade software has many potential problems: It

  • will be buggy,
  • might lead to data loss, and
  • will change frequently.

However, having actual users interact with a prototype can be super valuable because it lets the developers see what is really required and whether the software works as expected. This requires some amount of cooperation between users and developers; for example, the users should be able to send bug reports when they encounter a problem.

So the number of users and the openness of a pre-release program need to balance the potential risks of using prototype-grade software against the benefit of being able to deliver more useful software more quickly.

A few example scenarios:

New accounting software is being developed. One accounting team of seven people is trained on a prototype in advance so that they can test it in practice. Risks are mitigated by transitioning only one team, as a kind of pilot study. Also, the previous software will still be available if something goes wrong, so bugs won't be business-critical. Developers interview the accounting team members to learn about their experiences and needs.

This kind of approach is great for agile development as directly interacting with individual users lets you discover their real requirements more quickly.

A software-as-a-service web app regularly releases new features. To test new features, they are first deployed to a beta server, and users can opt in to using it. This lets users choose their acceptable level of risk and rate of change; many organizations need to test new releases before deploying them widely. The developers get feedback via analytics, automatic crash reports, and an online support system. Here, the group of beta testers may include many thousands of people.
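As a rough illustration of how such an opt-in channel can work (the field and feature names below are hypothetical, not taken from any particular product), the opt-in often boils down to a per-user flag that gates which feature set or build the user receives:

```python
# Illustrative sketch of an opt-in beta channel; the field and feature names
# are hypothetical and only stand in for a real product's configuration.

from dataclasses import dataclass

@dataclass
class User:
    name: str
    beta_opt_in: bool = False  # user-controlled toggle, e.g. in account settings

def enabled_features(user: User) -> set[str]:
    """Return the feature set served to this user."""
    stable = {"dashboard", "export"}
    beta = {"new_editor"}  # deployed to the beta server first
    return stable | beta if user.beta_opt_in else stable

print(enabled_features(User("alice", beta_opt_in=True)))  # includes 'new_editor'
print(enabled_features(User("bob")))                      # stable features only
```

The same idea scales up from a single feature flag to routing opted-in users to a separate beta deployment, which is what lets the risk stay with the users who explicitly accepted it.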

This kind of approach is useful when you already have concrete features in mind but need to test whether they work in practice, using statistical data. Well-known examples of this strategy include the Windows Insider program and the Firefox Beta release channel. Firefox also used to have the Test Pilot program, where experimental new features could be installed in the browser without being tied to the browser's release schedule. And here on Stack Exchange, the most recent site redesign was opt-in for a couple of weeks before it became the default.

Licensed under: CC-BY-SA with attribution