Question

When defining software requirements, should automated testing (unit and integration) be specified? I have not seen any guidance on this, and "testing" is not a functional requirement for the software. But, from a CI/CD perspective, it's pretty much a "must do".

Perspectives/opinions are welcome, but pointers to articles/guides would be greatly appreciated.

Clarification/Elaboration

1 - The requirements (high level) are being written to support a bid/proposal process.

2 - There is a desire/intent to incorporate CI/CD to facilitate shorter delivery/deployment cycles.

3 - Achieving #2 is next-to-impossible without automated testing, both unit-level and integration.

4 - The intent is not to identify specific tests to be performed against specific requirements, but to establish a requirement that development of automated tests is included as part of the successful bidder's software development process.

5 - I get that this should, ideally, be in other documents/processes. But, those tend to be developed/delivered after the award; ergo, the question.

And, last but not least, simply requiring automated tests to be developed does not mean that the tests will adequately test the developed software. So, if such a requirement were imposed on the successful bidder, it would also require some objective measure of defining what constitutes an acceptable test. These exist, but this all has the potential to become a wormhole - so I'm trying to balance the pros/cons.


Solution

In most cases, I would suggest that no, a requirement for automated testing would not be considered a software requirement.

My first thought, which I brought up in a comment, is that part of this process seems backwards. Requirements are not written by a vendor to support a proposal process. The requirements or objectives are expressed by the customer, and vendors propose how they will satisfy those requirements or objectives, along with the schedule and budget in which they will do so. In presenting their proposal, a vendor may explain how a certain process methodology (such as a lean-agile approach or a DevOps approach, which may take advantage of automated test frameworks and CI/CD pipelines) would impact risk, schedule, and budget. To that end, the vendor could go into detail in their proposal about allocating time and money for creating the appropriate test harnesses and CI/CD pipelines.

I can see some cases where a customer may express requirements around automated testing. One example is where the customer will ultimately take ownership of the delivered software, including ongoing maintenance and support, either directly or by passing maintenance and support off as a follow-on contract that may go to a different vendor. However, I still wouldn't consider these to be software requirements, but rather contractual obligations. I would consider the software requirements to be the definition of the functional and quality attributes, without specifying how those attributes are met.

The customer, when presenting requirements to vendors, should minimize the extent to which the "how" is specified in the proposal process and instead focus on the "what": specifically, what the delivered products and services need to do. Vendors should be free to propose various methods of meeting those needs and to justify their bids with respect to overall cost, schedule, and risk.

OTHER TIPS

First, there is a general assumption that the team working on the actual project will be a team of professional developers, which implies that they will do their job professionally, including testing their code, refactoring it on a regular basis, and so on. The fact that many teams don't test is not indicative of a problem with requirements that weren't detailed enough, but rather with the processes and approaches that allowed the team to work in an unprofessional way.

Second, in large projects there is generally a separate document describing precisely how tests will be performed, what types of tests are required, who will be in charge of each type of test, how conflicts between the test team and the developers will be handled, and many other things. Given that, it is unnecessary (and even harmful) to clone part of this document into the requirements document. The two will drift apart, and the requirements copy will usually become outdated very quickly. You don't want that in your project.

I've never included detailed testing information in the requirements document. That is to be addressed in the test plan document.
The SRS document might have references to certain types of testing or maybe some corner cases, but nothing specific.

On the other hand, your test plan, or verification and validation, document(s) should contain all the testing required, be it manual or automated, with the associated test cases and scenarios.

In order for a requirement to be a true requirement it must, in some way, specify a metric or test that can be applied that will objectively determine whether the requirement has been satisfied. This avoids the problem of the customer scope-creeping the requirement by saying "you're not finished yet." If the metric is satisfied, the requirement is declared completed.

A Real Requirement:

As an authenticated user, I shall have the ability to enter a street address into an invoice.

Not a Real Requirement:

The application shall have a user-friendly interface.

If eliminating ambiguity in the requirement requires automated tests, then by all means specify exactly what those tests should be. Usually, however, things like unit tests are meant to test implementation details, not the behavior of the system from a user story perspective. While such tests are important, they don't necessarily need to be called out in the requirements unless they directly inform the requirement itself.
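To see why the "real requirement" above is real, note that it can be expressed directly as an automated acceptance check. A minimal sketch in Python, where the `User`/`Invoice` model and the authentication rule are hypothetical illustrations, not anything from the original question:

```python
# Sketch: "an authenticated user can enter a street address into an invoice",
# pinned down as an automated acceptance check. The User/Invoice model here
# is a hypothetical illustration.

class User:
    def __init__(self, authenticated: bool):
        self.authenticated = authenticated

class Invoice:
    def __init__(self):
        self.street_address = None

    def set_street_address(self, user: User, address: str) -> None:
        # The requirement implies only authenticated users may do this.
        if not user.authenticated:
            raise PermissionError("user must be authenticated")
        self.street_address = address

def test_authenticated_user_can_enter_address():
    invoice = Invoice()
    invoice.set_street_address(User(authenticated=True), "123 Main St")
    assert invoice.street_address == "123 Main St"

def test_unauthenticated_user_is_rejected():
    invoice = Invoice()
    try:
        invoice.set_street_address(User(authenticated=False), "123 Main St")
        raise AssertionError("expected PermissionError")
    except PermissionError:
        assert invoice.street_address is None
```

Either check passes or it fails; there is no room for "you're not finished yet". The "user-friendly interface" non-requirement, by contrast, admits no such check.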

No, software requirements should not address the specifics of testing of the software.

Software requirements should describe the task(s) a given system should perform, the inputs and outputs, algorithms and data structures, the capacity and responsiveness.

You seem to be asking, should software testing be mandatory? I think so, but still it's outside the realm of functional or non-functional requirements. In fact, the more requirements focus on outcomes and not how to get there, the easier they are to meet (and test).

Do you tell the car mechanic to take a test drive when you take in your car? Do you tell a chef to clean his kitchen? A software engineer to take security training? No, there is a process to getting these things done, to produce the intended level of quality.

After some consideration, and discussion with people handling "the paperwork"...

  1. The place where requirements for automated tests are best expressed is in the statement of work (contract), where the items to be delivered are identified. So, things like source code, user guides, SDKs, etc., and...automated test code/scripts.

  2. This, of course, means that the requirements can focus on the performance requirements for the capability being contracted for.

This makes sense (to me), and keeps things "where they should be".

This is what your organisation's Definition of Done is for.

Quite simply, at some point the organisation (developers, product owners, management) needs to decide what it means when it says something is "Done"; if the work doesn't meet that definition, it isn't finished yet.

So, one of the aspects of your Definition of Done should be that a sufficient level of automated testing has been implemented.
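One way to keep such a Definition of Done objective (the "acceptable test" measure the question worries about) is to encode its criteria as a machine-checkable gate. A minimal sketch; the criteria chosen and the 80% coverage threshold are illustrative assumptions, since every organisation negotiates its own:

```python
from dataclasses import dataclass

# Sketch: a Definition of Done expressed as an automated gate.
# The criteria and the 80% coverage threshold are illustrative assumptions.

@dataclass
class BuildReport:
    unit_tests_passed: bool
    integration_tests_passed: bool
    line_coverage_percent: float

def is_done(report: BuildReport, min_coverage: float = 80.0) -> bool:
    """A change is 'Done' only if every agreed criterion holds."""
    return (report.unit_tests_passed
            and report.integration_tests_passed
            and report.line_coverage_percent >= min_coverage)

# A passing, well-covered build meets the definition; low coverage does not.
assert is_done(BuildReport(True, True, 85.0))
assert not is_done(BuildReport(True, True, 60.0))
```

The point is not the particular thresholds but that the definition is binary and automatable, so "done" stops being a matter of opinion.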

This all, of course, relies on the requirements being testable (see Robert Harvey's answer).

As part of designing the bid / proposal you have choices. You could define some software requirements and request that companies bid to deliver you software that meets the stated requirements. Normally, these would include functional items relating to use cases and business / user objectives for using the software. It would also be wise to include non-functional requirements. Non-functional requirements could include timescales for doing specific tasks, or capacity constraints such as: the website must be able to handle 2,000 concurrent user sessions, with 1% actively retrieving a page at any moment in time.
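A capacity constraint stated that way can be sanity-checked with simple arithmetic before anyone bids. A back-of-envelope sketch, where the 500 ms page-serve time is an assumed figure purely for illustration:

```python
# Back-of-envelope: 2,000 concurrent sessions, 1% actively retrieving a page.
concurrent_sessions = 2_000
active_fraction = 0.01        # 1% fetching a page at any instant
page_serve_time_s = 0.5       # assumption: ~500 ms per page (illustrative)

# Requests in flight at any instant, and the sustained throughput this
# implies via Little's law (throughput = in-flight / time-per-request).
requests_in_flight = concurrent_sessions * active_fraction
required_throughput = requests_in_flight / page_serve_time_s

print(f"{requests_in_flight:.0f} requests in flight, "
      f"~{required_throughput:.0f} requests/second sustained")
```

Writing the requirement in measurable terms lets a bidder size the system (here, roughly 40 pages per second under the assumed serve time) before committing to a schedule and budget.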

However, you could define the bid / proposal differently. You can define the bid proposal as a bid to implement a project to deliver the software.

The difference here is that defining the project requirements as well as the initial software requirements enables you to:

  1. Exert some degree of influence over the processes and quality controls used in developing the software. Companies bidding for work will be looking to deliver sufficient quality for the lowest effort; by defining project requirements about engineering quality controls, you retain control of key quality controls in the project.

  2. Avoid being tied quite so tightly to the initial specification, and be more likely to think about how to manage specification clarification as the project and its implementation details become better understood.

  3. Put in requirements around the automated testability of the software. I would also think carefully about the level of testability that is required, and about how the complexity of the product will still enable quick deployment cycles without the automated test process taking 3-4 hours to complete.

Finally, as a note: if you look at software consultancy websites, their sales pitch is never about how cheaply they pay their developers; it's always about their craftsmanship and their superior development processes. Their sales teams will be very capable of explaining how their project processes will deliver an exceptional customer experience that meets your project brief.

Licensed under: CC-BY-SA with attribution