Question

I am a product owner on an agile team. When doing PO acceptance testing I usually make a note to try some edge cases. It's not uncommon for me to discover something and then pass it back to the devs. I am getting pushback from one of the developers when I reject his stories. He says it's unfair since I don't specify the edge cases and how the program should respond in the acceptance criteria, and he tends to code only for what I describe in the story. I've encouraged him to ask me as he bumps into edge cases while coding, but he thinks it's not his job to think through the edge cases; it's mine, and I should make new stories for the next sprint.

In my defense, I don't know his design for the story until after he implements it, so it's hard to iterate through all the possibilities (will config be in a DB or a properties file?). For simplicity's sake, let's say we have a story to add division to a calculator app. In the ideal Scrum world, would it be incumbent on me to add a "handle divide by zero" scenario to the acceptance criteria, or should he be working through those cases as he develops so the app doesn't implode on 5/0? To be clear, in this case I wouldn't accept the story if the app crashed hard on 5/0, but I would pass it if it logs, prints DIV0, or handles the error in any other way... just so long as it doesn't crash.

Solution

I think the answer is that you both should be thinking about your own set of edge cases. He, as the dev, should handle edge cases that are data specific, such as whether the app crashes on any given user input; 5 / 0 certainly falls into this part of the spectrum. The dev should ask you what you think would be an appropriate error message when the input given as part of the user's interaction leads to something invalid.
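
To make that split concrete, here is a minimal sketch of the developer's half (Python, with a hypothetical divide() function; the "DIV0" text is just a placeholder until the PO picks the real behavior):

    # Minimal sketch: the developer guards against the data-specific edge case
    # so the app never crashes; the exact user-facing response is a business
    # decision, so it is isolated in one place and easy to change later.
    def divide(numerator: float, denominator: float) -> str:
        try:
            return str(numerator / denominator)
        except ZeroDivisionError:
            return "DIV0"  # placeholder; the PO decides log/message/etc.

    print(divide(5, 0))  # prints "DIV0" instead of crashing the app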

Your portion of the spectrum is the business side of things. How should the calculator behave if the user's account is not allowed to use the divide button? How should it behave when the account is allowed to use the Mod operation but doesn't have access to the division feature?

The important message I think you need to convey, and get acceptance for from all team members, is that you are all on the same team. If the product is not complete, the product is not complete, and the team is to blame, not any given member.

OTHER TIPS

The team needs to work together as opposed to having a "Not my job, not my responsibility" type of attitude/mantra.

Acceptance criteria come in two forms:

  • Business Acceptance
  • Quality Assurance Acceptance

Business acceptance typically answers the question:

  • Does the feature that has been implemented do what I want it to do?

The feature will have a number of requirements that are business oriented, such as "if I press this button, I expect this action to occur." It will list out the expected business scenario(s) and expected behavior, but it will not cover all possible cases.

It is expected that business requirements should be defined prior to an iteration so that quality assurance can develop any technical or non-business requirements. Quality assurance should develop destructive cases as well as edge cases as needed.

Both sets of requirements should be reviewed prior to starting any story work so that a formal estimation and commitment can occur for the unit of work. Once this is done, the feature/stories can be worked on. At this point everyone is clear on what is to be delivered both from a business and technical standpoint.

The story reaches final acceptance once the business and quality assurance team members sign off on the story. This should happen during the iteration for both business acceptance and quality assurance acceptance. This is the definition of done (DoD) which signals additional story work can be started.

Any new findings may be logged as defects or additional story spikes. In a perfect world this would never happen, but in reality there is usually some amount of "discovery" that occurs when working on a feature/story. This is natural.

The team should work together (business, QA, developer) to hash out any nebulous discovery type of requirements. If this is agile, they all should be sitting at the same table to foster communication and quick resolution to any questions that may arise. It should go something like this:

QA:

"Hey, Developer we should handle this particular scenario. I've discovered that if I input this data I get an error."

DEV:

"That wasn't covered in any requirement, but we can add some additional functionality to cover this. OK, Hey Business Person, how would > you like the application to behave for this case?"

BUSINESS:

"Let's show our standard error message and let the user try again for this scenario. How much additional effort will then be?"

DEV:

"It will be easy, only an extra hour or two. I can commit to for this iteration. QA please update your acceptance criteria for this scenario, we don't need an additional story for this. Thanks!"

Or, if it's a lot of work, a new story is added to the backlog. The team can still accept the original story since it meets all the original requirements, and then pick up the spike story in the next iteration.

Writing software that behaves in a robust manner in the face of incorrect or ambiguous input is an essential part of a software developer's job.

If your developers don't see it that way, include additional non-functional requirements in the requirements specification that state this explicitly, and provide your developers with an example of your testing process so that they can apply that process themselves before submitting their final code for review.

Acceptance tests should be a vital part of any requirements document anyway. If a requirement doesn't also state its criteria for acceptance, it's not really a requirement; it's a wish.
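
For the calculator example, that kind of acceptance criterion can even be written down as an executable check the developer runs before handing the story back (a sketch using Python's built-in unittest; the divide() function is the hypothetical one from the earlier sketch, repeated here so the example stands alone):

    import unittest

    # Hypothetical function under test, mirroring the earlier sketch.
    def divide(numerator: float, denominator: float) -> str:
        try:
            return str(numerator / denominator)
        except ZeroDivisionError:
            return "DIV0"  # placeholder; any graceful handling satisfies the PO

    class DivideByZeroAcceptance(unittest.TestCase):
        def test_divide_by_zero_does_not_crash(self):
            # The criterion is only "don't crash"; any graceful result passes.
            self.assertIsInstance(divide(5, 0), str)

    if __name__ == "__main__":
        unittest.main()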

What has happened here is that you've discovered something of value: an input that was not thought of when the story (and acceptance criteria) was written or when the code was written. If it's not part of the acceptance criteria, you don't really have a basis to reject the story.

What we would do on my team is:

  1. Create a Bug detailing expected and actual behavior.
  2. Update the acceptance criteria so that the new found requirement is documented.
  3. Prioritize the Bug along with all the other Stories and Bugs in the next iteration.

The benefit here is that you're forced to consider whether or not fixing this bug is the next most important thing to do. It may or may not be important enough to fix, but it is important that its value is considered.

Of course, you still need to find a way to encourage developers (and yourself) to explore these edge cases up front. If your dev team isn't spending time breaking down stories, encourage them to have a detailed planning session prior to beginning work on them.

Some observations:

...when I reject his stories

I don't know your work culture or process, but to me rejecting a story is a severe step. If I were the dev, I would also generate push back on that as it is a recorded action that reflects badly on me and on the team.

He says it's unfair since I don't specify the edge cases.

It's unfair of him to expect you to know all the edge cases. But at the same time, it's unfair for you to expect that of him. Every change has risk, and as issues are discovered y'all need to work together as a team to address them.

I don't know his design for the story until after he implements it

You should not have to know the design. It can be helpful to know the design in order to make initial educated guesses as to which stories are easier or harder for backlog management. But avoid trapping the developer into your design when you write stories. It sucks all the fun out of it when you are simply a voice-activated keyboard for the PO.


It sounds like you guys should work on process improvement and do some team building. Some things I might suggest for process:

  • Suggest that the dev include time in the story to cover fixing discovered edge cases. Heck, make it part of each user story. This is easily defensible via the goal of zero new bugs introduced. The problem is that the dev is not planning for it currently, and he's out of time when you discover issues. It's going to take time either way, so put it in the story where it is visible during planning.
  • After your testing (and thank you for testing by the way!), send the dev a list of discovered issues. The fixing of those issues will go against the "fixing edge cases" condition of satisfaction.
  • If anything remains unfixed or is discovered too late, decide whether the story needs to be pushed based on whether the use case can be fulfilled. Known issues and work-arounds happen. Disclose them in release notes and create new stories to fix them.
  • If there is a particular rough spot in the process that generates pushback, then change your process! After all, process improvement is part of Scrum. For instance, if your dev gets upset when you reject the story, then suggest to the team a change in process so that rejection doesn't trigger fixes: do the testing and fixes before the story is marked Done or Rejected.
  • Work with the team and what they have produced and make the best use of it you can. They don't do perfect work and neither do you. So plan for that. My teams have usually been devops, so we have an Unplanned Support user story each sprint for emergent issues... planning for the un-plan-able.

The requirements should be clear and concise. If they are not, then exactly what happened to you happens. It is your fault, and the worst thing you can do when specifying requirements is to assume things.

Your specific example, about division by zero: if you didn't specify that you want to log the error, then don't complain if the developer prints 100 as the result.

But in such cases, I would just fill in the missing gaps and pass them to the developer. After all, bugs in requirements do happen.

Licensed under: CC-BY-SA with attribution