Question

Are there any formal or informal measures for comparing completed functionality against the initial requirements of a project? Specifically, my goal is to identify any missed requirements early on in a project. Having gone through many agile/scrum methodology articles and books, one way to do this would be a requirements review during a "sprint review", but I was wondering whether there are any other techniques or tools out there.

Thanks,


Solution

Are there any formal/informal measures of comparing completed functionality vs initial requirements of a project?

The term you are looking for is "Done Criteria". In the Agile world it carries a deeper meaning than the words themselves, and it is often the first thing to be fixed in an Agile organization if it is found to be missing. At the end of this answer is a link to an article that explains it in more detail.

Most Agile teams use User Stories as their "initial requirements". The user story is just enough to get the team started. The measure used should be what most teams call "the Done Criteria", and every user story should have one. For example: in order to call a backlog item done, this list of things needs to be done. While setting this we do not worry about how it will be done, only what needs to be done.
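As a concrete sketch of the idea, a Done Criteria can be treated as a simple checklist attached to a story. The story title and criteria below are made-up examples, not from any particular tool:

```python
# A minimal sketch: a user story carrying a Done Criteria checklist.
# Story text and criteria are hypothetical illustrations.
from dataclasses import dataclass, field


@dataclass
class UserStory:
    title: str
    # Done Criteria: maps each "what" to whether it has been verified.
    done_criteria: dict = field(default_factory=dict)

    def is_done(self) -> bool:
        # A story is done only when it has criteria and every one is verified.
        return bool(self.done_criteria) and all(self.done_criteria.values())


story = UserStory(
    title="Customer can check out a shopping cart",
    done_criteria={
        "Order is persisted to the database": True,
        "Taxes and shipping are recalculated": True,
        "Confirmation email is sent": False,
    },
)

print(story.is_done())  # False: one criterion is still unmet
```

The point of the sketch is that "done" is a property of the checklist, not of the developer's opinion: flipping the last criterion to True is what allows the PO to mark the story done at the sprint review.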

During the Sprint Review, the team does a show-and-tell of the working software, and if it meets the Done Criteria, the Product Owner (PO) should approve it to be officially marked done.

Of course, user stories sometimes have changing Done Criteria, especially for new teams or projects, but that is perfectly normal, because a sign of a good user story is that it is negotiable. The Done Criteria can be modified after getting the team's approval. Teams rarely refuse such changes, unless the change causes a dramatic increase in the complexity of the work to be done.

So to summarize:

Initial requirements, i.e. user stories, need Done Criteria describing what needs to be done. If something missed is discovered during the sprint, the PO may change the Done Criteria of a user story after getting approval from the team.

During Sprint Reviews the working software can be measured against the Done Criteria, and if it measures up, the User Story can be called done.

http://scrumalliance.org/articles/105-what-is-definition-of-done-dod

OTHER TIPS

In an agile approach, changes in requirements are expected and considered healthy. Responsiveness to change is considered more important than following a plan.

A sprint review is one place to gather feedback and new requirements. Usability tests also help. But what helps the most is heavy use of the software by a QA team and/or actual users.

If you happen to be using JIRA and GreenHopper for managing your requirements (as stories), you might find it helpful to search for stories created after a certain date. Finding modified requirements would be more interesting.
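As a sketch of that search, a JQL query along these lines would list stories created after a given date (the project key and date are placeholders, and your field names may differ):

```
project = MYPROJ AND issuetype = Story AND created >= "2013-01-01" ORDER BY created ASC
```

Swapping `created` for `updated` gets you partway toward the "modified requirements" case, though it won't tell you what changed, only that something did.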

Is software ever complete? Obviously the real benchmark for completeness is someone's mind's eye view of what the software should do.

Trying to measure against a person's mental image is ultimately going to be challenging and no formal method will ever really do it well. The only thing you can measure against are the requirements they give you. You can look at un-addressed requirements, but you can never measure the gap of what they didn't tell you.

The message I have gotten from the agile school of thought is that measuring completeness is kind of a waste of time - it's really the wrong question.

For example, with scrum, you make a prioritized backlog of all the requirements and just start working down the list. When the money/desire runs out... you stop.

If you're going the agile/scrum route as you imply, then generally you'll want to break up the project into small discrete units of effort. A project contains epics (or is an epic), an epic contains stories, a story contains tasks. (A task should ideally be 4-8 hours of work. Something that somebody can do in a work day.)

As each story is completed, it should be tested and verified. This generally isn't done for tasks because often a single task can't be tested by a user until other tasks for the story are complete. A user can't be expected to test "Write a method to persist an order to the database" but would instead test "When this button is clicked, the order is persisted to the database and the user is shown an updated shopping cart to include re-calculated taxes and shipping."

This testing/verification is not done by the developer. It should be verified by whoever is in charge of the product/project or a delegate thereof. The developer will naturally test it the way he or she wrote it, expecting it to work that way. If anything was misinterpreted in the requirements, it would just be misinterpreted again.

As each story is verified as complete, it's a discrete and measurable step towards project completion. (Measurable by how many tasks it involved and therefore how much work was completed towards the sum total.)
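That "measurable step" can be sketched as a simple ratio of completed tasks to the estimated total. The story and task counts here are invented for illustration:

```python
# Hypothetical sprint data: each story maps to (tasks_done, tasks_total).
stories = {
    "Checkout flow": (5, 5),   # verified complete
    "Order history": (3, 8),
    "Email receipts": (0, 4),
}


def completion_fraction(stories: dict) -> float:
    """Fraction of all tasks completed across the given stories."""
    done = sum(d for d, _ in stories.values())
    total = sum(t for _, t in stories.values())
    return done / total if total else 0.0


print(f"{completion_fraction(stories):.0%}")  # 8 of 17 tasks, about 47%
```

As the next paragraph notes, treat any such number as a snapshot: the denominator moves from sprint to sprint as stories are added, removed, or re-estimated.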

Keep in mind that any such measurements can change from one sprint to the next. If upper management is looking for a single road-map with completable steps along the way all the way to the end of the project, they may be misunderstanding a fundamental concept in agile development. The stories further down the line haven't been fully defined yet. They may involve more or less work than originally estimated, based on development done on (and changes made to) the immediate stories.

One way to approach the concept of fluid stories and changing requirements is to not think in terms of "projects" but just epics and stories. These discrete units should be wholly workable and testable on their own (though some will of course have others as prerequisites). Changing priorities can move the stories around at will. A "project" doesn't need to be "put on hold" if priorities change; its stories are simply moved down the backlog behind other stories.

The idea is that management is steering where you go next, not just giving you a list of destinations and hoping you'll arrive at them in the right order.

In this sense, the "completeness" of a "project" almost entirely loses its meaning. How much is "complete" is up to whoever owns the product/project. They can add to it or remove from it at will, shift priorities easily, etc. If they want to know "when will we arrive at destination A?" then that's up to them. You've given them estimates on how much work is involved in each step along the way, it's up to them to steer in what they think is the best direction to get there while you provide the work.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow