Question

In the last two years I have worked on two different teams that use the Agile/Scrum approach, and both teams were eager to improve the way they develop software. On the first team, we could easily convince our product owner to get time for internal work like improving the build system, setting up better integration tests, having a better release strategy, etc. My current PO is also willing to give us time, but he pushes back more, which is reasonable, as he also has his own things to get done.

Anyway, my question is: how do other teams handle this? Do you create an improvement story and put it on the table during planning, or do you keep a "bucket" of time around for such things? In your experience, how difficult is it to convince the product owner to grant time for improvements? After all, these kinds of improvements benefit the team, but not directly or immediately the product owner/business.


Solution

Great question. I think there are several flavors of "action items" from retrospectives that deserve different approaches.

1) Technical tasks that address things like technical debt or infrastructure improvements, e.g. "We should ensure we have no database calls in the view layer of our application, because that caused us to waste time in this past iteration... somebody should do a search through the code to make sure we're not doing that someplace else."

2) Process improvements, e.g. "Folks aren't coming to the standups on time... let's start a $1 charity donation whenever someone's late."

The first category can be significant work, or it might be straightforward. The example I showed was pretty easy... but might generate other tasks that need to be scheduled (e.g. removing the database calls in the 5 locations where they were discovered).

The second category should be handled and driven by the iteration manager, project manager, Scrum Master, etc. I (as a Scrum Master or Project Manager) usually list these items on a project wiki, talk about them in retrospectives, check them off when they're addressed, and report status to the team. I keep the fire lit.

I think the mistake with the first category - technical tasks - is that we don't define acceptance criteria. Your examples included "improving the build system, setting up better integration tests, having a better release strategy". These are open-ended and need to be enumerated in crisp terms (using spikes if necessary). So improving the build system might start with a technical task or a spike to assess options.

We also need to break down and prioritize technical tasks (e.g. "better integration tests" could start with a technical task of measuring the current integration coverage, or assessing the percentage of bugs attributable to integration failures to build the case for investment there).
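That case-building step can be as simple as a quick pass over a bug-tracker export. A minimal sketch, where the bug records, field names, and cause labels are all hypothetical illustrations rather than any real tracker's schema:

```python
# Hypothetical bug-tracker export: each bug carries a root-cause label.
# The labels and counts here are made up for illustration only.
bugs = [
    {"id": 101, "cause": "integration"},
    {"id": 102, "cause": "logic"},
    {"id": 103, "cause": "integration"},
    {"id": 104, "cause": "ui"},
    {"id": 105, "cause": "integration"},
]

# Count how many bugs trace back to integration failures.
integration_bugs = [b for b in bugs if b["cause"] == "integration"]
share = len(integration_bugs) / len(bugs)

print(f"{share:.0%} of recent bugs trace back to integration failures")
```

A concrete number like this ("60% of last quarter's bugs were integration failures") is far easier to negotiate with than "we should have better integration tests".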

Once you have your priorities set, then you can convey the value of the high priority items and negotiate with the product owner for time to spend on them. I'm not a big fan of predefined buckets to spend on anything... but having the conversation with the product owner with crisp requirements, ROI, and acceptance criteria is key.

OTHER TIPS

Improvements should be part of the sprint the same way new features are. It is up to the team to demonstrate to the Product Owner that those improvements are necessary for the upcoming sprint. This may slow down the rate at which new features are produced but it is useful for the product in the end.

On the other hand, I have issues with sprints that only contain improvements. Every sprint should produce output that can be demonstrated to the Product Owner.

Crystal Methods has the concept of the Reflection Workshop as a means to tune your development process. Teams meet periodically (less frequently than your development cycle, perhaps) to discuss improvements and status of the process. Come up with 0-3 things we tried this time that worked and we'll keep, 1-3 things that aren't working, and 1-3 things to try next time. The idea is to have incremental improvement in process as well as in product.

Last year I worked for one of the very earliest Agile (xp) adopters/consultants/trainers. He had a good approach I think.

We met every Friday and just discussed what worked and what didn't. We would write the items on two large pieces of paper (he preferred paper and an easel over a whiteboard because it was more permanent and could be repositioned more easily).

The things that worked could be very simple--we interacted well as a team, pairing went smoothly, etc.

The things that didn't work were just as simple and varied. Some people might be resisting pairing, or even "The boss didn't take us out on his boat as promised".

Every week we would also revisit past "didn't work" items and see whether we had fixed them. If so, they would always be listed in that week's "did work" column.

Although we would discuss specific solutions, just bringing the problems out in the open tended to have a very positive effect. If they remained on the "Didn't work" list for 3 or 4 weeks, we would discuss different/better solutions and make more of a deliberate attempt to implement them.

After an item spends a week or two in the "Worked well" column, we'd drop it since it more or less had become expected (unless it continued to improve).

It also made Friday afternoons a little more interesting since it was a fairly fun meeting everyone could participate in.

I would use a 'spike' for such things. An internal/process improvement may not work as a user story, but it would make a perfect spike.

I do not have much to add here, but I feel one should have a resource dedicated to these environment-improvement issues, and the tasks should not be included on the burn-down charts. If additional hardware is required for the project, it should have been budgeted in advance. Ultimately these tasks should not affect the allocation of hours in the sprint; however, any work affected as a result of these issues should be accounted for and justified.

No, one should not create technical user stories: since they generally bring no direct value to the customer, they have very little chance of being selected for an iteration. Convincing the Product Owner is one way to alleviate this, but there is another tool that can be used here: slack.

Slack is a small portion of your iteration time set aside for those improvement tasks. If everything goes well during the iteration, you can use that time for the improvements. If, on the other hand, the team over-committed (or a task was underestimated, or turned out harder than expected), the slack gives you another chance to meet your commitments.

Another benefit of using slack is that it lowers the variation in your velocity, since you will meet your commitments more often without resorting to overtime.
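The slack budget described above is just a fixed fraction held back from capacity. A minimal sketch, where the capacity, slack fraction, and overrun figures are illustrative assumptions, not recommendations:

```python
# Sketch of a slack budget for one iteration.
# All numbers here are hypothetical (e.g. 6 people * 40 h).
TEAM_CAPACITY_HOURS = 240
SLACK_FRACTION = 0.10  # 10% held back for improvement tasks

slack_hours = TEAM_CAPACITY_HOURS * SLACK_FRACTION
commit_hours = TEAM_CAPACITY_HOURS - slack_hours
print(f"Commit to {commit_hours:.0f} h of stories; keep {slack_hours:.0f} h of slack")

# If the committed stories overrun, the slack absorbs it first;
# whatever remains funds improvement tasks this iteration.
overrun = 12  # hours the committed work ran over (hypothetical)
improvement_hours = max(0, slack_hours - overrun)
print(f"{improvement_hours:.0f} h left this iteration for improvements")
```

Because the team only commits to the smaller figure, a normal amount of estimation error no longer blows the commitment, which is what smooths the velocity.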

See Tom DeMarco, Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency, ISBN 0767907698.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow