Question

Our business team wants long periods between deployments, and once a build has been staged for their testing it must not be updated unless a fix is required.

This goes against our development team's agile process, which runs on two-week sprints and aims to push changes forward as much as possible.

There is a business justification for isolating the build and its code, and changing it only when fixes are required.

Is there an agile approach which lets this business requirement be achieved while still pursuing the fast-moving agile spirit?

Of course, we have something at the moment, but the two sides say, basically:

a) The business wants to have a build in place and test it. Any changes or fixes required must be kept separate from any other ongoing work.

b) The dev team wants to avoid the headache of resolving conflicts between the post-build fixes and the other, separate, ongoing work.

Note: Of course, we understand that the agile process is highly flexible, but I'd like to know if anyone has gotten around this particular nuance before.


Solution

One solution is to adopt the "LTS" (Long Term Support) model used by Linux and browser vendors.

At the end of a sprint, if you have a version fit for release, you create a release: give it a version number, tag it in git (or whatever version control you are using, but I'm assuming git here), update the docs, and so on. You then deploy that release internally only.

When the business wants a new version deployed to "staged for testing", you give them your latest release. At that point, you also tag that release as "LTS". Then you carry on with your sprints and new releases.

If they find a problem during their testing, you branch from the LTS tag you gave them, make the fix, re-tag, and re-release to them. You also apply the fix to the latest version, if applicable.
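Assuming git, the tag-and-branch mechanics might look like this; the version numbers, tag names, and commit messages are invented for illustration, and the scratch repository just makes the sketch self-contained:

```shell
# A minimal sketch of the LTS flow above, in a scratch repository.
git init --quiet lts-demo && cd lts-demo
git config user.email "dev@example.com" && git config user.name "Dev"
git commit --allow-empty -q -m "sprint work"

# End of sprint: cut a release and tag it.
git tag -a v1.4.0 -m "release 1.4.0"

# The business wants a build staged for testing: hand over v1.4.0
# and also mark that release as the LTS line.
git tag -a lts-1.4 v1.4.0 -m "LTS line for the business build"

# They find a bug: branch from the LTS tag, fix, re-tag, re-release.
git checkout -q -b lts-1.4-fixes lts-1.4
git commit --allow-empty -q -m "fix issue found in business testing"
git tag -a v1.4.1 -m "patch release on the LTS line"
```

Sprint work continues on the main branch in the meantime; the fix gets cherry-picked there if it still applies.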

It's not a perfect solution, and you may find that when they want a new version, they'll want only some of the features you have added in the meantime. "Feature branches", where every feature is kept on its own branch as far as is practicable, can help here: you merge just those chosen features back into master to create a release for them.
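A sketch of that selective merge, with hypothetical branch names: only the feature the business asked for ends up in their build.

```shell
# Hypothetical feature-branch setup in a scratch repository.
git init -q -b main branch-demo && cd branch-demo
git config user.email "dev@example.com" && git config user.name "Dev"
git commit --allow-empty -q -m "base"

# Each feature lives on its own branch.
git checkout -q -b feature/reporting && git commit --allow-empty -q -m "reporting work"
git checkout -q main
git checkout -q -b feature/new-billing && git commit --allow-empty -q -m "billing work"
git checkout -q main

# The business wants only reporting in this build: merge just that branch.
git merge -q --no-ff feature/reporting -m "release: reporting only"
git tag -a staging-build -m "build staged for business testing"
```

The billing branch simply stays unmerged until the business asks for it in a later build.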

OTHER TIPS

Rather than feature branches, I'd recommend using feature flags. Essentially, these are just conditionals in your code that determine whether a particular portion of the code runs. You control the status of each feature through configuration, which can be as simple as an app setting in your Web.config or as sophisticated as a third-party service like LaunchDarkly, which gives you granular control over which users get access to which features at which time.
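As a minimal, language-agnostic sketch of the idea (the flag name, config file, and functions here are all invented for illustration), a feature flag is just a conditional driven by a setting:

```shell
# Hypothetical flag stored in a simple key=value settings file.
cat > app.conf <<'EOF'
new_checkout=false
EOF

# Returns success only if the named flag is set to true.
feature_enabled() {
  grep -q "^$1=true$" app.conf
}

# The code path is chosen by configuration, not by which branch shipped.
checkout() {
  if feature_enabled new_checkout; then
    echo "new checkout flow"
  else
    echo "old checkout flow"
  fi
}

checkout   # the flag is off, so this prints "old checkout flow"
```

Flipping `new_checkout` to `true` in the configuration changes the behaviour without deploying new code.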

The one downside to feature flags is that they're inherently technical debt. Essentially, you have code in your codebase that may never run, and that code grows over time. Some companies actually embrace this, though. Facebook, for example, makes heavy use of feature flags and never cleans them up. Personally, I think that's a little insane, but I suppose it works for them. For the rest of us, the common recommendation is that when a feature flag is introduced, you also add a PBI (product backlog item) to clean it up. Then, once you have fully deployed the feature (and perhaps given it some time to be sure there are no major issues), you add that PBI to a sprint to get rid of the technical debt. It's not ideal, but it is manageable.

Technical debt aside, feature flags can give you enormous freedom. You can actually merge the code into master, but choose not to actually deploy the feature. That way, you don't have to worry about handling merges with code that's weeks or months out of step with master, but the actual product hasn't changed.

Better still, you can simply patch to enable features, which is much less involved than a full upgrade. For example, say you're following the typical monolithic release cycle. You've got a whole batch of new features, some of which require fundamental changes to the core product. The upgrade process has to touch a lot of files and make a lot of changes to enable all these new features. Then there's a problem, a big problem, one that you can't just push a quick fix for. You now have to handle downgrades and figure out how you're going to get your users back to a usable state. That's a beast of a problem.

Now, let's look at the feature flag approach. Since all the code for these features is behind flags, you simply do a release that turns them on. Of course, if the code wasn't in the last release at all, you'll also need to push it down, which may require the same monolithic upgrade. However, where things get much better for you is when things explode. This time, you don't have to downgrade your users and plan a new monolithic release. Instead, you simply issue a patch that disables the feature(s) causing the issue. The application reverts to the way it worked before. When you fix the issue, you can roll forward, pushing only the fix and turning the feature back on, instead of having to re-release the whole product.
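Sticking with a hypothetical key=value settings file for illustration, the emergency "rollback" is a one-line configuration patch rather than a downgrade:

```shell
# The feature shipped enabled (flag and file names are hypothetical).
echo 'new_checkout=true' > app.conf

# It blows up in production: the patch just flips the flag off, and the
# application takes the old code path again on its next config read.
sed 's/^new_checkout=true$/new_checkout=false/' app.conf > app.conf.new
mv app.conf.new app.conf
```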

I'm sure this answer will have its detractors, but whatever. The answer to your question is that it's not possible for both sides to get what they want as described. So you have a couple of choices. First, try to find out what the business really wants. If they can't articulate a business reason, you should take them to task for that. For example, they may just want to control the development process, or they may be following a cargo-cult belief about project management and release planning that they read somewhere and think is just the way it's done.

A legitimate business reason might be that they believe a QA cycle on a frozen codebase in a controlled environment will catch more bugs and result in fewer defects released to production. That's really the only reason I can think of that approaches legitimacy. However, in that scenario the core ask is "fewer defects in production".

Ask them to qualify and quantify this. How do they know that there will be fewer defects? What is the defect rate now? Is it possible to achieve a similar defect drop with an agile process? Have them show their work.

An alternative approach is continuous integration with automated unit and integration testing and feature flagging to control when features appear to users.

The benefits of this latter approach are significant and extend beyond the reduction in SCM complexity. Additional benefits include faster time to market, being able to stage feature rollouts to certain user subgroups, easier refactoring, fewer defects, higher development team autonomy and satisfaction, and overall higher company valuation.

One thing I've seen is basically a periodic sweep; if things are marked ready for deploy, someone is assigned to deployment duty and they review the code, look for anything they will need to do before/after deployment (such as monitoring certain graphs or doing a database migration), and then deploy it.

This is similar to the ideas mentioned in the other answers of having a release. The stable release branch (or tags) lives separately from the unstable branch.

In this way, development can continue as fast as the agile team wants, and feedback from the results of that release is delayed depending on when the release is actually deployed. If the team relies on that feedback, the development cycle will slow down; in that case it starts to make sense to have a product owner or customers sit in with the team, or to lengthen the sprint.

Licensed under: CC-BY-SA with attribution