Question

We're beginning to run into a problem as we get bigger: features make it to staging for testing, but by the time everything is tested and approved, new features have already landed on staging for testing.

This creates an environment where we can almost never push to production, because staging always holds a mix of tested and untested features. I'm sure this is a common problem, but I haven't found any good resources on it yet.

Some Specifics:

  • Git on Bitbucket
  • Jenkins for scripted deployment to Azure

What I'm hoping for is a way to isolate features as they move through environments and only push what's ready to prod.


Solution

It sounds like you have a few problems here:

1. Identifying features for a specific release

This is a project management issue and a coordination issue. Will this feature be released before, at the same time as, or after this other feature? If releases are to happen one feature at a time, then identify that. If features are going to be grouped into releases, then figure out what the groupings are, and enforce them with the devs and the decision-makers. Use your issue tracking or ticketing system to tag releases. Make it clear that if one feature of a specific release is a no-go, then all of them are.

2. Branching strategies

Git-flow is the easy answer for issues like these, and people often use a variant of git-flow without even knowing what it is. I'm not going to say that it's a cure-all, but it helps a lot.

It sounds like you're running into an issue with non-deterministic release strategies, where features are approved scattershot and something that started development a long time ago might be released after something that started more recently - leap-frog features.

Long-lived feature branches or simultaneous release branches are probably the best answer for these kinds of issues. Merge (or rebase, if you're comfortable with it) the latest from master into your long-running branches. Be careful to only merge in features that are already live, otherwise you'll run into the issues that you've been having now (too many mixed up features on one branch).
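A minimal sketch of that sync step, using a throw-away repository (all branch and file names are illustrative):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name Dev

# Production baseline on master
echo v1 > app.txt && git add . && git commit -qm "initial release"
git branch -M master

# A long-lived feature branch starts here
git checkout -qb feature/long-running
echo feature-work > feature.txt && git add . && git commit -qm "feature work"

# Meanwhile a fix ships to production (master)
git checkout -q master
echo v1-fix >> app.txt && git commit -qam "released hotfix"

# Sync the long-running branch with what is already live, and only that
git checkout -q feature/long-running
git merge -q -m "sync with production" master
```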

"Hotfix" or "bugfix" branches are an essential part of this process; use them for small one-off fixes that have a short QA cycle.
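The hotfix flow can be sketched like this: cut from master, fix, merge straight back after a short QA pass (throw-away repo, illustrative names):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name Dev

echo v1 > app.txt && git add . && git commit -qm "production"
git branch -M master

# Short-lived branch for a one-off fix
git checkout -qb hotfix/payment-timeout
echo fixed > app.txt && git commit -qam "fix payment timeout"

# After its short QA cycle, it goes straight back to master
git checkout -q master
git merge -q --no-ff -m "hotfix: payment timeout" hotfix/payment-timeout
git branch -d hotfix/payment-timeout >/dev/null
```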

From your description, it might even be better not to maintain an official 'development' branch. Rather, branch all features off of master, and create merged release branches once a release is identified.
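A sketch of that master-only model: features branch off master, and a release branch is assembled by merging only the approved ones (repo, branch, and release names are illustrative):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name Dev

echo base > app.txt && git add . && git commit -qm "production baseline"
git branch -M master

# Every feature branches off master, not off a shared development branch
for f in 1234-user-crud 1235-fix-catalog; do
  git checkout -q master
  git checkout -qb "feature/$f"
  echo "$f" > "$f.txt" && git add . && git commit -qm "feat: $f"
done

# Once the release contents are decided, build the release branch
git checkout -q master
git checkout -qb release/2024-06
git merge -q --no-ff -m "merge 1234-user-crud" feature/1234-user-crud
git merge -q --no-ff -m "merge 1235-fix-catalog" feature/1235-fix-catalog
```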

3. Environments

Don't match up git branches to your environments, except for production == master. The 'development' branch should be assumed broken. Release branches are pushed to test environments, whether that's a QA environment or a staging environment. If you need to, push a specific feature branch to an environment.

If you have more than one feature branch that needs to be released separately but is being tested at the same time..... ¯\_(ツ)_/¯ .... spin up another server? Maybe merge them together into a throw-away branch; commit fixes/changes to the original branches and re-merge into the throw-away branch; do final approval and UAT on the individual release branches.

4. Removing non-approved features from a branch

This is what the above thoughts are trying to avoid, because this is without a doubt the most painful thing to try and do. If you're lucky, features have been merged into your development or test branches atomically using merge commits. If you're unlucky, devs have committed directly to the development/test branch.

Either way, if you're preparing for a release and have unapproved changes, you'll need to use Git to back out those unapproved commits from the release branch; the best idea is to do that before testing the release.
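If the features did land as merge commits, `git revert -m 1` on the merge commit backs the whole feature out in one step. A throw-away sketch (names illustrative):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name Dev

echo base > app.txt && git add . && git commit -qm "base"
git branch -M release

# A feature is merged in, then fails approval
git checkout -qb feature/not-approved
echo risky > risky.txt && git add . && git commit -qm "unapproved work"
git checkout -q release
git merge -q --no-ff -m "merge feature/not-approved" feature/not-approved

# Revert the entire merge (-m 1 keeps the release branch's side) before testing
merge_sha=$(git log --merges --format=%H -n 1)
git revert -m 1 --no-edit "$merge_sha" >/dev/null
```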

Best of luck.

OTHER TIPS

Here's an idea: stop using release branches. Instead, start building in feature toggles and manage them via configuration. That way you're always merging feature branches into master, and there should never be a question about which version is in test or prod. If you have a question about which features/implementations are active in an environment, just check the config file.
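One hedged sketch of what that could look like, assuming a plain KEY=value toggle file per environment (the file format, names, and helper below are hypothetical, not from the answer):

```shell
set -e
cd "$(mktemp -d)"

# Hypothetical per-environment toggle file; prod would get its own copy
cat > features.prod.conf <<'EOF'
NEW_CHECKOUT=off
BULK_EXPORT=on
EOF

# usage: feature_enabled NAME CONFIG_FILE
feature_enabled() {
  grep -q "^$1=on$" "$2"
}

# Code merged to master ships everywhere; config decides what runs
if feature_enabled BULK_EXPORT features.prod.conf; then
  echo "bulk export active"
fi
if ! feature_enabled NEW_CHECKOUT features.prod.conf; then
  echo "new checkout still dark in prod"
fi
```

The point is that "is this live?" becomes a question about a config file, not about which commits are on which branch.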

This should be a simple matter of coordination between test and production. If you're using feature branches in Git, simply stop pushing completed feature branches to Test during a testing cycle, and resume when testing is complete.

If you need better control than this, separate Test into a Development server and Acceptance Testing server, and coordinate those branches that will be pushed to the Acceptance Testing server with the testing team. Someone can then be responsible for kicking off the final deploy from Acceptance Test to Production.

Work piles up

This is a universal problem in my experience. I address it with:

  • Strong management of feature releases by the product owner
  • Ensure that branches are deleted when they are merged
  • Limit work in progress (with column limits in Jira)
  • Quarterly review of old tickets that are languishing, both bugs and features
  • Retrospectives to discuss components of the issue
  • Constant encouragement for code reviews by all
  • Pairing opportunities to tackle long-standing tickets and issues
  • Quarterly meetings to review and clean old tickets up
  • Team approach to get dev, product and QA/QE working tightly together
  • Good reporting and tools to make new product features and the backlog obvious
  • Review sessions to go through old branches and delete them

Branches

You need some branches to control that process:

  • feature: these branches are born from master. Use a project management application to tie each feature branch to a task. For example, if you use Trac, you will end up with branches like 1234-user-crud, 1235-bug-delete-catalog, etc. Identify your commits with the task number too; this will help you a lot when you have problems in merges (you will).
  • test: all feature branches that are done are merged into the test branch. You never merge the test branch into a feature branch, because you don't want code from other features that aren't in production (master). The same applies to the release branch.
  • release: when you decide which tested features can go to production, you merge those branches (again...) into this branch. You need to test all the features again, because this merge can introduce new problems. When the release is tested and done, you merge this branch into master and create a tag on master for the version.
  • master: contains only the production code.

See the git flow:

                              |FEAT_2|
                                  |
                             .---C06<-------.---------.
                            /                \         \
                           /   |FEAT_1|        \         \
                          /       |            \         \
                         /    .--C07<--.--------\---------\---------.
                        /    /          \        \  |TEST| \         \
                       /    /            \        \    |    \         \
                      /    /        .-----`--C09<--`--C10    \         \ |RELEASE|
                     /    /        /                          \         \    |
    <v4.6.0>        /    /        /                       .----`--C11<---`--C12<--.
       |           /    /        /                       /                         \
C01<--C02<--C04<--´----´--------´-----------------------´---------------------------`--C13
 |           |                                                                          |
<v4.5.0>  <v4.6.1>                                                                   |MASTER|
                                                                                        |
                                                                                     <v4.7.0>
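The final step in the diagram, merging the tested release branch into master and tagging the version, might look like this (throw-away repo; version numbers match the diagram):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name Dev

echo v4.6 > app.txt && git add . && git commit -qm "v4.6.x production"
git branch -M master
git tag v4.6.1

# Release branch holds the re-tested, approved features
git checkout -qb release
echo v4.7 > app.txt && git commit -qam "release work: tested features"

# Release is done: merge to master and tag the new version
git checkout -q master
git merge -q --no-ff -m "release v4.7.0" release
git tag v4.7.0
```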

Environments

Very simple:

  • test: this environment uses the test branch.
  • release: this environment uses the current release branch.

The developers work on their own machines, each one using their own database. If it is not possible for each developer to have an individual database (because of licenses, database size, etc.), you will have a lot of problems sharing a database between developers: when someone deletes a column or a table in their branch, the other branches still count on that column/table being in the database.

Problems

The biggest problem in this process is the merges.

You need to redo the same merges in test and release. This will be painful if a big refactor was made in the code, like deleting a class or moving/renaming methods. Since you can't pull code from the test (or release) branch into a feature branch, the merge conflicts can be resolved only in test (or release). So you end up resolving the same conflicts in two different branches, probably producing different code in each merge, and in the future you will discover that the test team needs to test the features twice, on the test and release branches, because each merge can result in different bugs.

Another problem is the test branch itself. You will need to "recycle" this branch (delete it and create a new one from master) from time to time, because old branches (or old merges of branches that were since deleted) can cause a lot of problems for new code, diverging too much from what is in master. At that point, you need control over which branches you would like to merge into test again.
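Recycling could be scripted roughly like this: rebuild test from master and re-merge only the branches still in flight (throw-away repo, illustrative names):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name Dev

echo base > app.txt && git add . && git commit -qm "base"
git branch -M master

# One feature is still in flight, and the old test branch has stale debris
git checkout -qb feature/still-in-flight
echo wip > wip.txt && git add . && git commit -qm "in-flight feature"
git checkout -q master
git checkout -qb test
echo stale > stale.txt && git add . && git commit -qm "stale merge debris"

# Recycle: throw the old test branch away, rebuild it from master,
# and re-merge only what is still being tested
git checkout -q master
git branch -D test >/dev/null
git checkout -qb test
git merge -q --no-ff -m "re-merge feature/still-in-flight" feature/still-in-flight
```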

The really best solution is for the business team to know what needs to be delivered in the next version, and for everyone to work on a single branch (the develop branch). Being able to choose, at any time, which "done" features go into the next version is convenient for them (I think this is your scenario), but it is a nightmare for the developers and (I believe) for the test team.

Sounds like you are merging changes from your integration branch into your production branch, which IMHO is not a good practice, exactly for the reasons you mention. As soon as a production branch for a certain release is pulled from the main integration branch the integration branch can, at any moment, diverge (after all it's supposed to evolve into the next release). Merging from the integration branch into the current release branch can bring in changes incompatible with that release.

IMHO a proper process would be:

  • pull a production branch from the integration branch only when it's deemed to be close enough to the desired level of quality, so that only a handful of changes would further be expected to complete the release. In other words, feature completion should be evaluated (continuously) on the integration branch, prior to pulling the production branch.
  • after the production branch is pulled only cherry-picked changes are brought to it, treated as standalone/point-fix changes - i.e. verified that they actually work as expected (just because a change works in one branch doesn't necessarily mean it also works in another branch).
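The cherry-pick step in the second point might look like this (throw-away repo; branch names and the fix are illustrative):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name Dev

echo base > app.txt && git add . && git commit -qm "base"
git branch -M integration
git checkout -qb production
git checkout -q integration

# Integration moves on: one point fix, plus work for the NEXT release
echo fix > fix.txt && git add . && git commit -qm "point fix"
echo next-release > app.txt && git commit -qam "work for the next release"
fix_sha=$(git log --format=%H --grep='point fix' -n 1)

# Only the verified point fix is brought to the production branch
git checkout -q production
git cherry-pick "$fix_sha" >/dev/null
```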

Personally, this sounds like it could be a process issue more than a tooling issue. A few things I'd suggest here:

  • I'm not sure if you have separate Dev and QA groups. If you do, make sure that both Dev and QA sit in on sprint planning and estimation meetings. At one of my previous companies, we made sure that the number of story points we assigned a story accounted for both development and testing effort. (You could also theoretically have two separate estimates for dev and QA effort, but either way you need for your estimate to include both; the time required for a story is the time required to actually deliver it). Even if you don't have a separate QA group, still make sure you include testing effort in your estimates.
  • Along a similar vein to above, agree in advance on how many stories you're going to include in a particular sprint. The number of story points you accept is based on the amount your developers can finish in their sprint and the number of items that QA can test in their sprint. (I'm assuming, of course, that QA sprints are behind Dev sprints, but you can adapt this to your process). If your developers can finish 200 story points but your QA can only finish 150 story points, obviously you can only do 150 story points before work starts to "pile up" and you end up with a case like what you describe. (In a case like this, you might want to investigate the cause of the roadblock to try to mitigate it).
  • No one pushes anything to QA until everything currently in QA is tested and delivered.
  • A complete feature is one that has been tested and delivered. If it's not delivered, it's not done.
  • Obviously, you want to try to do this on some kind of a fixed schedule. One of the whole ideas behind Continuous Integration and Agile is iteration. By definition, iteration entails frequent delivery. Frequent integrations and delivery minimizes the risk of each one.

Honestly, I think the biggest thing will be discipline about when you're delivering and how many tasks you can actually completely finish in a given timeframe.

To summarize: only deliver to QA when you're done testing and delivering the old features.

When "everything is tested and approved", deploy that which was tested and approved to production. That could be a particular commit, or it could be a particular build artefact generated by Jenkins.

It shouldn't matter that later commits on the same branch are not yet tested.
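For example, a deploy script can build its workspace from exactly the approved commit, ignoring anything committed later on the same branch (a sketch; `APPROVED_SHA` would come from your tracking system or Jenkins build record, here it is computed locally):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name Dev

# The commit that was tested and approved
echo approved > app.txt && git add . && git commit -qm "tested and approved"
APPROVED_SHA=$(git rev-parse HEAD)

# A later, untested commit lands on the same branch
echo untested > app.txt && git commit -qam "later, untested commit"

# Deploy from the approved commit only; the branch tip is irrelevant
workdir=$(mktemp -d)
git archive "$APPROVED_SHA" | tar -x -C "$workdir"
cat "$workdir/app.txt"
```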

Licensed under: CC-BY-SA with attribution