Question

My question is a little bit about branching, versioning and agile development in general, but I think the heart of all three is the version number.

Currently, I'm using internal version numbers (e.g. 1.0.4). This is also what QA gets. But what if we need a hotfix? I can't use the third number anymore, because it is already taken, so I need another mechanism. But how do you tag or create your branches for each sprint? What version do you use?

I need a reproducible version number/branch/tag for QA, but I also need to be able to bug-fix that same version if necessary. The customer should never see the internal version numbers. Instead, they get e.g. 1.1.

A secondary issue is how to prevent waiting time between sprints. E.g. the developers have already finished, but QA is still testing. Do the developers begin with tasks from the next sprint? And what if both the developers and the testers have already finished their work? They shouldn't wait a week for the beginning of the next sprint. Or what if a feature (or the bugfixing thereof) can't be finished in one sprint ...

Is continuous testing and creating a working release in a sprint a requirement for agile development? Is the build number the only thing that the testers get? I can't use the build number, because Jenkins only stores the last 5 builds (and later it is no longer possible to restore that version, unless it is somehow tagged). Should I use commit IDs instead? Also, no new features should be added when giving QA a bugfixed version.

Solution

I've used the notion of a release candidate for that purpose.

I assume you need a unique version number for every set of deliverables that you release to QA. Let's work through an example:

You are starting sprint 5. The version you are working off of in sprint 5 is 1.0.0. You expect this to be a feature release, so the version of the software after sprint 5 will be 1.1.0; therefore, the team is currently working on 1.1.0-RC1.

The team finishes the last story in sprint 5. You assign the version 1.1.0-RC1 to that commit/revision. The deliverables for 1.1.0-RC1 go to QA, and they come back with 3 bugs, which can be fixed within the remaining 2 days of sprint 5.
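In Git, assigning that version is just an annotated tag on the commit. A minimal sketch using a throwaway repository (in a real project you would only run the `tag`/`push` lines, and the remote name is an assumption):

```shell
# Create a throwaway repo so the sketch is self-contained.
repo="$(mktemp -d)"
cd "$repo"
git init -q
g() { git -c user.email=dev@example.com -c user.name=dev "$@"; }
g commit -q --allow-empty -m "last story of sprint 5"

# Annotated tags record author and date; use them for release candidates.
g tag -a 1.1.0-RC1 -m "Release candidate 1 for sprint 5, goes to QA"
git tag -l                     # -> 1.1.0-RC1
# git push origin 1.1.0-RC1   # publish the tag (remote name assumed)
```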

However, other team members have already begun working on sprint 6. You need a new VCS branch here! Since you expect sprint 6 to be another feature release, sprint 6 would yield version 1.2.0; so these team members are working on 1.2.0-RC1.

The bugfixes (that resulted from testing 1.1.0-RC1) have to be done in another branch. The version they go into is 1.1.0-RC2.

Work on 1.1.0-RC2 is done (that means all known issues are fixed). The deliverables for 1.1.0-RC2 pass QA. The same commit as 1.1.0-RC2 in your VCS (or a new one, if you keep the version number under version control) becomes 1.1.0. That version is what you use towards the customer / end users.


You basically keep increasing the -RC number until QA is satisfied with the deliverables. The most recent RC is the same code version that you can ship without the -RC suffix.

If you expect more than 10 RCs for a sprint (which you shouldn't; that'd be too many bugs if you ask me), you should use leading zeros for the RC number. That makes sure the version numbers sort chronologically even as plain strings, e.g. 1.1.0-RC01 through 1.1.0-RC26.
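The sorting problem is easy to see with plain `sort`; zero-padding fixes the string order (GNU `sort -V`, where available, would also handle unpadded numbers):

```shell
# Without padding, plain string sorting puts RC10 before RC2:
printf '1.1.0-RC1\n1.1.0-RC10\n1.1.0-RC2\n' | LC_ALL=C sort
# -> 1.1.0-RC1, 1.1.0-RC10, 1.1.0-RC2   (wrong order)

# With leading zeros the string order matches the chronological order:
printf '1.1.0-RC01\n1.1.0-RC02\n1.1.0-RC10\n' | LC_ALL=C sort
# -> 1.1.0-RC01, 1.1.0-RC02, 1.1.0-RC10
```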


As to when the work from a sprint is not to be released to end users: I'd just keep on versioning as if you would release. This will of course create gaps in the public version history, but that's totally fine IMHO.
If management wants a consistent public version number, I'd still keep labelling my versions as if I released at every iteration, maybe with an -INT suffix, and at the same time keep a mapping of internal and public versions. Somewhat like this:

 o @1.1.0-INT @1.1.0
 |
...
 |
 o @1.2.0-INT
 |
...
 |
 o @1.3.0-INT
...
 |
 o @1.4.0-INT @1.2.0

If you choose that route, make sure that everyone involved (development team, management, support staff, marketing people, ...) is aware of this and knows how to look up the internal<->external mapping whenever they feel like it. Otherwise, ... you can imagine.
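One low-tech way to make the mapping discoverable is a plain text file kept under version control. A sketch (the `versions.map` file name and format are made up for illustration):

```shell
cd "$(mktemp -d)"   # scratch dir so the sketch is self-contained

# versions.map: one "internal public" pair per line (hypothetical format,
# checked in alongside the code so everyone can find it).
cat > versions.map <<'EOF'
1.1.0-INT 1.1.0
1.4.0-INT 1.2.0
EOF

# Look up the public version for an internal one; prints nothing
# for internal versions that were never shipped publicly.
lookup() { awk -v v="$1" '$1 == v { print $2 }' versions.map; }

lookup 1.1.0-INT   # -> 1.1.0
lookup 1.2.0-INT   # -> (no output: never released publicly)
```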


On branching

I strongly suggest that you branch RC2 off of RC1 and continue with 1.2.0 in the main branch (e.g. develop in git flow). When 1.1.0 passes QA, you can merge it back into the main branch. Here is a bunch of git graphs showing how you could implement what I described above with git flow:

All stories for 1.1.0 are complete; you tag the head of develop with 1.1.0-RC1. Work on 1.2.0 continues in develop:

 o branch develop @1.0.0
 |
... implement stuff
 |
 o @1.1.0-RC1
 |
 o
 |
 o

When the QA result for 1.1.0-RC1 comes in, you branch off for 1.1.0-RC2. Work can then continue in parallel. You can merge the final RC of 1.1.0 back into your main branch once QA for that passes (or earlier if there is another urgent need).

 o @1.0.0
 |
...
 |
 o @1.1.0-RC1
 |\
 o o branch release/1.1.0-RC2
 | |
 o o
 | |
 o o @1.1.0-RC2 @1.1.0
 |/
 o merge release/1.1.0-RC2 into develop
 |
 o @1.2.0-RC1

In git flow you wouldn't tag the version in the release/* branch, but rather merge that into master and tag that commit with 1.1.0.
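Under those assumptions (a plain git repository with the branch names from the graphs above), the finishing step might look like this; the throwaway repo exists only so the sketch runs on its own:

```shell
# Throwaway repo mimicking the graph above; in a real project only the
# checkout/merge/tag commands matter.
cd "$(mktemp -d)"
git init -q
g() { git -c user.email=dev@example.com -c user.name=dev "$@"; }
g commit -q --allow-empty -m "1.0.0"
git branch -M master
g checkout -q -b develop
g commit -q --allow-empty -m "stories for 1.1.0"
g checkout -q -b release/1.1.0-RC2
g commit -q --allow-empty -m "fix bugs found in QA of 1.1.0-RC1"

# Finish the release: merge into master and tag the customer-facing version.
g checkout -q master
g merge -q --no-ff -m "release 1.1.0" release/1.1.0-RC2
g tag -a 1.1.0 -m "Release 1.1.0"

# Merge the fixes back so develop (now working on 1.2.0) gets them too.
g checkout -q develop
g merge -q --no-ff -m "merge release/1.1.0-RC2 into develop" release/1.1.0-RC2
```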

OTHER TIPS

Most of your questions concern your sprint process, and it will be up to your Scrum Master to define this for the team, with the team's input.

But how do you tag or create your branches for each sprint?

This varies quite a bit. One approach I have used that works well is to branch per PBI from a stable main (talking Git here) and have branch policies on main to stop it becoming unstable. Then you swarm on the PBI and PR it back to main when the PBI is Done. This means you only have a few PBIs open at a time (based on a "standard" scrum team size of 6+3 people). The key factor is to keep PBIs small, so you can iterate more quickly and have a smaller impact when merging to/from main.

What is the version you use?

Generally the build number from the build system being used, which also tags the public version numbers when needed within the build/release/deploy pipeline. Build systems are typically highly configurable. I don't use Jenkins, but I am sure you could customize it to suit your needs whilst maintaining a high level of automation.
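For example, a build step could combine a base version with the CI build number and tag the commit, so the version stays reproducible even after the build system discards old builds. The variable names here are illustrative; `BUILD_NUMBER` is a real Jenkins environment variable, defaulted below so the sketch runs outside CI:

```shell
# Hypothetical pipeline step: derive a unique internal version from a
# base version plus the CI build number (BUILD_NUMBER is set by Jenkins).
BASE_VERSION="1.1.0"
BUILD_NUMBER="${BUILD_NUMBER:-42}"
VERSION="${BASE_VERSION}-build.${BUILD_NUMBER}"
echo "$VERSION"    # e.g. 1.1.0-build.42

# Tagging the tested commit keeps that exact state recoverable even
# after Jenkins has discarded the build record:
# git tag -a "$VERSION" -m "CI build $BUILD_NUMBER"
```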

how to prevent waiting time between sprints

This goes directly to process. To avoid a large lag between development and testing, both roles (and I am not saying these are two different teams) need to be swarming on the PBI at the same time. Testing is development; the artifact of testing & development is a condition of the PBI meeting the DoD and being Done, which would ultimately be decided by automated testing ... and possibly some other manual considerations. Ultimately, the PBIs undertaken within a sprint should be shippable at the end of that sprint; this doesn't mean you are shipping them, but you should be able to. If testing (the act of writing tests and having their output readily available) is not treated as a first-class citizen in the process, then your sprints will seldom be finished. If you are testing in a coming sprint what was achieved this sprint, then both sprints are broken.

Is continous testing and creating a working release in a sprint a requirement for agile development?

No, but your life is much more pleasant if you adopt a customer-first, automation-first strategy for development. The first focuses on requirements; the second focuses on getting those requirements delivered faster ... for validation by stakeholders.

Continuous testing helps with not getting an explosion of RCs. If you test (and fail fast), you can address any defects that the team has created within the current sprint (from the PBI implementation) during that same sprint; after all, if you create defects whilst implementing a PBI, you could hardly call the PBI Done until the defects are resolved. Some defects may turn out to be 'old' defects in other (or the same) areas that your new implementation has uncovered; these would have to be taken to the PO and SM immediately for a decision on whether to fix or prioritize. Ideally (but not always) you would hope to uncover this during the PBI design and planning phase prior to the sprint ... of course, that doesn't always happen.

Is the build number the only thing that the testers get?

This sounds like you want to code something and then throw it over a fence to a tester. If this is the case, you might want to rethink your strategy (and discuss it with your SM). The developer writing the tests should be working closely with the developer writing the functional implementation ... and they should be in the same room (if possible), working closely with each other.

TDD: The test is written and pushed to the VCS; the developer pulls the test, writes code, and runs the test both on their PC and on the build server when they push the code back.

NON TDD: Same as TDD but the test may not be written first.

In both cases, the test developer, implementation developer and build server should all be capable of running tests and validating the requirement at any point in time ... and the PBI is not marked done until this is all harmonious.

If your team is geographically distributed, you can set up your pipeline to automate emails to team members based on conditions like: a build failed, or code linked to a PBI you are working on was committed.

Does the developer begin with tasks from the next sprint?

Not without clearing this with the PO and the SM. Ideally not; there is always something to do: log checking, cleanup, documentation proofing. If all of that is too uninteresting, or there is more time than spare jobs to do, then you would normally drag a PBI into the current sprint, decided by the team ... and ideally one that can be Done within that sprint, so you are not breaking the sprint. If your team thinks in terms of "developers" and "testers", then you can always get your "developers" to help the "testers" write and validate tests.

As for bug fixes, these can also be handled in numerous ways. One way is to have your "normal" sprints run on implementing PBIs and already prioritized bugs. Then, during planning, bugs/defects are triaged and prioritized into the backlog for coming sprints. The triaging of the bugs can be done using a Kanban, either by having another team perform this task (if you have a large number of bugs) or by rotating one (or more) members out of the sprint to run the Kanban, giving everyone a turn on the Kanban instead of one poor soul undertaking bugfixing for the rest of their natural life.

Bugs are well suited to Kanban, as you cannot always estimate the time needed to fix something, and once you are neck deep in the investigation it is just as easy to keep going. Bugs whose fix is easily identified, or that can be easily prioritized, can either be taken immediately as a high-priority EBF or left in the backlog as low priority with little to no investigation required.

EBFs (Emergency Bug Fixes) should be branched from a stable main and PR'ed back to main ASAP to minimize merge conflicts with PBIs under development that are potentially in the same area of code.

@marstato beat me to explaining RCs, so I won't mention that here.

Hope that helps.

All your problems would go away if you could change this rule:

The customer should never see the internal version numbers. Instead he gets e.g. 1.1.

Use GitFlow. Release branches go to QA, and the fourth (third for Apple) version number is used for hotfixes, i.e. 1.1.5677.

Perhaps you could post-process the binaries to change the version for the customer after QA sign-off?

Licensed under: CC-BY-SA with attribution