Question

My company is evaluating the adoption of off-the-shelf ALM products to aid our development lifecycle; we currently use our own homegrown solutions to manage requirements gathering, specification documentation, testing, etc. One of the issues I am having is understanding how to move code between stages of development. We have what we call a pipeline, which consists of a series of stops:

[Source] -> [QC] -> [Production]

At the first stop, the developer works out a solution to some requested change and performs individual testing. When that process is complete (and peer review has been performed), our ALM system physically moves the affected programs from the [Source] runtime environment to the [QC] runtime environment. This movement of code is triggered by advancing the status of the change request to match the stage of the pipeline.

I have been searching the internet for a few days trying to find out how this process is handled elsewhere -- I have read a bit about builds, automated testing, various ALM products, etc., but none of it explains how builds interact with the initial change requests, what the triggers are, how dependencies are managed, or how the various forms of testing are accommodated (e.g. unit testing, integration testing, regression testing).

Can anyone point me to any resources detailing specific workflows, or attempt to explain (generically) how a change could/should be tracked and moved through the development lifecycle? I'd be very appreciative.

Note: I've cleaned up the question to hopefully make it easier to understand. Also, I found another question (which I can't find now) that referenced this book, which sounds like it might be exactly what I am looking for -- not sure if I want to shell out the cash for it, though.

Additional details per @GlenH7's suggestion:

Our products are built with a proprietary language and set of translation tools -- the ultimate runtime environment is similar to a web site architecture, in that we have a bunch of 'code' files sitting out on a server, which our client app accesses on demand (not unlike a GET request) and then interprets to provide the user experience. We sell an integrated product made up of a set of modules (most of which are optional); each module has a team of ~4 programmers and 2 QC testers devoted to its development and bug maintenance. There are around 30 modules per product line (and 3 product lines) -- some teams maintain multiple product lines for their given module. In total, we have 700 employees devoted to development and ~200 programmers. Certainly adopting any new processes or systems is going to involve buy-in, etc., but I'm tasked with simply understanding how the code will be managed so that we can speak intelligently about things. My first task was to learn how to adopt version control into our process (we do not yet use formal version control for most of our product lines; we rely instead on keeping multiple copies of the entire codebase).

We follow the waterfall method pretty closely (which sucks, but the company has been producing software since 1969, so it was probably the natural fit back then, and until now there has been no need to change anything). Some folks are intrigued by the various offerings of the agile methodologies, but as @Chad notes below, we're likely going to need to take incremental steps toward any such end goals (though if we adopt an ALM system, it might be the time to force a whole new way of doing things on everyone).

Now for a specific example of our current process: let's say we have a product called HIS, which has an MIS and an ITS module (what these are/could be is not important -- think of them as just collections of files). In our current process, if someone requests a change to ITS, a coder will modify some files (effectively) on the development server. When that change needs to be tested, transitioning the status of the change request from 'development needed' to 'qc needed' triggers our tools to copy the affected files from the development server to the QC server. Now when QC testers sign on to the QC testing environment, they will be able to see the effect of the changes. Once they sign off on the change, advancing the status of the request to 'production ready' causes our tools to copy the code files to the next stage. Once signed off there, a final copy is done to the holding location used for deliveries to customers.
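To make that trigger mechanism concrete, here is a rough sketch (Python, not our actual tooling -- the server root, file names, and the .src extension are made up; the environment folder names match the layout described below) of what the status-change-driven copy effectively does:

import shutil
from pathlib import Path

# Which environments a change request's affected files move between when it
# reaches a given status.
PROMOTIONS = {
    "qc needed":        ("dev6.0", "qc6.0"),
    "production ready": ("qc6.0", "production6.0"),
}

SERVER_ROOT = Path("/srv/his")  # hypothetical root for the runtime servers

def promote(change_request_id, new_status, affected_files):
    """Copy the files touched by a change request to the next stop in the pipeline."""
    source_env, target_env = PROMOTIONS[new_status]
    for relative_path in affected_files:
        source = SERVER_ROOT / source_env / relative_path
        target = SERVER_ROOT / target_env / relative_path
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, target)

# e.g. a change to the ITS module moving from 'development needed' to 'qc needed':
# promote("CR-1234", "qc needed", ["ITS/patient_lookup.src"])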

We manage delivery of our releases by a major release number (e.g. 6.0), followed by a service release designation (e.g. 5) and a 'priority pack' number (e.g. 1) -- customers take delivery of priority packs, e.g. 6.0.5.1. As stated above, we simply push code from place to place on multiple servers, so we end up with a project folder for each major and service release at every stage of the pipeline, plus a final resting place for each of the ppacks, e.g.:

dev6.0
qc6.0
production6.0

dev6.0.5
qc6.0.5
production6.0.5

ship6.0.5.1
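Spelled out as code (just a sketch; the folder names are the ones above), a delivered version like 6.0.5.1 maps onto those folders roughly as follows:

def stage_folders(version):
    # "6.0.5.1" -> major release 6.0, service release 6.0.5, priority pack 6.0.5.1
    major, minor, service, ppack = version.split(".")
    service_release = f"{major}.{minor}.{service}"
    return {
        "dev":        f"dev{service_release}",         # dev6.0.5
        "qc":         f"qc{service_release}",          # qc6.0.5
        "production": f"production{service_release}",  # production6.0.5
        "ship":       f"ship{version}",                # ship6.0.5.1
    }

print(stage_folders("6.0.5.1"))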

I was thinking that under version control, we could simulate this kind of scheme as:

trunk
branches
--6.0.5
tags
--6.0.5.1

but that seems to cover only half of the locations, i.e. where do the qc and production sources live? Would I create a separate trunk for each major and minor release of the product? Or would I rely on something outside of version control (e.g. the ALM system) to track the revision(s) of interest for the testing done at the QC and production stops? We'd still need a runtime environment for each, but that could be 'built' on demand as bug fixes and features are made available -- which reinforces the idea that I need not keep any special 'copy' of these stages under version control (they are simply references to the revision history of the trunk or branch).
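In other words, I imagine the ALM system (or a small script) doing something like the following to populate a QC or production runtime area -- this is only a sketch, assuming Subversion, with a made-up repository URL, branch names, and revision numbers:

import subprocess

REPO = "https://svn.example.com/his"  # hypothetical repository URL

# The ALM system would record which branch and revision each pipeline stop
# is currently meant to be testing/running.
STAGES = {
    "qc":         {"branch": "branches/6.0.5", "revision": "1482"},
    "production": {"branch": "branches/6.0.5", "revision": "1450"},
}

def build_runtime(stage, target_dir):
    """Export the pinned revision for a stage into its runtime directory."""
    pin = STAGES[stage]
    subprocess.run(
        ["svn", "export", "--force", "--revision", pin["revision"],
         f"{REPO}/{pin['branch']}", target_dir],
        check=True,
    )

# build_runtime("qc", "/srv/his/qc6.0.5")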

Another thing that has thrown me for a loop is that a colleague is pretty adamant that you should not persist anything in branches, i.e. the branch folders should be ephemeral and used only during large changes that can't be committed directly to the trunk (with the expectation that the branch will eventually be merged back into the trunk). In his scheme, each minor release would necessitate a separate top-level folder with its own set of trunk, branches and tags, e.g.:

product6.0
trunk
branches
--silly_new_feature
tags
--6.0.5

product6.0.5
trunk
branches
--crazy_bug_fix
tags
--6.0.5.1
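As I understand his scheme, the operations behind it would look something like this (again just a sketch assuming Subversion; the URL and commit messages are made up):

import subprocess

REPO = "https://svn.example.com/his"  # hypothetical repository URL

def svn_copy(source, target, message):
    """Cheap server-side copy, used for both branching and tagging in Subversion."""
    subprocess.run(
        ["svn", "copy", "--parents", "-m", message,
         f"{REPO}/{source}", f"{REPO}/{target}"],
        check=True,
    )

# Open the 6.0.5 service release from the point 6.0 was tagged:
# svn_copy("product6.0/tags/6.0.5", "product6.0.5/trunk", "Start 6.0.5 service release")

# Short-lived working branch, merged back to trunk and then deleted:
# svn_copy("product6.0.5/trunk", "product6.0.5/branches/crazy_bug_fix", "Branch for crazy_bug_fix")

# Tag the priority pack that ships to customers:
# svn_copy("product6.0.5/trunk", "product6.0.5/tags/6.0.5.1", "Tag priority pack 6.0.5.1")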

(sorry for the delay in adding these details -- got pulled away to meetings all day)
