Problem

We have an on-premises BitBucket server with a git repo for an embedded device. We use a multi-branch pipeline in Jenkins to:

  • Build and run tests
  • Build the firmware (and a little supporting PC tool)
  • (In the near future) run hardware-in-the-loop integration tests
  • Upload artifacts to Artifactory (if the branch name meets certain requirements)

Now, for a spin-off project that uses the same hardware, I have created a fork of the repository in BitBucket, though I am starting to doubt its added value (elaboration at the end of this post). I will be the main contributor, just as in the original repo.

To have good CI practices in the spin-off too, I cloned the VM containing the Jenkins instance and pointed its pipeline at the fork. To avoid duplicating the build time of the original project's main branches, I configured some branch name filters. In the fork I created a secondary develop branch (let's call it spinoff-develop) and set it as the default. This lets me integrate spin-off features there while keeping develop clean to receive upstream updates (BitBucket syncs these automatically from the original, and local changes to develop would break that). So far so good: the Jenkins clone now builds any branch not named master or develop.
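For reference, this kind of filtering can be expressed in the multibranch project's "Filter by name (with regular expression)" behavior. A pattern like the following matches every branch except master and develop (assuming Java regex semantics, which is what Jenkins uses, with a negative lookahead anchored to the whole name):

```text
^(?!(master|develop)$).+$
```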

Now I run into the issue that there is some logic in my Jenkinsfile that determines when to upload a build to Artifactory. Of course I can change that logic in the fork, but when I eventually merge useful changes from the fork back into the original project, those Jenkinsfile changes are ones I can't accept there. Should I work around this with Jenkins environment variables? I would rather not: I want to keep as little configuration as possible in Jenkins, because I regard it as "volatile" and it should not take much time to set it up again if anything bad happens to it (there are also company initiatives to automate this with a kind of "infrastructure as code" approach, but I don't use them yet).
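One way to keep the upload gate out of Jenkins configuration entirely is to express it in a single shared Jenkinsfile, keyed on something that already differs between the two repositories, such as the checkout URL. A minimal declarative sketch, assuming the multibranch checkout exposes the remote via `env.GIT_URL` (the repo name and branch patterns here are placeholders):

```groovy
pipeline {
    agent any
    stages {
        stage('Upload to Artifactory') {
            when {
                allOf {
                    // Only the canonical repository publishes artifacts;
                    // the fork skips this stage without any Jenkinsfile change.
                    expression { env.GIT_URL?.endsWith('/original-project.git') }
                    anyOf {
                        branch 'develop'
                        branch pattern: 'release/.*', comparator: 'REGEXP'
                    }
                }
            }
            steps {
                echo 'Uploading artifacts...' // placeholder for the real upload step
            }
        }
    }
}
```

With this shape, the same Jenkinsfile can be merged back and forth between fork and original without carrying repo-specific edits.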

As mentioned above, I doubt the direction I am going with this because of the increased complexity. The reason for the fork is that I am not yet certain whether the spin-off contributions will eventually be merged back into the original project, but I want easy integration of upstream changes into the spin-off. Also, if another team ever takes over the spin-off, it is easy to grant them full access to the fork in BitBucket and hand over the VM.

What are your thoughts about this?

EDIT: I could add a Jenkinsfile-spinoff and have the clone use that. But my main concerns still apply, and is this really a good solution in the end? If there were 100 forks, would there need to be 101 Jenkinsfiles?
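A common way to avoid N nearly identical Jenkinsfiles is a Jenkins shared library: each fork keeps a thin Jenkinsfile that only supplies repo-specific parameters, while the build logic lives in one place. A hedged sketch (the library name `embedded-ci` and the custom step `embeddedPipeline` are hypothetical):

```groovy
// Jenkinsfile in the original repo
@Library('embedded-ci') _
embeddedPipeline(uploadBranches: ['develop', 'release/.*'])

// Jenkinsfile in the spin-off fork
@Library('embedded-ci') _
embeddedPipeline(uploadBranches: [])   // never upload from the fork
```

With 100 forks there would still be 101 Jenkinsfiles, but each is a two-line wrapper, and the shared logic is maintained once.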


Solution

The best solution seems to be something I suggested some time ago (but which was turned down by colleagues at the time): split the repository into a common (or libs) part and an application part. The spin-off project can reuse the common repository (as a fork) and have its own application repository with its own build.

This also has the advantage of giving the PC tool its own repository (the initial reason I wanted to do this).
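For illustration, the split could be wired together with git submodules, so each application repo pins an exact revision of the shared code. A minimal sketch with placeholder paths (a subtree, or a versioned package fetched from Artifactory, would work just as well):

```shell
set -e
base=$(mktemp -d)

# Stand-in for the shared "common" repository (firmware libs, board support, ...)
git init -q "$base/common"
git -C "$base/common" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "common: initial commit"

# Stand-in for the spin-off's application repository
git init -q "$base/spinoff-app"
# protocol.file.allow is only needed because this demo clones from a local path
git -C "$base/spinoff-app" -c protocol.file.allow=always \
    submodule add "$base/common" common
cat "$base/spinoff-app/.gitmodules"
```

Whether submodules, subtrees, or a package feed fits best depends on how often the common part changes; the key point is that each application repository owns its own Jenkinsfile and its own upload policy.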

License: CC-BY-SA with attribution
Not affiliated with softwareengineering.stackexchange