Problem

While setting up a continuous integration server, I'm unsure of the best approach for organizing the build across multiple jobs. Is it best to set everything up as one big job, or to create smaller, dependent jobs?


Solution

You definitely want to break the tasks up. Here is a good example of a CruiseControl.NET configuration that has different targets (tasks) for each step. It also uses a common.build file which can be shared among projects with little customization.

http://code.google.com/p/dot-net-reference-app/source/browse/#svn/trunk

Other tips

I use TeamCity with a NAnt build script. TeamCity makes it easy to set up the CI server side, and the NAnt build script makes it easy to perform a number of tasks around report generation.
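For illustration, a minimal NAnt script along those lines might look like the following, with one target per step chained via `depends` (the project name, solution file, and paths are invented):

```xml
<?xml version="1.0"?>
<!-- Hypothetical NAnt script: each build step is its own target,
     chained together with "depends" so CI can call any entry point. -->
<project name="MyApp" default="report" basedir=".">
  <property name="build.dir" value="build"/>

  <target name="clean" description="Remove previous build output">
    <delete dir="${build.dir}" failonerror="false"/>
    <mkdir dir="${build.dir}"/>
  </target>

  <target name="compile" depends="clean">
    <!-- NAnt's core <solution> task handles older VS solution formats;
         swap in NAntContrib's <msbuild> task for newer ones -->
    <solution solutionfile="MyApp.sln" configuration="Release"/>
  </target>

  <target name="test" depends="compile">
    <nunit2>
      <formatter type="Xml" usefile="true" extension=".xml"
                 outputdir="${build.dir}/reports"/>
      <test assemblyname="${build.dir}/MyApp.Tests.dll"/>
    </nunit2>
  </target>

  <target name="report" depends="test">
    <echo message="Reports written to ${build.dir}/reports"/>
  </target>
</project>
```

Because each step is a separate target, the CI server can run `report` for a full build or just `compile` for a quick sanity check.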

I have a write-up on how we use CI with CruiseControl.NET; the comments include a NAnt build script that can be reused across projects.

CruiseControl과의 지속적인 통합

The approach I favour is the following setup (assuming a .NET project):

  • CruiseControl.NET.
  • NAnt tasks for each individual step; NAnt.Contrib for alternative CC templates.
  • NUnit to run unit tests.
  • NCover to perform code coverage.
  • FxCop for static analysis reports.
  • Subversion for source control.
  • CCTray or similar on all dev boxes to get notification of builds and failures etc.
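A minimal ccnet.config sketch wiring these pieces together might look like this (the repository URL, paths, and target name are illustrative):

```xml
<!-- Hypothetical ccnet.config fragment: CruiseControl.NET polls Subversion,
     then delegates the real work to a NAnt script. -->
<cruisecontrol>
  <project name="MyApp">
    <sourcecontrol type="svn">
      <trunkUrl>http://svn.example.com/myapp/trunk</trunkUrl>
      <workingDirectory>C:\builds\myapp</workingDirectory>
    </sourcecontrol>
    <triggers>
      <intervalTrigger seconds="60"/>
    </triggers>
    <tasks>
      <nant>
        <buildFile>myapp.build</buildFile>
        <targetList>
          <target>ci</target>
        </targetList>
      </nant>
    </tasks>
    <publishers>
      <xmllogger/>
    </publishers>
  </project>
</cruisecontrol>
```

Keeping CruiseControl.NET as a thin trigger-and-report layer, with NAnt owning the steps, is what lets the same build script run on dev boxes and the server alike.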

On many projects you find that there are different levels of tests and activities which take place when someone does a checkin. Sometimes these grow to the point where it takes a long time after a checkin before a dev can see whether they have broken the build.

What I do in these cases is create three builds (or maybe two):

  • A CI build is triggered by checkin and does a clean SVN Get, Build and runs lightweight tests. Ideally you can keep this down to minutes or less.
  • A more comprehensive build, perhaps hourly (if there are changes), which does the same as the CI build but runs more comprehensive and time-consuming tests.
  • An overnight build which does everything and also runs code coverage and static analysis of the assemblies and runs any deployment steps to build daily MSI packages etc.
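In CruiseControl.NET terms, those three tiers map naturally onto different trigger blocks; a sketch (the times and conditions are illustrative):

```xml
<!-- Hypothetical trigger setup for the three build tiers. -->

<!-- CI build: polls source control every minute, builds only on changes -->
<triggers>
  <intervalTrigger seconds="60" buildCondition="IfModificationExists"/>
</triggers>

<!-- Comprehensive build: hourly, again only if there were changes -->
<triggers>
  <intervalTrigger seconds="3600" buildCondition="IfModificationExists"/>
</triggers>

<!-- Overnight build: always runs, so coverage, static analysis and
     MSI packaging happen every day regardless of checkin activity -->
<triggers>
  <scheduleTrigger time="02:00" buildCondition="ForceBuild"/>
</triggers>
```

Each tier would live in its own `<project>` entry, typically calling a different top-level target in the shared build script.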

The key thing about any CI system is that it needs to be organic and constantly tweaked. There are some great extensions to CruiseControl.NET which log and chart build timings for each step, letting you do historical analysis and continuously tweak the builds to keep them snappy. Managers find it hard to accept that a build box will probably keep you busy for a fifth of your working time just to stop it grinding to a halt.

We use buildbot, with the build broken down into discrete steps. There is a balance to be found between breaking build steps down with enough granularity and keeping each step a complete unit.

For example at my current position, we build the sub-pieces for each of our platforms (Mac, Linux, Windows) on their respective platforms. We then have a single step (with a few sub steps) that compiles them into the final version that will end up in the final distributions.

If something goes wrong in any of those steps it is pretty easy to diagnose.

My advice is to write the steps out on a whiteboard in as vague terms as you can and then base your steps on that. In my case that would be:

  1. Build Plugin Pieces
    1. Compile for Mac
    2. Compile for PC
    3. Compile for Linux
  2. Make final Plugins
  3. Run Plugin tests
  4. Build intermediate IDE (we have to bootstrap the build)
  5. Build final IDE
  5. Build final IDE
  6. Run IDE tests

I would definitely break down the jobs. Chances are you're likely to make changes in the builds, and it'll be easier to track down issues if you have smaller tasks instead of searching through one monolithic build.

You should be able to create one big job from the smaller pieces anyway.

G'day,

As you're talking about integration testing, my big (obvious) tip would be to make the test server's build and configuration match the deployment environment as closely as possible.

</thebloodyobvious> (-:

cheers, Rob

Break your tasks up into discrete goal/operations, then use a higher-level script to tie them all together appropriately.

This makes your build process easier to understand for other people (you're documenting as you go so anyone on your team can pick it up, right?), as well as increasing the potential for re-use. It's likely you won't reuse the high-level scripts (although this could be possible if you have similar projects), but you can definitely reuse (even if it's copy/paste) the discrete operations rather easily.

Consider the example of getting the latest source from your repository. You'll want to group the tasks/operations for retrieving the code with some logging statements and reference the appropriate account information. This is the sort of thing that's very easy to reuse from one project to the next.
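As a sketch, a reusable NAnt target for that "get latest" operation might look like the following (the property names are invented, and the `<svn-update>` task comes from NAntContrib):

```xml
<!-- Hypothetical reusable fragment: fetch latest source with logging.
     Account details come in as properties so each project can override them. -->
<target name="get-latest" description="Update working copy from Subversion">
  <echo message="Updating ${svn.url} into ${src.dir}"/>
  <svn-update destination="${src.dir}" uri="${svn.url}"
              username="${svn.user}" password="${svn.password}"/>
  <echo message="Update complete at ${datetime::now()}"/>
</target>
```

A project-specific script then just sets the properties and lists `get-latest` in its dependency chain, which is exactly the copy/paste-level reuse described above.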

For my team's environment, we use NAnt since it provides a common scripting environment between dev machines (where we write/debug the scripts) and the CI server (since we just execute the same scripts in a clean environment). We use Jenkins to manage our builds, but at their core each project is just calling into the same NAnt scripts and then we manipulate the results (ie, archive the build output, flag failing tests etc).

License: CC-BY-SA with attribution
Not affiliated with StackOverflow