Question

For most of my development work with Visual C++, I use partial builds, e.g. press F7 and only changed C++ files and their dependencies get rebuilt, followed by an incremental link. Before passing a version on to testing, I take the precaution of doing a full rebuild, which takes about 45 minutes on my current project. I have seen many posts and articles advocating this action, but I wonder whether it is necessary, and if so, why? Does it affect the delivered EXE or the associated PDB (which we also use in testing)? Would the software function any differently from a testing perspective?

For release builds, I'm using VS2005, incremental compilation and linking, precompiled headers.

Solution

Hasn't everyone come across this usage pattern? I get weird build errors; before even investigating, I do a full rebuild, and the problem goes away.

This by itself seems to me to be good enough reason to do a full rebuild before a release.

Whether you would be willing to hand an incremental build that completed without problems over to testing is, I think, a matter of taste.

OTHER TIPS

The partial build system works by comparing the timestamps of source files against those of the build products. So it can break if you, for example, restore an earlier file from source control: the restored file has a modified date earlier than the build product, so the product wouldn't be rebuilt (a sketch of this rule follows below). To protect against such errors, you should do a complete build whenever it is a final build. While you are developing, though, incremental builds are of course much more efficient.
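
To make that failure mode concrete, here is a minimal sketch of the timestamp rule. It is written in modern C++ purely for illustration (VS2005 itself predates std::filesystem), and the file names are hypothetical:

```cpp
// Sketch of timestamp-based dependency checking, the rule incremental
// builds rely on. File names are hypothetical; std::filesystem is C++17.
#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;

// A target is treated as up to date if it is newer than its source.
// Restoring an older file from source control violates this assumption:
// the source's timestamp can now predate the stale object file.
bool needs_rebuild(const fs::path& source, const fs::path& object) {
    if (!fs::exists(object)) return true;  // never built at all
    return fs::last_write_time(source) > fs::last_write_time(object);
}

int main() {
    if (needs_rebuild("widget.cpp", "Release/widget.obj"))
        std::cout << "widget.cpp would be recompiled\n";
    else
        std::cout << "widget.obj is considered up to date, rightly or not\n";
}
```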

Edit: And of course, doing a full rebuild also shields you from possible bugs in the incremental build system.

The basic problem is that compilation depends on the environment (command-line flags, the libraries available, and probably some black magic), so two compilations will only produce the same result if they are performed under the same conditions. For testing and deployment, you want the environments to be as controlled as possible, so that you aren't getting wacky behaviours from stale or mismatched object code. A good example: if you update a system library and then recompile half the files, half are still trying to use the old code and half are not. In a perfect world, this would either error out right away or not cause any problems, but sadly, sometimes neither of those happens. As a result, doing a complete recompilation avoids a lot of the problems associated with a staggered build process.

I would definitely recommend it. On a number of occasions with a large Visual C++ solution, I have seen the dependency checker fail to pick up some dependency on changed code. When that change is to a header file that affects the size of an object, very strange things can start to happen (see the sketch below). I am sure the dependency checker has got better in VS 2008, but I still wouldn't trust it for a release build.
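
As a self-contained illustration of that failure mode (all names hypothetical), the two structs below stand in for the two layouts of one "Widget" type that a stale and a freshly compiled translation unit would each believe in:

```cpp
// Hypothetical sketch: two layouts of the "same" class, as seen by a
// stale .obj (WidgetV1) and a freshly compiled one (WidgetV2).
#include <iostream>

struct WidgetV1 {       // layout baked into the stale object file
    int id;
};

struct WidgetV2 {       // layout after the header was edited
    int id;
    double weight;      // new member changes the object's size
};

int main() {
    std::cout << "stale TU thinks sizeof(Widget) == " << sizeof(WidgetV1) << '\n'
              << "fresh TU thinks sizeof(Widget) == " << sizeof(WidgetV2) << '\n';
    // In a real mixed build, both files name the type 'Widget': one
    // allocates the small layout, the other writes past its end. The
    // link succeeds, and the corruption only surfaces at run time.
}
```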

The biggest reason not to ship an incrementally linked binary is that some optimizations are disabled. The linker will leave padding between functions (to make it easier to replace them on the next incremental link). This adds some bloat to the binary. There may be extra jumps as well, which changes the memory access pattern and can cause extra paging and/or cache misses. Older versions of functions may continue to reside in the executable even though they are never called. This also leads to binary bloat and slower performance. And you certainly can't use link-time code generation with incremental linking, so you miss out on more optimizations.
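
For reference, these are the standard MSVC switches involved; the file names are placeholders, and the exact optimization options should of course come from your own project settings:

```bat
rem Release-style compile and link (file names hypothetical).
rem /GL requests whole-program optimization; /LTCG performs it at link time,
rem and /INCREMENTAL:NO avoids the padding and jump thunks described above.
cl /c /O2 /GL app.cpp
link /INCREMENTAL:NO /LTCG /OUT:app.exe app.obj
```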

If you're giving a debug build to a tester, then it probably isn't a big deal. But your release candidates should be built from scratch in release mode, preferably on a dedicated build machine with a controlled environment.

Visual Studio has some problems with partial (incremental) builds (I have mostly encountered linking errors), so from time to time it is very useful to do a full rebuild.

If long compilation times are the concern, there are two solutions:

  1. Use a parallel compilation tool and take advantage of your (presumably) multi-core hardware.
  2. Use a build machine. What I use most is a separate build machine, with a CruiseControl setup, that performs full rebuilds from time to time. The "official" release that I provide to the testing team, and eventually to the customer, is always taken from the build machine, not from the developer's environment (see the sketch after this list).
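
The build-machine step itself can be as simple as a one-line script around VS2005's command-line front end; the solution name and configuration below are placeholders:

```bat
rem Hypothetical build-machine step: always a clean, full rebuild of the
rem release configuration, never an incremental build.
"%VS80COMNTOOLS%..\IDE\devenv.com" MyProduct.sln /Rebuild "Release|Win32"
```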