Unfortunately the Delphi compiler and its dependencies are closed source, so this question calls for some speculation. However, if you want to stop the Indy units from recompiling, you can place only the DCU files in the library path and remove the Indy source directories from both your search path (project level) and your library path (global IDE level) configuration. And if you really are compiling source code across mapped network drives, then I think you're crazy: check out Mercurial, which has this cool feature where you can clone a repository and keep a complete copy of all your source code on the same computer you are building from.
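As a concrete sketch of that setup (the paths and repository URL here are invented for illustration), assuming the precompiled Indy DCUs sit in a folder that contains no .pas files:

    rem Compile against precompiled Indy DCUs only; -U is dcc32's unit search path.
    dcc32 -U"C:\Indy\DCU" MyProject.dpr

    rem And rather than building over a mapped drive, clone the repository to
    rem local disk first and build from there.
    hg clone https://example.com/repos/myproject C:\src\myproject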
Anyways, the compiler... What I have observed so far is that:
What looks like a recompilation to you (and to me) may in fact be only a scan of the interface section, and that scan is how the compiler builds its tree of dependencies. Unlike a C (makefile) or Java (Ant or Maven) style build environment, setting the compiler loose on at least the interface sections of your code is the only way to discover all the dependencies of a particular module. I suspect that even if you remove the Indy units from your search path, you may still see the compiler progress show a "compile" of IdIOHandler.pas, but in that case what it is probably doing is loading a compiled representation of the interface section of your Pascal unit from the DCU file, and then deciding which other files to read.
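The two uses clauses are what drive that scan. A minimal sketch (UnitA and UnitB are invented names; only IdIOHandler comes from the question):

    unit UnitA;

    interface

    uses
      // The compiler must load the interface of every unit named here,
      // from .pas source or from a .dcu, before it can finish UnitA's own
      // interface -- this is the "scan" that shows up as a compile.
      IdIOHandler;

    implementation

    uses
      // Units named here are only needed once the compiler reaches the
      // implementation section, which is also why circular references are
      // legal here but forbidden in the interface uses clause above.
      UnitB;

    end.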
Sometimes the IDE progress will show "compiling" on units and yet never modify the DCU file at the end.

When fighting a large battle to refactor or fix a large project after some breaking change, I have observed the following:

a. It is very difficult, if not impossible, to guess where the next syntax error will break the compilation.

b. The compiler breaking on a fatal error in unit A, then halfway through unit B, then in unit A again, then in unit C, then in unit B again, suggests that the compiler is nothing like the mental model you would build from a makefile-driven C compiler (first we compile unit A, then unit B, then unit D).

All the evidence I have suggests that the real process is far more convoluted than that. Inside what someone on the compiler team would call a "single pass" over file A, there are "scans" of other files' contents, or of other files' in-memory representations (rather than a parse of the physical file), so "compilation" is not an all-or-nothing, all-at-once affair but a big in-memory free-for-all, and the compiler's order of unit evaluation is driven by some in-memory tree built from the interface, and then the implementation, clauses of each unit. Predicting the behaviour of this large cyclic graph in a real-world application is beyond my skill.
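To make that last point concrete, here is a speculative toy model of the behaviour I am describing, not the real compiler: the unit names and their uses lists are invented, and the point is only that a depth-first walk of interface clauses, with implementation clauses visited afterwards, naturally produces excursions into other units in the middle of "compiling" the first one.

    program InterfaceScanSketch;

    {$APPTYPE CONSOLE}

    uses
      System.SysUtils, System.Generics.Collections;

    var
      IntfUses, ImplUses: TDictionary<string, TArray<string>>;
      Seen: TList<string>;

    procedure Scan(const UnitName: string);
    var
      Dep: string;
    begin
      if Seen.Contains(UnitName) then
        Exit; // interface already in memory, nothing to re-read
      Seen.Add(UnitName);
      Writeln('compiling ', UnitName, ' ...'); // what the IDE progress shows
      // Interface dependencies must be resolved before this unit's own
      // interface is complete, hence the recursive excursion into other units.
      for Dep in IntfUses[UnitName] do
        Scan(Dep);
      // Implementation dependencies are picked up afterwards, which is why
      // the compiler appears to come back to a unit it already visited.
      for Dep in ImplUses[UnitName] do
        Scan(Dep);
    end;

    begin
      IntfUses := TDictionary<string, TArray<string>>.Create;
      ImplUses := TDictionary<string, TArray<string>>.Create;
      Seen := TList<string>.Create;
      try
        // Invented dependency graph, including a legal implementation-level cycle.
        IntfUses.Add('UnitA', TArray<string>.Create('UnitB'));
        ImplUses.Add('UnitA', TArray<string>.Create('UnitC'));
        IntfUses.Add('UnitB', nil);
        ImplUses.Add('UnitB', TArray<string>.Create('UnitA'));
        IntfUses.Add('UnitC', nil);
        ImplUses.Add('UnitC', nil);
        Scan('UnitA');
      finally
        Seen.Free;
        ImplUses.Free;
        IntfUses.Free;
      end;
    end.

Running it prints "compiling" lines for UnitA, UnitB, and UnitC even though only UnitA was asked for, and the implementation-level cycle back into UnitA is simply skipped because its interface is already in memory, which at least rhymes with the jumpy progress the IDE shows.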