If you want to get the most out of this feature, you need to understand how your projects can be structured to make good use of it. The best way is the slow, hard process of manually reducing build times. It sounds really stupid at first, but if all builds going forward are five times faster and you know how to structure your projects and dependencies from then on -- then you realize the payoff.
You can set up a continuous integration system for your targets to measure and record your progress/improvements as your changes come in.
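As a minimal sketch of such a CI step: time the full build and append the result to a log, one row per commit, so build-time regressions become visible. The build command and log file name here are placeholders for your own setup.

```shell
# Hypothetical CI step: record (commit, seconds) for each full build.
# Replace ':' with your real build command, e.g. "make -j8".
BUILD_CMD="${BUILD_CMD:-:}"
LOG="${LOG:-build_times.csv}"

start=$(date +%s)
sh -c "$BUILD_CMD"
end=$(date +%s)

# Tag the measurement with the current commit (or "unknown" outside a repo).
commit=$(git rev-parse --short HEAD 2>/dev/null || echo unknown)
echo "$commit,$((end - start))" >> "$LOG"
```

Plotting that CSV over time is usually enough to tell whether a restructuring actually helped.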
I have a huge project, roughly 150,000 LOC of C++. Build time is about 15 minutes. The project consists of many sub-projects of varying sizes.
Sounds like it's doing a lot of redundant work, assuming you have a modern machine.
Also consider link times.
My precompiled headers are not very large; each file is around 50 MB.
That's pretty big, IMO.
I'm not talking about dependency analysis or anything advanced.
Again, use continuous integration for stats. For a build that slow, excessive dependencies are very likely the issue (unless you have a great many small .cpp files, or something silly like physical-memory exhaustion is occurring).
I was unable to make any sub-project build notably faster; the maximum speedup I got was 20%.
Understand your structures and dependencies. PCHs slow down most of my projects.
It seems easier and cheaper to buy a faster machine with a solid-state drive than to optimize build times on Linux with GCC.
Chances are that machine will not make your builds 20x faster, but fixing up your dependencies and project structure can (or whatever the root of the problem ultimately turns out to be). The machine helps only so much, considering the build time for 150 KSLOC.
Your build is probably CPU/memory bound.