#include all .cpp files into a single compilation unit?

Tags: C++, Visual Studio, Build

C++ Problem Overview


I recently had cause to work with some Visual Studio C++ projects with the usual Debug and Release configurations, but also 'Release All' and 'Debug All', which I had never seen before.

It turns out the author of the projects has a single ALL.cpp which #includes all the other .cpp files. The *All configurations just build this one ALL.cpp file; it is, of course, excluded from the regular configurations, which don't build ALL.cpp.
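For context, such an ALL.cpp is usually nothing more than a list of #include directives for the project's own source files; a minimal sketch (the file names here are hypothetical):

```cpp
// ALL.cpp - the only file compiled by the 'Debug All' / 'Release All'
// configurations; the individual .cpp files below are excluded from them.
// File names are hypothetical examples.
#include "Widget.cpp"
#include "Renderer.cpp"
#include "Network.cpp"
#include "Utils.cpp"
```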

I just wondered if this was a common practice? What benefits does it bring? (My first reaction was that it smelled bad.)

What kinds of pitfalls are you likely to encounter with this? One I can think of: if you have anonymous namespaces in your .cpp files, they're no longer 'private' to that file but are now visible in the other .cpp files as well?

All the projects build DLLs, so having data in anonymous namespaces wouldn't be a good idea, right? But functions would be OK?
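For illustration, a minimal sketch of that pitfall (file names are hypothetical): each file compiles fine on its own, but once both are #included into ALL.cpp their unnamed namespaces merge into the single unnamed namespace of that one translation unit, and duplicate definitions collide.

```cpp
// Foo.cpp
namespace {
    int counter = 0;            // internal to Foo.cpp in a normal build
    void log(const char*) {}
}

// Bar.cpp
namespace {
    int counter = 0;            // fine in a normal build...
    void log(const char*) {}    // ...but a redefinition error in ALL.cpp,
}                               // which #includes both files
```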

C++ Solutions


Solution 1 - C++

It's referred to by some (and google-able) as a "Unity Build". It links insanely fast and compiles reasonably quickly as well. It's great for builds you don't need to iterate on, like a release build from a central server, but it isn't necessarily for incremental building.

And it's a PITA to maintain.

EDIT: here's the first google link for more info: http://buffered.io/posts/the-magic-of-unity-builds/

The thing that makes it fast is that the compiler only needs to read everything in once, compile it, then link, rather than doing that for every .cpp file.

Bruce Dawson has a much better write up about this on his blog: http://randomascii.wordpress.com/2014/03/22/make-vc-compiles-fast-through-parallel-compilation/

Solution 2 - C++

Unity builds improve build speeds for three main reasons. The first reason is that all of the shared header files only need to be parsed once. Many C++ projects have a lot of header files that are included by most or all .cpp files, and the redundant parsing of these is the main cost of compilation, especially if you have many short source files. Precompiled header files can help with this cost, but usually there are a lot of header files which are not precompiled.
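As a rough sketch of the precompiled-header approach being contrasted here (header and file names are hypothetical): one header pulls in the expensive, rarely changing includes, every .cpp includes it first, and the compiler parses it once and reuses the result. With MSVC this is typically wired up with the /Yc (create) and /Yu (use) options.

```cpp
// pch.h - hypothetical precompiled header: expensive, rarely-changing includes
#pragma once
#include <windows.h>   // platform headers are usually the biggest win
#include <map>
#include <string>
#include <vector>

// Widget.cpp - includes the precompiled header first, then its own headers
#include "pch.h"
#include "Widget.h"
```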

The next main reason that unity builds improve build speeds is that the compiler is invoked fewer times. There is some startup cost associated with each invocation of the compiler.

Finally, the reduction in redundant header parsing means a reduction in redundant code-gen for inlined functions, so the total size of object files is smaller, which makes linking faster.

Unity builds can also give better code-gen.
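As a sketch of why code-gen can improve (file names are hypothetical): when two .cpp files end up in the same translation unit, the compiler can see a function's body at its call sites in the other file and inline it without needing link-time optimization.

```cpp
// math_utils.cpp - defines a small function that is not inline in any header
int clamp01(int v) { return v < 0 ? 0 : (v > 1 ? 1 : v); }

// renderer.cpp - in a normal build the compiler only sees the declaration
// 'int clamp01(int);' and must emit a call; in a unity build both files are
// one translation unit, so clamp01 can be inlined here directly.
int shade(int v) { return clamp01(v) * 255; }
```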

Unity builds are NOT faster because of reduced disk I/O. I have profiled many builds with xperf and I know what I'm talking about. If you have sufficient memory then the OS disk cache will avoid the redundant I/O - subsequent reads of a header will come from the OS disk cache. If you don't have enough memory then unity builds could even make build times worse by causing the compiler's memory footprint to exceed available memory and get paged out.

Disk I/O is expensive, which is why all operating systems aggressively cache data in order to avoid redundant disk I/O.

Solution 3 - C++

I wonder if that ALL.cpp is attempting to put the entire project within a single compilation unit, to improve the compiler's ability to optimize the program for size?

Normally some optimizations, such as removal of duplicate code and inlining, are only performed within a single compilation unit.

That said, I seem to remember that recent compilers (Microsoft's and Intel's, but I don't think this includes GCC) can do this optimization across multiple compilation units, so I suspect that this 'trick' is unnecessary.
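For reference, the cross-compilation-unit optimization mentioned here is what today's toolchains call whole-program or link-time optimization, and it is available in MSVC, GCC, and Clang alike; a rough sketch of the usual switches (file names are hypothetical):

```
# MSVC: whole-program optimization plus link-time code generation
cl /GL main.cpp util.cpp /link /LTCG

# GCC / Clang: link-time optimization
g++     -O2 -flto main.cpp util.cpp -o app
clang++ -O2 -flto main.cpp util.cpp -o app
```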

Still, it would be interesting to see whether there is indeed any difference.

Solution 4 - C++

I agree with Bruce; from my experience, I tried implementing a unity build for one of my .dll projects, which had a ton of header includes and lots of .cpp files, in order to bring down the overall compilation time in VS2010 (I had already exhausted the incremental build options). But rather than cutting down the compilation time, I ran out of memory and the build wasn't even able to finish compiling.

To add to that, I did find that enabling the Multi-Processor Compilation option (the /MP compiler switch) in Visual Studio helps quite a bit in cutting down compilation time; I am not sure whether such an option is available in compilers on other platforms.

Solution 5 - C++

In addition to Bruce Dawson's excellent answer, the following link brings more insight into the pros and cons of unity builds: https://github.com/onqtam/ucm#unity-builds

Solution 6 - C++

We have a multi-platform project for macOS and Windows, with lots of large templates and many headers to include. We halved build time with Visual Studio 2017 (MSVC) using precompiled headers. For macOS we spent several days trying to reduce build time, but precompiled headers only brought it down to 85%. Using a single .cpp file was the breakthrough: we simply generate this ALL.cpp with a script, and it contains a list of #includes of all the .cpp files in our project. That reduced build time to 30% with Xcode 9.0. The only drawback was that we had to give names to all the previously unnamed (anonymous) namespaces in the .cpp files; otherwise the local names would clash.
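A minimal sketch of that workaround (file and namespace names are hypothetical): each file's unnamed namespace is replaced by a uniquely named one so that the combined ALL.cpp compiles. Note that, unlike an unnamed namespace, a named one no longer gives internal linkage; it only keeps the names from clashing.

```cpp
// Parser.cpp
// Before: an unnamed namespace - clashes with other files' unnamed
// namespaces once everything is #included into ALL.cpp.
//   namespace { int state = 0; void reset() { state = 0; } }

// After: a per-file named namespace avoids the clash.
namespace parser_detail {
    int state = 0;
    void reset() { state = 0; }
}
```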

Attributions

All content on this page is sourced from the original question and answers on Stack Overflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type     | Original Author | Original Content on Stackoverflow
Question         | Steve Folly     | View Question on Stackoverflow
Solution 1 - C++ | MSN             | View Answer on Stackoverflow
Solution 2 - C++ | Bruce Dawson    | View Answer on Stackoverflow
Solution 3 - C++ | Arafangion      | View Answer on Stackoverflow
Solution 4 - C++ | spforay         | View Answer on Stackoverflow
Solution 5 - C++ | onqtam          | View Answer on Stackoverflow
Solution 6 - C++ | Stephanie       | View Answer on Stackoverflow