If anyone wants to do non-recursive make for big projects, I would take a look at what the Android platform does. They have this elaborate Android.mk makefile fragment system.
At the beginning of the build you go and find all the fragments, then process/concatenate them into one huge makefile, and do a single make invocation. It results in a very parallelizable build.
It seems kinda crazy and hacky but it definitely works better than recursive make.
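To make the idea concrete, here's a minimal sketch in plain GNU Make of the "gather all the fragments, then run one make" approach. This is nothing like Android's actual build machinery; the fragment name build.mk and the variable names are purely illustrative.

    # Hypothetical top-level Makefile: find every per-directory fragment,
    # include them all, and let one make process see the whole graph.
    # (Recipe lines below start with a hard tab, as make requires.)
    .DEFAULT_GOAL := all

    # Each subdirectory ships a build.mk that only appends to variables,
    # e.g.  SRCS += net/socket.c
    FRAGMENTS := $(shell find . -name build.mk)
    include $(FRAGMENTS)

    OBJS := $(SRCS:.c=.o)

    all: app
    app: $(OBJS)
    	$(CC) $(LDFLAGS) -o $@ $^

    %.o: %.c
    	$(CC) $(CFLAGS) -c -o $@ $<

    clean:
    	rm -f app $(OBJS)

The trade-off is the up-front cost of finding and parsing every fragment, which is the startup cost mentioned a couple of comments down.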
For some reason the best docs seem to be on this Korean site:
Yeah definitely if you were starting from scratch, I would consider other build systems.
But if you already have a huge mess of recursive make, it might be easier to port it to a non-recursive Android-like system than to completely rewrite it with gyp/gn/cmake + ninja.
It would speed up your full build times; not sure what it would do to incremental build times. It is true that the Android.mk thing has a large startup cost. Then again, you're building a whole OS with one make invocation, which is kinda cool.
I really hope someone takes the job offer, fixes the problem however possible, and then publicises the process.
This is what free software is supposed to be about: you can hire anyone to fix it. Let's put theory to practice now and actually hire someone to do it.
Edit: You know what, I'm now determined to headhunt this person. It's the principle of the thing. Once I get the original poster's permission, I am reposting this ad on freedomsponsors.org, bountysource.com, and on the GNU job list: http://www.fsf.org/resources/jobs
> What we see is that if we have more than 30 or so clients connected to the build Make is often unable to generate tasks fast enough to keep them busy.

[…]

> Our management doesn't want us taking time to learn the internals of make and see if we can improve it ourselves, but they are willing to throw money at the problem, either by offering some sort of donation/bounty for performance improvements or directly hiring an expert on a contract basis; any improvements achieved would of course be submitted back to the community as patches.
I think that avoiding complexity in the build system is the priority rather than avoiding a product that has cruft in it. Make can be simple, elegant and productive even on large projects if you think before you do something.
For example, SCons and CMake suffer from just as many problems; they simply manifest themselves differently.
This comes from many years of experience of watching people hang themselves with make, then hang themselves with the next best thing, then hang themselves with another thing, then say "all build systems are shit" and write their own which is inferior to make to start with. The problem is always the product, not the build tool.
All the useful features of make can be learned in a couple of hours.
For reference: I know I'm going to get shot down about "cross platform builds" and the like in reply to this comment. You're right. Use nmake on Windows and make on *nix platforms.
Writing non-trivial functionality in cmake's language is possible but you're fighting the language the entire way. It would have been so much better if they'd just started with something like Lua and extended it instead of building their own barely-adequate scripting language. Some of the grossest code I've ever written is in cmake macros.
On the other hand, the results you get from cmake are fantastic. Want to use ninja instead of make? Just pass a different generator flag. Need Visual Studio project files? No problem. Xcode? KDevelop? You're covered.
Honestly I couldn't imagine ever going back to autotools at this point.
Having worked with CMake, it seems more expressive than Make (meaning you don't have to contort it to make it do what you want), but it seems quite complex in the sense that I don't have a clear picture of its semantics, i.e. what's going on under the hood.
cmake is a replacement for the autotools suite, not make itself. You're right that it can be tricky to use.
One advantage cmake has is that you can tell it to emit files for ninja instead of traditional Makefiles. ninja is very well optimized for the problem of parallel building.
If they're building C/C++ with it then this is a good option: https://github.com/Zomojo/Cake - which is really just a front end to make, but produces really fast makefiles.
It was initially developed to allow interactive-speed iteration when compiling large C/C++ projects for data analysis.
It does require that your source code comply with a few basic rules (just that c/cpp files and headers have the same name). But beyond that it runs automatically and quickly.
You might actually be right. Even though I haven't used ninja, given their description, a specialized build system designed for fast builds, one that sidesteps the problems they say are slowing them down, might be the solution.
But aren't they giving way too little information to seriously assess their situation? That, combined with the shortness of your comment and the fact that people could think it's a joke, would explain your downvotes.
GNU Make's architecture is way too complicated. The very idea behind Ninja is to have the simplest possible representation which allows fast dependency analysis, leaving all the complexity to the front-ends. It is still possible, of course, to implement a compiler from GNU Make to Ninja (or any similar low-level representation), but in most cases it would be more practical to simply switch to something like CMake, which already translates to Ninja.
I do think it would be interesting to implement a Make -> Ninja compiler; the devil is in extracting all the correct dependencies.
But... it sounds like the OP isn't really concerned about CPU efficiency (CPU is cheap compared to all the other resources they're considering). I wonder if it would therefore be possible to convert a _single_ make execution to ninja (perhaps by instrumenting make). Then, each build would entail running the converted ninja file, while simultaneously re-running the make->ninja conversion. If the make->ninja conversion produces a different execution plan, throw away the ninja build and start again.
This is based on my guess that the build execution changes comparatively rarely: new files or header includes are much rarer than 'simple' code changes.
Edit: I think ninja is also smart enough to know when an artifact has been built using a different command / dependency, so even the 'cache miss' case where we restart the ninja build should be fast.
The craziest thing I did was realize that the rules were being evaluated recursively, and calling "info" (i.e. "print") was just as easy as calling "eval".
30 minutes later, I had:
$ make bsd.mak
It would spit out a 6000 line BSD-style Makefile. Just variable declarations (organized and commented!) along with rules.
It's probably not hard to change the framework to spit out ninja files. It's just text.
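If anyone wants to see the shape of that trick, here's a toy example (not the actual boilermake code, just a hypothetical illustration): the templates expand to ordinary make syntax either way, so swapping $(eval ...) for $(info ...) turns the framework into a makefile generator.

    # Toy sketch: a template that expands to plain make rules for one program.
    # Assumes hello.c and world.c exist; DUMP is a made-up switch.
    PROGRAMS := hello world

    all: $(PROGRAMS)

    define PROGRAM_template
    $(1): $(1).o
    	$$(CC) $$(LDFLAGS) -o $$@ $$^
    endef

    ifeq ($(DUMP),yes)
    # Print the generated rules as text instead of evaluating them.
    $(foreach p,$(PROGRAMS),$(info $(call PROGRAM_template,$(p))))
    else
    # Normal operation: feed the generated rules back into make.
    $(foreach p,$(PROGRAMS),$(eval $(call PROGRAM_template,$(p))))
    endif

Running "make DUMP=yes" prints the expanded rules at parse time; capture that output and you have a flat makefile, or with a little reformatting, a ninja file.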
In such a scenario, ninja (or the like) would be to make what ccache is to cc. But the biggest single performance improvement of ninja comes from the fact that it's a single source for the whole tree, whereas make calls itself recursively, slowing down dependency discovery and defeating the caching potential.
I don't think I've properly explained the idea... The goal is to get the same performance as ninja, in the common case when the dependencies/build steps haven't changed. We somehow use the makefile to generate the ninja file; likely by executing make and capturing the dependencies & commands (without executing the build commands). But we don't encode the full complexity of the makefile in the ninja rules; if someone adds a file or changes a flag or even adds a #include directive, our ninja rules will be out of date.
So we are able to run the build entirely in ninja, but we may not be building what the makefile would build. So we have to 'waste' some CPU time running the make -> ninja conversion in parallel to verify that our ninja build is still correct, but as long as 'total ninja build time' > 'make to ninja conversion time', we come out ahead in wall clock time.
I've never used ninja, but ISTM the ninja setup would just need a few dependencies on the Makefiles themselves in order to build its own Ninjafiles (or whatever they're called). After all, if it's a build system, it surely knows whether or not depended-upon files have changed! Then there would be no need for "parallel" runs or error-prone diff'ing.
Make only calls itself recursively if you build the Makefiles that way. It's possible to have flat, all in one Makefiles and use include files for different targets etc.
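A minimal sketch of that flat style, with made-up file and fragment names; the point is that the top level includes a small fragment per directory instead of spawning another make in each one:

    # Recursive style (the slow path): one shell + one make per subdirectory,
    # and no cross-directory dependency information.
    #
    #   all:
    #   	$(MAKE) -C libfoo
    #   	$(MAKE) -C daemon

    # Flat, non-recursive style: each directory only contributes variables.
    include libfoo/files.mk    # e.g.  LIBFOO_SRCS += libfoo/parse.c
    include daemon/files.mk    # e.g.  DAEMON_SRCS += daemon/main.c

    all: daemon/daemon

    libfoo/libfoo.a: $(LIBFOO_SRCS:.c=.o)
    	$(AR) rcs $@ $^

    daemon/daemon: $(DAEMON_SRCS:.c=.o) libfoo/libfoo.a
    	$(CC) -o $@ $^

Since a single make instance sees every dependency edge, make -j can parallelize across directories instead of being serialized one subdirectory at a time.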
Boilermake (linked below) is a GNU Make framework which allows you to create non-recursive Makefiles. I've re-worked it a bit, and used it in my pet project: https://github.com/freeradius/freeradius-server/
100K+ LoC, ~400 source files, multiple subdirectories, building libraries, executables, etc.
Most of the slowness of Make is due to people creating recursive Makefiles. Running endless shell programs is very slow. Make by itself is surprisingly fast.
The other benefit of non-recursive Make is that the builds can now go in parallel. What used to be a ~90s build is now ~7s on a multi-core machine.
I also replaced libtool (shudder) with jlibtool. That was taken from Apache, and modified to fix bugs, work better, etc. That sped up the build a lot, too.
Maybe I'm crazy, but these tools have worked across all Unix-y platforms I've tried. And I needed a migration path from recursive Make / libtool to something better. Dropping the build system on the floor and hoping for something better wasn't an option. This tool suite works much like the old one, but it's simpler, faster, and doesn't require additional packages (python, etc.) to do the build. Everything outside of Make is self-contained, and is included in the FreeRADIUS distribution.
My projects:
https://github.com/alandekok/boilermake
https://github.com/alandekok/jlibtool