> This page compares make to tup. This page is a little biased because tup is so fast. How fast? This one time a beam of light was flying through the vacuum of space at the speed of light and then tup went by and was like "Yo beam of light, you need a lift?" cuz tup was going so fast it thought the beam of light had a flat tire and was stuck. True story. Anyway, feel free to run your own comparisons if you don't believe me and my (true) story.
The completely unprofessional tone here really turns me off to the entire system. If you write like a typical teenager, you probably code like a typical teenager, and I don't want a typical teenager writing my goddamn build system.
Besides: who the hell is bottlenecked on the build system? The compiler and linker (or the equivalent for your favorite language) do all the work. Anyone who believes this article makes a difference is completely ignorant of Amdahl's Law.
Many projects are bottlenecked on the build system. You can benchmark this by timing a null build (running 'time make' after everything has already been built). Two examples from my machine: the Linux kernel takes 28 seconds and Firefox takes 1 minute 23 seconds. Some of this time comes from unnecessarily recompiling things, but that is a separate issue from the inherent lack of scalability in make.
Suppose I want to change a single C/C++ file in one of these projects - the total turnaround time from when I type 'make' to when the build finishes can be described as:
T(total) = T(build system) + T(sub-processes)
Ideally T(total) would be zero, meaning we get an instant response from when we change the file to when we can test the result. Here, T(build system) is the null build time, and T(sub-processes) is the time it takes to run the compiler and such. Using the Linux kernel as an example again, compiling fs/ext3/balloc.c takes 0.478 seconds. In comparison to the null build of 28 seconds, there are significant gains to be had by optimizing T(build system).
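Plugging the kernel numbers above into that formula, and writing P for the Amdahl fraction spent in the build system (a rough sketch that ignores the link steps after compiling, which is why the end-to-end figure below comes out slightly higher):

    T_{\text{total}} \approx 28\ \text{s} + 0.478\ \text{s} \approx 28.5\ \text{s},
    \qquad
    P = \frac{T_{\text{build system}}}{T_{\text{total}}} \approx \frac{28}{28.5} \approx 0.98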
Amdahl's Law is a little tricky to apply since tup is not parallelizing T(build system), but rather changing it from a linear-time algorithm to a logarithmic-time algorithm. So you can set P easily based on the relative values of T(build system) and T(sub-processes), but S is not a simple "count-the-cores" metric. The speedup is effectively N/log(N), where N is the number of files. This is much better than simple parallelization - T(build system) for tup with these projects is only about 4ms. The total turnaround time for the balloc.c file in the Linux kernel is 1.1 seconds (which includes compilation and all the linking steps afterward), in comparison to make's total turnaround time of 29.5 seconds.
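For reference, Amdahl's Law in its usual form, with S being the speedup of the improved part (here roughly N/log(N) rather than a core count) and the end-to-end ratio worked out from the measurements above:

    S_{\text{total}} = \frac{1}{(1 - P) + P/S},
    \qquad
    S \approx \frac{N}{\log N},
    \qquad
    \frac{29.5\ \text{s}}{1.1\ \text{s}} \approx 27\times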
For very large projects the build system can become the bottleneck when changing just a single file, which also happens to be the most important use case for developers. In extreme cases a no-op build with make can easily get to 15+ seconds.
> In extreme cases a no-op build with make can easily get to 15+ seconds.
I have never seen cases that extreme, but my opinion is that this is a "build smell". If the Makefile has to resolve a DAG that large, then developers have to reason about compile- and link-time interactions that large as well. 100k source files all linked into a single executable is more complex than 10k source files split across 10 executables plus a handful (say <100) of headers which represent "public" APIs. Because if you have 100k source files and your developers haven't all killed themselves already, there are already informal firewalls separating the various modules. Formalize them at the API level and split apart the builds, so that it's _impossible_ for anything outside of the API itself to trigger a full rebuild.
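To make that concrete, here is one hypothetical shape the split can take in plain make: the application's makefile sees nothing but the modules' public include/ directories and their built archives (libfoo, libbar, and all the paths are invented names for illustration).

    # app/Makefile -- sketch: the app depends only on the modules' public APIs
    CPPFLAGS += -I../libfoo/include -I../libbar/include -MMD
    LIBS     := ../libfoo/libfoo.a ../libbar/libbar.a

    app: main.o $(LIBS)
    	$(CC) -o $@ main.o $(LIBS)

    %.o: %.c
    	$(CC) $(CPPFLAGS) $(CFLAGS) -c -o $@ $<

    # header dependencies recorded by -MMD; only public headers can appear here,
    # so nothing internal to libfoo or libbar can ever trigger a rebuild of app
    -include main.d

Each library is built by its own makefile, so a change to a module's internals never even enters the app's DAG.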
Typically this shows up in recursive make projects with lots of subprojects—it doesn't take that much time to stat every file in question, but reinvoking make sixteen times can be quite slow.
I don't deal with this by not using make, I deal with this by not writing recursive makefiles.
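The usual non-recursive pattern, for reference: one make invocation, one DAG, with each subdirectory contributing a fragment instead of spawning a sub-make. A minimal sketch (module names invented; recipe lines are tab-indented as usual):

    # Top-level Makefile -- single invocation, single DAG
    MODULES := fs net drivers
    SRCS    :=

    # each fs/module.mk, net/module.mk, ... just appends its files,
    # e.g. "SRCS += fs/inode.c fs/super.c"
    include $(patsubst %,%/module.mk,$(MODULES))

    OBJS := $(SRCS:.c=.o)

    prog: $(OBJS)
    	$(CC) -o $@ $(OBJS)

    %.o: %.c
    	$(CC) $(CFLAGS) -c -o $@ $<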
> it doesn't take that much time to stat every file in question, but reinvoking make sixteen times can be quite slow.
Yes, reinvoking Make repeatedly tends to force redundant `stat` calls. But I have worked in environments where heavily-templated code was hosted over a remote filesystem, and every `stat` call was something like 10msec. That adds up _extremely_ fast, even with non-recursive make. Ugh.
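Back-of-the-envelope, with a made-up but modest file count, just to show how fast it adds up:

    5000\ \text{stats} \times 10\ \text{ms/stat} = 50\ \text{s}

for a null build that should have been instant.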
> In extreme cases a no-op build with make can easily get to 15+ seconds.
Most developers will never see such a system. Optimizing for that kind of scale at an early stage has all the problems of any other premature optimization. It's most important to just get the build system out of the way so you can get your real work done, and you do that by writing makefiles, since makefiles are universally understood.
Now, when a project does grow to the proportions you mention, you can start looking at alternatives --- but I'd argue that these alternatives should amount to more efficient ways to load and evaluate existing makefile rules, not entirely different build paradigms. Make's simplicity is too important to give up.
You dislike his humor; that's fine. But calling it unprofessional is subjective: plenty of professional environments with excellent output appreciate humor.
Also, I am bottlenecked on the build system at my workplace: it takes ~45 seconds to realize nothing needs to be done. (It isn't "make", because "make" does not support our build process.)