
The problem there is that it would break makefiles that do not handle their dependencies correctly. With non-parallel builds, the order of execution is deterministic, and so you can get away with sloppiness in your dependencies. If it works the first time that you test it, it will continue to work.
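
For example, here is a minimal sketch (file names are hypothetical) of a makefile that only works because serial make builds the prerequisites of "all" left to right:

    # Minimal sketch; file names are hypothetical, and recipe lines
    # must be indented with a tab as usual.
    all: generated.h main.o

    generated.h:
            ./gen-header.sh > generated.h

    # main.c includes generated.h, but the dependency is never declared.
    # Serial make happens to build generated.h first (left to right);
    # with -j both rules can run at once, and main.o may be compiled
    # before the header exists. The fix is "main.o: main.c generated.h".
    main.o: main.c
            cc -c main.c -o main.o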

I can understand why they would be hesitant to change the default, as they would rather that old, tested scripts continue to work without modification.




> The problem there is that it would break makefiles that do not handle their dependencies correctly.

Serious question: In what way can makefiles that do not specify their dependencies correctly ever be considered non-broken?


As said: their behavior is still deterministic in non-parallel execution.

And it looks like the GNU make developers aren't like compiler developers, who exploit any ambiguity as an excuse to really mess up somebody's day.


It's the exact same argument that comes up whenever gcc improves its optimization algorithms by exploiting undefined behavior, making some code no longer work. In both cases, the original code was fundamentally broken from the start, and the change in tooling only revealed the brokenness rather than causing it.

I would completely see such makefiles as being broken.


The impact of those changes is rather different. In the case of make, your build would probably break. In the case of gcc, your program's behaviour would silently change.


In the case of make, a missing dependency can result in a file not being re-compiled when it should be. If you are compiling C, this can result in the definition of a function differing between two compilation units. When one of those compilation units calls a function defined in the other, your program's behavior breaks, all due to a change in the build tool.
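
A sketch of that failure mode (file names hypothetical): both objects use proto.h, but only one rule declares it.

    prog: a.o b.o
            cc a.o b.o -o prog

    a.o: a.c proto.h
            cc -c a.c -o a.o

    # proto.h is missing from this rule.
    b.o: b.c
            cc -c b.c -o b.o

Change a function's signature in proto.h and only a.o is rebuilt; the link then silently combines the new a.o with a stale b.o.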


That's true. I qualified my statement with "probably" because there are exceptions. Protecting against those sorts of errors is why my release candidates are done with a clean build, and newly fixed bugs are reverified on that package before release.


Of course they are broken. If dependencies are specified incorrectly, there will be files that, when modified, do not trigger a correct rebuild.

But that doesn't mean there aren't stable workflows using those broken makefiles, workflows that simply aren't sensitive to the brokenness.


This is exactly why it should be the default. Defaulting to parallel builds would stop the epidemic of broken makefiles from growing.

Users of legacy makefiles would have to explicitly use "make -j1", or specify ".NOTPARALLEL:" in the makefile - no big deal.
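
For instance, a legacy makefile could opt out with a single line:

    # Tells GNU make to run all targets of this makefile serially,
    # even when the user passes -j.
    .NOTPARALLEL: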

(As a beneficial side effect, defaulting to parallel builds would create an incentive for fixing sloppy makefiles).


Callously harming existing and long-term users in the pursuit of some narcissistic ideological goal seems risky.


It will also not work with recursive make invocations, e.g. when the top-level makefile simply invokes the makefiles of several independent projects.


This actually works fine in GNU make too. The parent make acts as a job server. The implementation is ingenious, see http://make.mad-scientist.net/papers/jobserver-implementatio...
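
For example, a common recursive layout (directory names hypothetical) shares the parent's job slots, because GNU make passes the jobserver down to any recipe that invokes $(MAKE):

    SUBDIRS = libfoo libbar app

    all: $(SUBDIRS)

    # Each sub-make inherits the jobserver from the parent, so a single
    # top-level "make -j8" caps the total number of jobs across all of
    # the sub-makes at 8 instead of multiplying them.
    $(SUBDIRS):
            $(MAKE) -C $@

    .PHONY: all $(SUBDIRS)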



