The problem there is that it would break makefiles that do not handle their dependencies correctly. With non-parallel builds, the order of execution is deterministic, and so you can get away with sloppiness in your dependencies. If it works the first time that you test it, it will continue to work.
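To make that concrete, here's a minimal sketch (the names genconfig.sh, config.h, main.c and prog are made up, not from any real project): main.c includes a generated header, but the rule for main.o never says so. A serial make happens to build config.h first because it is listed first under all, so the first clean build works; make -j is free to start compiling main.c before config.h exists.

    all: config.h prog

    prog: main.o
            cc -o prog main.o

    # BUG: config.h is missing from the prerequisites, even though
    # main.c does #include "config.h"
    main.o: main.c
            cc -c main.c

    config.h: genconfig.sh
            ./genconfig.sh > config.h

(Recipe lines must of course start with a tab.) The makefile is wrong either way; serial execution just hides it.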
I can understand why they would be hesitant to change the default, as they would rather that old, tested scripts continue to work without modification.
It's the exact same argument that comes up whenever gcc improves its optimizations by exploiting undefined behavior and some code stops working. In both cases, the original code was fundamentally broken from the start; the change in tooling only revealed the brokenness rather than causing it.
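A standard illustration of the gcc side (not taken from this thread, just the usual signed-overflow example):

    #include <limits.h>
    #include <stdio.h>

    /* Relies on signed overflow wrapping around, which the C standard
     * does not guarantee.  An optimizing gcc may assume the overflow
     * never happens and fold this test to 0, whereas a naive build
     * (or one with -fwrapv) typically prints 1 for INT_MAX. */
    static int increment_overflows(int x) {
        return x + 1 < x;
    }

    int main(void) {
        printf("%d\n", increment_overflows(INT_MAX));
        return 0;
    }

The code was broken under both compilers; the newer optimizer just makes the breakage visible.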
I would absolutely consider such makefiles broken.
The impact of those changes is rather different. In the case of make, your build would probably break. In the case of gcc, your program's behaviour would silently change.
In the case of make, a missing dependency can result in a file not being recompiled when it should be. If you are compiling C, that can leave two compilation units disagreeing about the definition of a function. When one of those compilation units calls a function defined in the other, your program's behavior breaks. All due to a change in the build tool.
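A sketch of that failure mode, with hypothetical files util.h, util.c and main.c, and a makefile rule like "util.o: util.c" that forgets to list util.h: edit SCALE in the header, and main.o gets rebuilt while util.o keeps the old definition baked in.

    /* util.h */
    #define SCALE 10           /* later edited to 100 */
    int scaled(int x);

    /* util.c -- its makefile rule omits util.h, so editing the header
     * does not trigger a recompile; util.o keeps the old SCALE. */
    #include "util.h"
    int scaled(int x) { return x * SCALE; }

    /* main.c -- its rule does list util.h, so it is rebuilt and sees
     * the new SCALE.  Caller and callee now disagree: scaled(3)
     * returns 30 while main expects 300, with no build error at all. */
    #include <stdio.h>
    #include "util.h"
    int main(void) {
        printf("%d (expected %d)\n", scaled(3), 3 * SCALE);
        return 0;
    }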
That's true. I qualified my statement with "probably" because there are exceptions. Protecting against those sorts of errors is why my release candidates are done with a clean build, and why newly fixed bugs are re-verified against that package before release.