> Has reimplementing make for the 50th time really improved things?
Considering the authors did so and find it to be an improvement in terms of maintainability and usability... I'm going to say "yes". Do you think you know more about the project than they do?
I used to work on GHC. The build system is complex. Hadrian is quite an improvement in power and expressiveness (and is now capable of doing things we wouldn't have been able to implement easily with Make, since extending the prior system was too hard).
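To give one concrete flavour of that expressiveness: a Shake rule can compute its dependencies while the build runs, which Make's static rule syntax can't express directly. A minimal sketch (not Hadrian's actual code; `sources.list` and the file names are made up):

```haskell
import Development.Shake

main :: IO ()
main = shakeArgs shakeOptions $ do
    want ["result.txt"]
    -- The inputs of this rule are discovered *during* the build:
    -- we read a manifest file, then declare whatever it names as
    -- dependencies. Make has no direct analogue of this.
    "result.txt" %> \out -> do
        srcs <- readFileLines "sources.list"  -- hypothetical manifest
        need srcs
        contents <- mapM readFile' srcs
        writeFileChanged out (concat contents)
```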
> The fact is, building software that requires 500 dependencies and 500 sub-steps and 500 configuration options is going to be complicated. It's complicated in the same way that implementing an operating system is complicated. There's no way around it. The complexity is there because it's inherent in the problem.
I get the feeling you're going to use this random truism as a springboard to make suggestions despite the fact you've never been involved in the project?
> But it doesn't have to be. Instead of spending 300 hours implementing Shake, or Rake, or Bake, or Cake, or Jake, or Take, why not spend those hours cutting down the complexity at the source? Trim your dependencies. Stop putting so many sub-steps and configurations into your build systems. Is that the sane way to do things?
That would be nice if everyone had endless time and everything was always done exactly perfectly up front. It would also be nice if you could work completely on your own and never have to interact with any other software in the world.
Binary tarballs, source distributions, upstream library dependencies, cross compilation, thousands of tests, tracking all dependencies correctly (this one alone is ridiculously hard; see the sketch below), autogeneration tools (to avoid errors on tricky parts), feature detection at compile time and runtime (because your users work on some old CentOS machine and no, `pthread_setname` is not available), profiling builds, running documentation generators, handling out-of-source builds, handling relocatable builds. I can just keep listing things, honestly. All of these -- more or less -- come back to your build system.
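To make the dependency-tracking point concrete: a Shake rule can record the compiler's own view of what an object file depends on, so editing any included header triggers a rebuild. This is roughly the pattern the Shake documentation shows for C code, not GHC's actual rules:

```haskell
import Development.Shake
import Development.Shake.FilePath
import Development.Shake.Util

main :: IO ()
main = shakeArgs shakeOptions $ do
    want ["main.o"]
    "*.o" %> \out -> do
        let src  = out -<.> "c"
            deps = out -<.> "d"
        -- Ask gcc to emit the true header dependencies as a .d file
        cmd_ "gcc -c" [src] "-o" [out] "-MMD -MF" [deps]
        -- Register them with Shake, so changes to any header
        -- that this file actually includes cause a rebuild
        needMakefileDependencies deps
```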
In fact, GHC goes quite out of its way to use as few non-Haskell dependencies as possible. Why? Because the ones it already has are often burdensome and complex, and we have to pick up the slack for them for every user. Nobody using your project cares whether Sphinx or their Rube Goldberg Python installation (spread over 20 places in /usr) was the reason doc building failed; your build failed, and that's all that matters. You've still got to figure out what's wrong for your user, though. And not wanting new dependencies has been a common reason to reject things -- I myself have rejected proposals and "features" for GHC on this basis alone, more or less. ("Just use libuv!" was a common one that sounded good on paper but never addressed any of the actual issues it claimed to 'solve'.)
As a side note, it amazes me how many people see any amount of non-trivial work in some project and immediately ask "well, why don't you just do <random thing that is completely out of context and has no basis in the project's reality>". Seriously, any time you think of this stuff, please -- just give it, like, 10 more seconds of thought? You'd be surprised at what you might think up, at what you might realize is possible. It's not the worst part of the job, but being an OSS maintainer and having to deal with analyses that are, more or less, quite divorced from the reality of the project is... irritating.
> Considering the authors did so and find it to be an improvement in terms of maintainability and usability... I'm going to say "yes". Do you think you know more about the project than they do?
Every single self-described make replacement makes the exact same claim, verbatim. Yet when these projects start to see some use in the real world... cue all the design shortcomings and maintainability and usability problems.
We're about 4 decades into this game. Perhaps this time everything is different. Who knows. Odds aren't good, though.
I think that being the build system for the Glasgow Haskell Compiler - the most commonly used Haskell compiler - counts as "some use in the real world." I downloaded the source, did `git ls-files | xargs wc -l > wc.out`, then `grep "total" wc.out`, summed the totals, and it comes to 1,051,451. That's an overestimate of the lines of code, since there's certainly documentation in there, but there's about 620,000 lines of Haskell.
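If you want to reproduce that without eyeballing `wc` output, here's a rough Haskell equivalent; it assumes it's run from the root of a GHC checkout and counts newlines byte-wise so odd encodings in test files don't trip it up:

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Exception (SomeException, catch)
import qualified Data.ByteString.Char8 as B
import System.Process (readProcess)

main :: IO ()
main = do
    files <- lines <$> readProcess "git" ["ls-files"] ""
    let count f = (B.count '\n' <$> B.readFile f)
                    -- submodule entries and the like aren't readable
                    -- files; count them as zero rather than crashing
                    `catch` \(_ :: SomeException) -> pure 0
    counts <- mapM count files
    putStrLn $ "total lines: " ++ show (sum counts)
```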
Great, someone managed to get a build system to work for a project. That's nice. I'm sure there are a bunch of cases where even hand-written makefiles are being used to the same effect. Does that mean that any of those tools are free from any design issue or maintainability problem?
That's a good question! I think a good way to figure that out is to publish a paper about what they did, how it solved their non-trivial problem, and then invite others to try to use their tool and techniques to solve their problem.
In other words, you're asking a question that criticizes the mechanism that would answer that question. It's a legitimate question, but a poor reason to disregard what they did.
Maybe my reference to "parent" was unclear. I was referring to the top-level comment.
For those who don't know, aseipp is/was a major contributor to GHC and will be intimately familiar with the build system of GHC. His observations are on point.
Relatedly, I'm currently also fighting about 3-4 different build systems which are "classics" of the genre and yet are broken in subtly different and interesting ways.
Software is always complex; it’s just that the complexity is gradually hidden from the developer by the use of libraries.
Until those libraries get baked into the standard library of whatever you’re using, you’re going to have to implement that complexity yourself, or take on dependencies.
Unless you’re scripting, doing something entirely within your language’s or OS’s framework, or implementing everything yourself (hello, complexity), you’re going to hit complexity and dependencies very, very early.
The only time I’ve seen this avoided is in the embedded space where you physically don’t have enough bits to get complex.
Well that's just it. Embedded forces people to make different design decisions. We only have this mountain of shitty code because we've given ourselves enough rope to hang ourselves with.
We got to the moon with a computer less powerful than my microwave. My old smart phone worked just fine without 4 gigs of RAM and 32 gigs storage, and now this monstrosity in my hand is running out of resources? It doesn't have to be this way.
> We got to the moon with a computer less powerful than my microwave.
Can that computer show a GUI with multiple videos playing simultaneously, surrounded by UI elements, where multiple peripherals (mouse, touchscreen) can control their display area, all the while running two compilers (C++, Scala) and, incidentally, also running a Virtual Machine, etc., etc.?
"Get to the moon" is an absurdly simplistic way to view complexity and it does your argument no favours.
(That's not to underplay getting to the moon. It's an amazing achievement, but if you look at the resources/humans poured into the project, it's actually not that amazing that it was possible.)