Make is great, and I wish more people would use it in place of whatever monstrosity is en vogue this week.
However, there is one thing which Make absolutely cannot handle, and that is file names with spaces. If there is any risk of encountering these and no possibility of renaming them, you'll sadly have to give up on using Make; it just won't work.
Spaces in filenames break most of Make's built-in functions such as $(sort), and break the $?, $^ and $+ automatic variables. But they're OK in target names as long as you escape the spaces with backslashes. In some cases you can also use them in source file names -- you have to hard-code the names in the build rules since $^ won't work (but for targets built from a single source file, $< still does work).
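A minimal sketch of the difference (file names made up; GNU make):

    # Backslash-escaped spaces work in target and prerequisite names:
    my\ notes.txt: notes.md
            cp $< 'my notes.txt'    # $< is fine here: the single source has no space

    # But $^ splits on whitespace, so with space-containing sources you end up
    # hard-coding the quoted names in the recipe instead of using $^:
    combined.txt: input\ one.txt input\ two.txt
            cat 'input one.txt' 'input two.txt' > $@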
This applies to GNU make; not sure about AT&T make.
Make is my default. However, as the project grows, I find myself wanting to organize modules by directory and I find it cleaner to switch to CMake which then generates the Makefiles for me.
Just off the top of my head: it could be object-oriented (rules could be subclassed), the language could have more sophisticated statements, it could have debugging, etc.
Why is old bad? I can't tell you how often one of my team shows me some clever, non-trivial Python script that I solve in a couple of lines of awk/sed/perl. Or a small makefile. Why in the world would I want an OOP make? What would 'more sophisticated' make statements give me that wouldn't really mean I needed to be using a different tool, one that undoubtedly wasn't solving the problem make solves?
> I can't tell you how often one of my team shows me some clever, non-trivial Python script that I solve in a couple of lines of awk/sed/perl
I see you have decades of experience with those tools (awk/sed/perl), so they are second nature to you. I can sort of work my way through them, but I find them very cryptic and hacky. The information density is very high, lots of implicit things happening, many abbreviations and symbols. It tenses my mind up.
It heavily depends on what you "grew up" on. I've been using Python for about 15 years now and when I switch to Python, I can feel the freedom to express things in a straightforward way without worrying about edge cases or cryptic syntax. Line by line it just does what it says, in English words. A few lines of list comprehensions, .split(), sum() etc. can do a lot while still being crystal clear. In so many cases I've had some issue with Bash / Unix tools, looked on StackOverflow, found high-activity questions with many answers that boil down to "well, it's not really possible in a simple way" and then some hacky workaround that nobody would understand later unless I link to the SO page as a comment.
Now sure, sed, grep, awk etc. all have their uses, but for me that ends at about 100-150 characters, or perhaps a bit longer for one-off use. Anything longer or for permanent use, I find it clearer to write it out explicitly in Python. After this, suddenly lots of powerful ways of extending it open up, which would be horribly cryptic and convoluted with the Unix tools.
I get it. And I don't disagree, per se. I confess to writing way too many baroque shell scripts that should have been Python or some such, and a couple of mission-critical awk scripts that were way longer than they should have been and well outside their appropriate use case. You can really go down a rabbit hole there.
But haters (kidding) should really understand what they can do with a couple of hundred characters of sed, awk, perl, etc. before they discount them. And if it's "not really possible in a simple way", that's pretty much a good indication that they need to do it in Python, Ruby, etc. It's not either-or, black-or-white.
I wish it had a little python in it. After using pathnames in awk/sed/perl I find using os.path in python gets rid of all the special cases due to quoting and escaping. Python has nice lists and dicts and sets too.
For OO, I wish rules were a bit like python classes.
It would be wonderful for, say, 10 C files compiling one way and the 11th having a different option. Or for overriding a few options everywhere for the debug build.
I also think if rules had some sort of class, maybe the nuts and bolts underneath could be changed. You could be explicit about some things that are very hard with make. For example, what if you could write your own custom "out-of-date" check? Instead of "result must be newer than dependencies", maybe you could do something with checksums or source control or something and say "ok, don't rebuild this, or here is the cached result"
The pathname thing doesn't bother me, probably because the quoting and escaping is muscle memory after 30 years, but fair point and an easy source of sometimes-hard-to-find mistakes.
The 11th file thing, as pointed out below, can be worked around easily, but yeah, mixing implicit and explicit rules isn't 'elegant'. Dunno if 'subclassing' the rule or whatever is the answer; maybe if there are a lot of exceptions. Interesting to think about.
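(For reference, the usual GNU make answer to the 10-files-plus-one case is a target-specific variable; roughly, with made-up file and macro names:)

    # the default flags used for most objects
    CFLAGS = -O2 -Wall

    # the eleventh file gets an extra option
    eleventh.o: CFLAGS += -DSPECIAL_CASE

    # and a debug goal can override options for everything built beneath it
    debug: CFLAGS += -g -O0
    debug: myprog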
I think the answer to your last example is one of 1) 'Unix philosophy' the problem by invoking separate scripts/tools to set things up for what 'make' expects so you get the result you want ("if the checksum of the source file changes, 'touch' it to make sure it gets rebuilt" or something) or 2) maybe 'make' isn't the right tool, because it's definitely not always the right tool.
Because that's really the key: right tool, and 'it's old' by itself is rarely the reason not to use a tool.
Yeah, I wasn't specific here. Tabs and spaces in Make are different specific things. I also like Python's significant whitespace.
It's easy to mess up whitespace in Make and it might not complain about it. Mixed whitespace in Python will either work without problem, or it will tell you when it doesn't.
I agree, and the solution to this problem is to forbid filenames with spaces. The convenience of make and similar tools is much more important than spaces in filenames. File names with spaces should not be allowed in modern filesystems. When the user types a filename with spaces, the GUI should encode the space as a non-breaking space character, which does not cause havoc in scripts.
Filenames in most Unix derivatives are an arbitrary bag of bytes, excluding only '/' and '\0'; they don't even have to be in a valid encoding. If you can handle that, spaces are the least of your concerns. Tools should be able to handle them by now, one would think.
The human friendly name should really be metadata on the file anyway. Regular users don't need to deal with raw filesystem internals, we can just say that filenames are identifiers that are always lower case, in the C locale, encoded as utf8 or 16. Any whitespace is not allowed.
Yeah, I know it's not always intentional, and I understand the need to be able to distinguish between such files. It's just annoying that there isn't a better solution than "all filesystems, forever, must support a problematic use case that serves no good purpose". To be clear, naive solutions would likely be a slippery slope w/ worse side effects. So, on a certain level I feel obliged to apologize for complaining given I can't think of what a solution I'd prefer might look like.
A way around that is to convert any spaces in a filename to non-breaking spaces (if you can). That will not only fix problems in Make, but also ease use in the command line.
For very low values of "ease use", in my opinion. Spaces in filenames are a bad idea; non-breaking spaces would be an anti-pattern or an attack, in my opinion.
How do you portably type a non-breaking space, if you don't have tab-completion?
Recently I joined an environment that uses Makefiles as the facade in front of pretty much everything, from git submodule update shortcuts to building code and running local development servers.
Surprising myself, I've quickly grown to appreciate working with Makefiles. That said, since the syntax somewhat encourages terseness, when I need to fix a non-trivial target it tends to look like black magic—nothing reading a few man pages can't fix, but it takes extra time.
It's not my first choice overall; I prefer to leave out the extra layer and document direct command-line calls in a README. If a commonly used tool changes its invocation in a new version, with a README it's a documentation issue, but with a Makefile it's broken software.
Perhaps I did not express that well. This is how things may go:
(1) I need to build this, but the Makefile is broken. (2) I invoke the build directly by asking a colleague for help. (3) Can I be bothered to update the docs? Maybe. Can I be bothered to fix Makefile targets? Much less likely.
What I love about Makefiles is that they just use the CLI tools. Full build tools like Gradle or Bazel require installing specific plugins and learning a new, inferior syntax, making them a nightmare to use if you need an unimplemented feature of the underlying tool. The biggest pain point is also that they don't even bother to print the actual command being executed!
I recently used make in a side project[1] to implement a "full" continuous delivery pipeline and it really was refreshing, despite the syntactic quirks.
make works well when you are targeting a single platform with a decent shell and the project is not too complex (e.g., no auto-generated source code that requires its own automatic dependency tracking). Once that no longer holds, make becomes a real liability.
Note that the author of that paper (a friend of mine) wrote another build system, the dead-simple-but-awesome make.py. I have a mirror/fork of it[0], since it's been unmaintained for a while (but it mostly doesn't need any maintenance).
The entire build system is a single Python script that's less than 500 lines of code. Rather than trying to fit complicated rules into Make's arcane syntax, rules are specified with a Python script, a rules.py file (see [1]). But the script should be thought of more as a declarative specification: the rules.py file is executed once at startup to create the dependency graph of build outputs, and the commands to build them.
Yet, despite the small size, it's generally easier to specify the right dependencies, do code generation steps, and get full CPU utilization across many cores.
At some point I'd like to write more about make.py and try to get it used a bit more by the public...
If you haven’t seen Bazel, you should take a look.
It's definitely not as minimalist, but it has a very, very similar model for specifying the build. In my experience, it's pretty easy to get going, and it makes it pretty hard to screw up any of the important features of the build.
Yeah, I know about Bazel, but only at a high level--I haven't used it.
I generally think the hermetic build concept is a very good one, but IMO Bazel goes about it the wrong way, and is overengineered. Rather than needing custom-built infrastructure for every type of language supported, I'd prefer build systems to use lower level OS facilities for discovering dependencies and controlling nondeterministic behavior. That is, build rules would use something like the rules.py files of make.py, specifying any arbitrary executables to run, but without needing to specify the input dependencies of each rule. Each command run would get instrumented with strace (or the equivalent for non-Linux OSes), and filesystem accesses detected. If a file is opened by a build step, that path would be checked for other build rules. If one exists, and it's out of date, the first build step gets paused while the input file gets built, then resumed. All of this happens recursively for the whole build graph, starting from the first requested build output. Other potentially nondeterministic system calls (timestamps, multi-threading/-processing, network access, etc) would be restricted/controlled in various ways yet to be determined.
That said, I haven't actually built anything like that (or know of anyone else who has). Maybe there are some complicated issues that this couldn't deal with but Bazel could. For example, there might be sources of nondeterminism that don't involve syscalls, like vDSO; I don't know for sure though. Portability between OSes would definitely be an issue. But overall I feel that, barring any major unforeseen issues, something like this could be built in a fairly minimalist fashion; maybe a few thousand lines of Python, possibly a small C module.
There are build systems that use strace to find dependencies. For instance tup [1][2] and Fabricate [3]. Also see this post from the Waf blog, which discusses some issues with this approach [4].
Ah, thanks for the references. Now that you mention them, I realize I definitely knew about tup and fabricate before (and possibly waf?), but had forgotten about them. I haven't really thought much about trace-based build systems in years, until this subthread.
And looking through that waf blog post, I realize that I meant ptrace instead of strace--I want full fine-grained syscall interception, not just a text report afterwards. That gets around a lot of the overhead/parsing problems mentioned, and is required for the "pause build command so its input file can be built" case.
> Rather than needing custom-built infrastructure for every type of language supported
My understanding is that bazel is moving away from this, so that you can define toolchains by saying "here is a binary that serves the job of linking/compiling stuff".
The challenge with your idea is that you're basically saying "hey, we should sandbox and introspect <any number of fairly arbitrary and complex binaries> to intercept and modify their filesystem and network (at a minimum) accesses, across any number of versions and uses". Even just handling conditionally rewriting file writes/reads based on guessing whether something is an input or re-used output isn't that easy in general.
> My understanding is that bazel is moving away from this, so that you can define toolchains by saying "here is a binary that serves the job of linking/compiling stuff".
How do they ensure determinism in that case? Is it just an easy escape hatch so that new languages can be easily supported, with no actual guarantees of hermeticity?
> The challenge with your idea is that you're basically saying "hey, we should sandbox and introspect <any number of fairly arbitrary and complex binaries> to intercept and modify their filesystem and network (at a minimum) accesses, across any number of versions and uses".
I think my approach would certainly use a syscall whitelist. Any unsupported syscall would be a build failure, and presumably a bug report if it's a legitimate use. I suspect most build commands can get by with a pretty minimal set of syscalls (mainly basic filesystem access). At some point though, if you start supporting more and more syscalls, you start re-implementing VMs/containers, which sucks. This build system only stays simple if people don't try to do a bunch of wacky things with it :)
Network accesses would probably get whitelisted by the user on a rule-by-rule basis for cases like "download these packages", with the outputs treated as always dirty. The tool would be responsible for running efficiently even if no work actually needs to be done.
One weird/hard thing to support would be soft/hard linking. I'm not sure exactly what should be done there, but that might not be needed for early versions.
> Even just handling conditionally rewriting file writes/reads based on guessing whether something is an input or re-used output isn't that easy in general.
I'm not sure I understand this. One thing I should note is that in my scheme, you still have to specify output files for rules--you only get to skip specifying the inputs.
> Is it just an easy escape hatch so that new languages can be easily supported, with no actual guarantees of hermeticity?
Yes, I mean in general some level of compiler hermeticity is assumed. You can verify it by checksumming everything (which bazel does), but you can easily destroy the performance/caching of bazel by modifying clang to have intentionally unpredictable results.
> One weird/hard thing to support would be soft/hard linking. I'm not sure exactly what should be done there, but that might not be needed for early versions.
I think Bazel's solution here is to just always make fat binaries. Or at least that's how it works if you're using Blaze.
> I'm not sure I understand this. One thing I should note is that in my scheme, you still have to specify output files for rules--you only get to skip specifying the inputs.
Ah, this helps somewhat, though I think there are still potential ambiguities. One thing that Bazel can do is statically determine all of the input sources without actually executing the build; you of course can't do this. This has a few nice properties: you get missing-input errors before doing any actual building, you have a build graph you can statically analyze (deps queries are magic), all the sandboxing can be done ahead of time (with symlinks to create a shadow filesystem) instead of ad hoc, and you can parallelize everything, meaning you can saturate all your machines the entire time.
With strace, you have performance issues from the tracing itself, and you can't precompile all the dependencies beforehand; you have to discover them as you go (this likely also hurts on memory, since you'd need to keep all the partial compiler information in memory).
I also find it easier to reason about stuff with explicit deps, but that's just me.
> I think Bazel's solution here is to just always make fat binaries. Or at least that's how it works if you're using Blaze.
Oh, I should've specified soft/hard filesystem links--they'd make handling a virtual filesystem rather complicated. Dynamic linking is another strange case, but it looks like dylib loading happens in userspace, through an open()/mmap() (at least in Linux), that would be caught through the normal filesystem hooks.
> Ah, this helps somewhat, though I think there are still potential ambiguities. One thing that Bazel can do is statically determine all of the input sources without actually executing the build; you of course can't do this. This has a few nice properties: you get missing-input errors before doing any actual building, you have a build graph you can statically analyze (deps queries are magic), all the sandboxing can be done ahead of time (with symlinks to create a shadow filesystem) instead of ad hoc, and you can parallelize everything, meaning you can saturate all your machines the entire time.
OK, I started out thinking that there would be very little parallelism impact from my scheme, but I realize now the problem: dependencies for a build step are only discovered serially, as each one is built and the dependent process continues.
I could imagine you could work around this with a speculation engine that uses the dependencies from the last run (possibly cancelling the speculated commands mid-flight if the dependent process doesn't end up opening their outputs, and if the speculated commands haven't started writing files or anything). This would generally work fine, but would waste some work whenever you remove dependencies from a build step (e.g. delete an #include). But it does start to get messy!
Another note: as far as not specifying inputs, I mean inputs beyond the command line (which obviously needs to be specified up front). So you would only need speculation like the above for things like auto-generated headers. The usual cases like "build this executable out of these objects which are each built from these source files" can be parallelized just fine with no speculation.
Static analysis-type queries could be done after a build, but not in a clean tree. Not sure how much of a setback this would be--I'm not generally working in massive projects, and haven't yet felt the need for such queries.
strace performance is a good point (I don't know what sort of overhead it has), and so is memory overhead.
> I also find it easier to reason about stuff with explicit deps, but that's just me.
I could go either way on this one--I generally prefer explicit to implicit, but I don't like specifying the dependencies twice (build rules and source code), with the failure mode being unreliable incremental builds. If I'm going to the trouble of explicitly listing dependencies, though, I'll probably go with the ridiculously simple make.py instead of messing with Bazel...
Wow, that's kind of incredible. That bug has been open for years, and is only starting to see some progress. Lends some credence to my unsupported claim of Bazel being overengineered...
I also wrote a make replacement in python (2.7) [1]. I'm proud of it but it doesn't do parallel builds and there are so many other build tools that are better tested. But I'll put it here in case it gives anyone any ideas.
That's pretty cool! It's really not too different from make.py. Using decorators and python functions is likely cleaner in a lot of cases, though I'm not sure if that would fit well with make.py's pseudo-declarative model. It probably wouldn't be too hard to add support for parallelism, either.
What I typically do is use make only for what it is good: as a dependency resolution back-end.
All the build logic for my projects is written in Python, in an executable file stored in the project root directory and called "make" (I have "." in my PATH).
The Python script, when it runs, generates on the fly a clean, lean, readable, unrolled Makefile and feeds it directly to /usr/bin/make via a pipe.
Works like a charm:
Python (a sane and expressive programming language) to express the high-level logic needed to build the project.
Make as a solid back-end to solve the "what needs to be rebuilt" problem (especially the parallel version with -jXX)
There's at least one more use case IMO: defining common development lifecycle steps in a shared Makefile across services. At my current workplace, instead of having a bunch of bash scripts in every service, I just give every service repo a Makefile that is usually a one-liner including common.mk. This just wraps docker-compose and gives us commands like make, make run, make stop, make lint, make test, make help, etc.
This way we can e.g. have repos using completely different technology stacks but the interface to them is the same - whether it's our database, a node.js webservice, a python data analytics tool etc. And the definition of these lifecycle commands in common.mk is totally trivial; they're just .PHONY one-liner rules.
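A trimmed-down sketch of what such a common.mk can look like (the "app" service name and targets here are made up, not our actual file):

    # common.mk -- included by each repo's one-line Makefile ("include common.mk")
    .PHONY: run stop lint test

    run:
            docker-compose up -d

    stop:
            docker-compose down

    lint:
            docker-compose run --rm app lint

    test:
            docker-compose run --rm app test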
Yes, but a Makefile is a sensible place for documentation on how to build and run everything (that happens to be executable). If there's a bunch of scripts all over, I'm not going to know which one I'm supposed to run. And it's probably not going to resolve dependencies for me.
i like make primarily as an 'entry point'. there are better tools for dependency management and building, usually language-specific, but as the OP notes, that also requires remembering each tool's invocation details, and documenting them in the README for anyone else wanting to build your project. it's easier to capture the tool invocation in a Makefile and let that serve as your primary interface.
"Command output is always buffered. This means commands running in parallel don’t interleave their output, and when a command fails we can print its failure output next to the full command line that produced the failure. "
This kind of project always makes me sad: you've got to read and debug through an impenetrable wall of custom Python code to understand why the build fails.
When it fails it doesn't. And if you need to bring up Autotools, which weren't brought up by anyone in this thread, then that justification isn't sound at all.
It is not about the specific scheme make uses for building dependencies. It is more about the "all-purpose" stuff: automating tasks that you run manually or via CI in a general-purpose language.
Makefiles are great entry points for ci/cd pipelines. It's easy to pass arbitrary environment variables at runtime, targets to build, define basic dependencies, and have clear steps to execute that can include some minimal inline shell. And since it's pretty dependency-less, I can run the same make commands locally to test the pipeline as I'd use in a remote CI system.
I often use them as a wrapper for Terraform weirdness, where you may want to call an ADFS-enabled AWS login tool or not, depending on if `aws sts get-caller-identity` returns. Or assume a role before running all targets. Or extract values from a terraform.tfvars.json, to pass to the above two steps. Or bootstrap a remote backend if it doesn't exist. Or remove stale module symlinks. Or properly run init, get, and validate before running a plan or apply. Or document weird -target usage. The end result of just running make prep and make apply with no further knowledge required is exactly the experience I wanted out of Terraform initially.
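A rough sketch of the shape of such a Makefile (the adfs-login helper and the target names are placeholders; the real thing obviously has more going on):

    .PHONY: login prep plan apply

    login:
            aws sts get-caller-identity >/dev/null 2>&1 || adfs-login   # hypothetical ADFS login tool

    prep: login
            terraform init
            terraform validate

    plan: prep
            terraform plan

    apply: prep
            terraform apply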
I sometimes use GNU Make to fire off custom code generators, before the files are handed off to other parts of the toolchain which can have their own complicated dependency management. This works quite well. The one annoying problem that I often encounter is that Make does not handle multiple targets (i.e. the code generator generates multiple files, e.g. 'file1.h', 'file1.cpp', 'file2.h', 'file2.cpp', 'test.cpp'). I usually end up inserting a bunch of .PHONY targets, which causes unnecessary evaluation of the dependency graph, but at least it works, instead of breaking in seemingly random ways.
My other use of Makefiles is to capture small (< ~5 line) bash, python or similar scripts for doing certain things within a directory. I find that to be more efficient than documenting that sort of info in a README.md file.
Not sure what you mean by "Make does not handle multiple targets".
You can definitely do `make foo bar` and it will run the recipes for both foo and bar. You can also write a recipe with multiple prerequisites (which could be the result of a variable expansion).
Curious about the limitation; could be there's a way around it or that I never ran into it.
I tried helping a friend with some Fortran but eventually gave up due to this issue and GNU Fortran's insistence on using the standard % instead of "." for structure syntax.
Isn't the solution to that problem simply to use stamp files?
You declare foo-stamp as a dependency of both foo.c and foo.h.
foo-stamp itself is dependent on the file used to generate those foo.* files.
The recipe for foo-stamp invokes the code generator which updates the files, then you `touch foo-stamp` at the end of the recipe.
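Spelled out, the pattern is just a few lines (generate-foo and foo.def are hypothetical placeholders for the generator and its input):

    foo.c foo.h: foo-stamp

    foo-stamp: foo.def
            ./generate-foo foo.def    # writes foo.c and foo.h
            touch foo-stamp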
There are other ways to do this in Make as well. I use Make extensively to drive CMake which makes all sorts of products, so my confusion comes from thinking this works fine (works for my purpose, obviously, but I'm interested in other possibilities).
If you have some targets that depend on foo.c and other targets that depend on foo.h and you build in parallel, some-tool will run twice, possibly at the same time as something is reading the output, possibly causing problems.
[edit]
The workaround I've used is something like:
    foo.c: foo.in
            some-tool foo.in

    foo.h: foo.c
            true
Note that the "true" command is important because GNU make will search a library of implicit rules if you have no recipes for a particular target.
In general multiple output files are even harder with some make replacements, because most make replacements try to be smarter about build outputs. redo, for example cannot handle this, but tup can.
Indeed I've stayed away from -j because it's very hard to figure out how recipes may interact when you have a complex Makefile (and with a simple one -j rarely helps much).
I like your fix; even without -j, if some-tool is expensive to run you don't want it run redundantly.
I use make (or redo) with -j for parallelizing tasks all the time. It's a really great tool for that.
If you specify your dependencies correctly, then -j will not break anything (other than out-of-memory when parallelizing beyond your memory resources).
Many implementations of redo have an option that will randomize the order of building dependencies; it's a useful way for testing if you have your dependencies fully specified or not.
For non one-off tasks where incremental builds are desired, I've been really liking tup. It will automatically catch many mistakes with dependencies for you up-front, saving much hair-pulling down the road.
[edit]
An example of what I use make/redo parallelization for in one-off scripts is bulk conversion of images or sound files; it's usually 1 file in 1 file out and rarely are the tools already heavily parallel.
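For audio, the whole "script" is usually just a wildcard plus one pattern rule, run with -j (directory layout and the ffmpeg invocation here are a sketch, not my actual files):

    WAVS  := $(wildcard src/*.wav)
    FLACS := $(patsubst src/%.wav,out/%.flac,$(WAVS))

    all: $(FLACS)

    out/%.flac: src/%.wav | out
            ffmpeg -i $< $@

    out:
            mkdir -p out

Then `make -j8` keeps the cores busy and only re-encodes files whose sources changed.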
Make is very programmable; here's something from our code base:
    # Yes I am aware that this looks like TECO and Prolog had a baby.
    $(foreach prog,${3P-nonboost-packages},$(patsubst %,3P-build-%/${prog},${MAKE_CONFIGURATIONS})): 3P-src/$${@F} $(patsubst %,toolchain-%/_env,${CONFIGURATIONS})
            @echo Building ${@F} for $(subst 3P-build-,,${@D})
            @mkdir -p $@
            @if [ -f "$</CMakeLists.txt" ] ; then \
                    ${env-$(subst 3P-build-,,${@D})} cd $@ ; \
                    cmake -DCMAKE_MODULE_PATH='../../3P-$(subst 3P-build-,,${@D})/lib;../../toolchain-$(subst 3P-build-,,${@D})/lib' ${${@F}-cmake} -DCMAKE_INSTALL_PREFIX=../../3P-$(subst 3P-build-,,${@D}) ../../$< && cmake --build . ; fi
Make was what really drove home the concept of "developer time" for me. After spending a quantity of time learning more Makefile tricks than I knew before, and fixing the Makefile for our C/C++ code to handle header files correctly so that I would stop having to run make clean constantly, I realized that the Makefile had cost the company ~$1000 for me to write. I'd like to think it saved the company more than that in future dev time, but it was still startling to realize that by deciding to rewrite the Makefile I had effectively decided that the company should spend $1000 on a build config.
Using a plethora of disconnected, non-build targets in a Makefile to provide a "make <command>" language sometimes seems like such an anti-pattern. Those commands just want to be simple scripts, right?
Why does that pattern persist? I believe it is for these psycho-technical reasons.
1. The current directory "." is usually not in PATH for security reasons. But make ignores that; it reads a Makefile from the current directory.
The psychological hypothesis here is that people somehow like typing
    make bundle
    make yarn
    make db-reset
compared to the no-Makefile alternative scripts:
    ./bundle
    ./yarn
    ./db-reset
Something always feels off about running a program as ./name.
2. If there are any shared make variables between the non-build utility steps like "make bundle" and actual build steps, then it's easier for those utility steps to be in the Makefile so they can interpolate the make variables. The scripted alternative would be to have shell variables in some "vars.sh" file that is sourced by all the commands. But then somehow the Makefile would have to pick those up also in some clean way, probably requiring a ./make wrapper:
    #!/bin/sh
    . ./vars.sh
    # propagate the needed subset of vars to make
    make FOO="$FOO" BAR="$BAR" "$@"
So I think these are some of the main sources of the "pressure" for various project-related automated tasks to go into the Makefile.
Another source of the pressure is that the "<command> <subcommand>" pattern is present elsewhere, like in version control tools "quilt push", "git blame", ...
It has the technical advantage of namespacing. If you have a make target called "ls", then "make ls" doesn't clash in any way with /bin/ls.
This uses ImageMagick commands to massage the various image files into the desired form without me having to manually invoke the commands image by image. Admittedly, on looking at it, I don't think I got a great deal of dependency-tracking mileage out of make in this case, because the source images weren't actually changing—only the build process was changing, and make doesn't track that (although redo, for example, does.) But in cases where you're dynamically adding new input files, make is super helpful for generating thumbnails or whatever from them. As long as the filenames don't have spaces.
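For the thumbnail case it boils down to one pattern rule, something like (directory names made up):

    THUMBS := $(patsubst originals/%.jpg,thumbs/%.jpg,$(wildcard originals/*.jpg))

    thumbnails: $(THUMBS)

    thumbs/%.jpg: originals/%.jpg
            mkdir -p thumbs
            convert $< -thumbnail 200x200 $@

Drop a new image into originals/ and `make thumbnails` picks it up.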
My most immediate work task for the morning is helping a colleague figure out why SCons is failing to build the JNI binding for our project, although the old makefile builds it fine. Sigh.
Make has very serious problems with its design, in my opinion. Its builds are not hermetic. There's no way to distribute/include another person's makefile. The language it uses is extremely complicated and focuses on being compact instead of easy to understand.
I wish we all dropped makefiles and decided on a single build system in the Bazel lineage to lean on. The world would be a better place if everything came with BUILD files.
GNU make build systems can be horrible to debug when they get complicated. CMake and some of the other build systems can generate makefiles, in addition to running checks prior to building the project which are useful for finding all dependencies. I find it easier to work with CMake than with pure makefiles.
IMHO make is not a great target for code generation. I.e. CMake should probably compile to something like Ninja (the way Meson does; actually I haven't checked whether CMake could potentially compile to Ninja).
Cmake is a weird tool, it's kind of ok, one can see why it was built, but it is astonishing how little benefit it provides over autoconf+automake as a makefile generator.
The thing about pure Makefiles is: they don't scale well. They usually start simple, and with project growth they accumulate parameters, features and tasks, until they become an unbearable maintenance burden. I remember working on a project where I deleted a Makefile and rewrote it from scratch, just because it contained the tinkering of about 10 devs, most of whom were not actively developing the project anymore. In the end it was an order of magnitude smaller and faster. But I am pretty sure it has either grown massively again over time, or it has fallen out of use.
I use Makefiles when I'm learning the ropes of a new system build tool. E.g. "I want to do <foo>, so I run `make <foo>`", and the make target named <foo> has all the commands to build what I want. I did this when I was learning how Docker worked. I put the incantation to build a new image into a Makefile, as well as how to run the container and exec into it. Not the best system, but works for me as a kind of living notebook.
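Something in the spirit of (image and container names are placeholders, not the real project):

    IMAGE := my-app

    .PHONY: build run shell

    build:
            docker build -t $(IMAGE) .

    run: build
            docker run --rm -it --name $(IMAGE)-dev $(IMAGE)

    shell:
            docker exec -it $(IMAGE)-dev /bin/sh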
I run Makefiles in other places too. <3 Couldn't live without it.
Make is like the Lisp of the build world. It's powerful and you can build anything with it, but it won't be compatible with anyone else's stuff the way it would be in a more opinionated system, so you can't leverage other peoples' work much.
I used make for decades, then switched to CMake, got burned too many times, and now I've moved on to Meson. There really isn't a good build system for C/C++, which is a shame :/
Super simple, uses your favorite language for specifying the build (usually bash, but ... anything goes), much more robust than make, parallelizes builds, and can be included in your project as a 800-line bash script so that your users don't have to install it.
It's not blaze/bazel (et al) -- no hermetic builds, for example, but it doesn't put the '.o' files out there unless the compilation is successful, and it does verify file contents rather than just time by default - most build systems fail on the last two.