Hacker News
How I stopped worrying and loved Makefiles (gagor.pro)
227 points by ___timor___ 14 days ago | 140 comments



If you’re really looking for a tool to collect small steps/script I highly recommend you check out the ‘just’ cli tool. It’s completely replaced our use of Make and the syntax is much easier.

I've always wanted to use one of those, but you have to convince the other developers on your team to use it as well. Make is a relatively easy sell because it's on pretty much every system or at least trivial to install and has decades of reputation.

> it's on pretty much every system

Some systems have GNU make, some BSD make.

> at least trivial to install

Then the same goes for `just`; package managers have it.


This is why I'm uninterested in just. I'd rather just use make than add another dependency.

Heresy, I know, but Make is a dependency in windows land.

Luckily few of my users build from source on Windows.

It would have taken you less time to install it than it took you to write your comment.

You're not accounting for updating my hundreds of existing Makefiles and CI config, and asking users to install yet another build dependency.

If it's hundreds, I'll agree with you, obviously that would be a huge effort to migrate.

But you made the argument that adding another build dependency is somehow difficult or undesirable which I can't see; the tool is installed in seconds on all major OS-es and package managers.

So let's not conflate those two separate things, yeah?


Why ask all my users to install just, when make works fine? Adding a build dependency is always undesirable.

just doesn't even do many of the things make does, as the README states: "just is a command runner, not a build system"


Disputing that `make` runs fine is the hill I'll proudly die on. Or simply put, I'll disagree forever. It's full of subtle issues that I got tired of tip-toeing around. It's a never-ending black hole of programmer time.

And yes, `just` isn't a build system per se; you still need something to track modified source files and whether they map to the requisite build artifacts. That's good for C/C++ and maybe stuff like LaTeX -> HTML, but outside of that it's obsolete, and thank the gods for that.

If it works well for you, cool. As mentioned up-thread I wouldn't attempt a switch with hundreds of Makefiles already working fine either.

But I'm very happy to tell you that your case is the vanishing minority. Happily the world has moved on.


I've been using this on almost all of my projects, and am really pleased with it. Shell autocompletion is a nice bonus. If you also use Nix, check out `just-flake`:

https://github.com/juspay/just-flake


Have moved all my 'frontend' Makefiles to Just and couldn't be happier.

It doesn't do conditional processing of only out-of-date dependencies though, which is something that is often needed.

Seconded. For all the million things we used Makefiles for besides compiling software, Just is much more ergonomic.

Looks interesting.

Agree.

I should specifically mention their docs. The docs are easily approachable with plenty of clear and concise examples. I even have a pdf copy of the doc book for quick reference.



Came here to say the same. Make works fine, until it evolves into a web of commands and associated shell scripts, and then most users give up on trying to understand what is happening. Justfiles are much more manageable.

lol that's a purple link for me: https://github.com/casey/just

If you're going to use Makefiles as a top-level build wrapper you might be interested in self-documenting targets.

https://marmelab.com/blog/2016/02/29/auto-documented-makefil...


The punchline here is a help target that digs through your makefiles looking for comments:

  help:
  	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'

I stumbled over .*? since ? usually means optional, but it turns out it means lazy (non-greedy) in this context. The $$ is makefile escaping. Dropping the grep and changing the regex slightly, I just appended this to an overengineered makefile:

  # Help idea derived from https://marmelab.com/blog/2016/02/29/auto-documented-makefile.html
  # Prints the help-text from `target: ## help-text`, slightly reformatted and sorted
  .PHONY: help
  help: ## Write this help
  	awk 'BEGIN {FS = ":.*#+"}; /^[a-zA-Z_*.-]+:.*## .*$$/ {printf "%-30s %s\n", $$1, $$2}' $(MAKEFILE_LIST) | sort

Very nice idea, thanks for sharing it!

If you can give up regexp greediness sed+column can do it:

    sed -rn 's/^([^:]+):.*[ ]##[ ](.+)/\1:\2/p' $(MAKEFILE_LIST) | column -ts: -l2 | sort

Sed confuses me more than awk but you're right. That would also remove the only use of awk in my makefile (sed is there already for hacking around spaces in filenames).

Whitespace padding output in sed is probably horrible, column looks simpler than printf via bash or trying to use make's $info.


Whilst acknowledging that "Confused by SED" has overlap with "have used SED for 40 years and have three books purely on SED" I can recommend

https://www.grymoire.com/Unix/Sed.html

as a reference some might occasionally swear by.

    Anyhow, sed is a marvelous utility. Unfortunately, most people never learn its real power. The language is very simple, but the documentation is terrible. The Solaris on-line manual pages for sed are five pages long, and two of those pages describe the 34 different errors you can get. A program that spends as much space documenting the errors as it does documenting the language has a serious learning curve.

Appreciated, reading through it. I suspect the majority of the sed experience can be attributed to it using "posix regular expressions" by default. It was about a decade after first discovering sed that I realised passing -E was really important.

It is difficult for newcomers to guess that "extended regular expressions" refers to the barely-usable subset of "regular expressions" and "posix regular expressions" are terrible in comparison to either.

edit: alright, yes, one can program in that. Sed can recurse.

  .PHONY: help3
  help3:
   sed -nE 's/^([a-zA-Z_*.-]+):.*## (.*)$$/\1 :\2/ p' \
   $(MAKEFILE_LIST) | \
   sed -E -e ':again s/^([^:]{1,16})[:]([^:]+)$$/\1 :\2/ ' -e 't again '  |\
   sed -E 's/^([^ ]*)([ ]*):(.*)$$/\1:\2\3/' |\
    sort

The first invocation filters out the lines of interest; the second one space-pads to 16. That works by putting the colon before the help text and repeatedly inserting a space before the colon until there are at least sixteen non-colon characters in the first group.

Composing the -n/p combination with acting on everything is a stumbling block for merging the multiple invocations together, but I expect it to be solvable.


After a slightly dubious use of time, I can confirm that column is not necessary. Also noticed that the original version missed double_colon:: style targets. I fear sort is not necessary either, but one has to draw the line somewhere.

  HELP_PADDING := 30
  
  .PHONY: awkhelp
  awkhelp: ## Write this help using awk
   @echo "awkhelp:"
   @awk 'BEGIN {FS = ":.*#+"}; /^[a-zA-Z_*.-]+:.*## .*$$/ {printf "  %-'$(HELP_PADDING)'s %s\n", $$1, $$2}' \
   $(MAKEFILE_LIST) | \
   sort
  
  .PHONY: sedhelp
  sedhelp: ## Write this help using sed
   @echo "sedhelp:"
   @sed -E \
   -e '/^([a-zA-Z_*.-]+::?[ ]*)##[ ]*([^#]*)$$/ !d # grep' \
   -e 's/([a-zA-Z_*.-]+:):?(.*)/  \1\2/ # drop :: and prefix pad' \
   -e ':again s/^([^#]{1,'$(HELP_PADDING)'})##[ ]*([^#]*)$$/\1 ##\2/ # insert a space' \
   -e 't again # do it again (termination is via {1, HELP_PADDING})' \
   -e 's/^([^#]*)##([^#]*)$$/\1\2/ # remove the ##' \
   $(MAKEFILE_LIST) | \
   sort

Which make version does the latter command use? For GNU make 4.2.1 on Linux, I had more luck with something along the lines of

    @awk '/^[a-zA-Z_-]+:[^#]*## .*$$/ {match($$0, /## (.*)$$/, arr); printf "\033[36m%-30s\033[0m %s\n", $$1, substr($$0, RSTART+3)}' $(MAKEFILE_LIST)

Looks like 4.3 but I don't think it matters - awk vs gawk/nawk might be significant though, gawk 5.2 on the machine I ran this on.

The match with substr is interesting. It's more complicated than setting the field separator to something like :|#+ but should mean : in the help text works. For something one only writes and debugs once, probably better to do the complicated thing that always works.

gawk will write the groups to an array, that's possibly more legible (and slower? should be slower than the leading non-capture //)

  @gawk 'match($$0, /^([a-zA-Z_*.-]+):.*## (.*)$$/, arr) {printf "  %-30s %s\n", arr[1], arr[2]}' $(MAKEFILE_LIST) | sort

I have been doing this for a long while now, and it's great

I've been using this for a while now - it's fantastic, highly recommended for non-core-Unix-build-stuff. Just make `help` your default recipe and voila, a massive drop in people asking "how do I run tests in make": they probably tried to `make` already and it told them how.
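If it helps anyone, the "default recipe" part is a one-liner in GNU make; a sketch, assuming you already have a `help` target like the ones discussed above:

```make
# Run `help` on a bare `make` invocation; without this,
# the first target in the file would be the default.
.DEFAULT_GOAL := help

.PHONY: help
help: ## Write this help
	@awk 'BEGIN {FS = ":.*?## "}; /^[a-zA-Z_-]+:.*?## / {printf "%-30s %s\n", $$1, $$2}' $(MAKEFILE_LIST) | sort
```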

Great idea, added to my todo :)

Great tip! Thanks for sharing :-)

Even the slightest attempt at guessing the host system, or at searching for tools present on a system, will quickly convert this into a blog article earnestly begging for deep societal change and analyzing the inevitable march of time.


The real reason container systems like Docker became so popular.

The real _and unfortunate_ reason systems like Docker became so popular: caving in to the problem and creating endless replication and mess.

Hahaha. Yeap. And project dependency analysis and library feature detection quickly demand the GNU variant, autotools (gasp!), clunky scripts, or something else. Use make for simple things and simple things only.

Eric S. Raymond has a new project to eliminate the need for autotools: https://gitlab.com/esr/autodafe

It appears to give the benefits of autotools with the simplicity of make.


Yeah. Which is why I wouldn't implement that in the first place - I come down hard on the idea that you should ship your dependencies, have them in a standard place, or use a third-party resolution system (like every vaguely modern setup does: C#, Python, Ruby, Node, Rust, even Java all understand that this is not optional anymore).

It's a code smell to me when your build system starts to become too complicated and no longer fits on a screen.


You do know that there is software that runs on more than one version of Linux (or even on a *BSD, MacOS or Windows), works with more than a single version of a compiler,...?

That's only really a problem for C/C++, which I have had the misfortune of making build systems for.

The solution is not to ship complicated and bug infested build systems, it is to fix the dependency problem in the same way any other language has done so far and until we do this, ship the dependencies with the program.

And if you aren't making a GUI and don't need to target Windows, just wrap it in Docker, which enforces a functional dependency system and means you can ship your dependencies.


> Thats only really a problem for C/C++

Even if that would be true, still every language that isn't C (or JS in the browser) needs to link or build against some C (or Fortran), which results in more or less working solutions on how to integrate with or build C sources. Of course you may not see that (as long as it works), because somebody else has wrapped that up in a package (or whatever) for your language of choice, but somebody has to do that.

> ship the dependencies with the program.

My post should have been the answer to this argument: this is not possible for example when "shipping" the source for a cross-platform library. Or a cross-platform end-user program/app. Or just about anything which isn't "just" some web-backend or a server of some kind.

Does that mean _you_ need a complicated (I'd call that "working for anything but the most basic stuff") build-system? No.

I am a bit puzzled by this phrase:

> The solution is not to ship complicated and bug infested build systems, it is to fix the dependency problem in the same way any other language has done so far

But any other language than C or C++ (sadly, _way_ less than "any other") solves that by using a complicated and bug infested build system(s) and package manager(s) or a combination of both.


>But any other language than C or C++ (sadly, _way_ less than "any other") solves that by using a complicated and bug infested build system(s) and package manager(s) or a combination of both.

Which would go away if C++ had a working package manager that worked the same way as every other language's, was written only once, and was about as bug-free as the compiler. This would also allow you to ship the source code for the library with a simple file that lists the dependencies you need.

I guess my final issue boils down to this: CMake does too much and too little. It is too hard to get it to reliably pull down the libraries I want to use, and it can do so much as part of the build that it becomes too complicated.

Do one thing, and do it well.


Oh, please don't get me wrong, I hate CMake and Autotools (I'm not sure which is worse). And both Vcpkg and Conan have their own problems.

> The solution is not to ship complicated and bug infested build systems, it is to fix the dependency problem in the same way any other language has done so far

With maybe one (or two) exceptions, those other languages' build systems are incredibly susceptible to supply-chain attacks.

And, to be honest, unless you have a burning need for autoconf's main value proposition (cross-compiling for a different target system), plain gnu-make and storing your dependencies in your repo is probably a lot safer than many other build systems.

I've built software with dependencies on libpng, libcurl, libsodium and more and was confident in the security of the resulting binary. I've also done one or two node.js projects, and had much less confidence that it won't be supply-chain attacked on the next build.


> if you aren't making a GUI and don't need to target Windows, just wrap it in Docker

In other words, if you also don't need to target MacOS or *BSD, which were the parent's stated requirements. MacOS can't run Docker natively, and the BSDs seem to be unstable/unsupported targets, so the only stable way to get Docker on such a machine is to run Linux in a VM. Which isn't really a solution for cross-platform development so much as a denial of it.

(Also, if you were to go down that route, why not just ship the VM; rather than dragging in all the crap that Docker entails?)


I use makefiles a lot as a make-do, but I must admit the syntax does my head in at times. If anyone has a good resource that teaches makefiles progressively, I'd be interested.

The main issue I have is that it goes from dead simple to pit of what the hell is happening with various things interacting with each others.



I just use copilot for this and for bash scripting, both of which I do rarely enough that I forget the syntax.

> If anyone has a good resource that teaches makefile progressively

https://www.gnu.org/software/make/manual/make.html


This pattern of usage always seems like abuse.

When `make` is used as a glorified front-end to `bash` scriptlets, why not use `bash` directly instead of having two-level of scripting?

See: https://blog.aloni.org/posts/bash-functional-command-relay/


Because PHONY targets can do that, too, and without the needless manual work. Because a Makefile can still do Makefile things: PHONY targets depending on other PHONY targets, which so happens to depend on that one openapi json export you also create, which in turn depends on ...

You can do that in Bash. And now you've reinvented Makefile, but poorly.
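A minimal sketch of that shape (hypothetical target and script names): two phony targets chained onto one real file, which make only regenerates when its source changes.

```make
.PHONY: dev check

# Phony target depending on another phony target...
dev: check
	./run-dev-server.sh

# ...which happens to depend on that one openapi json export...
check: openapi.json
	./validate-api.sh openapi.json

# ...which in turn is rebuilt only when the spec is newer.
openapi.json: api/spec.yaml
	./export-openapi.sh api/spec.yaml > $@
```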


Because the Makefile also becomes a central record of what you can run in a project, without having dozens of different shell scripts. You can comment on targets and depend on others: Makefile targets to restore, to build i18n files, etc.

I made a bash script that takes your Makefile and gives you a nice dialog menu for the targets that have comments. Works nicely as a self documenting project command menu.

https://gist.github.com/oppianmatt/dcc6f19542b080973e6164c71...

https://private-user-images.githubusercontent.com/48596/3262...


GNU Make gives me:

   - tab completion of targets
   - automatic dependency execution
   - automatic entry points between every task
   - result caching
   - parallel execution
Yes, it’s possible to do all of this by hand in shell scripts. But why would I, when Make is ubiquitous and battle-tested?
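To make the caching and parallelism points concrete, a sketch (file names made up): with `make -j2 app`, the two objects compile in parallel, and touching only `a.c` recompiles only `a.o`.

```make
# Link step waits for both objects; `$^` is all prerequisites.
app: a.o b.o
	cc -o $@ $^

# Pattern rule: each object rebuilds only when its source is newer.
%.o: %.c
	cc -c -o $@ $<
```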

Tab completion of targets is usually done by a separate package, not by GNU make itself: bash-completion.

Yes, that's true. But it is "on-by-default" for 95% of desktop Linux machines, with no special action needed.

It's on until you try to tab-complete a filename that is created through make but that the completion script can't detect, at which point the entire bash-completion package gets uninstalled.

What?

Making sure a dependency is up to date before doing something is annoying. Building a representation of dependencies to figure out what can be done in parallel is a bit more complex. Doing it for dozens of targets is a major pain in the backside.

Sure, you can do it in bash, or python, or whatever. But then you have a cumbersome, not particularly interesting piece of code full of boilerplate. Of course, you can design it a bit, organise things neatly, and then use a config file because fiddling with the code in each project is unsustainable in the long run. At this point, you’ve just made a poor copy of make and thrown away all the good bits that result from decades of experience and weird corner cases.

The syntax of Makefiles is terrible, but make itself is very useful and versatile.

And that pattern is not abuse, it’s the sort of thing Make was designed for. It’s just that we’re used to thinking of make as this old thing that just runs a compiler, and that’s such a pain to deal with that we need Makefile generators to do it properly. And certainly that’s true for complex software compilation, but make is more versatile than that.


`make` does dependency resolution. That's its original job, by the way, and calling out the dependency resolution steps to bash was the original intention.

Also there are plugins for all editors to allow clicking on a target and it runs code. This would be significantly harder with bash.

It is abuse, but people love ergonomics more than they love reducing dependencies

If you use Make to build, and you use Make to deploy, it's good for ergonomics, and it's good for reducing dependencies.

Certainly abuse, but hey.


Turns out software is for humans, mostly :)

My favourite (ab)use of `make` is as a batch runner: https://news.ycombinator.com/item?id=32441602

This (ab)use of `make` runs multiple times a day, every day, and works perfectly every time.

The inspiration of this was an (ab)use of `make` as a way to paralellize Linux system startup: https://web.archive.org/web/20110606144530/http://www.ibm.co...


I (ab)use make to manage my dotfiles.

https://github.com/matheusmoreira/.files/blob/master/GNUmake...

I'm surprised at how well this thing works every time I use it. I even blogged about it.

https://www.matheusmoreira.com/articles/managing-dotfiles-wi...

Recently made a tool that processes the make database and prints the phony targets and their dependencies:

https://github.com/matheusmoreira/.files/blob/master/~/.loca...

I use it as a sort of makefile help command. Works surprisingly well too.


GNU Make is a surprisingly lisplike language. At some point I realized it had an eval function which I could use to metaprogram it. This actually ruined one of my projects, I got so lost trying to create the perfect makefile system that nothing short of a rewrite would fix it. As a side effect, I finally understood what autoconf's purpose was.

You can build GNU Make so it works with Guile Scheme.

One issue I don't hear mentioned often is reuse.

A task runner's tasks could be arbitrarily complicated, pulling in all sorts of dependencies of their own. This is less true for the traditional compile targets make was designed for.

Because the things we do in a Makefile are pretty much always project-local and don't get reused, it limits how much heavy lifting these tasks are likely to do for us. Whereas if you built your own CLI in Python with Click or something, you would be able to make it a development dependency of your project. You can afford to invest more in those tasks because they'll be reused.

The Just command runner has the same problem, but at least it's designed to be a task runner.


Build a CLI / complex task as part of your project, then invoke it via make. This pattern is much more about documenting and composing steps than implementing them.

Why invoke it via make when it can invoke itself? It's just another dependency that's not needed in this scenario.

Make is leaner and more agile than a custom CLI. It takes no time to get started, and there is no boilerplate. Removing or adding steps is trivial, running shell commands is trivial, and hooking into the dependency graph is trivial. Parallelism is built-in, as is dependency resolution. Tab-completion is standard on most Linux distros.

It's also better from an architectural separation perspective. Your custom CLI will have custom commands and flags, and probably will need to be called in some standardized way. Make is very good at calling your toolchain / build-system (whatever that might be) with the exact arguments that you want. And things like "make all" or "make clean" are muscle-memory for hundreds of thousands of developers.

Why mix your custom tooling with the task of standardizing an entry point?

I've had really great success over the years with the pattern of "some build system or another (cmake, bazel, autotools, etc) orchestrated by a top-level Makefile." It's simple, portable, and flexible. What's not to like, other than ugly syntax?


I know a lot of developers who agree with you.

But how is make dependency free? You need to install make. Which version? GNU make, or FreeBSD make? What platform are you installing it on? What version? In my team we had to get all our devs to manually upgrade from 3 to 4 as we were using modern make features to make it a nicer task runner.

These are all things you've already had to deal with in the custom CLI, which is also a perfectly good entry point. You also have a lot more control of command line arguments, rather than just make targets ("just" has also added this as a feature)


I don't like makefiles, but I've been enjoying justfiles: https://github.com/casey/just

Ninja is simple and fast, but intentionally limited in order to not be programmable. Make is powerful and versatile (especially the GNU variant) but has an arcane syntax and lots of pitfalls. I feel that there is a niche between make and ninja for the task runner.

> I feel that there is a niche between make and ninja for the task runner.

I think that's called CMake.


As one of the unfortunate souls damned to using CMake regularly, I can confidently say that it is slower, less maintainable, and less intelligible than make.

No, CMake is a compatibility layer on top of existing task runners like make and ninja. I don't want a compatibility layer, and also CMake has even more features than make.

CMake is a (not the) correct answer according to the Ninja manual[1]. "Some explicit non-goals: convenient syntax for writing build files by hand. You should generate your ninja files using another program. This is how we can sidestep many policy decisions."

The other options are here: https://github.com/ninja-build/ninja/wiki/List-of-generators...

[1] https://ninja-build.org/manual.html#_design_goals


It is entirely correct that Ninja is technically designed for a related but different problem. But you can write Ninja by hand with some restrictions (I have done so, for example), so bringing back some of those sidestepped decisions may still be worthwhile.

CMake is a very cool build system that can do a lot; you can use it to run tests, fetch sources, and everything else. The documentation is the only thing that I find quite suboptimal: they could really add examples and better explain things, or at least keep a list of projects that they believe use CMake correctly, so one can have some guidance. It took me five years before I was comfortable writing CMake from scratch.

I had a boss who once quipped that CMake became much easier for him to understand and write once he realized it was just a really shitty version of BASIC with only global variables. (He later added "but two separate namespaces for them" because of the prevalent use of environmental variables as well as CMake-specific variables)

We need something for CMake like what Elixir is to Erlang. I hate the syntax, fit, and finish but I am very thankful it exists.

Unfortunately make's behaviour around dynamically setting variables/environment variables is insane and quickly leads you towards hairy eval commands with extremely tricky quoting & escaping.

I suggest using := as in `APP_URL := http://localhost` vs raw "=". The colon-equals format means "set value now", so it's easier to understand.
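A minimal illustration of the difference, since it bites people (this is standard GNU make behaviour, not specific to any project):

```make
x := foo
y := $(x) bar    # ":=" expands immediately: y is "foo bar" forever
x := later

a = foo
b = $(a) bar     # "=" expands at use: b will reflect a's final value
a = later

.PHONY: show
show:
	@echo '$(y)'   # prints: foo bar
	@echo '$(b)'   # prints: later bar
```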

I've never used eval.

Make's use of Bash can lead to hairy quoting/escaping. I use Make as a "dumb high-level runner" and put any sort of intelligence, like conditionals or loops or networking, in lower-level scripts or programs.

Make is an orchestrator.


What's the rationale for using makefiles as script runners over just having a directory with scripts inside?

Not for compiling, just as script runner.

I see this practice often and I haven't found a good reason


If scripts need particular arguments, then make is a good place to record them.

I use it quite a lot for automating deployments - if you want to Terraform up a VM:

  make colo1-fooserver01.vm

Then if you want to run Ansible against it:

  make colo1-fooserver01

You don’t have to remember or type all of the flags or arguments: just type make and hit tab for a list of the targets that you can build.
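One way to wire that up is a pair of pattern rules (the script and playbook names here are hypothetical; this is a sketch, not the actual setup):

```make
# `make <host>.vm` provisions the VM; `$*` is the pattern stem,
# e.g. colo1-fooserver01.
%.vm:
	./provision-vm.sh $*

# `make <host>` runs Ansible, provisioning first if needed.
# A match-anything rule like this is blunt; explicit per-host
# targets generated from a host list would be safer.
%: %.vm
	ansible-playbook -l $* site.yml
```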

Most shells will tab complete after `./scripts/` too. In fact that's probably more common than make completion.

I think the real reason is you have it all in one file rather than multiple scripts which makes it easier to edit and maintain.


Quite - one makefile rather than dozens of scripts which all do practically the same things.

But this can literally just be done in a simple shell script as well. The makefile ends up just being a redundant way to run a shell script.

> But this can literally just be done in a simple shell script as well.

Only if there's no dependencies. It's unusual that GP's type of usage has no dependencies.


When my shell scripts depend on another script ... they run the other script. Make definitely has its place, especially when dependencies get complex and parallel, but it's hardly necessary for simple cases. Once Make is needed, it's trivial to drop in and have it wrap the standalone scripts.

> When my shell scripts depend on another script ... they run the other script.

I hear you, but you're running the other script unconditionally. If it downloads something, it will download it every time you run the first script.

In this simple case, make runs the other script conditionally, so it need not run every time.
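A concrete sketch of that (file name and URL hypothetical): the download recipe only runs while data.csv is absent, whereas a chained shell script would fetch it every time.

```make
process: data.csv
	./process.sh data.csv

# No prerequisites, so this runs only when data.csv doesn't exist.
data.csv:
	curl -fsSL -o $@ https://example.com/data.csv
```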


I build .tf files from parameters for each host in the Makefile (and script which knows the vSphere topology) for one-shot execution (it only creates the VM, it doesn’t manage the lifecycle) and also template config that needs to be done before deployment - there are plenty of dependencies

Dependency management, definitely. Loads of scripts don't work until X has been done, and X, Y, Z, and sometimes QWERTY have to be done first, and they take minutes and a ton of bandwidth so you don't want to do them unless you have to...

... and if your scripts do all that, they've basically rebuilt make, but it's undocumented and worse.

(I say this as someone with LOTS of experience with make, and am not really a fan because I know too much and it's horrifying. But I dislike custom crippled versions even more.)


It can help abstract the differences you may have across projects. If you're on a team with many projects/repositories, having conventions across them all helps improve onboarding, cross-teamwork and promotes better dev ux. A really simple way to do this is make. It lets you have the common targets and convert them to the relevant target. This can become more useful as you write automation for CI and deployments for all your projects.

Likely a declarative way to specify dependencies. But not sure if make as a tool for that is the best option in general.

Dependency management and automatic parallelization (via `make -j`).

I’m tired of seeing articles full of examples of .PHONY targets. Make works with the file system!

I've gotten so used to using Makefiles with Go dev that for my other side projects with node, python, ruby, etc. I wrap the tools and commands I can't remember in a Makefile. A quick squizz after a few months away reminds me of how that environment works with building, testing, etc.

The "Simple Makefile for Python projects" exemplifies why I dislike (ab)using make. It doesn't actually track deps properly, so venv doesn't get updated if requirements.txt changes, and nor does the dependency change tracking work properly for the test target. To make it more correct you'd need a bunch of .stamp files and/or globs, and even then it might be iffy. For lots of uses the simple file-mtime-based change/dep tracking is just too crude, and phony targets are largely an antipattern.

For script-running just is great, for full dep tracking build tool something like buck2 is an improvement.

The one place where make shines is when your workflow is truly file-based, so all steps can really be described as some variations of "transform file A to file B".


My team uses Make to handle the top-level scripting for a Python development project, and it works great. It was pretty easy to set up the correct dependency relationships.

Make is a powerful tool. You just have to understand how it thinks about the world, and adjust your own thinking a bit if needed.

If you just want to have tasks that depend on other tasks, you don't need stamps, phony, or anything else.

But what happens when you want to say "only rebuild my venv if requirements.txt changed"? that's a file dependency that you can reasonably express between requirements.txt and venv/bin/activate. And then all of a sudden, you're squarely in Make's wheelhouse.
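Spelled out, that venv rule might look like this (a sketch; the trailing `touch` keeps the marker file newer than requirements.txt after pip finishes):

```make
venv/bin/activate: requirements.txt
	python3 -m venv venv
	venv/bin/pip install -r requirements.txt
	touch venv/bin/activate

# Tasks that need the venv just depend on the marker file.
.PHONY: test
test: venv/bin/activate
	venv/bin/python -m pytest
```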


i always hated make. until i read the manual...

Now this sounds like wisdom. I'm going to read the French manual.

Thanks! That article demystified quite a bit of the magic of Makefiles for me.

However, even after 10+ years of professional dev work during which I've regularly crossed paths with Makefiles, they still scare me, and I've still never written one from scratch myself. I cling to bash scripts, which I'm also a rookie at (or, for more complex cases, I write Python scripts, which I'm much more comfortable with).

I guess one day I'll read the manual, and digest some tutorials, and actually learn make. But I've made it this far...


> However, even after 10+ years of professional dev work during which I've regularly crossed paths with Makefiles, they still scare me, and I've still never written one from scratch myself.

Make has its warts (look at when and what it was designed for, after all!), but I've found it much easier to write Makefiles than YAML-for-github-runners.

If you're used to YAML, you get pleasantly surprised when using Make, which does execution of dependency trees in a more readable manner than most CI/CD YAML files do.


I'd love some minor tweak of Make to compare/cache with hashes instead of mtime. A worse-is-better Bazel if you will.

This is better achieved with compiler-level tooling, rather than in Make. It's pretty easy to replace `cc` with a short script that runs the preprocessor and compares the result with a cache.

If the source code is kept tight, it doesn't yield much of a speed-up, though: preprocessing hundreds of unnecessary headers can take a lot of time. Better to put some effort into moving unnecessary #includes out of header files. In fact, you can use your log of cache hits to guide that work.


I also wish I could make certain variables/flags part of the dependencies of a file, like if I have something with a lot of #defines that I really want to rebuild any time CPPFLAGS changes

This might be worth building into the makefile interpreter. Doing it in the makefiles themselves is quite difficult to get right and very messy.
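For reference, the classic in-makefile workaround (which also illustrates the messiness): write the flags to a stamp file, rewrite it only when they change, and list it as a prerequisite. Names here are illustrative.

```make
CPPFLAGS ?= -DDEBUG=1

# .cppflags is checked on every run (via FORCE) but only rewritten when
# $(CPPFLAGS) differs, so objects rebuild exactly when the flags change.
.cppflags: FORCE
	@echo '$(CPPFLAGS)' | cmp -s - $@ || echo '$(CPPFLAGS)' > $@

FORCE:
.PHONY: FORCE

main.o: main.c .cppflags
	$(CC) $(CPPFLAGS) -c -o $@ $<
```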

Most "modern" build tools do ruin make. Recently I did this for Go: https://gist.github.com/ceving/edeb6f58429d552e8828a70640db3... But it does not feel right.

I’ve replaced my Make usage mostly with taskfile.dev and/or Warp workflows/notebooks

There is quite a lot to love about make! I still haven't seen an alternative that is substantially better, and most are worse in one way or another.

My opinion may be a bit skewed by the fact that I write code that gets built on a variety of different platforms, though, and make is essentially universal. It lets me have a consistent build process regardless of platform.

It's also very useful for automation that isn't related to building code.


One minor tip is that you can define .PHONY multiple times. I find this much easier to manage because the .PHONY definition is right next to the target itself.

    .PHONY: foo
    foo:
        echo foo

    bar: barin
        cp barin bar

    .PHONY: baz
    baz: bar
        echo baz

For people using make and vscode my plugin is a must have:

https://marketplace.visualstudio.com/items?itemName=lfm.vsco...

It allows you to click above target to run target.


I'm a bit unsure what this does better than just calling those few lines directly in a shell script.

The fact that make is basically useless on Windows still makes it "not the first thing to try", to be honest. And it's rare to still see a project that can't be cross-platform, only lots that went "well my computer runs XYZ so I'm only compiling on that" =)

Make works just fine on Windows.

"Can be made to work", absolutely. "Works fine", gonna have to disagree. The hoops one needs to jump through for every single new project that uses Makefiles in some new and creative "works flawlessly on unixy systems, why aren't you on one of those?" way are appreciably larger than zero.

If you’re very careful with forward-vs-back slashes and your username isn’t “Firstname Lastname”, sure. But in practice there are tons of issues, especially for non-expert users.

I adore Makefiles as they're a very simple way to create a higher level abstraction in my projects. It builds data and runs small bits of code, so I can concentrate on the business-level value. I haven't found a better/simpler way than good old Makefiles.

Make is excellent if you use it properly to model your dependencies. This works really well for languages like C/C++, but I think Make really struggles with languages like Go, JavaScript, and Python, or when you're using a large combination of technologies.

I've found Earthly [0] to be the _perfect_ tool to replace Make. It's a familiar syntax (combination of Dockerfiles + Makefiles). Every target is run in an isolated Docker container, and each target can copy files from other targets. This allows Earthly to perform caching and parallelization for free, and in addition you get lots of safety with containerization. I've been using Earthly for a couple of years now and I love it.

Some things I've built with it:

* At work [1], we use it to build Docker images for E2E testing. This includes building a Go project, our mkdocs documentation, our Vue UI, and a ton of little scripts all over the place for generating documentation, release notes, dependency information (like the licenses of our deps), etc.

* I used it to create my macOS cross compiler project [2].

* A project for playing a collaborative game of Pokemon on Discord [3]

IMO Makefiles are great if you have a few small targets. If you're looking at more than 50 lines, if your project uses many languages, or if you need to run targets in a Docker container, then Earthly is a great choice.

[0]: https://earthly.dev/

[1]: https://p3m.dev/

[2]: https://github.com/shepherdjerred/macos-cross-compiler

[3]: https://github.com/shepherdjerred/discord-plays-pokemon


Taskfiles are pretty cool. The tab/space thing makes Makefiles painful for me.

I learned to stop worrying about Makefiles and loved Rakefiles instead because they allow you to write Ruby code anytime you need to do slightly complex tasks.

Rakefile or Vagrantfile. Ruby's syntax is perfect for custom DSLs.

For simple things, make is fine. It does file caching and the GNU tool does parallel concurrency.

For anything needing light-to-moderate software configuration management, cmake is readily available and simple.

If you're running a Google then you're using a build and DVCS that make use of extensive caching, synthetic filesystems, and tools that wrap other tools.

I won't say what not to use. ;)


Obligatory reminder that the phrase "How I stopped worrying and loved X" was supposed to be a sign of insanity, not an endorsement of X.

> on my system binary is only 16kB in size,

I don't believe that.


119k on MacOS.

$ make love

make: *** No rule to make target `love'.  Stop.


tell me you've never seen Dr. Strangelove without telling me you've never seen Dr. Strangelove

Makefiles fill that great need for a high-level 'scripting DSL', where you have a lot of different programs (or scripts), with a loose set of dependencies or order of operation, and you want a very simple way to call them, with some very simple logic determining the order, arguments to pass, parallelization, etc. Their ubiquity on all platforms makes it even easier to use them.

I much prefer Make to alternatives like Just or Taskfile. Besides the fact that more people know Make, Make actually has incredibly useful functionality that alternatives remove 'for simplicity', but then later you realize you want that functionality and go back to Make. Sometimes old tricks are the best tricks.


Seeing people reinvent make every 10 years is very frustrating. Just learn make!

Make is conceptually great but brings a lot of legacy baggage. You often need to set up .PHONY targets, reset .SUFFIXES, and/or set MAKEFLAGS += --no-builtin-rules. There's also dollar-symbol variables (which plague Perl and shell as well) which made lots of sense in the 1970s with teletypes but hinder readability today (what the hell was $@ again?).
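The defensive preamble mentioned above, collected in one place (a common convention, not the only one):

```make
# Disable the built-in implicit rules and the legacy suffix-rule machinery,
# so only the rules written in this Makefile apply.
MAKEFLAGS += --no-builtin-rules
.SUFFIXES:

# Targets that are commands, not files, so they always run.
.PHONY: all clean
```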

Or the fact that $FOO is interpreted as $(F)OO without the slightest warning. And of course, if you're in a recipe line, you probably meant $$FOO.

Make certainly has some obscure variables, but of all the basic knowledge of Make you need to learn, $@ is near the top of the list (it's "target"; an @ sign looks kind of like a bullseye. If you want to see it as visiting a dependency graph, it's the dependency you're currently "at").
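For reference, the handful of automatic variables that cover most everyday Makefiles (the rules themselves are illustrative):

```make
# $@ = the target, $< = the first prerequisite, $^ = all prerequisites.
# In a recipe, write $$ when you want the shell's dollar, e.g. $$HOME.
app: main.o util.o
	$(CC) -o $@ $^

%.o: %.c
	$(CC) -c -o $@ $<
```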


Sure, if you use make enough, you likely remember the most important dollar-symbol stuff. But $@ is "all arguments (obeying quoting)" in bash, which is nothing like what it means in make.

That of course becomes "$$@" in a Makefile recipe if you want bash's behavior and not make's... which is one of the reasons I tend to keep my shell scripts in separate files, and only grow a Makefile to wrap them later if I have to. These days I just have the directory of scripts and no Makefile. Even the rare times I do C, I prefer a script that recompiles everything and slapping ccache on top of it (but usually I'm dealing with an existing Makefile, and I just pray that it's not generated by automake)

Practically any language will do for running a set of tasks to compile a program. Unless you already love this "wheel" intimately, just learn a real language and use that.

> Practically any language will do for running a set of tasks to compile a program.

Like? None of the mainstream languages have any sort of builtin for dependency resolution.


Yeah remember tup? It was on hn probably a decade ago, people put a ton of work into it. Now "just" is the hot new thing.


