Makefiles are, in my opinion, far too difficult to read and write compared to the alternatives once you're automating anything beyond a few simple tasks. You'll inevitably have tasks that require several lines of code, complex logic, different settings for production, staging, and development environments, common code shared between build steps, and so on. I find it difficult to make shell scripts robust and maintainable.
I'd much rather use JavaScript's Gulp (especially if my project was using Node like in the post) or Python's Fabric if possible.
You know that you could keep the make tasks simple and have them invoke external scripts that are written in the language of your choosing, right? Building a mound of spaghetti shell code in a single file isn't the only way to construct a Makefile.
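A minimal sketch of that delegation pattern (the `scripts/build.sh` name and the `ENV` variable are made up for illustration, not from the post):

```shell
# Sketch: a thin Makefile target that delegates the real work to an
# external script written in whatever language you prefer.
mkdir -p demo/scripts

cat > demo/scripts/build.sh <<'EOF'
#!/bin/sh
# All the complex logic lives here, not in the Makefile.
echo "building in $1 mode"
EOF
chmod +x demo/scripts/build.sh

# The make target itself stays a one-liner (recipe lines need a tab).
printf 'ENV ?= development\n\nbuild:\n\t@./scripts/build.sh $(ENV)\n' > demo/Makefile

make -C demo --no-print-directory build                 # → building in development mode
make -C demo --no-print-directory build ENV=production  # → building in production mode
```

The Makefile keeps only the dependency wiring and default settings; everything with real logic moves into the script.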
Sure, but once you want to start sharing code between your external scripts and the Makefile, it makes a lot more sense to me to write it all in a single more maintainable language.
Well, I did say "in my opinion". For example, I didn't find this code from the post readable (and this comment is directed at Make, not the script writer):
    BIN := ./node_modules/.bin
    UGLIFY ?= $(BIN)/uglify-js
    UGLIFY_FLAGS ?= --screw-ie8

    build/%.min.js: build/%.js
    	@$(UGLIFY) $(UGLIFY_FLAGS) $< > $@
I find all Makefiles end up containing code like this: it makes sense when you write it, but when you return to it a few weeks later you no longer understand what it does.
No code is readable if you don't bother to understand the language. That code right there is pretty bog-standard and frankly much easier to read than the Grunt equivalent.
> I think what he means is that their Makefiles effectively start having a common API across projects, by having similar target names.
> No need to use "all" if you put the default target first (see `default` subsection in the post).
That's obviously the reason they're doing that, but it's an odd decision. Especially since "all" is a common name for a Makefile target and .DEFAULT_GOAL can be used to override the "first target" convention of Makefiles.
    firsttarget: mysource.txt
    	cp $< $@

    # no need to put the default first
    .DEFAULT_GOAL := all
    .PHONY: all
    all: firsttarget secondtarget

    secondtarget:
    	touch $@
I realise their standard probably doesn't include all, but it seems like the obvious choice. Maybe I've just been conditioned by autotools, but I think it's standard practice to have all be your default target.
Then I'd like to bring to your attention that you can have the @ come from a variable too, which means you can have conditionally silent rules. That always seemed to me like the perfect middle ground between silent compiles and knowing what's going on if things do fail.
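A sketch of that trick, using the common `V`/`Q` naming convention (the variable names are just a convention, not anything from the post):

```shell
# Sketch: the @ prefix comes from a variable, so V=1 on the command
# line shows the commands and the default run hides them.
mkdir -p qdemo
printf 'V ?= 0\nifeq ($(V),0)\nQ := @\nelse\nQ :=\nendif\n\nhello:\n\t$(Q)echo hello\n' > qdemo/Makefile

make -C qdemo --no-print-directory hello        # quiet: prints only "hello"
make -C qdemo --no-print-directory hello V=1    # verbose: echoes the command line too
```

When a recipe fails, rerunning with `V=1` shows exactly which command broke, without making every successful build noisy.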
I think it's a reaction to there being a new build system announced fairly frequently. A while ago I actually started writing up a "Make for web development" tutorial file similar to Isaac's gist (linked in the article). At least for me, seeing new systems announced over and over again – especially the ones where you're writing JS functions to build stuff instead of shell commands – just makes me want to spread the "just use Make" word even more.
Yes, 70's era Unix tools are not the most friendly, but there are still a lot of uses for Make (as opposed to Autotools).
You can actually build a Makefile to solve any DAG, written as dependencies, or use its tools to avoid rebuilding your whole project when only one file has changed.
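For instance, here is a toy diamond-shaped DAG expressed purely as dependencies (the file names are invented for the example): the first run builds everything, the second run rebuilds nothing because every node is up to date.

```shell
# Sketch: a diamond DAG — result depends on left and right, which both
# depend on base. Make walks the graph and only rebuilds stale nodes.
mkdir -p dag
printf 'result.txt: left.txt right.txt\n\t@cat $^ > $@\n\t@echo rebuilt result.txt\n\nleft.txt right.txt: base.txt\n\t@cp $< $@\n\nbase.txt:\n\t@touch $@\n' > dag/Makefile

rm -f dag/result.txt
make -C dag --no-print-directory result.txt   # builds base, left, right, result
make -C dag --no-print-directory result.txt   # → 'result.txt' is up to date.
```

Touching only `base.txt` would rebuild just the chain above it; nothing outside the affected subgraph is re-run.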
This is similar to how it was done on a project I recently worked on. It worked well enough on a project that built about 300 libraries and 200 executables. It also made it really easy to add new things.
It's amazing how far apart evaluations can be. Even something like `Make` has people who love it and people who eschew it. Moreover, both camps contain very rational, intelligent people. I wonder why.
A) It is a 2-phase build system (read DAG, traverse DAG) whereas code generation requires an N-phase build system (build some files, detect more dependencies, build more files, ...)
B) It has no way to express dependencies on the non-existence of files (#include "foo.h" will behave differently if the first search directory in the include path also starts featuring a "foo.h", but this cannot be specified), which necessarily means incremental builds become incorrect in various circumstances
C) It does mtime-newer check, rather than mtime-equal check. This has numerous problems with various file systems.
D) It does not check that the mtime did not change during a build, effectively allowing the build tree to be poisoned with an incorrect build result. For example, edit foo.c while foo.o is being compiled from it. foo.o can be correct w.r.t. the old foo.c, but its mtime suggests it is newer than the current foo.c. All incremental builds thus become incorrect.
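The race in (D) is easy to simulate by hand. In this sketch `cp` stands in for the compiler, and the "edit" is performed between the snapshot of the source and the write of the output, exactly as it would land mid-build:

```shell
# Simulating (D): the "compiler" reads the old foo.c, the edit arrives
# while the build runs, and foo.o is written last — so its mtime is newest.
mkdir -p race
printf 'foo.o: foo.c\n\t@cp foo.c foo.o\n' > race/Makefile

echo 'old contents' > race/foo.c
cp race/foo.c race/foo.o           # build starts: snapshots the old source
echo 'new contents' > race/foo.c   # the edit lands mid-build
touch race/foo.o                   # build finishes: output written last

make -C race --no-print-directory foo.o   # → 'foo.o' is up to date.
cat race/foo.o                            # → old contents (stale, silently kept)
```

Because make only compares timestamps, the stale foo.o survives every future incremental build until something else forces a rebuild.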
In short, make doesn't try hard enough to be a correct build system, and it is also inflexible.
This is why I wrote `buildsome`[1], where I resolve all these issues and more.
buildsome is only tailored to our needs at the moment, so it can only build on Linux, not on OS X or Windows.
Is this irrational from the POV of an individual? Learning a complex tool is quite an effort. In the same amount of time, one could develop one's own tool that's likely more fine-tuned to the tasks at hand. Moreover, if one's tool acquires outside users it's highly CV-worthy.