Why Use Make (ocks.org)
358 points by pie on Feb 24, 2013 | 240 comments



If you use GNU Make it's worth using my GNU Make Standard Library: http://gmsl.sourceforge.net/

And also reading everything I wrote as Mr. Make: http://blog.jgc.org/2013/02/updated-list-of-my-gnu-make-arti...

Or buy my book: http://www.lulu.com/shop/john-graham-cumming/gnu-make-unleas...


I'll have to respectfully disagree. IMHO, the utility of DSLs like Make is high when used very sparingly, for simple flows like the one the linked article describes. The more complex your Makefile, the less you should be using Make. If you need to write complete programs in Make, just stop bending over backwards and pick a real programming language.

FWIW most non-trivial projects nowadays that use Make don't write makefiles directly but use make generators like CMake and Gyp.


Make is the assembly/C language of build systems. Lots of other systems just generate makefile-compatible files (qmake, cmake, premake, others), but these often fall into overly complex traps, trying to express everything in their own simple vocabulary and then clashing with it.

Make is very close to optimal at the level it sits; everything above or below it is just not that optimal.


> Make is very close to optimal at the level it sits; everything above or below it is just not that optimal.

Right: GNU Make is at a local optimum for describing how to generate products based on rules and a dependency graph. I'd also wager it's close to a global optimum.

GNU Make is certainly closer to that global optimum than Boost's jam or the countless homegrown XML-based things I've seen over the years --- make is powerful, but despite that power, simple things look simple and are simple to write. In other systems, either simple things are hard to write or the system itself is so simple that it's not powerful enough to describe what I want done.


Well... If by XML-based things you are thinking of Maven, they're not really at the same level. Maven is much higher-level, and is fully declarative, which is both a blessing and a curse. I would not describe it as a general-purpose build system; it's really made for the Java ecosystem, but in this context, it works well for a majority of projects, because they can follow the rails. And it does handle automated retrieval of dependencies out of the box, which is way outside the scope of make.

It has its heart in the right place, though. High-level build systems should strive to be as declarative as possible. It's the rest of the design which is problematic :)


I think he was more talking about tools like Apache Ant.


If Make is a local optimum and close to a global optimum, it by definition is the global optimum.


Say you have a city with two skyscrapers on opposite sides of town, one 49 stories high and the other 51 stories high. When you're at the top of the 49-story building, you're at a local maximum of height and close to the global maximum, but you're not exactly a short gradient-ascent jaunt away from the global maximum.


Sure, but how much work are you going to put in to go up two floors? Local optimum + close to a global optimum means there's not much cost benefit in changing.


Exactly. That's why Make is still used.


Not really; it depends on the definition of "close", i.e. how you measure distance. If closeness means distance in the search space, then you're right; but it could also mean that the local and global optima are close in the value being optimized while located far from each other in the search space.


Cute, but that depends how close...


Which "homegrown XML things"?


As the assembly language of build systems, I agree that Make works well. But there, IMHO, its utility ends.


I would say it fares rather poorly when you compare it to something like ninja (http://martine.github.com/ninja/), which is made to be generated by tools like Gyp or CMake. I have used the CMake+ninja combination on large projects and it lives up to the hype.


I haven't really used Make in any big projects (read: 100s of files and dependencies), but that's mainly because I have a habit of breaking components into smaller projects and thus keeping things, including my Makefiles, very simple. For project structures like that, I think your tools and book are overkill, so I'll recommend this reference from O'Reilly's UNIX in a Nutshell. It is by far the most easily digestible free reference for GNU Make that I've found. It's fairly to the point and concise as far as references go; you can read the whole thing and have a fairly good understanding in under an hour. It's the reference I go to 90% of the time; the other 10% comes from Google and trial and error.

http://oreilly.com/linux/excerpts/9780596100292/gnu-make-uti...


Why does the pdf version cost more than the paperback?


Good question.

In the Lulu interface I have the prices set at $15 for the print book and $9.99 for the PDF/ePub versions. It appears that something weird happened with EUR and USD. Originally I had these priced in EUR and switched currencies.

Apologies for that, I've reset everything and it looks right now.


Awesome! Just bought it.


I'd really like an answer to this. It greatly annoys me when I see this type of thing, and it always pisses me off so much that I usually forgo getting the book. Perhaps I'm overreacting, but to me, it indicates a lack of respect for the audience. Are we so stupid that 1) we won't notice, and 2) we can't figure out that mailing a physical thing has to have more overhead than the digital copy?


That is exactly how I feel. I was literally about to purchase the book but that changed my mind.


gmsl is a wonderful, well considered library. I had to buy the book after finding it.

Thanks!


Absolutely do not use make for any new project. If you love make, it's a big, red, burning flag that you're not demanding enough of your tools and that you're not keeping up with changes in your ecosystem.

There are many, many way better alternatives to make. Which one is better depends on the platform you're on. The majority of them throw in automatic dependency management for free.

Yes, I know that the essence of the post is "use a build system". I agree completely. In fact, script everything. Then script your scripts. Then refactor your scripts because they are getting messy. But don't give impressionable souls the idea that "make" is anywhere near an acceptable (generalised, default) choice today.


I could not disagree with you more strongly. Make is powerful, ubiquitous, and extensible. There's a reason it's stood the test of time. If you must, use something that will generate makefiles for you, like CMake or GNU autotools, but even with these tools, you'll still be using make, and if you understand how make works, you'll be far better equipped to understand the actual operation of your build system.

To me, blatant avoidance of make is a big, red, burning flag that whoever made that decision values novelty over value and that he's likely to be a bandwagon-jumper or a NIHer in other aspects of his professional life too, and such people are best avoided.


>GNU autotools

Autoconf is similarly an abomination that should be put out of our misery. The syntax is so opaque that 99% of people copy-and-paste the configuration file into their project, leading to 10 minute ./configure runs that check for 200 things the project doesn't use in addition to the one that it does.

Not to mention that 99% of what it's checking for is OBE: A simple check for whether you're trying to build on a modern Linux, one of 2-3 Windows build flavors, or OS X, is sufficient to set the 5-10 typical flags most projects need.

Look at LuaJIT's build process, for example. It builds practically everywhere and digs into OS internals, and yet doesn't need anything complicated to build.

I agree that Make should be retired, and yet I'm typically the "Makefile expert" wherever I work. I've worked with a lot of smart people, and most don't know anything other than the basics of Make. I have to suspect that most people who are defending Make haven't had to REALLY use it to do anything complex, because when you do, it sucks.


> I agree that Make should be retired, and yet I'm typically the "Makefile expert" wherever I work.

I'm the Makefile expert in my software shop. I still use GNU Make simply because I haven't found anything "better enough" to justify switching a toolchain. CMake was probably the closest--and that mostly because I know that it works for KDE, so I should be able to learn by example.

What sort of complexity do you think shows Make's problems? It's just a big dependency graph, so IMHO the hardest part is defining the dependencies in a way that's accurate (for parallelism and incremental builds) without being redundant. Multi-versioned builds shouldn't really add any complexity beyond a single variable per input dimension. Complex serial sub-processes can easily be factored out into scripts. Platform detection, likewise, can easily be factored out into a combination of scripts and multi-versioned builds.
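To be concrete, by "a single variable per input dimension" I mean something like this (a hedged sketch; the flag values are hypothetical):

    BUILD ?= release
    CFLAGS_debug   := -O0 -g
    CFLAGS_release := -O2 -DNDEBUG
    CFLAGS += $(CFLAGS_$(BUILD))

Then `make BUILD=debug` selects the other variant without touching any rules.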

So where have you seen it suck the most? I like to think I've done reasonably complex things with it, but maybe not.


>So where have you seen it suck the most?

When people (myself included) start taking advantage of the fact that Make is Turing-complete and writing arbitrary "programs" in their Makefiles.

It typically starts simple; you want to do something like build ALL the files in a folder, so you use a wildcard. Then you want to add dependency checking, so you use the wildcard to convert from .o to .d, keeping the same folder structure.

And I don't want the .o and .d files to be generated where the .c files live, so I need to add this code here that converts the paths.
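Concretely, the first few steps of that progression look something like this (a sketch; the src/ and build/ paths are hypothetical):

    SRCS := $(wildcard src/*.c)
    OBJS := $(patsubst src/%.c,build/%.o,$(SRCS))

    build/%.o: src/%.c
    	@mkdir -p $(dir $@)
    	$(CC) $(CFLAGS) -c -o $@ $<

Each new requirement bolts another transformation onto that chain.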

OOPS, this project uses a slightly different folder structure, and so I need to add a case where it looks in DIFFERENT relative paths.

Oh dear; I just realized that I need this code to work ALMOST the same in a different project that needs to be built with the same Makefile; that means I need to include it twice, using different options each time.

And it turns out that it DOESN'T work the way I expect, so now I have to use $(eval), meaning some of my Make variables are referenced with $(VAR), and some with $$(VAR), depending on whether I want them to grab the CURRENT version of the variable or the calculated version.

But now, now I have all of my code to create my project in one convenient place, and creating a new project Makefile is quite trivial! It's all very clean and nice. But the next person to try to change the folder structure, or to otherwise try to get this now-crazy-complicated house of cards to do something that I didn't anticipate has to become just as adept at the subtleties of $(eval ...) and Makefile functions (define ...); error messages when you get things wrong tend to make early C and C++ compiler errors look straightforward and useful by comparison.

For a far more complicated example, take a look at the Android NDK Makefile build system: 5430 lines of .mk files that make your life very easy... right up until you want to do something they didn't anticipate, or until they ship a version with a bug (which they've done several times now) that screws up your build.

Here's one small excerpt for your viewing pleasure, just to get the flavor:

http://pastie.org/6331932


> some of my Make variables are referenced with $(VAR), and some with $$(VAR), depending on whether I want them to grab the CURRENT version of the variable or the calculated version.

Hah, my latest Makefile work has been a set of functions which generate Make-syntax output, which then gets $(eval)ed. I hear you on the debugging nightmare that this can be: does a given variable get resolved when the function is first $(call)ed, when the block gets $(eval)ed, or when the recipe is invoked? But IMHO it's not too bad to do printf-style debugging. Replace $(eval $(call ...)) with $(error $(call ...)), then work backwards from there.
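The shape is roughly this (a hedged sketch; the make-rule function and the file names are made up):

    define make-rule
    $(1)_objs := $(patsubst %.c,%.o,$(2))
    $(1): $$($(1)_objs)
    	$$(CC) -o $$@ $$^
    endef

    $(eval $(call make-rule,prog,main.c util.c))
    # to debug: swap the $(eval ...) for $(error ...) and read the generated text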

It also helps to be very disciplined about immediate assignment (`var := stmt`) and to always use recipe-local variables, rather than global variables.

I do feel like all of this aspect would be cleaner in Python or Lua... but the problem is, the _rest_ of the build, which more people interact with on a daily basis, gets more complex when that happens. Because there are always the ancillary targets and recipes where normal Makefile syntax works just fine.

Thanks for the NDK reference, I'm interested in seeing other "ugly" Makefile support infrastructure for comparison :)


I also use "printf debugging"; I have to.

The worst problem I had, though, was REALLY annoying; I was getting an inscrutable error in the middle of a function, and I could delete large parts of the code to get the error to go away, but putting ANY of the code back brought the error back -- it didn't matter which parts I put back.

It turned out that git had changed LF to CRLF in the file, and some end-of-line character was screwing up the spacing. Tweaking .gitattributes and fixing the files made everything work.

I SO hate significant white-space. I never really forgave Python for that "feature" either. But I could totally get behind Lua for the logic. :)

Actually, if it were my job, I would use LuaJIT to write a make replacement; the dependencies could all be specified in tables or extended strings, and any more complicated logic could be explicitly outside of the "rules".

>but the problem is, the _rest_ of the build, which more people interact with on a daily basis, gets more complex when that happens

I think a good design would NOT have that problem. You could have it say "these files get built by default rules" separately from "these rules trigger this bit of Lua code, which can spit out warnings, add dependencies dynamically (oh wouldn't THAT be nice!), or do this other bit of complicated build processing that doesn't fit well into the rule-based system".

If you're doing it in Makefiles, then yes, you could make everything more complicated that way. But I think a fresh design could really do a good job in killing make. I'm just so busy with other things right now, though...

Another reason I would STRONGLY choose Lua over any other scripting system is that the entire tool can embed Lua trivially, while Python or Ruby or Perl would each bring an entire ecosystem with it. You can have a dozen different Lua installs on your system without requiring a separate infrastructure for managing Lua installs.


> Another reason I would STRONGLY choose Lua over any other scripting system is that the entire tool can embed Lua trivially, while Python or Ruby or Perl would each bring an entire ecosystem with it.

Oh yeah, I like that idea. I'm just not so thrilled when I hear about modern build systems when they require me to install recent versions of relatively bulky scripting languages. I'm not a big fan of Lua in general, but this sounds like a perfect application.


So what? You're doing it wrong.

There are lots of people who use PHP for data crunching, bash for GUIs, and C without caring about managing memory properly.

Should we retire all languages that can be abused?

Also, your idea of how $ and $$ work is wrong. But it could be that you were messing up with = and := before that point :) so I guess your point stands. But again, all languages can be abused.

Blame the bad coder, not the tool.


> Also, your idea of how $ and $$ work is wrong

No, he's spot-on about that. If you are using a function from within a Makefile to generate Make code which then gets $(eval)ed, then the inner function must output $${variable} so that the outer function sees ${variable} and does not immediately resolve it.

It's hairy. Hairier than macros in C. But like any other specialization, it can potentially save an immense amount of time for the rest of the team.
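A tiny, hedged illustration (the gen function and names are hypothetical):

    define gen
    show-$(1): ; @echo $(1) sees X=$${X}
    endef
    $(eval $(call gen,demo))
    X := assigned-later

Because of the doubled $$, the eval'd rule contains ${X} and the recipe picks up "assigned-later" at run time; with a single $, X would have been expanded (to nothing) at $(call) time.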


> messing up with = and :=

Sorry, but I wasn't messing up those two. That's Makefile 101 knowledge; I'm talking about crazy advanced stuff, where := doesn't work the way you expect.

Even := doesn't do what you want here: recipe lines are only expanded when the recipe runs, after the entire Makefile has been loaded, so if you've used := three times on the same variable, ALL the recipes see the last assignment. Here's an example:

    FOO:=1

    rule1 :
        echo $(FOO)

    FOO:=2

    rule2 :
        echo $(FOO)
make rule1 and make rule2 both echo 2. $(FOO) is evaluated in both cases AFTER the Makefile is loaded.


Target-specific variables do this job:

    rule1 : FOO:=1
    rule1 :
        echo $(FOO)

    rule2 : FOO:=2
    rule2 :
        echo $(FOO)


Interesting. Didn't know this trick.

Turns out it wouldn't work for the usage pattern I needed (my example was simplified -- typically the variable settings would all happen in another file, and they couldn't go on a target line because there wouldn't be a single target to use, besides just being ugly for that purpose), but it's good to know.


> I still use GNU Make simply because I haven't found anything "better enough" to justify switching a toolchain.

For what it's worth, as much as I rag on Make, I also find myself using it most of the time. To paraphrase Churchill, it's the worst build system imaginable, except for all the others.

I would just really, really love a system like Make crossed with an imperative language for everything that doesn't fit well with the auto-dependency-tracking model. There's a product in there somewhere.


Can't mod this up enough.

Microsoft's crazy-ass xml build mechanism is a giant pain in the ass, even compared to their old nmake (I think) tool.

The new system is a great example of a system where someone presumably said "Make is garbage, a big red flag, let's build something better."


Yep: the parts of the Windows tree still built with nmake are much more pleasant to hack on than the parts that use msbuild.


I'm confused. Are we implicitly talking about C(++) here, or would you accuse, say, a Clojure developer using Leiningen of valuing novelty over value?


Lots of "Don't do this" without suggesting alternatives doesn't do anyone much good.

If you're on a *nix system, building C/C++ programs (or automating one-off builds like the example), what would you recommend in make's place?


Assuming you're building your own code and are willing to arrange things in a compatible manner, then I have a solution to offer. It figures out the dependencies by reading the source code. You can create objects and binaries just by asking it to build a certain target.

I recorded an example of installing and using it here:

http://rachelbythebay.com/jvt/view?bb_install

You can get a copy for experimentation here:

http://rachelbythebay.com/bb/

It isn't for everyone, but only you can know if it'll meet your needs. Hopefully this helps.


Whoa, your terminal playback thing is pretty neat. Did you use GNU Screen to record the session?


Thank you! I used script. Certain implementations have an option to emit timing and byte count data on stderr. Then it's just a matter of saving it and building something to honor those delays during playback.


That's awesome! I had no idea about `script -t`; I'd written my own a few years back. Thank you!



Pretty majorly uncool to do that and not give credit or even a link to rachelbythebay.


> willing to arrange things in a compatible manner, then I have a solution to offer.

So you're proposing that people give up screwdrivers because figuring out what bit to use is hard, and that they should instead use your potato peeler, never mind that it doesn't actually drive screws?


Nope. Not at all. But I like that you think that.


That is very cool, and it's something that will only get better if the Apple-proposed modules get widespread.


Thank you! I'm not sure what you mean by "Apple-proposed modules", though. Please let me know more and I'll see what I can do.



Is the source available?


If you mean the source to my replay stuff, well, it seems you have already established a way to snag that. The only original part of that was my wrapper to the terminal to fetch byte streams and replay them while honoring the timing data.

Regarding the build tool source, I haven't decided what to do about that just yet. It could be particularly valuable in a corporate environment.


I debated whether to start namedropping build systems, but I decided it was going to be counterproductive and devolve into a debate of the relative merits of the systems I'd picked out. In the end, what makes sense for your project depends heavily on the environment you're in.


In other words: if you're not sure, use make.


No, really not. If you pick a build system at random it's probably better than make. If it's a big project it's worth taking 30 minutes to actually look at what's available and pick one.


I'd suggest tup[1] and Shake[2].

[1]: http://gittup.org/tup/

[2]: http://community.haskell.org/~ndm/shake/


Thanks for the links! I will definitely use tup for my next project. From time to time I try different make-like systems, but I always come back to GNU Make. I also don't fear the GNU Make Reference. But tup looks, again, quite promising. It even has those little context-sensitive one-special-char one-letter variables :) And they do some things better by design, I see: http://gittup.org/tup/make_vs_tup.html
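For reference, a Tupfile in that style looks something like this (quoted from memory, so treat it as a sketch):

    : foreach *.c |> gcc -c %f -o %o |> %B.o
    : *.o |> gcc %f -o %o |> prog

%f is the input, %o the output, and %B the input's basename -- those are the one-letter variables I mean.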


> This page compares make to tup. This page is a little biased because tup is so fast. How fast? This one time a beam of light was flying through the vacuum of space at the speed of light and then tup went by and was like "Yo beam of light, you need a lift?" cuz tup was going so fast it thought the beam of light had a flat tire and was stuck. True story. Anyway, feel free to run your own comparisons if you don't believe me and my (true) story.

The completely unprofessional tone here really turns me off to the entire system. If you write like a typical teenager, you probably code like a typical teenager, and I don't want a typical teenager writing my goddamn build system.

Besides: who the hell is bottlenecked on the build system? The compiler and linker (or the equivalent for your favorite language) do all the work. Anyone who believes this article makes a difference is completely ignorant of Amdahl's Law.


You might find my paper more informative and less unprofessional: http://gittup.org/tup/build_system_rules_and_algorithms.pdf

Many projects are bottlenecked on the build system. You can benchmark this by timing a null build (running 'time make' after building everything). Some examples from my machine are the Linux kernel (28 seconds), and Firefox (1m 23 seconds). Some of this time is from unnecessarily recompiling things, but that is a separate issue from the inherent lack of scalability in make.

Suppose I want to change a single C/C++ file in one of these projects - the total turnaround time from when I type 'make' to when the build finishes can be described as:

T(total) = T(build system) + T(sub-processes)

Ideally T(total) would be zero, meaning we get an instant response from when we change the file to when we can test the result. Here, T(build system) is the null build time, and T(sub-processes) is the time it takes to run the compiler and such. Using the Linux kernel as an example again, compiling fs/ext3/balloc.c takes 0.478 seconds. In comparison to the null build of 28 seconds, there are significant gains to be had by optimizing T(build system).

Amdahl's Law is a little tricky to apply since tup is not parallelizing T(build system), but rather changing it from a linear-time algorithm to a logarithmic-time algorithm. So you can set P easily based on the relative values of T(build system) and T(sub-processes), but S is not a simple "count-the-cores" metric. The speedup is effectively N/log(N), where N is the number of files. This is much better than simple parallelization - T(build system) for tup with these projects is only about 4ms. The total turnaround time for the balloc.c file in the Linux kernel is 1.1 seconds (which includes compilation and all the linking steps afterward), in comparison to make's total turnaround time of 29.5 seconds.


For very large projects the build system can quite easily become the bottleneck when just changing a single file, which also happens to be the most important use-case for developers. In extreme cases a no-op build with make can easily get to 15+ seconds.


> In extreme cases a no-op build with make can easily get to 15+ seconds.

I have never seen cases so extreme, but my opinion on the matter is that this is a "build smell". If the Makefile has to resolve a DAG this large, that means that developers have to worry about compile- or link-time interactions this large, as well. 100k source files all linked into a single executable is more complex than 10k source files split across 10 executables, and a handful (say <100) of headers which represent "public" APIs. Because if you have 100k source files and your developers haven't all killed themselves already, then there are some firewalls separating various modules already. Formalize it at an API level and split apart the builds, so that it's _impossible_ for anything outside of the API itself to trigger a full rebuild.


Typically this shows up in recursive make projects with lots of sub projects—it doesn't take that much time to stat every file in question but reinvoking make sixteen times can be quite slow.

I don't deal with this by not using make, I deal with this by not writing recursive makefiles.


> it doesn't take that much time to stat every file in question but reinvoking make sixteen times can be quite slow.

Yes, reinvoking Make repeatedly tends to force redundant `stat` calls. But I have worked in environments where heavily-templated code was hosted over a remote filesystem, and every `stat` call was something like 10msec. That adds up _extremely_ fast, even with non-recursive make. Ugh.


> In extreme cases a no-op build with make can easily get to 15+ seconds.

Most developers will never see such a system. Optimizing for that kind of scale at an early stage has all the problems of any other premature optimization. It's most important to just get the build system out of the way so you can get your real work done, and you do that by writing makefiles, since makefiles are universally understood.

Now, when a project does grow to the proportions you mention, you can start looking at alternatives --- but I'd argue that these alternatives should amount to more efficient ways to load and evaluate existing makefile rules, not entirely different build paradigms. Make's simplicity is too important to give up.


You dislike his humor, that's fine. Calling it unprofessional is subjective. Some professional environments with great professional output appreciate humor.

Also, I am bottlenecked on my build system at my workplace, which takes ~45 seconds to realize nothing needs to be done (It isn't "make", because "make" does not support our build process).


I do enjoy reading through tup's site.

I like how he uses Sauron's all-seeing eye as a drop-in replacement for God's algorithm.


also excellently documented... not like shake, at first



CMake


Personally I'd suggest Premake because it uses Lua instead of rolling its own scripting language. The world would be a better place if we could all agree on one scripting language for stuff like this so no one has to look up the syntax for things like creating arrays for every individual tool.

Feature-wise premake isn't entirely caught up to CMake but it has everything important. Also the premake files look a lot cleaner and more readable compared to cmake files.



> The world would be a better place if we could all agree on one scripting language for stuff like this

It's called bloody "make". If Prolog is a language, so is make. Make is still a scripting language even if it's not boneheadedly sequential and imperative the way, say, Python is.


Make is simply not up to the job. It makes writing correct build systems very difficult (it's hard not to under-specify dependencies). It does not support auto-generating code and then scanning it for extra build dependencies. It is a crappy tool and we should standardize on something better.


If what you want is a standardized language Scons has that and is a bit more established.


CMake is pretty ugly in its own right, though I will grant it's hard to be worse than GNU make.


CMake is an abomination


Mind pointing out any constructive arguments against it? I recently switched over from hand-built makefiles to cmake for one of my projects, and it's been a breeze.


I prefer autotools to CMake. Maybe I didn't really give CMake a fair shot, but in a few hours of trying, I couldn't figure out the CMake equivalent to a bit of custom glue code in configure.ac that checked for a Lisp compiler.

I get the feeling CMake works fine as long as you color within the lines, but that it's much harder to extend than autotools is.


Yes, this is something I have thought through, though I may have been lucky/unlucky with my pick of projects. But generally speaking, packages based on GNU autotools seem to install with less fuss than those using CMake. So that, and the fact that there is full free documentation of GNU Autotools, even books out there, have made my personal choice easy.

For me the overall factor is the ease for any of my users, installing something I have written.


Using a macro language was already bad in the '90s (autoconf/automake using m4), but using one in 2000 is just tragic.

It is also impossible to debug, partly because of the abomination the cmake language is: even understanding where a variable is defined is hard, and because the language is so unexpressive, the Find*.cmake modules often run to thousands of lines.

For all its suckiness, I take autoconf/automake over cmake.


I'm sorry, but your comment does not make much sense to me. autotools is mostly written in m4, with some shell snippets. Those are macro languages. If you don't like macro languages, logically you should not like autotools.

CMake is not "impossible to debug." In fact, it is a lot easier to debug than autotools-- partly because you end up writing so much less code. Also you don't have the three levels of "generated files generating other file generators" that you do in autotools.

For all its suckiness, I take autoconf/automake over cmake.

Well, we agree on one thing. autotools does suck.


My point was that autotools had the excuse of being written in the early '90s; cmake doesn't.

Your experience with debugging autotools vs cmake does not match mine: cmake is not an improvement over autotools (if only because at least with autotools, there is some decent doc out there and google knows a lot about autoconf insanity). It took me hours to debug trivial issues with cmake, because you can't easily trace where variables are defined.


I have never been able to figure out how to cross-compile a project that uses CMake. GNU Make lets you just set some environment flags and drop in a different compiler.


CMake honors the same environment variables as Make. Just set them before running CMake.

CMake also supports cross-compiling. www.vtk.org/Wiki/CMake_Cross_Compiling


Ninja.


You probably don't want to write ninja files yourself. CMake + Ninja is a nice combo.


If you are on a unix system, "redo" (designed by djb and implemented by apenwarr) is excellent. It's refreshingly simple and robust.

And, there's a minimalist version called "do" which is a hundred-or-so lines of shell, that does a complete rebuild (no dependency tracking) - so you can package that with your project, and not have to worry about your users having to install yet another build system.
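For flavor, a redo rule file looks roughly like this (adapted from apenwarr's documentation; default.o.do builds any .o from the matching .c):

    # default.o.do: $1=target, $2=target without extension, $3=temp output file
    redo-ifchange "$2.c"
    gcc -MD -MF "$2.d" -c -o "$3" "$2.c"
    read DEPS <"$2.d"
    redo-ifchange ${DEPS#*:}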

Other alternatives I'd recommend, with some degree of success:

SCons - cross platform, but slow (every big enough project eventually abandons it)

waf - a unix-only (AFAIK) SCons derivative that's a bit more limited, but much faster

CMake - cross platform, very complete, on par with Make on every level (including ugliness and complexity)

Premake - cross platform, makes IDE file but can also write makefiles for you.


I'm also writing a redo-inspired build tool, but less devoted to djb's vision, and more intent on taking advantage of some nice features of Inferno/Plan 9. For example, the /env filesystem allows dependencies on environment variables. The <{} syntax allows parallelism while grouping output.

In a nutshell, credo tries to advance the build-tool state of the art by replacing *ake files with shell-scriptable commands to build a system from bits of library code. More at http://github.com/catenate/credo (see especially the first literate test, which has a more complete introduction).


waf is cross-platform (Python) https://code.google.com/p/waf/


I don't know if waf is really faster than other alternatives, but having used waf in production, I can safely say that deploying waf is at best hellish. A single-file script with zipimported package that extracts itself to a world-writable hidden directory? Crazy.


For Unix-only and smaller projects another great option is fabricate.py


Another vote for Fabricate; here's a comment of mine from a while back explaining in detail why I prefer it (or Tup) to Make.

http://news.ycombinator.com/item?id=4190804

I use it on Windows too, with an strace replacement I wrote; it isn't online, but if anyone's interested, just ask.


Thanks for the tip on fabricate, it appears to be just the tool I was looking for: a lightweight, dependency-based "build" system in Python that is not focused on distutils or making Python packages. We shall see how well it works for data processing.


'Drake' for that use was on here too, not too long ago. http://blog.factual.com/introducing-drake-a-kind-of-make-for...


I agree that make's syntax is hard to pick up and often unintuitive, but in my experience, if you reject make, it's a big, red burning flag that you don't fully understand the problems that it solves. The ability to define arbitrary dependencies and actions (including overriding built-in ones) is the defining aspect of a build system. All of the build systems I've used besides "make" (mostly scons, waf, ant, and gyp) have attempted to first-class certain types of dependencies, and the better ones allow you to define some of your own, but they all make assumptions about what one might want to do that make things I do all the time impossible (or at best, way off the beaten path). Examples include post-processing object files before linking, post-processing a binary after linking, or generating object files from something other than the compiler.


Syntax is hard? Somewhat, but that's not the problem.

Syntax is not intuitive? It's not bad once you've read the docs.

Complex Makefiles with huge included boilerplate libraries (e.g., the Android build system) are completely impossible to debug in any reasonable way? THAT is the problem.

As soon as you do anything non-trivial in your Makefile, you've created something that, when it fails, the reason will be COMPLETELY opaque to anyone who isn't intimately familiar with the ENTIRE program.

The implicit connections that a Makefile makes for building are great for JUST that part. As soon as you try to use them to write a program, you have to commit all manner of write-only-code abominations. I know because I've both DONE this and tried to debug OTHER people's code.

There is NO good way to debug Makefiles. And that alone means that they should be consigned to historic projects only, to be replaced by SOME kind of a better system.


> There are many, many way better alternatives to make. Which one is better depends on the platform you're on.

There's your answer: there are many alternatives, but only one make. OK, maybe 2 or 4. :) But a build system should for the most part be platform agnostic. I have such a hard time understanding why so many programming languages seem to need their own Make alternative (Rake, Cake, Fake, ...?)


Sure, make is a generic tool, and you can do anything with it; from parsing json files, to compiling C code, to formatting documents.

...but so are bash scripts.

I wouldn't advocate actually using either of these though, unless the situation is appropriate.

Every system and platform has different requirements, and it's a bit of a big ask to want make to be 'the right tool' for all of them.

Certainly, I'd never use make to look after a Ruby project, or a C project. That doesn't mean I think make is horrible and I'd never use it for anything, but you can see, perhaps, how something as low-level as make might not provide all the tools (dependency management, downloading resources from GitHub, automatically tracking system information, a templating system for generating system-dependent headers) you might need for some things.

You could think about it this way: if make did all of these things and was easy to use, there wouldn't be all of these other clones and different attempts at the same thing.

...but there are. So there's certainly something about it that isn't making people happy.


I just want to inform you that GNU make is indeed very good for C projects, as it can read the dependency files generated by gcc (and clang), so you get very small, very readable makefiles that take care of tracking the dependencies for you. http://wiki.osdev.org/Makefile
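A minimal sketch of that setup, assuming GNU Make and gcc or clang:

    SRCS := $(wildcard *.c)
    OBJS := $(SRCS:.c=.o)
    CFLAGS += -MMD -MP        # compiler writes foo.d alongside foo.o

    prog: $(OBJS)
    	$(CC) -o $@ $^

    -include $(OBJS:.o=.d)    # pull in the generated dependency files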


Really?

There are a lot of people who use makefiles for C projects, but most of those people don't write them by hand.

They use a frontend that generates a vastly complex and arcane makefile using either automake, cmake, qmake, etc.

These makefiles are utterly unmaintainable and deserve a place next to 'goto' in the section labeled 'considered harmful'... but they serve a purpose: correctly collecting build settings, templates and metadata and using those to construct the correct makefile.

That doesn't count as 'using make'.

There are vanishingly few projects that actually use make directly: Google, for its NDK builds (and a few other things, but these are massive recursive makefile monsters that you work with through a tiny safe API); Lua, with its 15 makefiles, one for each platform. There are a couple of other examples, but not many. I can't think of any big ones off the top of my head.

I think we can safely say that writing a Makefile to build your C code is a bad idea.


Linux kernel? Uses its own automation. See also *BSD make, which is the basis for ports among other things (unless that's changed since last I looked); entirely in-make. Plan 9 and the userspace port uses mk, which is a slightly cleaned up variant of the BSD make.

Fact is, a makefile for building C code is typically smaller than the autoconf files required to get autotools to work on the same code. Most horrible makefiles are written by terrible build automation software (autotools being among the worst offenders here) or by people who don't understand the dependency graph model. A ten-line Makefile that automates something repetitive is fantastically important, even if it is just writing something down in an executable fashion so that you don't have to remember it later. Almost every build automation tool out there either doesn't scale up (too simple) or is too hard for small work, or occasionally both, like Ant.

If I need to do something simple, a Makefile is only a very little bit more complex than a command line—I frequently crib the command I just ran to start off the Makefile. If I needed to do nontrivial logic in a Makefile and couldn't avoid it, I wouldn't use jam or tup or redo or SCons—I couldn't, because they're less useful than make! I would probably end up using Rake, which is the only build automation tool I've seen so far that isn't a make clone and can do implicit dependency generation.


> I think we can safely say that writing a Makefile to build your C code is a bad idea.

I don't think that's a safe assumption at all, I think that's a dangerous overgeneralization. You're only considering open-source projects... and as you point out, both NDK and Lua (projects in the embedded space) use Make. I would not be surprised to find hordes of non-OSS embedded developers using Make natively precisely _because_ it is the "assembler" of build systems.


"There are many, many way better alternatives to make"

Can you give an example with the ubiquity of make and better expressiveness?


No, probably not. But I think the metrics you've chosen are irrelevant. Make was not only the best, it was the only player in the game until quite recently, so of course it's ubiquitous. As for expressiveness, yes, it certainly is expressive. It's also very hard to learn and very difficult to maintain. But build systems such as Gradle or SBT are also incredibly expressive, by virtue of being configurable in actual programming languages - with the added benefit of not having to learn a new language.


"But I think the metrics you've chosen are irrelevant."

Nothing is more frustrating than discovering that you need to install a new component or build system just to build a specific component. It gets worse when dealing with multiple external components in different build systems.

"It's also very hard to learn and very difficult to maintain"

I agree insofar as most people go into make without trying to learn it properly. There's a manpage and pretty good documentation for the GNU extensions. But in my experience, with custom build setups using stuff like Maven, a lot of time is spent fighting the build system when a simple Makefile would suffice.

"with the added benefit of not having to learn a new language."

You've shifted the burden from "learning a new but very simple language" to "learning a framework atop your language", which (based on my reading of gradle's docs) is not very concise.


> Nothing is more frustrating than discovering that you need to install a new component or build system just to build a specific component.

If you're talking about a FOSS component that random people will have to rebuild: Yes, absolutely. (Go listen to the Linux From Scratch community complain about CMake!)

If you're talking about a proprietary project that will only be distributed as binaries, or maybe even not distributed externally at all, why not pick the best tool for the job?


Indeed, mseebach's answer is really saying that those metrics are irrelevant to him.


No, what I'm really saying is that if you get to cherry-pick the metrics, you can win any debate.

Of course there are tradeoffs in anything, and if the expressiveness of your build script is the deal-breaker for your project, by all means use make. On the whole, I will still argue that more modern alternatives provide a better total experience. Also, note that the OP is directed at beginners.


"Ubiquity" and "expressiveness" are hardly cherry picking in my book


For OP's particular use case... bash? More expressive and a standard on pretty much all *nix systems.


Indeed. The metric is exactly wrong; you don't want a more expressive alternative to make, you want a less expressive alternative - one in which builds are more constrained, so that a newcomer to your project has a chance of figuring out what's actually going on.


Use the best tool for the job. Your advice is probably correct for whichever apps you've written in the past, but it's definitely not correct in the general case. Appropriate tools are highly dependent on what you're building. In my area of experience make has been more than adequate (crossplatform games and Erlang servers with rebar + make). Yes the syntax is arcane and ugly but it does a fine job.


While my advice might not be directly applicable in the general sense, I maintain that the headline advice of the article is wrong in the general sense.

Also, I quite specifically didn't claim that make is never the right choice.


> If you love make, it's a big, red, burning flag that you're not demanding enough of your tools and that you're not keeping up with changes in your ecosystem.

I wouldn't say that I love make, but it is my tool of choice. I have tried out SCons and CMake, and both seemed to require more hoop-jumping for customizations. That is, Makefiles are very close to being raw shell scripts. That, and SCons had performance issues (IIRC due to the md5sum it was doing on every input file). I have casually perused bjam files and even _building_ boost seemed awkward to me, with the weirdo string-parsing within individual flags. I am happy with autoconf as a package user, but I've never developed any software packages of my own with it.

I typically work with C and C++ on Linux server environments. I'm quite happy with non-recursive Makefiles, timestamp-based change detection, and GCC-generated dependency graphs. This setup does incremental builds correctly for me, and parallelizes linearly (and occasionally superlinearly due to I/O). I'd genuinely like to know what magic sauce you would recommend for build automation on this sort of a platform.


This is akin to saying that you should not be using libc, on account of there being many newer, more suitable libraries for string manipulation, I/O, and so on.

There's a clear downside to having dependencies on relatively obscure, cutting-edge technologies that may not necessarily work or be available elsewhere. And since build systems usually don't really need to be fancy, as much as they need to work, being conservative in this area usually doesn't hurt.


Such a broad statement needs support and you offer only vague assertions. I would be the first to suggest alternatives for make on a large, complex project but for a simple project make is quite useful, particularly since it's so broadly available.

I have a few websites with very short makefiles to package static files. That's 5 simple lines of mostly patterns, very easy to understand and there's no tooling related overhead on any system we use. I wouldn't say that make is better than all of the alternatives but you're in serious diminishing returns territory trying to make further improvements.

The real advice I would offer is the bottom line observation that your build system is supposed to save you time. Don't start looking for a cool-kid approved new one until you're spending time on the tool rather than the complexity specific to your project.


"Which one is better depends on the platform you're on"

That works against a portable build. That still matters to some of us.


By 'platform' I meant programming language and execution environment (JVM/Ruby/Python), not OS.


I'm fond of not having the number of build systems I'm responsible for maintaining grow in step with the number of programming languages I use in a project.


All build systems I've come across are quite happy to build code in other languages - just like make. The reason I'd recommend the dominant system in each ecosystem is, well, that: It's dominant, thus likely to be better supported for the issue you're likely to face.


I've gone back and re-read your comments in this post and tried to find something concrete in them. You use strong, imperative language, yet I have no idea what you are recommending.


Which one is better depends on the platform you're on.

We are not stuck on any particular platform, we target numerous platforms new and old. Using make, the target may determine which of those wonderful tools in our ever changing (improving, failing and obsoleting) ecosystem does the actual build.

    build:
    	xcodebuild -target MyApp -configuration Release clean
    	xcodebuild -target MyApp-universal -configuration Release-universal clean

But all I need to do is type make.


If your project sports auto-generated code, or any sort of build targets that depend on inputs only known after generation -- Make simply cannot handle this. There are ugly workarounds but they don't work well.

Make is:

* Slow (e.g: when compared with tup[1])

* Very easy to get it wrong (under-specify dependencies), with cryptic bugs (or over-building) as a result. Pretty difficult to get it right (e.g: doing proper #include scanning, especially with the above-mentioned generated code).

* Terrible scripting language (mentioned by many others)

* Lacks useful features of other build systems (e.g. tup or shake), such as profile reports and queries about the build process.

It is just an antiquated, poor tool that ought to be replaced by betters that already exist, whenever possible.

[1]: http://gittup.org/tup/


> If your project sports auto-generated code, or any sort of build targets that depend on inputs only known after generation -- Make simply cannot handle this. There are ugly workarounds but they don't work well.

Hmm. I write a lot of generated code and have not found this to be true at all. Many of my projects have multiple levels of code generation, and make handles them just fine. The key is to play to make's strengths and define pattern rules so you can hook into the rules engine. Nothing like typing make and having it know to run a script to generate a text file that is further processed by another tool to generate more files that eventually get compiled to produce what you want.
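As a hedged sketch of what hooking into the rules engine looks like (the generator tool here is hypothetical):

    # spec -> generated C -> object file, each stage its own pattern rule
    %.gen.c: %.spec
    	./generate-source $< > $@

    %.o: %.gen.c
    	$(CC) $(CFLAGS) -c -o $@ $<

Ask make for foo.o and it chains the rules itself, running the generator on foo.spec first.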


Can you show an example Makefile that does this correctly?

How do you get make to do a multi-phase build, that generates code, then scans it to add to the dependency tree, then generates more, and so forth?


Have you used tup? I have encountered on the web, and it looks very well done. But I never hear about anyone using it. Is it just under-publicized?

People sometimes talk about waf as an alternative. I took a look at waf a while back and felt disappointed; I forget why. I think node.js switched off waf for some reason I don't recall; either that or they didn't like it.


I used tup, and it has its shortcomings. I would never prefer Make to tup though, even with its shortcomings.

I am now working with Shake, and it seems to be the nicest I've used yet. It is not as robust as tup at verifying there are no under-specified dependencies, nor at detecting what needs to be rebuilt when a build script changes. But it is much more flexible (you get to write a build script with the full power of Haskell), it generates much nicer reports as a result, and it has some other interesting features that tup lacks. Unlike tup, you don't need to specify all the dependencies statically before building anything.

Since tup has some useful features that Shake lacks, I don't think there's a clear winner here. But either one is far preferable to Make in every setting.


What are the shortcomings of tup? That you have to specify dependencies statically?


Yeah, though it is not that horrible since tup allows you to over-specify dependencies with very minor ill-effects (less parallelism, but no over-building).

My list of short-comings besides that are:

* No "run" command on Windows, means if you want portable tup you're stuck with its primitive scripting language (or reverting to Make-style hacks like ugly multi-phase build that generates the Tupfiles)

* Has an arbitrary restriction on rule line size. So if you do use "run", and have large targets that have tons of dependencies, you may hit a dreaded: "Line too long" error.

* Has some trouble because of its fuse-based system to capture dependencies. Commands are exposed to weird paths in ~/.tup/mnt/ and such. The abstraction sometimes leaks.

All in all, these short-comings are very minor compared to Make's huge ones :)


Have you tried Rendaw's lua branch at all? I'm curious if that would remove your need for the 'run' command. At some point that will be merged to the mainline. Here's his tree: https://github.com/Rendaw/tup

Is the "Line too long" error an error message from tup, or from the shell when it tries to fork a process? If it's the latter I'm not sure there is an easy fix. If it's the former maybe I was just lazy when implementing something :)

You can run sub-processes in a chroot by specifying a flag (search the man page for 'run inside a chroot'). This will prevent the fuse paths from leaking to sub-processes, but unfortunately it requires the tup executable to be suid root. (If it didn't need suid, this would be the default).


Instead of making the tup process setuid root, just have a small chroot helper that is setuid and shell out to that. That way the entire tup codebase doesn't have to be trusted as root.

It still requires root for installation, but you can basically solve the security problem.


I didn't try the Lua branch, that sounds OK. Though I'd really prefer to use Haskell :)

IIRC, the "line too long" was from tup.

I think the abstraction may have leaked after specifying the flag and using the setuid tup. But I may be wrong here.


> Yeah, though it is not that horrible since tup allows you to over-specify dependencies with very minor ill-effects (less parallelism, but no over-building).

Make has exactly the same behavior with respect to additional dependencies.

> All in all, these short-comings are very minor compared to Make's huge ones

Like what?


> Make has exactly the same behavior with respect to additional dependencies

No it doesn't. If I overspecify that foo.o depends on foo.h, and I change foo.h, Make will rebuild foo.o even if it doesn't actually depend on foo.h. Tup won't.

> Like what?

Like incorrect builds, not supporting auto-generated code with dependency scanning, not scaling to large projects (slow builds, as demo'd by the benchmark page), lack of any kind of reports about the build, no guarantees at all about any output, and more.


"or any sort of build targets that depend on inputs only known after generation"

Have you tried the gnu make SECONDEXPANSION technique?
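For anyone unfamiliar, a small sketch of it (the target and variable names are made up):

    OBJS_app1 := a.o b.o
    OBJS_app2 := c.o

    .SECONDEXPANSION:
    app1 app2: $$(OBJS_$$@)
    	$(CC) -o $@ $^

Prerequisite lists are expanded a second time, per target, after the whole makefile has been read, so $$(OBJS_$$@) resolves to each target's own object list.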


Adding a single extra phase is not enough, because you need an arbitrary number of phases to recursively build&scan all dependencies.


Can you provide a concrete example of a problem GNU make's secondary expansion feature cannot solve?


An auto-generation script that generates header0.h with an #include of header1.h, and header1.h with an #include of header2.h, and so on up to N=10.

Of course, we don't know before generating the headers exactly what they'll need to #include. And N is determined solely by the existence of the #include, i.e: generation of header10.h does not #include header11.h and that should stop the build.


you can use most C compilers to autogenerate the dependencies in make format (see http://hastebin.com/kufajaqeso.sh for a sample makefile and script to generate the relevant code -- there's a commented line in header10.h that you should remove to prove to yourself that it indeed does the right thing)


Your solution isn't a solution, because it is not actually a Makefile.

Your shell script and Makefile both need to be run, in the correct order, by a meta build-system.

It won't re-generate just the right files when things change. Unless you always regenerate all the code, and that's gonna be very wasteful.

It won't parallelize things as much as possible. The code generation could be parallelized with parts of the build that don't depend on it.

It won't rescan just the right files (it will do more work than necessary).

Compare that with a real build system, such as tup or shake, that gets all of these properties right.


Everybody seems to be missing the fact that he's not talking about building a big software project. He's talking about scripting a simple workflow around a few files. Make is probably better than anything else for this, because under these circumstances it's so simple it's hard to get it wrong.

Something like Rake is obviously better when you need to really program your builds.


I depend on make for recording a workflow history of my projects. There's no point in creating an alias in my ~/.bashrc for an arcane command I'm only going to use once or twice a year to transform, sync or configure a group of files. When I check a project out of version control, it's nice to have a Makefile there to remind me how I accomplished something.

Make allows you to consistently specify actions in a universal way. Typing 'make edit' will open the main file in vim for me in any project. Typing 'make update' will check for available updates on any platform using the appropriate package manager, and 'make upgrade' will download and install them. Typing 'make sync' will transfer my project files to the specified $SERVER as the specified $USER with the appropriate protocol. Typing 'make install' will copy an updated configuration file and restart all of the daemons necessary for it to work. It's a great timesaver. Even when I need to write shell scripts for something too elaborate for make, I tend to create a target that runs the script, instead of trying to remember its name and all of the options.
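A trimmed-down example of what such a Makefile looks like for me (the targets and host are hypothetical):

    SERVER ?= example.com
    USER   ?= me

    .PHONY: edit sync
    edit:
    	vim index.html

    sync:
    	rsync -avz ./ $(USER)@$(SERVER):/var/www/site/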


I wanted to say exactly this. I wouldn't choose Make to build a software project these days (probably), but I use it all the time for exactly what the author describes: recording/managing a data processing workflow. For stats/ML exploratory work, it's invaluable, because everything you do is recorded in various targets. It makes my one-off scripts not quite so one-off, so when I revisit the code six months later, I can quickly figure out what I was doing. It also means less typing, because every command is just `make foo`, and that delegates out to R/Ruby/Python scripts, shell pipelines, or whatever I need. I'm delighted that someone else is (ab)using make the same way!


Using make quickly gets really ugly. I've written many Makefiles in my life, and if you have to use make, try to stick to GNU Make. At least it provides some basic functionality. But even that gets nasty quite soon when you need a simple thing like conditions with "and" or "or". And if you have to write portable Makefiles... well, fsck.

http://www.conifersystems.com/whitepapers/gnu-make/
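
(For what it's worth, GNU Make 3.81 and later do provide $(and) and $(or) functions, though conditionals built from them stay clumsy; a sketch:)

    # "if both DEBUG and VERBOSE are set" has to be spelled like this:
    ifneq ($(and $(DEBUG),$(VERBOSE)),)
        CFLAGS += -g -DVERBOSE
    endif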

There were recent discussions about adding GNU Guile to GNU Make. I think this is a brilliant idea. Not only would a major GNU project finally use Guile, it would also make writing Makefiles much more enjoyable, because you'd have a real programming language at hand.


Personally, I don't want a programming language in my makefiles. Unless working in a very codified environment (Java and Maven, for example), I think it is vital to keep your build system as simple as you can. If I absolutely need nontrivial logic in my makefile, I would much, much rather be forced to call out to a script file (because it encourages not doing that).

And if you insist on parking a programming language into it, it should probably be something with a reasonable amount of existing knowledge and users. I like Scheme, but unless the project was written in a Lisp I'd seriously question the judgment of anybody whacking Scheme into a build file multiple people had to work with and understand.


After using Rake (ruby with some dependency DSLish stuff) on a project for a solid year, I appreciate Make much more. Rake is too powerful and makes it too easy to keep a bunch of complex logic inside the Rakefile. Our project, left unchecked, turned the Rakefile into a pile of spaghetti.

Make, on the other hand, being kind of crufty and inconvenient, encourages moving complex logic out of itself and into external scripts earlier than Rake. On top of that it encourages you to have reasonable interfaces to those scripts so that they fit back into the Makefile gracefully.

So ironically, Make, by being obtuse, ends up encouraging better overall project build structure.


What sort of issues did you run into? I am wondering if it's something that could be solved by better modularization/sandboxing, or if it's a more fundamental problem.


It's not a programming issue per se, it's a human issue. Rake provides a nice baseline for throwing all your quickie scripts into little functions that end up with reasonable (if basic) command line interfaces for free.

Those quickie scripts inevitably grow more complicated as the project goes on (usually because the project itself gets more complicated over time) and before long you've outgrown Rake. Except that since Rake is just Ruby it tricks you into thinking you haven't outgrown it!

I started noticing it when I was taking a bunch of time deciding in what order I should put the optional parameters to my Rake tasks so that they were most convenient to the user, and spending way too much code validating those arguments and setting defaults, when I realized that I could've written a script using 'optparse' that would be easier to document, easier to use, and easier to write and modify.

There's a graph you could make where the X axis is the size of the script and the Y axis is complexity (or maybe "effort"). The Rake line, drawn on this graph, starts near 0,0 but climbs at a nice steep rate. Make starts at basically the same spot as Rake but climbs way faster. A standalone command-line script starts a bit higher on the Y axis, but is flatter overall. The point at which the Rake (or Make) line and the script line meet is where you should switch to a standalone script.

With Make, this happens fairly early on when things are still relatively simple. So you convert your bash commands into a Ruby script and you end up better off in the long run. With Rake it happens so late that converting to a standalone script becomes a very large undertaking and nobody wants to do it (because it still works--why mess with it?). Over the long haul it becomes a pain point.


You already have a programming language in your makefiles.

http://okmij.org/ftp/Computation/#Makefile-functional


I hear this sort of argument all the time (build systems, templating engines, config files, etc.) and I'm not sure I agree with it anymore. Invariably people start adding features, and you end up with horrible warty languages full of corner cases, like make or shell.


Yeah, limitations like these are often just a red rag to hackers, who naturally love the challenge of overcoming apparent limitations with ornate workarounds. See the gmsl library mentioned in the current top post, for example, which I read a bit of while trying to get a handle on the Android NDK's build system; respect to jgrahamc for its cleverness, but if you don't like Make in the first place, you're inclined to feel that more of it is bad, not good.


GNU Make already is a programming language. It's just not a very good one. Wouldn't Scheme be helpful to your "don't write programs in Make" attitude? I mean, if only a few people know Scheme, they'll be less likely to hack your Makefile.


The stupid thing is that conditions are supported just fine in BSD make, but the syntax is slightly different from GNU make's. So you're forced to write for one or the other, and GNU make is more widely used, so makefiles end up being GNU make only even though they don't require any GNU-specific features.
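
(A sketch of the divergence; both fragments are meant to express the same check:)

    # BSD make:
    .if defined(DEBUG) && ${DEBUG} == "yes"
    CFLAGS += -g
    .endif

    # GNU make:
    ifeq ($(DEBUG),yes)
    CFLAGS += -g
    endif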


Here's a very simple but powerful build system by the author of that paper: http://code.google.com/p/make-py/

It's just a single ~400 line python script. The only major feature that's missing IMO is hash-based rebuild detection, which is essential if you're using much auto-generated code.


> The only major feature that's missing IMO is hash-based rebuild detection, which is essential if you're using much auto-generated code.

I'm guessing that your goal here is to avoid a recompile if the auto-generated content did not change. This is not a problem for timestamp-based build tools if you add a single step: auto-generate your output to a temp file, and only replace the target with the temp file if the file shows a difference. The program `install` will do the 'copy iff file contents have changed' part for you, so it's really just two extra lines in the Makefile recipe.
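
(A sketch of that recipe; the generator script and file names are hypothetical. Both GNU and BSD install accept -C, which skips the copy when the contents are identical and leaves the old timestamp intact:)

    # Downstream targets rebuild only if generated.h really changed;
    # the cheap generation step itself re-runs each time.
    generated.h: gen.py schema.txt
            python gen.py schema.txt > generated.h.tmp
            install -C -m 644 generated.h.tmp generated.h
            rm -f generated.h.tmp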


Interesting idea, I hadn't thought of that before. This only increases the complexity of the Makefile though, and for not much reason. Hashing can help in other circumstances (such as possibly skipping a linking step if the object files don't change), and it would really be much cleaner to have it as part of the build system. The last time I played around with generating code through make, it got real ugly real fast, and I don't think your 'install' trick will help much in that regard...


> ... for not much reason

There's actually a good reason to do this, but it is an edge case. A timestamp-based build system can do a no-op check with very little I/O. A hash-based build system has to read all of the file contents in order to determine that nothing has changed. Depending on the latency and bandwidth of your storage, this can make a big difference in incremental builds.


I was more talking about the need to implement this layer yourself, when it should be taken care of by the build system (though I don't know of any build systems that can easily implement your suggestion, since it has to somehow redirect the output of a code-generation step to another file). Agreed that there are performance differences, though in my experience hashing is quite acceptable even for large projects if used reasonably (only hashing inputs to build steps if they are also outputs of another build step).


Any thoughts on CMake for portable Makefile generation? It seems a lot of projects use it.


CMake is a great example of the problems with DSLs: not everybody is a good language designer. CMake uses a pretty horrible language and comes with its own kinds of strangeness. I have used it quite extensively, and it's probably the way to go if you want to support MSVC. But I'm not really happy with it.


Most systems I use have a recent GNU make installed. GNU make 3.82 added the special target .ONESHELL, which makes code like the following possible. I'm not sure if it is a good idea, but it does remove the dependency that make has on shell programming.

    .ONESHELL:

    SHELL = /usr/bin/python

    VAR := lorem ipsum

    all:
        @
        import re
        
        n = 3
        
        print('make variable: $(VAR)')
        print('local variable n: {}'.format(n))
        
        rx = re.compile(r'\d+')
        s = 'foo 17'

        m = rx.search(s)
        if m:
          print('has number: {}'.format(s))
        else:
          print('no number: {}'.format(s))



I find this a really interesting conversation. On the one hand I've built some really really complicated systems with make (like all of SunOS) and some even more complicated systems with a custom build tool (the google base infrastructure packages) and there are pluses and minuses to both approaches.

If you're building something small, it's hard to beat a simple makefile. It's easy to write, it allows you to capture dependencies and refactor quickly, and it doesn't interrupt your coding flow.

If you're building something quite large there are some real productivity benefits from building knowledge of what you are building into the build system. And computational build systems (which is to say build systems where the build spec file includes the capability to do local computation) can make retargeting the same build to different environments easier.


Three years ago I wrote a blog post arguing in favour of Make: http://blog.jgc.org/2010/11/things-make-got-right-and-how-to...

I'm not sure I still agree with everything in that post, but Make is very terse and expressive which I like.

One of the biggest problems with Make is that it's a macro language and some people don't get along well with them.


Like everyone seems to be mentioning, this post is really about "you should have automatic builds", and make is just incidental. redo[1] is another interesting make replacement that keeps most of the good points of make (e.g., it is simple and shell based) while having a more powerful dependency mechanism. It's even compatible with make itself, allowing you to move part of a recursively built project to redo while having it call into make for other subcomponents.
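
(For the curious: redo rules are plain shell scripts named after their targets. A sketch following apenwarr/redo's conventions, where $1 is the target, $2 the target without its extension, and $3 a temporary output file:)

    # default.o.do -- builds any foo.o from the matching foo.c
    redo-ifchange $2.c
    gcc -c -o $3 $2.c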

What I've done before when doing data-driven blog posts[2][3] was to write the whole analysis end-to-end in ruby, including branching out to R for stats and graphing[4]. I ended up using rake (ruby's make-like tool) to tie everything together with dependency tracking but I could just as well have written a script that would call everything in order. Doing that gives you a way to quickly reproduce results (what's discussed here) but also, together with version management, a way to go back to previous versions and use things like "git bisect" to figure out when you introduced a bug.

[1] https://github.com/apenwarr/redo/

[2] http://pedrocr.pt/text/how-much-gnu-in-gnu-linux

[3] http://pedrocr.pt/text/preliminary-results-open-source-evolu...

[4] https://github.com/pedrocr/codecomp


To everyone with alternatives to make:

Make is easy to learn.

It simplifies a slightly complex task.

Other build tools have a steeper learning curve. If they're more complex than the problem being solved, people won't want to adopt them.

When things get complex, there's the GNU make manual and libraries.

When you have to target multiple platforms, there's autotools, which is really complex and intimidating, but less complex than targeting multiple platforms.

Alternatives to make seem to fit in between make and autotools.

Make sucks. I've also heard that Unix sucks, and C sucks, too. Despite this alleged suckiness, these things not only persist, but accumulate improvements over the years.


Definitely agree. Unfortunately, people always know better, even when they don't have a clue. My philosophy is: if the development environment provides you a modern, native build system (e.g. go build, go get), go with it; otherwise stick to make, unless using autotools etc. will give you a reasonable advantage.


This article isn't so much "Why Use Make" as it is "Use a Build System". It even says so in the sidebar.


Yep! I wrote about Make because it's what I use and it's the most ubiquitous. But aside from the section on syntax, the post applies to nearly any build system. (And as another poster mentioned, other build systems might offer advantages over GNU Make, such as using content hashes instead of modification times, or more elegant syntax.)


Drake was just released; it's a make replacement specifically for data processing, and it seems to have some nice features.

http://blog.factual.com/introducing-drake-a-kind-of-make-for...


I like discussing build systems. However, personally I still come back to make all the time. There are certainly better alternatives (tup, shake, redo, etc), but make is simply available everywhere. It is available, if I use an OS which is a decade old. It will be available, if I use an OS in ten years from now.

I am writing this on Ubuntu LTS (12.04) and none of those three advanced build systems above is available from the default repos. This means a serious dependency burden, if somebody wants to build my stuff.

A few years ago, Jam was a promising build system. It seems to be dead now. Hopefully, tup and shake will stay longer.

Until you have good reasons against make, it is a perfectly fine initial solution.


CMake is available absolutely everywhere and is used to build very large projects such as Boost, KDE, and ROS. Of course it's a C/C++ build system not a general purpose one.


Everyone points at make and says, "use something better". They point at Rake, they point at Maven. However, Make has one feature that I really need, that I rely on to speed up builds: parallel dependency construction.

The last time I checked, neither Maven nor Rake do this properly. Maven runs everything inside a single VM. Who wants to have to worry about multithreading in their unit tests? Rake requires thread support in the underlying Ruby interpreter (why?).

I've got 64+ hardware threads (Sparc T4-1) all sitting there waiting to be spun up and do my bidding. Please help me to convert electricity to heat!


Inferno's <{} construct dispatches processes in parallel, and gathers output in coherent blocks. I use it in credo to parallelize all the dependencies for a target. I rely on mutual exclusion to avoid building a target twice at the same time, and on cached checksums to avoid rebuilding when there have been no changes. See the sh-inferno scripts map, credeper, and cresum in the project http://github.com/catenate/credo


Waf can also do parallel build with proper dependencies, and it's Python-based. Have you looked at it? The documentation is rather clear, but the initial learning curve is probably steeper than make.


Thanks for the pointer, I'll go check it out.


I can't tell you how many projects I've shelved for months and when I returned thought, "Damn it. How do I build this again? Oh look a Makefile. Thank you Past Eric."

Even for projects that use more sophisticated build tools like rebar, leiningen, or npm, I write a Makefile so I don't have to remember them. Make provides a universal interface to those tools.
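
(A sketch of such a wrapper Makefile; the underlying commands are illustrative:)

    # One memorable interface, whatever tool the project really uses.
    .PHONY: deps build test
    deps:
            npm install
    build: deps
            npm run build
    test: build
            npm test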


I've used Make in a similar context: building documents.

Raw simulation results (.log) -> Processed for plotting (.plot) -> Ugly Fig files (.x.fig) -> Pretty Fig files (.fig) -> EPS files (.eps) -> The final document (.pdf).

By including the right dependencies in there, you can have individual figures update themselves when the raw data changes, and whole swathes of charts update themselves when the 'fixer' scripts get updated.
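
(Pattern rules keep a chain like that compact. A sketch: the processing scripts are hypothetical, and fig2dev comes from transfig:)

    # One pattern rule per hop; make chains them automatically.
    %.plot: %.log
            ./process_log.sh $< > $@
    %.fig: %.x.fig fixer.sh
            ./fixer.sh $< > $@
    %.eps: %.fig
            fig2dev -L eps $< $@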


I've done this for a large scientific data reduction task. Each operation at the level of one month needed to be repeated several times to dial in parameters for tossing junk data.

Once the per-month operations were done, all the results were combined into various plots and html pages. There were about 120 months, and running one month took several CPU hours.

I put it all in a makefile. It saved a huge amount of time to not have to repeat all the data reduction (for each month) whenever the plots and subsequent analysis needed to be updated due to an underlying parameter change for a few months' data. I could run make and know that all the per-month changes would roll up correctly to the per-year and overall summaries.

Also, the -j argument to make handled parallelizing the data reduction at zero cost to me.


I used Make, inotifytools, pdflatex (or similar), "xpdf -remote", and $EDITOR to get an almost-instantly-updated view of papers and my thesis, including all figures and illustrations.

When any of the dependencies of the final output changed, inotifywait noticed and kicked off a "make", and then notified the xpdf instance to refresh itself. xpdf was nice enough to try to stay on the same page when it did this.
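
(The glue is roughly this, assuming inotify-tools; the file names and the xpdf server name are illustrative:)

    # Rebuild and refresh the viewer whenever a source file changes.
    while inotifywait -e modify -r src/; do
        make thesis.pdf && xpdf -remote thesis -reload
    done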


Me too, building a series of HTML/PDF output pages from a docbook source, for example.

Make is a lovely tool that is useful for more than just compiling source code.


I once used gmake to implement a multi-stage Mechanical Turk workflow.

It was awful. The syntax sucks. But it worked consistently, and the core logic was only 110 lines of Makefile. It described the files and the data flow between them. Even now, I can read it and understand it with not too much effort.

Make is a very simple functional language. It's restartable. If you type 'make -j 2' it becomes parallelizable. For almost anything for which you might write a shell script, you have to ask yourself: Why not make instead?

I would like a cleaner, kinder make, but I also want it to retain make's essential make-iness. Nothing else seems to do that.

I do feel bad for anyone downstream of my makefiles, though.


actually, why not a shell script?


Shell scripts are fine, but if your shell scripts generate files, then makefiles might help you model the dependencies between them. You get parallel execution and restartability almost for free (and they detect errors a little better).


I agree Make is great at what it does but too many people really abuse their makefiles and don't use dependency management correctly. I suggest taking a look at http://www.amazon.com/Managing-Projects-Make-Nutshell-Handbo...

A really well-written book. You don't have to agree with everything, but it's a great look at some of the better ways of using Make.


As noted in many of the comments, make sucks, and the article is not promoting make per se. But make seems to be the topic of conversation, so:

To me, make is a cruddy low-level declarative language that gets abused as a pseudo-imperative language, compounding the problem. Phony targets like "make <verb>" break the paradigm, because <verb> is not an artefact that can be tested for up-to-date-ness.

But one advantage of make that I'm seeing underrepresented in the comments is its ubiquity. If I just want to try your project, and I need to build your project to try it, and your project is using the cool new SchnauzerBuild[1] tool, and there's no SchnauzerBuild package for my OS, I have to go install that from source... which might have its own build dependencies... ok, your project looks kind of cool but I have better things to do than this.

I think it's great when build tools are able to write out a Makefile or sh[2] script that just builds everything, and when projects ship with that pregenerated (and remember to keep it updated).

[1] fictional [2] another technology that sucks but is ubiquitous


This is a sloppily phrased call for automation. I can wholeheartedly endorse automation as a goal, but the idea that make can be your go-to tool for this is embarrassing. Make will get you into trouble the moment your task becomes non-trivial. Everyone has their favourite tools and no single tool is best for every job (although grep and find are often your friend).

As I often say in job interviews, I'm lazy. I protect the lazy guy inside of me, because he's the one who cries out "Didn't we do this manually before? Shouldn't we automate it?" He's a good guy to have around.


>>(Note: use tabs rather than spaces to indent the commands in your makefile. Otherwise Make will crash with a cryptic error.)

Sheesh.


Requiring tabs alone is reason enough to ditch make.


... or use an editor with syntax highlighting for Makefiles.


"Makefiles are machine-readable documentation ... "

Yes, it is machine-readable, but what about us humans? Makefiles are ugly, and when working on someone else's project it is very hard to reverse-engineer the build system. Build systems are themselves software projects, and we need better tools to develop and maintain them.

SCons was a promising project at one point; it improved things by capturing the build system in Python classes. I thought things would be more maintainable and readable. However, for me it wasn't well designed: it obscures build-system development by mixing declarative and imperative programming.

Makefiles are the de facto standard today, but they're nowhere near beautiful, or maintainable, or readable.

There are also variations such as gmake, imake, and so on, which only add their own quirks without solving the real problems.


scons is a good alternative to make. Make is OK for small projects, but it gets ugly pretty quickly... and it allows you to make mistakes easily...


all these "make for beginners" tutorials i've seen point out (correctly) how make is, at its most basic, just a dsl for specifying a dag.

it seems to me that it would be pretty useful to have a tool that let you build up said dag graphically, perhaps dragging and dropping files from an explorer pane, and then generated a makefile under the hood.

add in simple "infinite undo" git support that checkpointed every time you built with new inputs, and you'd have a dead simple way for non-programmers (who nonetheless have to do some programming to work with their data) to get the benefits of programming best practices. does such a tool exist?


This is a nice article, especially on a general level of writing scripts that automate work so you don't have to redo it manually.

I want to point out one thing.

  targetfile: sourcefiles
	command

This violates DRY: there is duplication between the command and the source files (and the target file?). In theory, it should be possible to automatically deduce the source files from the command.

It's far from easy in the general case, but it would save you from having to manually update source files when the command changes. (Any small inefficiency times lots of occurrences times lots of people really adds up.)


I don't understand. In make, I can do:

    program : f1.o f2.o f3.o
            $(CC) -o $@ $^

    f1.o : f1.c f1.h f.h 
            $(CC) -c -o $@ $<
Where "$@" is the target, "$^" is a list of the source files, and "$<" is the first source file. And that's if you want to be verbose.


I've started using make for production data generation (not building software) instead of pure Python. Mostly to tie together Python scripts doing the real work.

Pros:

- dependency management for free

- a well-known paradigm makes it easy for someone other than me to figure out where to look if something went wrong

- scripts I call out to can be focused

Cons:

- syntax

- for processes that don't generate an output (think adding data to a file in place) I wind up creating placeholder files ("file.transformA.done"); see the sketch below
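
(The placeholder pattern looks roughly like this; the script and file names are illustrative:)

    # The .done file stands in for the in-place modification.
    data.transformA.done: data.csv transformA.py
            python transformA.py data.csv
            touch $@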

I actually want my dependency management to be terse and declarative, which is the opposite of what I'm looking for in a programming language, so it feels like a pretty natural divide.


> You can approximate URL dependencies by checking the Last-Modified header via curl -I.

... or better, use the '-z' option.

    counties.zip:
        curl -o counties.zip -z counties.zip 'http://whatever.zip'


No you shouldn't. I know make inside and out, a result of being stubborn and lazy, and I am positive I can build a better replacement for it in a matter of days. Something like, but more general than, Rake.

The way Make works is very complex, with lots of implicit rules, special exceptions, etc. Debugging a Makefile is a chore.

If you are bent on using it, two pieces of advice: there is a handy debug flag that will tell you everything make is doing, and do disable all the implicit rules.
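
(Concretely, with standard GNU Make flags:)

    make -d    # trace every decision make takes
    make -r    # disable all built-in implicit rules
    make -p    # dump the internal database of rules and variables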


> I couldn't disagree more: make is powerful and ubiquitous.

Then why haven't you?

> Debugging Makefile is a chore.

Debugging make is a breeze. make -rd tells you exactly what make decided to do and why.


More than convincing me to use Make for dataviz, this really begs the question, "Is there a good dataflow manager for data visualization?" Something that can use URLs as dependencies, simpler syntax than Make, perhaps has a node/pipe dataflow GUI... Cascading.org comes to mind, but it is too complicated and Hadoop-oriented for this kind of dataviz.


My problem with make for data pipelines is that a lot of decisions have to be content-based instead of timestamp-based. Multiple platforms are also an issue; I usually don't bother installing the whole of Cygwin just for GNU make. I end up with custom Python scripts, which may be more verbose but are more flexible and almost always cross-platform.


Have you tried Waf? http://code.google.com/p/waf/ Seems like it might be a good fit.


How does Waf compare to SCons? I'm a big fan of SCons, although it annoyingly does not have transitive dependency resolution for libraries. Waf has this functionality, correct?


I found it hard to make scons build outside my source directory (instead of polluting it). This is important for example when you don't want random files showing up in your git clone.

With waf, this is easy.

Subjectively, waf code looks more Pythonic than scons, though both are supposedly Python.


Waf is also much nicer for end-users, if you're the sort who distributes source packages. It mimics the "./configure; make; make install" sequence as "./waf configure; ./waf; ./waf install".


I haven't used Waf either, but some information is here:

www.scons.org/wiki/SconsVsOtherBuildTools


This is funny to read, especially after spending several hours trying to get a project to compile with make, while tracking down an annoying and cryptic error that doesn't even make sense.

Make is a horrible tool, and is extremely hard to learn. I think it should only be used when generated by other build systems.


Make is an excellent expert system / topological sorting tool with an abominable syntax.


Posted a new link to Fbuild. It's a build system that has quite a different and interesting take on the build process.

http://news.ycombinator.com/item?id=5276504


And don't forget how broken recursive make can in fact be: http://aegis.sourceforge.net/auug97.pdf . A lot of builds are nasty "Heisenbuilds."


You should use CMake.


CMake generates Makefiles, and it really is only good for building software, while Make can be used for any number of complex tasks. I agree that if you need a build system, use CMake; it will make your life easy.


> use CMake it will make your life easy

At the expense of your users.

Okay, I know next to nothing about CMake. All I know is that it is extremely frustrating when I get the source to a project that uses CMake and then have to modify the CFLAGS or figure out why it can't find some include file. Of course, the same applies to makefiles generated by automake.

Let's just write our makefiles by hand, people.


That is the worst idea ever; hand-written makefiles are extremely prone to failure, more so than CMake files. There is a way to verify that you have all your dependencies and that everything is discovered rather than hard-coded, but some people really like to hard-code things (CMake and make both let you do this).

The thing CMake gives you is that it will _automatically_ handle almost everything involved in creating correct makefiles, if you tell it to. Why write the same code to discover dependencies and to set up external build directories and staging directories, when we can do it once and have a computer write it out for us? I see it this way: makefiles are great and reduce your typing at the computer by a factor of 1000. CMake is also great and reduces your typing over make by a factor of 10 to 20. Things like Go or SBT/Maven are even better, reducing typing to almost nothing at all (SBT doesn't require any typing to build most standalone programs).


"everything is discovered rather than hard coded"

I take it you've never used or read any of the Find*.cmake files, or tried to figure out which variables you need to set in all the spaghetti. Is it *_LIBRARIES or *_LIBRARY_DIR? Is the comment at the top related to reality? The awful homegrown language makes it all worse.


CMake honours the CFLAGS and CXXFLAGS environment variables, but only on the initial generation of CMakeCache.txt (first call to cmake in a new build directory). You can also edit them later using the GUI (but only by enabling advanced options) or by editing CMakeCache.txt.
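
(In practice, that means:)

    # Picked up only on the very first run in a fresh build directory:
    CFLAGS="-O2 -g" cmake ..

    # Explicit, and repeatable on any run:
    cmake -DCMAKE_C_FLAGS="-O2 -g" ..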

But I can see why this is annoying to users.


Inevitably, any build system becomes part of the software. With make, this means you are now using a far inferior language to build your software.


From what the example accomplishes, it seems to be that it would have been easier to create a shell script. Is there something I am missing?


I think a shell script can work as well as a makefile in most cases. The pro of a shell script is not having one more tool for running your commands. You have a shell; why do you need make for describing build steps?

For the moment I see a makefile as a shell script that fails when a command fails. But you can use the `-e` option with the sh, bash, and rc shells and expect the same behaviour.

The only thing shell misses (compared to a makefile) is the dependency check; you have to write the dependency mechanism yourself if you need it.

Also I don't like the .PHONY stuff, I just can't get it, it feels alien.
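
(For reference, .PHONY just marks targets that aren't files, so make runs them unconditionally; a sketch:)

    .PHONY: clean
    clean:
            rm -f *.o
    # Without .PHONY, a stray file named "clean" in the directory
    # would make this target look permanently up to date.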

Am I missing anything obvious?


It doesn't sound like it to me. Thanks for the reply.


I just started learning about Make today, but according to the other comments in this post, there are lots of differences, like incremental builds and parallelism. It also checks things like file modification dates and whatnot.


Make is fine, easy to learn and understand, but not very portable, while Autotools are scary giant monsters left from previous generations.


What about Java based tools like Ant or Maven?


Pro: they avoid starting another instance of the JVM for each "command"; they run on Windows.

Con: they are grotesquely bloated for humans to read and write; (as indicated elsewhere) they make any custom file generation "tasks" a PITA.

Personally, I'd like to strangle the bastard that invented Ant :-)

It would have been so much nicer to have just copied make syntax, and predefined a few macros for java-specific tasks which actually spun off threads within a JVM based interpreter.


<description><sentence> <subject>It</subject> <verb>depends</verb> </sentence></description>

XML is an awful language to program in, and every Ant file grows with the software it builds => Ant files that take more than 2 screens are very rarely maintainable.


Ant doesn't conditionally re-execute a recipe when the prerequisites are newer than the target. Such logic isn't needed for Java projects since it is built into javac. Hence Ant is okay perhaps for Java projects but inefficient for others.


I had a professor who would use make to manage document format conversion of his assignments and handouts.


I think what you were looking for here is actually a shell script, rather than a Makefile. A shell script is much more suitable for general purpose programming. Or better yet, use Python ;-) And, let's admit it, for small projects dependency analysis is overrated (although it can be implemented easily enough in a real programming language).


One thing make allows you to do is incremental runs. For example, if the first step in your shell script costs 1/2 hour, you really don't want to run the shell script over and over again when you are debugging the next step. Yes, you can implement your own caching logic in your shell script, but its usually much better to use a tool specifically designed for that job.
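
(A sketch of that incremental caching; the scripts are illustrative:)

    # The half-hour step re-runs only when its input changes.
    step1.out: raw_data.csv
            ./expensive_step.sh raw_data.csv > $@

    step2.out: step1.out
            ./cheap_step.sh $< > $@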


Isn't this just as true of Ant or MSBuild or Gradle (or s/.*/yourfavoritebuildtool/s)?


Man alive just use an IDE.


can't I use js with Node.js instead?


Cakefiles are a nice CoffeeScript based alternative.


Pardon? I think you might be missing a project name, e.g. foo.js


Created in 1977, Make has its quirks. But whether you prefer GNU Make or a more recent alternative, consider the benefits of capturing your workflow in a machine-readable format.

Like a programming language?


Yes. Make is just a DSL for file-based dependency management.

You could certainly write makefiles in another language, but chances are it would be a lot more verbose and probably less portable. Of course there are other make tools apart from Make, so feel free to use whatever you feel is best suited.


Exactly what I was thinking. Use a scripting language you're proficient in. This article doesn't make Make look worth the trouble if you don't have a more specific reason to use it. Too much complexity, and quirks like "you'll need to delete the previously-downloaded zip file before running make" can easily be avoided with a scripting language.

These days any relatively convoluted task that I may need to do more than once, I cook it in Ruby (and sprinkle some AppleScript/appscript if it involves UI). Languages like Ruby or Python offer a good balance of flexibility and abstraction for this kind of automation.


If you're using a series of shell commands, each of which takes seconds to hours to run, you will be happy with make's ability to (1) run shell commands easily and (2) not run them again when they don't need to be run. You could write a dependency tracking thingy in any programming language, but at that point you might as well be using Rake or some other make replacement.


Agreed, I generate documents with graphs and calculations in them for my research, but I tend to document and automate them with python scripts rather than makefiles, because python does not make my brain hurt.



