Ask HN: In what creative ways are you using Makefiles?
96 points by kamranahmed_se on July 24, 2017 | 99 comments



My favorite use was during my PhD. My thesis could be regenerated from the source data, through to creating plots with gnuplot/GRI and finally assembled from the Latex and eps files into the final pdf.

It was quite simple really, but very powerful to be able to tweak or replace a dataset, hit make, and have a fully updated version of my thesis ready to go.


I'd go as far as to argue that such a plan is essential to reproducibility (and it makes your work faster and less error-prone).

My take on the same plan ( "make thesis" ):

It's not pretty, but it works. https://github.com/4kbt/PlateWash

Unblinding tweet: https://twitter.com/CharlieHagedorn/status/59565285891105177...

Finished product: https://digital.lib.washington.edu/researchworks/handle/1773...


> My thesis could be regenerated from the source data, through to creating plots with gnuplot/GRI and finally assembled from the Latex and eps files into the final pdf.

All research should be like this. Explaining things in words is simply so imprecise you end up spending forever to show someone something that a program can tell you quickly.

"I used xyz transformation with abc parameters" can be gleaned easily from code.


I do the same with LaTeX when generating contracts for my company. The makefile and its corresponding Python program ask for various arguments (company name, xxx, yyy). Then it generates the contract with the associated prices. I even put some automatic discounts/gifts in place (a free service) if the amount is bigger than X or Y.

I quite like it!


Would it be possible to view a snippet, just to get an idea of how you do this? Cheers!


I do something similar, here's my Makefile -- I have scripts that build figures in a separate directory, /figures. I'm sure it could be terser, but it does the job for me.

    texfiles = acronyms.tex analytical_mecs_procedure.tex analytical_mecs.tex \
               anderson_old.tex background.tex chaincap.tex \
               conclusions.tex cvici.tex gold_chain_test.tex introduction.tex \
               main.tex mcci_manual.tex methods.tex moljunc.tex \
               tb_sum_test.tex times_procedure.tex tm_mcci_workflow.tex tmo.tex \
               vici_intro.tex
 
    # dynamically generated figures
 
 
    all: main.pdf
 
    main.pdf: $(texfiles) figures/junction_occupations.pdf figures/overlaps_barplot.pdf \
                          figures/transmission_comparison.pdf \
                          figures/wigner_distributions.pdf
        pdflatex main.tex && bibtex main && pdflatex main.tex && pdflatex main.tex
 
    figures/junction_occupations.pdf: figures/junction_occupations.hs
        ghc --make figures/junction_occupations.hs
        figures/junction_occupations -w 800 -h 400 -o figures/junction_occupations.svg
        inkscape -D -A figures/junction_occupations.pdf figures/junction_occupations.svg 
 
    figures/overlaps_barplot.pdf: figures/overlaps_barplot.py
        python figures/overlaps_barplot.py
 
    figures/transmission_comparison.pdf: figures/transmission_comparison.py
        python figures/transmission_comparison.py
 
    figures/wigner_distributions.pdf: figures/wigner_distributions.py
        python figures/wigner_distributions.py
 
    clean:
        rm *.log *.aux *.blg *.bbl *.dvi main.pdf


I noticed that you have some file dependencies not encoded in the targets. Also, you might like reading up on Automatic Variables ($@, $^, $<, etc). Anyway, just for fun I tried rewriting your script in a way that should Just Work a little better.

    texfiles    = $(wildcard *.tex)

    figures_ink = figures/junction_occupations.pdf
    figures_py  = figures/overlaps_barplot.pdf        \
                  figures/transmission_comparison.pdf \
                  figures/wigner_distributions.pdf
    figures     = $(figures_ink) $(figures_py)


    main.pdf: main.tex $(texfiles) $(figures)
        pdflatex $<
        bibtex $(<:.tex=)
        pdflatex $<
        pdflatex $<

    figures/junction_occupations: %: %.hs
        ghc --make $^

    figures/junction_occupations.svg: figures/junction_occupations
        $< -w 800 -h 400 -o $@

    $(figures_ink): %.pdf: %.svg
        inkscape -D -A $@ $^

    $(figures_py): %.pdf: %.py
        python $^

    clean:
        rm *.log *.aux *.blg *.bbl *.dvi main.pdf


Check out latexmk: it keeps track of having to run bibtex etc., and runs latex "enough times" so that all equation refs etc. have stabilised. (latexmk -pdf to build a PDF; the default is DVI.)


Yes. I don't think I'll get a chance to dig out my Makefile, but it was very much like this.


  figures/%.pdf: figures/%.py
      python $<


For what it's worth, I did something similar for my master's dissertation, but couldn't be bothered to learn make, so I used a python library called pydoit:

http://pydoit.org/

All my analysis code, plots, etc. were already in Python, so it fit in well. LyX has a CLI from which I exported .tex and compiled to PDF.


I use Makefile as a wrapper for build/test bash commands. For example, I often define these targets (a rough sketch follows the list):

- make test : runs the entire test suite on the local environment

- make ci : runs the whole test suite (using docker-compose, so this can easily be executed by any CI server without having to install anything other than docker and docker-compose), generates a code coverage report, and uses linter tools to check code standards

- make install-deps : installs dependencies for the current project

- make update-deps : checks whether newer versions of dependencies are available and installs them

- make fmt : formats the code (replaces spaces with tabs or vice versa, removes extra whitespace from the beginning/end of files, etc.)

- make build : compiles and builds a binary for the current platform; I also define platform-specific sub-commands like make build-linux or make build-windows
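
A minimal sketch of what such a wrapper Makefile might look like, assuming a Go project and a docker-compose service named "ci" (both assumptions; recipe lines need real tabs):

    .PHONY: test ci install-deps fmt build

    test:
        go test ./...

    ci:
        docker-compose run --rm ci
        golangci-lint run

    install-deps:
        go mod download

    fmt:
        gofmt -w .

    build:
        go build -o bin/app .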


I have the exact same usage, generally with the same names too.

It's great for going back to a project you haven't touched in months/years and then typing "make" to build it regardless of the language or tool chain.


Late to the thread, but I'm just wondering: why are so many people using makefiles instead of, say, bash scripts?

I personally achieve the same result with a homebrew solution that uses bash exclusively (https://github.com/zweicoder/magic), but I'm just curious to know why so many people prefer makefiles.


We handle it pretty similarly at work, to have consistent commands over all repositories regardless of language. Additional targets are

- make docker-build

- make docker-test

which are essentially wrappers around build/test, just in docker.


I more or less use the same targets, with one more that I've been using for the past couple years:

- make dev: stops, builds, and runs the code locally in a Docker container.


are you me?


Teradata contributes to the Facebook open-source project Presto, and we use Docker to run tests against Presto. Since the tests require Hadoop to do much of anything useful, we install Hadoop in Docker containers.

And we run tests on 3 flavors of Hadoop (HDP, CDH, and IOP), each of which is broken down into a flavor-base image with most of the packages installed, and various other images derived from that, which means we have a dependency chain that looks like:

base-image -> base-image-with-java -> flavor-base => several other images.

Enter make, to make sure that all of these get rebuilt in the correct order and that at the end, you have a consistent set of images.

https://github.com/Teradata/docker-images
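
The general pattern looks roughly like this (a sketch with illustrative names, not the actual Makefile from the repo): each image gets a marker file as its make target, so dependent images rebuild in order and unchanged ones are skipped:

    base-image.built: base-image/Dockerfile
        docker build -t base-image base-image
        touch $@

    base-image-with-java.built: base-image.built base-image-with-java/Dockerfile
        docker build -t base-image-with-java base-image-with-java
        touch $@

    flavor-base.built: base-image-with-java.built flavor-base/Dockerfile
        docker build -t flavor-base flavor-base
        touch $@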

But wait, there's more. Docker LABEL information is contained in a layer. Our LABEL data currently includes the git hash of the repo. Which means any time you commit, the LABEL data on base-with-java changes, and invalidates everything downstream. This is terrible, because downloading the hadoop packages can take a while. So I have a WIP branch that builds the images from an unlabelled layer.

https://github.com/ebd2/docker-images/tree/from-unlabelled

As an added bonus, there's a graph target that automatically creates an image of the dependency graph of the images using graphviz.

Arguably, all of the above is a pretty serious misuse of both docker and make :-)

I can answer complaints about the sins I've committed with make, but the sins we've committed with Docker are (mostly) not my doing.


I wanted to download a few hundred files, but the server allowed only 4 simultaneous connections.

I wrote a makefile like:

    file1: 
        wget http://example.com/file1

    file2: 
        wget http://example.com/file2

    file3: 
        wget http://example.com/file3

And I used make -j4 to download all of them with only 4 parallel tasks at once; make starts another download as soon as one finishes.
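
The same thing can be written more tersely with a pattern rule, assuming the files all share the same URL prefix (an assumption here):

    files = file1 file2 file3

    all: $(files)

    file%:
        wget http://example.com/$@

Run it with make -j4 as before.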


Check out aria2 next time.

https://aria2.github.io/


Xargs parallel processing or GNU parallel would probably help a lot here.


I once implemented FizzBuzz in Make: https://www.reddit.com/r/programming/comments/412kqz/a_criti...

Even though Make does not have built-in support for arithmetic (as far as I know), it's possible to implement it by way of string manipulation.
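
The usual trick (a generic sketch, not the implementation from the linked post) is to represent a number as a list of words, so $(words ...) turns it back into a decimal:

    n :=
    # "increment" by appending a word
    n := $(n) x
    n := $(n) x
    n := $(n) x

    $(info n = $(words $(n)))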

I don't recommend ever doing this in production code, but it was a fun challenge!


Did you get the job?


No, I just made it for fun. Someone else in the thread said they'd be impressed by a proper implementation in Make, so I took a stab at it.

I do wonder what the response would be like if I actually wrote something like this in a real job interview though...

Would the interviewer like my comment style? Would they be impressed that I have the technical skills needed to actually pull it off? Or, would they be so horrified by it that they'd refuse to ever let me touch any of their code? :)


Not particularly creative, but I'm using it to generate this blog:

http://www.oilshell.org/blog/ (Makefile not available)

and build a Python program into a single file (stripped-down Python interpreter + embedded bytecode):

https://github.com/oilshell/oil/blob/master/Makefile

Generally, though, I prefer shell to Make: I just use Make for the graph, while shell has most of the logic. Honestly, Make is pretty poor at specifying a build graph.


I did the same; the makefile and accompanying blog entry are here: http://flukus.github.io/building-a-blog-engine.html

It turned out much simpler (if less featureful), much faster (it's finished in the time it takes node to start), and much more stable than others I've used.


What are make's weaknesses in specifying build graphs? (Asking as someone who hasn't used a lot of make, but might be soonish.)


1. You can only specify that a recipe creates multiple output files (for instance, an output file and a separate index file) if it has wildcards.

2. Temporary file handling is completely broken. You can declare a file to be temporary, so that make deletes it after all the jobs that use it have finished. However, make randomly deletes the files at other times (like for instance if a command fails), and fails to delete the files at other times.

3. There is a complete inability to specify resource handling - for instance, I want to mark that this recipe is single-threaded, but that one uses all available CPU cores, and have make schedule an appropriate number of jobs.

4. If you want to have crash-recovery, then you need to make your recipes generate the output files under a different name and then do an atomic move-into-place afterwards. Manually. On every single recipe.

These reasons (and others) are why I gave up on make for bioinformatics processing and wrote a replacement. I'll release and publish it at some point.


I know make can seem a little baroque, but this is just wrong.

1. Multiple targets for a single recipe:

    file.a file.b file.c: dep.foo dep.bar
        ...
This says that the recipe makes all of file.a, file.b and file.c in one go.

2. Make definitely doesn't randomly delete files. It deletes implicit targets.

Make by default knows how to build a lot of standard things like object files for c programs, yacc and bison stuff, etc. These are called implicit targets. These are considered intermediate files to be deleted. You can override the defaults or add your own implicit targets by using pattern matching like this:

    %.foo: %.bar
        ...
If you want to use pattern matching for non-implicit targets so they don't get deleted, you can do that too:

    a.foo b.foo c.foo: %.foo: %.bar
        ...
The list before the first colon says which targets the pattern-matched rule applies to and shouldn't contain wildcards. These targets won't get deleted.

3. This seems like a misunderstanding of make's basic role. Make just spawns shells when running a recipe; like bash, it shouldn't need to know how many threads you're using to run an arbitrary command. If you want make to build targets in parallel whenever possible, look at the `-j` option. If you want a certain build recipe to run multi-threaded, use the proper tool for the recipe.

4. Not sure what you mean by crash recovery, but considering the above, I'm pretty sure you might just be fighting make unnecessarily.

Honestly, try reading the info manual. It's kind of massive and daunting, but the intro material is really accessible, and taken in pieces, you can easily learn to become friends with this venerable tool.


1. That doesn't do what you think it does. From the manual: "A rule with multiple targets is equivalent to writing many rules, each with one target, and all identical aside from that." It does not mean one rule that creates multiple targets. To achieve that, you need to use wildcards. For some reason, when using wildcards, the syntax is interpreted differently.

2. If I create a rule to create an intermediate file "b" from original file "a", then another rule to create file "c" that I want to keep from "b", but there is an error running the command that creates "c", then make will happily delete the intermediate "b" (which in my case took 27 hours to create) although it knows the final "c" wasn't created properly. This means that when I rerun make (having fixed the problem), that 27 hour process needs to be run again, which is a waste of my time.

3. I want to say "make -j 64" on my 64-thread server, and not have 64 64-thread processes start. But I also do want 64 single-threaded processes to run when possible.

4. By crash recovery, I mean that by default a process will start creating the target file. If someone yanks the power, that target file will be present, with a recent modified time, but incomplete. Make will assume the file was created fine, so when I rerun make it will try to perform the next step, which may take 10 hours to fail. I want make to notice that the command did not successfully complete, and restart it from scratch.


For #2, you can mark intermediates worth keeping with .PRECIOUS.

https://www.gnu.org/software/make/manual/html_node/Special-T...

For #3, I think you may be misreading ps; on Linux, ps will show you threads as if they are processes when they are not.


For #2, .PRECIOUS doesn't help me. From the make manual: "Also, if the target is an intermediate file, it will not be deleted after it is no longer needed, as is normally done." This means that my intermediate files will never be deleted by make, even when everything that is built from them has been completed.

For #3, no I think I know how to read ps. I don't want 64 64-thread processes running on my 64-thread server, because that is hell for an OS scheduler, and makes things run slower, not faster.


For #2, you could always make a dependency that removes your intermediates for you after your final use. You can't be mad at make because it deletes intermediates and because it doesn't delete intermediates. Make isn't psychic.

For #3, I didn't mean to come across as pedantic. I haven't encountered what you're describing, but I have personally been surprised by how Linux does process accounting, so I apologize; I just figured you were being bitten by the same thing.

I like make a lot, but I don't use it for everything, because sometimes there simply are better tools for the task, and I hope you were able to figure out a solution.


1. Oh. I screwed up. Thanks for the correction. Apparently, I'm the one that needs to read the info page again! :P

2. This is an interesting case. It's like the opposite of .DELETE_ON_ERROR.

Anyway, it seems like you have some legitimate workloads that make just doesn't fit well with. Mind sharing the solution you designed?


Eventually. Got a lot on my plate at the moment.


I agree with all your criticisms, except I'm a bit confused about #3. Are you saying you're using Make with multi-threaded build actions?

As far as I know, most compilers are single-threaded, so this isn't much of an issue in practice. But I'm curious where you've encountered this problem.


No, I was using make to process large files for bioinformatics. So, think 60GB (compressed) of sequencing data from a whole genome sequencing run, which comes as a set of ~800,000,000 individually sequenced short stretches of DNA in two files. A multi-threaded process converts that into a file containing the sequences and where they align in the human genome, and takes about a day. Once that job has been finished, other jobs can be kicked off to use the produced data. Overall, the build process is a DAG with several hundred individual jobs, and performing that in a make-like system helps it to be managed effectively. Just not make itself.


#1 -- No, this is the "obvious but wrong" solution. It doesn't work for parallel builds.

https://www.cmcrossroads.com/article/rules-multiple-outputs-...

All his criticisms are correct except maybe #3 which I don't understand.

Another problem I've found is that Make doesn't consider the absence of a prerequisite to mean the target is out of date. So if foo.html depends on foo.intermediate, and you delete foo.intermediate and then run "make foo.html", foo.html will be considered up to date. I guess this is part of the odd feature where Make deletes intermediaries, but even if you have .SECONDARY on, which I do, it still behaves this way.

The bottom line is that it's extraordinarily easy to write incorrect Makefiles -- either underspecifying or overspecifying dependencies -- and it's very difficult to debug those problems. My Makefile is still full of bugs, so I "make clean" when something goes wrong.

One thing that would go a long way is if it had a shorthand for referring to the prerequisites in commands, like $P1 $P2 $P3, and if it actually enforced that you use those in the command lines! I don't want to create variables for every single file, and when I rename files, rules can grow invisible bugs easily.

Some details here: http://www.oilshell.org/blog/2017/05/31.html


The biggest "weakness" is that make can seem confusing at first. I strongly recommend reading the make info page. It's pretty huge with a lot of material, but the intro stuff is really accessible.

I would avoid learning make hodge-podge from StackOverflow as that will just frustrate you. If you take the info page in pieces and are a little methodical about it, you will probably end up liking make!

Happy make-ing!


I was thinking of the multiple build outputs issue. The fact that someone gave the "obvious but wrong" [1] solution as an answer only underscores this problem.

Make is full of cases where the obvious thing is wrong. That is not a good UI!

As a conceptual summary, I would say that the problems stem from a couple underlying causes:

1) The execution model of Make is confused. It is sort of "functional" but sort of not. To debug it sometimes you have to "step through" the procedural logic of Make, rather than reasoning about inputs and outputs like a functional program. I mentioned this here [2].

2) You want to specify the correct build graph, and Make offers you virtually no help in doing so. An incorrect graph is when you underspecify or overspecify your dependencies. Underspecifying means you do "make clean" all the time because the build might be wrong. Overspecifying means your builds are slow because things rebuild that shouldn't rebuild.

In practice, Makefiles are full of bugs like this. In fact, I should have mentioned that my Oil Makefile is FULL OF bugs. Making it truly correct is hard because some dependencies are dynamic (i.e. the gcc -M problem). But I just "make clean" for now.

The Google build system Bazel [3] is very principled about these things, but I don't think it makes sense for most open source projects because it's pretty heavy and makes a lot of assumptions. It works well within Google though.

It does some simple things like check that your build action actually produces the things it said it would! Make does not do this! It can run build actions in a sandbox, to prevent them from using prerequisites that aren't declared. And it has better concepts of build variants, better caching, etc.

All these things are really helpful for specifying a correct build graph (and actually trivial to implement).

3) Another thing I thought of: Make works on timestamps of file system entries, but timestamps in Unix mean totally different things for files and directories! You can depend on a directory and that has no coherent meaning that I can think of. Conversely it's hard to depend on a directory tree of files whose names aren't known in advance.

4) Both Make and Bazel essentially assume the build graph is static, when it is often dynamic. (gcc -M again, but I also encountered it with Oil's Python dependencies.) The "Shake" build system [4] apparently does something clever here.

[1] https://www.cmcrossroads.com/article/rules-multiple-outputs-...

[2] http://www.oilshell.org/blog/2017/05/31.html

[3] https://bazel.build/

[4] http://shakebuild.com/


I've used it when I was doing a pentest - searching a network for leaks of information. I wrote dozens of shell scripts that scanned the network for .html files, then extracted URLs from them, downloaded all of the files referenced in them, and searched those files (*.doc, *.pdf, etc.) for metadata that contained sensitive information. This involved eliminating redundant URLs and files, using scripts to extract information which was piped into other scripts, and a dozen different ways of extracting metadata from various file types. I wrote a lot of scripts that were long, single-use, and complicated, and I used a Makefile to document and save these so I could re-do them if there was an update, or make variations of them if I had new ideas.


I use Makefiles for two components of my research:

- Compilation of papers I am writing (in LaTeX). The Makefile processes the .tex and .bib files, and produces a final pdf. Fairly simple makefile

- Creation of initial conditions for galaxy merger simulations. This I obtained from a collaborator. We do idealized galaxy merger simulations and my collaborator has developed a scheme to create galaxies with multiple dynamical components (dark matter halos, stellar disks, stellar spheroids, etc.) very near equilibrium. We have makefiles that generate galaxy models, place those galaxies on initial orbits, and then numerically evolve the system.


To set up my dotfiles, although I'm not in enough of a routine for it to be truly useful.

    tmux:
    	ln -s $(CURDIR)/.tmux.conf $(HOME)/.tmux.conf
    	tmux source-file ~/.tmux.conf
    
    reload-tmux:
    	tmux source-file ~/.tmux.conf
    
    gitconfig:
    	ln -s $(CURDIR)/.gitconfig $(HOME)/.gitconfig
cd ~/configs, then make whatever; ~/configs itself is a git repository.


You should check out Stow; it's an already established project that does what you need.


What advantage does Stow give me over my current solution? I've always got git and make installed (but I'm not averse to adding something else).


One advantage would be not having to write "ln -s ..." for every file you want to link. Stow handles file trees as well.


Not exactly creative, but KISS: I use only a Makefile for a C project that compiles on Linux, BSD, and macOS.

Point being that autoconf is often overkill for smaller C projects.


Have you made a cross-build system that compiles to all three on a single machine, or that works when run on each of the three? I've been working on a system based on gcc, MinGW64, and osxcross, but if that fails I'll use docker with crossbuild.


The Stockfish Makefile is probably the best example of this I've seen. It also does some advanced stuff I'm not sure I understand yet. Profiling, setting flags for CPU instructions and making profile builds, all with support for different C++ compilers. Pretty neat stuff from one of the best chess engines.

https://github.com/official-stockfish/Stockfish/blob/master/...


I'm taking a grad school embedded systems class where we have to maintain a makefile that cross compiles to two different embedded platforms as well as compiles on the host machine.

We're doing it by passing a PLATFORM variable and using an if statement in the makefile: make PLATFORM=HOST, for example. The makefile then swaps the compiler variable, compiler flags, linker flags, etc.
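
Something like this sketch (platform names and flags are illustrative, not our actual assignment):

    PLATFORM ?= HOST

    ifeq ($(PLATFORM),HOST)
        CC     = gcc
        CFLAGS = -Wall -O2
    else
        CC     = arm-none-eabi-gcc
        CFLAGS = -Wall -O2 -mcpu=cortex-m4 -mthumb
    endif

    build: main.c
        $(CC) $(CFLAGS) -o main.out main.c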


Linux's makefile is fairly easy to read, and has a very smart way of doing cross-platform builds with gcc.


...unless you want to support more platforms and discover missing dependencies you never heard of before.


This is a bit late, but in the book The Tao of tmux, I delve into how I use Makefiles to create cross-platform file watchers that can trigger unit tests. https://leanpub.com/the-tao-of-tmux/read#file-watching

I use Makefiles regularly on open source and personal projects (e.g. https://github.com/tony/tmuxp/blob/master/Makefile). Feel free to take and use that code; it's available under the BSD license.

The creativity comes in when dealing with cross-platform compatibility: Not all file listing commands are implemented the same. ls(1) doesn't work the same across all shell systems, and find on BSD accepts different arguments than GNU's find. So to collect a list of files to watch, we use POSIX find and store it in a Make variable.

Then, there's a need to get a cross platform file watcher. This is tricky since file events work differently across operating systems. So we bring in entr(1) (http://entrproject.org/). This works across Linux, BSD's and macOS and packaged across linux distros, ports, and homebrew.
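
The core of that pattern looks roughly like this (a sketch, not the exact Makefile from the book; the *.py filter is an assumption):

    WATCH_FILES = find . -type f -name '*.py'

    watch-test:
        if command -v entr > /dev/null; then \
            $(WATCH_FILES) | entr -c $(MAKE) test; \
        else \
            echo "install entr(1) to watch files"; \
        fi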

Another random tip: For recursive Make calls, use $(MAKE). This will assure that non-GNU Make systems can work with your scripts. See here: https://github.com/liuxinyu95/AlgoXY/pull/16


Not something I have personal experience with, but I have heard a story about a Makefile-operated tokamak at the local university. Apparently, the operator would do something like "make shot PARA=X PARB=Y ..." and it would control the tokamak and produce the output data using a bunch of shell scripts.


I have "make Makefiles", which uses BSD make logic to create portable POSIX-compliant Makefiles.


I once used make to jury-rig a fairly complex set of backup jobs for a customer on very short notice. Jobs were grouped and each group was allowed to run a certain number of jobs in parallel, and some jobs had a non-overlap constraint. The problem was well beyond regular time-based scheduling, so I made a script to generate recursive makefiles for each group that started backups via a command-line utility, and a master makefile to invoke them with group-specific parallelism via -j.
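
The master makefile amounted to something like this (a sketch with illustrative names):

    .PHONY: all group-a group-b

    all: group-a group-b

    group-a:
        $(MAKE) -f group-a.mk -j 4

    group-b:
        $(MAKE) -f group-b.mk -j 2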

File outputs were progress logs of the backups that got renamed after the backup, so if any jobs failed in the backup window, you could easily inspect them and rerun the failed jobs just by rerunning the make command.

Fun times. Handling filenames with spaces was an absolute pain, though.


Miki: Makefile Wiki https://github.com/a3n/miki

A personal wiki and resource catalog. The only thing delivered is the makefile, which uses existing tools, and a small convenience script to run it.


Until recently we used them at Snowplow for orchestrating data processing pipelines, per this blog post:

https://snowplowanalytics.com/blog/2015/10/13/orchestrating-...

We gradually swapped them out in favour of our own DAG-runner written in Rust, called Factotum:

https://github.com/snowplow/factotum


I use it to set up my programming environment. One Makefile per project, semi-transferable to other PCs. It contains

    * a source code download,
    * copying IDE project files not included in the source,
    * creating build folders for multiple builds (debug/release/coverage/benchmark, clang & gcc),
    * building and installing a specific branch,
    * copying to a remote server for benchmark tests.


Lisp in make [0] is probably the most creative project I've seen. For myself, in some tightly controlled environments I've resorted to it to create a template language, as something like pandoc was forbidden. It was awful, but worked.

[0] https://github.com/kanaka/mal/tree/master/make


I use a makefile as the library package dependency [1], somewhat like what package.json is in the Node environment.

The idea is that if you want to use the library, you just include its makefile inside your project makefile, define a TARGET value, and you automatically get tasks for build, debug, etc.

The key is a hack on the .SECONDEXPANSION pragma of GNU make, which means it only works in a GNU/Linux environment.
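
For readers unfamiliar with it, a generic sketch of what .SECONDEXPANSION enables (not the libvos makefile itself): prerequisite lists get expanded a second time, so they can look up per-target variables through $$@:

    .SECONDEXPANSION:

    foo_SRCS = foo.c util.c

    # on the second expansion, $$@ becomes the target name, so the
    # prerequisite list resolves to $(foo_SRCS)
    foo: $$($$@_SRCS)
        cc -o $@ $^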

[1] https://GitHub.com/shuLhan/libvos

Edit: ah, turns out I wrote some documentation about it here: http://kilabit.info/projects/libvos/doc/index.html


I don't use it, but your question made me think of one: I would like to see it (mis)used as a way to bring up an operating system.

It probably will require quite a few changes, but if the /proc file system exposed running processes by name, and contained a file for each port that something listened to, one _could_ run make on that 'directory' with a makefile that describes the dependencies between components of the system.

Useful? Unlikely, as the makefile would have to describe all hardware and their dependencies, and it is quite unlikely nowadays that that is even possible (although, come to think of it, a true hacker with too much time on their hands and a bit of a masochistic streak could probably put autotools to creative use).


You may appreciate https://github.com/Andy753421/mkinit though it is written in mk (from Plan 9) rather than Make.


Yes, I do. Thanks.


I'm developing flight software at work on various Linux PCs that have support drivers installed for some PCIe cards. If I want to code on these PCs, it's either sit inside a freezing clean room or "ssh -X" into a PC to bring up an editor. This sucks, so I have a makefile to pull in certain specifics of my flight software build, with additional compile-time switches for the flexibility to build natively on my own computer. This allows me to essentially ignore installed drivers/libs and work comfortably in my own environment until I require the actual PC in the cleanroom to run my build.


I'm using Ruby's Rake in almost every project, even when it's not Ruby otherwise.

It has much of the same functionality, but I already know (and love) ruby, whereas make comes with its own syntax that isn't useful anywhere else.

You can easily create workflows, and get parallelism and caching of intermediate results for free. Even if you're not using ruby and/or rails, it's almost no work to still throw together the data model and use it for data administration as well (although the file-based semantics unfortunately do not extend to the database, something I've been meaning to try to implement).

Lately, I've been using it for machine learning data pipelines: spidering, image resizing, backups, data cleanup etc.


> It has much of the same functionality

You could use both to accomplish the same thing, sure. But their concepts are quite different.

Rake works on tasks, which you define (or import from some gem).

Make works with file targets more than tasks. You define how it can make a certain (type of) file and it does the job.

Personally I mostly use Make if I want to generate files from something else. Otherwise I find small scripts easier than Rake or equivalent.


Actually, Rake has both. You can define file targets using "file", but I found that for smaller projects it just becomes a more verbose make.


True, Rake also has file tasks. But they're still tasks, and not much like file targets in Make.


I think Rake is one of the various newer re-implementations of make that more or less miss the point of what is good about make.

make is pretty neat if you think about it as a framework to help you compute/derive values from other values. each value happens to be stored in the filesystem as a file.

in contrast, many of the newer build-tool make replacements seem to miss the whole value of values and either push or force you in a direction of doing actions with side effects.


They are pretty much the same:

   task :default => 'blogpost.html' # we want to produce blogpost.html

   # this rule specifies how to build 
   # any .html from the source
   rule ".html" => ".md" do |t| 
      sh "pandoc -o #{t.name} #{t.source}"
   end
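
For comparison, a rough Make equivalent of the snippet above:

    all: blogpost.html

    # how to build any .html from its .md source
    %.html: %.md
        pandoc -o $@ $<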


Not mine but here's a Lisp interpreter written in Make: https://github.com/kanaka/mal/tree/master/make


I have a makefile I use for all of my AVR projects. It has targets to build, program, erase, and bring up a screen on ttyS0 and maybe more. I add targets whenever I realize I'm doing anything repetitive with the development workflow.


I haven't, but one of the cool uses I've seen lately is how the OpenResty folks use it for their own website: they convert markdown -> html, then (with metadata) to TSV, and finally load it into a Postgres DB. They then use OpenResty to interface with the DB, etc. But all the documentation is originally authored in markdown files.

Makefile: https://github.com/openresty/openresty.org/blob/master/v2/Ma...


I use Ansible for deployment and Ansible Vault for storing encrypted config files in the repo. Of course, it's always a bit of a nightmare scenario that you accidentally commit unencrypted files, right?

Well, I have "make encrypt" and "make decrypt" commands that will iterate over the files in an ".encrypted-files" file. Decrypt will also add a pre-commit hook that will reject any commit with a warning.
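
In rough outline, something like this (illustrative, not the exact makefile; the hook path is an assumption):

    encrypt:
        while read -r f; do ansible-vault encrypt "$$f"; done < .encrypted-files

    decrypt:
        while read -r f; do ansible-vault decrypt "$$f"; done < .encrypted-files
        cp hooks/pre-commit .git/hooks/pre-commit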

This is tons easier than trying to remember the ansible-vault commands, and I never have to worry about trying to remember how to permanently delete a commit from GitHub.


To generate 100 Terabytes of data in parallel ... on Hadoop

https://github.com/hortonworks/hive-testbench/blob/hive14/tp...

The shell script generates a Makefile and the Makefile runs the hadoop commands, so that the parallel dep handling is entirely handed off to Make.

This makes it super easy to run 2 parallel workloads at all times - unlike xargs -P 2, this is much more friendly towards complex before/after deps and failure handling.


I used a Makefile for managing a large number of SSL certificates, private keys and trust stores. This was for an app that needed certs for IIS, Java, Apache and they all expect certificates to be presented in different formats.

Using a Makefile allowed someone to quickly drop in new keys/certs and have all of the output formats built in a single command. Converting and packaging a single certificate requires one or more intermediate commands, and a Makefile is set up to handle exactly this type of workflow.
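
As a sketch of the idea (file names and passwords illustrative): pattern rules that turn a dropped-in key/cert pair into the formats the various servers want:

    # PKCS#12 bundle (e.g. for IIS)
    %.p12: %.crt %.key
        openssl pkcs12 -export -in $*.crt -inkey $*.key -out $@ -passout pass:changeit

    # Java keystore, built from the PKCS#12 bundle
    %.jks: %.p12
        keytool -importkeystore -srckeystore $< -srcstoretype PKCS12 \
            -destkeystore $@ -srcstorepass changeit -deststorepass changeit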


I guess it depends what you consider creative?

I use one to build my company's Debian Vagrant boxes: https://app.vagrantup.com/koalephant

I use one to build a PHP library into a .phar archive and upload it to BitBucket

My static-ish site generator can create a self-updating Makefile: https://news.ycombinator.com/item?id=14836706

I use them as a standard part of most project setup


I'm creating a config.inc makefile during make to store config settings, analogous to config.h: https://github.com/perl11/potion/blob/master/config.mak#L275

Instead of bloated autotools I also call a config.sh from make to fill some config.inc or config.h values, which even works fine for cross-compiling.


We use Makefile "libraries" to reduce the amount of boilerplate each of our microservices has to contain. This then allows us to change our testing practices in bulk throughout all our repos.

https://github.com/Clever/dev-handbook/tree/master/make


The main question to ask is whether you really need to use make. If you do, there's practically no limit to what you can do with it these days, including deployment to different servers, starting containers/dedicated instances, etc. But unless you are already using make or are forced to, it's better to check out one of the newer build systems. I personally like CMake (it actually generates Makefiles).


I have a makefile that sets up a brand new computer with the software I need. It means I can be up and running on a new machine in a few minutes.


https://erlang.mk/ - need I say more? :)


One "creative" use is project setup. Sometimes, less technical colleagues need to run our application, and explaining git and recursive submodules takes a lot of time, so I usually create a Makefile with a "setup" target that checks out submodules and generates some required files to run the project.
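
A minimal version of such a target might look like this (the example config file is an assumption):

    setup:
        git submodule update --init --recursive
        cp -n config.example.yml config.yml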


I use Makefiles that run "git push $branch" and then call a Jenkins API to start a build/deploy of that $branch. This way I never have to leave vim; I use the fugitive plugin for vim to "git add" and "git commit", then run ":make".


I use it to solve dependency graphs for me in my programming language of choice. At the moment this involves setting up containers and container networking, but I throw it at anything graph-based.

make seems to be easier to install/get running than the myriad of non-packaged, GitHub-only projects I have found.


I've also seen a full certificate authority implemented using a makefile; it was one of the easiest I have ever used.

I am also currently using it with rsync to implement a poor man's Dropbox on a VPS host, with a systemd timer unit to clean files after 30 days, for sharing files with customers. A simple wrapper script dumps a file in the right folder, invokes make, and causes rsync to run. The makefile also handles setup of the account, like ssh-add (with restricted commands), key generation, and config options (via include files).


I use it to generate my LaTeX CV. In my case I have multiple target countries, so I have pseudo-i18n with pseudo-l10n and different values like page size, addresses, and phone numbers, and then I just run make for the target country, like make us or make ja.



I use a makefile to generate my static website, and also my CV; LaTeX and make work well together.


I've used Makefiles to determine what order to run batch jobs in so that dependencies can be met. Instead of describing what order to run things in, you describe what depends on what.

It's pretty cool, but not ideal.


Nowadays I mostly use Tup. If I use make it is usually for when I'm working with other people on LaTeX documents, and often times it's enough to just call rubber from make x)


I use it to run Verilog testbenches and start a RISC-V simulator.


I use make as a poor man's substitute for rsync (well, local rsync. Like cp -r), when I need to add some filtering in between.


I use it to build all my Go microservices, run the test suite, compile Sass, and minify CSS and JS.


I use make to pre-compile markdown into HTML for a static website.


I use redo to build the menus for a GOPHER site. It's the same principle. Make changes; run redo; menus get updated automatically. I also use it to rebuild the indexes for package repositories after making changes.

See gopher://jdebp.info/1/Repository/freebsd/ under "how this site is built" and gopher://jdebp.info/1/Repository/debian/dists/ .

See also gopher://jdebp.info/h/Softwares/djbwares/guide/gopherd.html .


I'm doing something similar.

I'm building a static(ish) site generator that features a built-in "configure" command to generate a makefile, so that only changed files and their dependents need to be rebuilt.

That's also part of why it's called a 'static-ish' site generator - it can render stuff into pure static html, but it can also use things like SSI or ESI to embed common things, so e.g. a nav bar/footer could be injected using SSI or ESI, and then when that changes, not every page needs to be re-built.

Edit: s/script/command/


I use make to make things



