Hacker News
Nixery – Docker images on the fly with Nix (nixery.dev)
201 points by _fnqu on April 19, 2022 | hide | past | favorite | 89 comments



I don’t know if it’s just because I hopped on the bandwagon in the past few months, but it really is starting to feel like Nix is gaining momentum.

You can use it on Mac, WSL, Ubuntu multiuser, Docker, in an nspawn container, or through NixOS on a drive. You can build a livecd iso or a Raspberry Pi image from the same flake that you use for everything else.

I work in embedded with Yocto every day, and I can’t help but think that Nix is going to eat their lunch in the next decade.

There’s really never been anything (usable) that’s like Nix. I think it’s inevitable that it takes over everything.


We recently adopted it at my company for managing local dev machines, project environments, and CI. It definitely has some warts (often the best documentation is “read the source code”), but man, is it an awesome tool. I’ve switched all of my machines / servers over to it and I’ll never look back.

Now I’m looking at my iPad and iPhone and wishing I could manage them through Nix too.

I’d put its difficulty-to-learn / power ratio at about the same level as git’s, which, given that they’re both based on hash trees (Merkle trees), makes sense.


Though the documentation may be lacking, or difficult for beginners, I'd like to point out that I've found the community to be extremely helpful, patient, and welcoming when asking questions and seeking help on Matrix.


How can one use Nix to manage project-specific dependencies for tools that typically store dotfiles in annoying locations like the home folder?

I am aware of home-manager but am not sure how (or if) it would work for per-project dotfile management.


First, direnv (along with nix-direnv) is really the glue that makes all of this work seamlessly.

It depends on what the dotfiles are for and how they're used.

Here's an example of a problematic one: AWS configs. You can theoretically override the default location of these files with environment variables, but a lot of tooling doesn't respect this and will break as a result.

I often deal with a large number of AWS accounts, so my solution is to have one main mechanism for populating AWS config files that lives outside of projects. Then, because I have a rubric for account/role-assignment naming, I can select the correct account/role for each project, and I can even have projects that switch them based on specific code/deployments.
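
For the tools that do respect the env vars, a per-project .envrc (picked up by direnv) can point them at local files. A sketch; the profile name is hypothetical:

```shell
# .envrc — per-project AWS config, for tools that honor these variables
export AWS_CONFIG_FILE="$PWD/.aws/config"
export AWS_SHARED_CREDENTIALS_FILE="$PWD/.aws/credentials"
export AWS_PROFILE="my-project-dev"   # hypothetical profile name
```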


Can you say what specifically? Many language-specific package managers match your description and are completely handled by Nix, but maybe you're talking about something else?


Can you name a tool which does not allow per project local configuration files?

One option to do it per project with home-manager would be to add new module options that set the relevant configuration files.


Consider any command-line tool that requires its dotfile to be at some fixed ~/<path>.


There's an experimental Nix fork of Termux for Android, if you want to try it on mobile. Good luck on an iPhone, though.


There's also actual NixOS for mobile, although then you're really limited in hardware options.


I was afraid of Nix before I adopted it based on what I've read. Now that I've taken the leap, there is no going back. Other operating systems are crude dinosaurs in comparison. Once you get past the learning curve and the initial setup (which can be steep), your system will be far more stable and easy to maintain than anything out there. Declarative OS builds are the future, whether it's Nix or something else.


Okay, sorry to hijack, but I keep trying it, and keep getting stuck. Most recently: how do I install a Rust binary from GitHub? They have a releases page, or I can just do a cargo build. Either way, I would normally just drop the resulting binary in /usr/bin and be done. With Nix... I'm totally stumped. Do I have to package it myself somehow?


If you just want to take a precompiled binary and install it, you just have to write a derivation wrapper around the binary which declares the expected hash and takes care of e.g. unzipping and moving it to $out/bin.

Here is a more involved example of downloading a release and then extracting the binary from a .pkg file: https://gist.github.com/J-Swift/364a8b158bf0b603f6e784e454ca...

Here is a more simplified example: https://gist.github.com/mitchellh/c47e3333bb78f57836ba2aa806...

EDIT: to get the sha, unfortunately you have to perform some esoteric command line incantations: https://github.com/NixOS/nix/issues/1880#issuecomment-366615... and https://gist.github.com/boxofrox/d8a3080fbb03f84b7d7a31e102b...
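
Putting those pieces together, a minimal derivation for a prebuilt binary might look roughly like this (names, URL, and hash are placeholders; on NixOS you'd likely also need autoPatchelfHook for dynamically linked binaries):

```nix
with import <nixpkgs> { };
stdenv.mkDerivation {
  pname = "some-tool";          # hypothetical tool
  version = "1.0.0";
  # A raw prebuilt binary from a GitHub release (placeholder URL/hash)
  src = fetchurl {
    url = "https://github.com/example/some-tool/releases/download/v1.0.0/some-tool-x86_64-linux";
    hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
  };
  dontUnpack = true;            # src is a single binary, not an archive
  installPhase = ''
    install -Dm755 "$src" "$out/bin/some-tool"
  '';
}
```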


Once I discovered `lib.fakeSha256`, I just put that in the derivation, try to build it and then use the error message to find the correct hash to put into it. Probably not the fastest way to do it, but it's easier for me to remember.
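
For anyone following along, the trick is just this fragment (assuming `lib` and `fetchurl` are in scope, e.g. via `with import <nixpkgs> {};`):

```nix
src = fetchurl {
  url = "https://example.com/release.tar.gz";  # placeholder URL
  # Build once with the fake hash; Nix fails with a hash mismatch that
  # reports the real sha256, which you then paste in.
  sha256 = lib.fakeSha256;
};
```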


yeah that seems way easier, thanks for the tip!


> Either way, I would just drop the resulting binary in /usr/bin and it's done. With Nix... I'm totally stumped.

If you don't care about Nixpkgs conventions ("build phases", etc.) then you can use `runCommand` to run arbitrary bash code. It's a function which takes three arguments: a name for the output, a key/value mapping for the environment variables (plus a few special names), and a string of bash code (usually written between ''two single quotes'').

    with import <nixpkgs> {};
    runCommand
      "my-favourite-program"  # A name for the output
      {
        # A key/value mapping of env vars
        myVar = "myValue";

        # The 'buildInputs' name is special: for each element 'x', the directory '${x}/bin'
        # will be appended to the PATH env var
        buildInputs = [ jq gcc ];
      }
      ''
        # This is a multi-line string containing arbitrary bash code.
        # The output path will be provided via the env var $out so we
        # just need to create a file or folder with that path
        mkdir -p "$out/bin"
        echo "$myVar" > "$out/bin/my-executable"
        chmod +x "$out/bin/my-executable"
      ''
One complication is that (by default) the script will be run in a sandbox, with no network access. We should fetch anything it depends on up-front, using e.g. fetchurl, fetchGit, etc. Here's a more realistic example:

    with import <nixpkgs> {};
    runCommand "foo"
      {
        release = fetchurl {
          url = "http://example.com/foo/foo-1.0.zip";
          hash = "sha256-iqZDwWkQA9XMTICEMCt5xDlmfmiIwzpeE3HJLbgbDXs=";
        };
        buildInputs = [ unzip ];
      }
      ''
        unzip "$release"
        mkdir -p "$out/bin"
        mv foo-1.0/binary "$out/bin/foo"
      ''


Yes, you will have to package it if it's not already in nixpkgs.

The good news is that once you learn how, it's basically trivial with crate2nix[0], which can auto-generate Nix derivations from Rust crates.

[0] https://github.com/kolloch/crate2nix
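
For context, crate2nix is a two-step workflow: `crate2nix generate` reads Cargo.lock and emits a Cargo.nix, which you then build with an expression roughly like this (a sketch based on the project's README):

```nix
# default.nix — assumes `crate2nix generate` has already produced ./Cargo.nix
let
  cargoNix = import ./Cargo.nix { };
in
  # Build the root crate of the workspace; equivalent to
  #   nix-build Cargo.nix -A rootCrate.build
  cargoNix.rootCrate.build
```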


You can also use naersk¹ if you want to avoid a two-step process. It's especially convenient when using nix flakes.

¹https://github.com/nix-community/naersk
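
A minimal naersk flake might look roughly like this (a sketch following the project's README; the system is hard-coded for brevity):

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
    naersk.url = "github:nix-community/naersk";
  };
  outputs = { self, nixpkgs, naersk }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
      naersk' = pkgs.callPackage naersk { };
    in {
      # `nix build` then compiles the crate in the flake's root in one step
      packages.x86_64-linux.default = naersk'.buildPackage { src = ./.; };
    };
}
```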


I actually hate Nix... and I agree, I'm never going back. I use it on my desktop (Linux), my MacBooks, and I want it on my Windows machine (not that it's going to happen, hah).

The thing that Nix currently fails at, for me, is introspection. Every function is a black box and I have no clue what's in it. I have to go dig up files on GitHub to see what it even accepts. It's as if everything is opaque.

A "simple" LSP/type system would do wonders for understanding what a function is, what it does, and its inputs/outputs.

... also, I have some difficulty understanding functional "mutation" patterns, like how overlays are implemented. But I hope that'll make sense eventually.

NOTE: I also think Flakes are absolutely necessary for Nix. Nix is way less valuable to me without Flakes.


Agreed on Flakes. It both makes the entire thing much more pure in terms of its guarantees about reproducibility/portability, and it's a lot easier to understand for me as a non-expert consumer.


As a learner myself over the past year, I also strongly prefer the flake workflow— the tooling design makes more sense, there's no implicit magic about where your inputs are coming from, and everything is pure by default. Not to mention the absolute delight that is the `--override-input` flag— being able to layer your project into multiple flakes and then trivially rebuild it with just one part overridden from a modified local source, so great.

But yeah, it's super frustrating that it's all still hidden behind experimental flags and that the official documentation continues to suggest non-flake workflows. It's ready for primetime— commit to it, please!


Guess I know what I’m learning this weekend.

Anyone have tutorials they want to share?


I would read a bit and look at the nix-pills [1] even though I could never understand them when I was learning. Then, what I always recommend is this playlist by Burke Libbey on youtube [2]. There are a couple fundamental things that you can internalize which will make everything much more approachable:

1. Nix the language is basically a JSON object. Almost everything is about generating and composing subtrees to build a _really_ big JSON object.

2. `<foo>` means from channel `foo`. This is less relevant now with flakes but that was always extremely confusing syntax to me and when I "got it" it made things way easier.

[1] - https://nixos.org/guides/nix-pills/

[2] - https://www.youtube.com/watch?v=NYyImy-lqaA&list=PLRGI9KQ3_H...
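
To make point 2 concrete: the angle brackets are a lookup in NIX_PATH (which channels populate), e.g.:

```nix
# `<nixpkgs>` resolves the name "nixpkgs" against NIX_PATH and evaluates
# to a filesystem path; importing that path gives you the package set.
let pkgs = import <nixpkgs> { };
in pkgs.hello
```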


I'm about a year into Nix and consider myself a moderate in some areas but something of an expert (by necessity) in others. I think the ideal pattern is to attack it bottom-up and top-down at the same time.

Bottom up, you want to learn the fundamentals of how the Nix language works, and how the basic primitives it offers can be used to build up package definitions, a package manager, and ultimately an entire OS. For this, the Nix Pills are invaluable: https://nixos.org/guides/nix-pills/

Top down, you need goals for some specific things you want to accomplish in the system. For me, this was automated packaging for hundreds of source repos internal to my company, but that's a bit of a crazy case— for a normal person, I'd expect this could be something like "I want to run a webserver with some static assets" or "I want to define and launch a container declaratively", or maybe "I want a reproducible environment to do Python development in, where the reproducibility isn't just tagged versions in a requirements/pyproject file, but also includes the full underlay of everything I'm depending on from the base system."

Whatever the goal is here, you're inevitably going to find your way to override-related tasks, like "okay, I want to upgrade this package" or "I want to add patches to this package" or "I want to change the settings on one of my dependencies", and that's where the extraordinary power of Nix really starts to sink in, when you realise how much can be accomplished with so little, and you try to imagine what accomplishing that under a conventional apt- or dnf-type system would look like.
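
As a taste of those override tasks, adding a patch to a package typically comes down to a small overlay (a sketch; the package and patch names are hypothetical):

```nix
# An overlay: `super` is the unmodified package set, `self` the final one.
self: super: {
  somepkg = super.somepkg.overrideAttrs (old: {
    # Append a local patch on top of whatever patches the package has
    patches = (old.patches or [ ]) ++ [ ./fix-build.patch ];
  });
}
```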


I love nix, but you're a lot more optimistic and forgiving of its cons than I am :D

To be clear, I've not used NixOS, and it might be a more reasonable experience. The Nix package manager, though, is the most useful tool I have integrated into my dev life, and also the one I am most hesitant to recommend to anyone else. The UX of the whole thing still has a long way to go, and the initial installation experience on Mac has been all over the place quality-wise in recent years. Now that flakes are officially a thing (even though they are still unofficial!!!), it will again take a while to stabilize.

My hope is that once that flake transition stabilizes we will have a much more reasonable baseline of UX to begin recommending to others without nearly as many caveats.


> and the initial installation experience on Mac has been all over the place quality-wise in recent years.

In minor defense of this, Apple making the root read-only threw a pretty big wrench in things. There are a lot of little reasons (some understandable and some exasperating) it took so long to adjust, but all of them were exacerbated by the amount of new logistical complexity required (and the amount of experimentation necessary to figure it out).


Most of the issues come from the lack of enthusiasm for moving the Nix store somewhere else than /nix. Even with read-only root, macOS has some designated locations where you can write. E.g. Homebrew uses /opt/homebrew, which is fine because /opt is writable.

I understand the reasoning for avoiding this on Intel Macs, since there are years of cached derivations which would become useless without hacks. However, Apple Silicon Macs are a clean slate, and the transition would've been a good occasion to move the store to /opt/nix.

(I did suggest this in some places, but there didn't seem to be much interest in switching over, unless I missed something.)

By the way, this isn't only an issue with macOS. Nix also doesn't work on Fedora Silverblue because it uses read-only root and the Nix store path violates FHS.


Yup, this. I think a big part of this is that Eelco is really only interested in NixOS. Nix is a large community and there are plenty of people that do care about other platforms, so these things do tend to get sorted out. Still, the core devs will choose to avoid short-term, medium-painful transitions for NixOS even at the expense of killing all the other platforms.


It's not that easy to change the default store dir.

https://cache.nixos.org/nix-cache-info has it hard-coded to /nix/store. If you want another one you'll need a whole new cache. But Hydra only works with one cache, so now you're deploying a second Hydra build farm.

One of the cool features of Nix is that you can evaluate some nix code on macOS even if the target build host is Linux. And then ship the .drv over to the build host. But that only works if both hosts share the same store dir.

So now you're looking at moving the whole community to use /opt/nix. And thinking of how to upgrade the existing users to it. And fix all of the tooling we built that assumes /nix/store as the store dir.

So far nobody had the courage to tackle this huge task.


I guess this is a valid way to frame the problem (and I personally agreed with moving it), but I'd also quibble a bit...

- IIRC, ~stakeholders weren't keen on relocating it just on macOS because it would require a separate set of build/cache infrastructure (and it sounded like the macOS+Nix community would be on the hook for supporting it).

- There did actually seem to be a fair amount of support for moving it globally (and Eelco, while skeptical, didn't sound like he was going to stand in the way), but the coordination work sounded significant to me.

- Also, I think the circular arguments around relocating /nix played their own role in the inaction/bystanding that let the problem fester (as did, to be fair, fear/uncertainty about whether Apple would later ~secure whatever new location was chosen).

For some general references on the above, see

https://github.com/NixOS/nix/issues/2925#issuecomment-499517...

https://github.com/NixOS/nix/issues/2925#issuecomment-523340...

https://github.com/NixOS/nix/issues/2925#issuecomment-549184...

https://github.com/NixOS/nix/issues/2925#issuecomment-549858...

https://github.com/NixOS/nix/issues/2925#issuecomment-550106...

https://github.com/NixOS/nix/issues/2925#issuecomment-550211...

https://github.com/NixOS/nix/issues/2925#issuecomment-625855...

https://github.com/NixOS/nixpkgs/issues/95903#issuecomment-7...


For sure, that part is understandable and the current install is pretty simple since FDE avoids previous steps. But for a while it was _really_ difficult to find the needed info (had to read through Github issues and find specific comments embedded in extremely long threads)


Yes--it was bleak. Getting tired of watching people run into sharp corners in the process of that was part of why I got involved with trying to fix it up, myself.

There were some dumb (but common in informally-organized OSS...) reasons it languished. Most of them are still problems (though a few have improved or are slowly moving that way). A lot of the fixes entail investments in improving the leverage of time spent on the installer: better automated testing, more organizational structure/memory/accountability, etc.

But those issues wouldn't have been enough headwind to drag the situation out like it did if there was always one obvious straightforward politically-acceptable solution from day-1 that just needed to get implemented.


To all the folks using Nix at scale: How are you doing it? Are you still using Kubernetes and friends, custom images, etc? How are you deploying apps or making changes to your Nix instances across a fleet of servers?


I know a group which deploys ~10 machines and ~30 containers, mostly with shell and custom CI. Nothing fancy.


qq: Any good tutorials to get up to speed with NixOS on the 64-bit Raspberry Pi Zero 2, please?

I have a plan to tinker with some off-grid IoT this year, so it's a good moment to try something new like Nix, rather than continuing with Ansible, which unfortunately gets out of hand with very long-lived installations: no implicit cleanup of removed resources, and a lack of build reproducibility (really an apt-get update issue).


Different model, but I just (about 2 days ago) setup my Pi 4B using this, and was done in about an hour:

https://nix.dev/tutorials/installing-nixos-on-a-raspberry-pi

I made it difficult for myself by using flakes, but if you don't care about that, just follow the instructions and you'll be set in 10 min flat.


Not a tutorial, but the unofficial wiki has info on Raspberry Pi and other ARM SBCs: https://nixos.wiki/wiki/NixOS_on_ARM


I've stopped taking it seriously after seeing blogs like this.

https://blog.wesleyac.com/posts/the-curse-of-nixos


Not tried that but I've heard it's even possible to build custom Android distros.


I know that there’s a very active matrix channel for Nix on ARM that is mostly dedicated to pinephone/smartphone projects, so I imagine there’s some android tooling there but I don’t think it’s “android” in the AOSP sense.


There is a project called Robotnix for building AOSP(-style) distributions with Nix. It has little to do with NixOS on ARM, so it is rarely mentioned on that channel.


samueldr has been doing a lot of good work in that direction. See https://mobile.nixos.org/ and https://github.com/samueldr/


Just noting, using Nix it is also possible to build an actual real deal Android image using Robotnix:

- https://github.com/danielfullmer/robotnix/

This is different from a non-Android Linux on Mobile devices, which is what Mobile NixOS aims to achieve :).


This is really cool, but I don’t know if I see the appeal for actual nix users — if you are a nix user and have it set up in CI, you can easily build docker images yourself using buildLayeredImage.

And then if you aren’t a Nix user, why would you use this? Installing packages with, say, apt is decidedly not where my pains with Docker have arisen.
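
For readers who haven't seen it, `dockerTools.buildLayeredImage` takes roughly this shape (a sketch; the image name and contents are examples):

```nix
with import <nixpkgs> { };
dockerTools.buildLayeredImage {
  name = "my-service";   # hypothetical image name
  tag = "latest";
  # Store paths to copy into the image; their dependency closures are
  # spread across separate layers for better cache reuse.
  contents = [ bashInteractive coreutils ];
  config.Cmd = [ "${bashInteractive}/bin/bash" ];
}
```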


That's correct. I recently ended up using `buildLayeredImage` (actually `buildImageWithNixDb`) for CI, not only to run a single process, but also `systemd` and multiple processes. `podman` comes with built-in support for `systemd`.

[Here](https://github.com/fdb-rs/fdb/blob/fdb-0.2.2/nix/ci/flake.ni...) is relevant code.


If I need an image with a specific set of tools it's cumbersome to build a whole workflow to build, store and maintain these images. Having a service that can receive a custom list of Nix packages and returns an image that I can instantly use would be really, really, really nice.


That's what most users of nixery.dev (i.e. the public instance) do, afaict. Ad-hoc images for CI, and for debugging purposes.


yep, this is exactly how I use it. Nixery is so convenient for getting random one-off containers. Thank you for building and hosting it!
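
For anyone who hasn't tried it: with Nixery, the image name itself is the package list, e.g.:

```shell
# Each path segment after nixery.dev names a Nix package to include;
# "shell" is Nixery's meta-package that adds bash and coreutils.
image="nixery.dev/shell/git/htop"
# To actually use it (requires Docker and network access):
#   docker run -ti "$image" bash
echo "$image"
```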


In the same spirit but in the form of a readily-usable command (rather than a service), 'guix pack' can produce application bundles in a reproducible fashion, in the Docker format as well as in other formats:

https://guix.gnu.org/manual/devel/en/html_node/Invoking-guix...

Hopefully the smart layering strategy that Nixery uses will eventually make it into 'guix pack'!


> Hopefully the smart layering strategy that Nixery uses will eventually make it into 'guix pack'!

I've been meaning to extract it into a more standalone tool that can output a layer distribution, that way it could also be used in things like Nix's `dockerTools.buildLayeredImage`. The main annoyance is that creating the popularity data inside of a build is not easily possible for the entirety of the package set. Still working on that one ...

I'm not sure how much Guix internals have diverged since they forked Nix, but if the dependency analysis of store paths can be done the same way then this should also be straightforward to port to Guix.


With https://github.com/nlewo/nix2container, I'm trying to make a more standalone tool. Basically, a Go binary takes a reference graph and produces a JSON file describing a container image. This JSON file is then ingested by a Skopeo fork (it adds a new `transport`) to produce images (to file, registries,...).

Currently, it supports the dockerTools layering algorithm and is designed to work with Guix [1] as well;)

[1] https://github.com/nlewo/nix2container/blob/065e5b108650ee4c...


Ah, I've actually seen this before. Since it's written in Go, you might be able to pretty much copy&paste the Nixery layering strategy into it. I wouldn't mind!


Guix is not a fork of Nix. Guix reuses an earlier version of the Nix daemon.


You're saying "it's not a fork, it just uses an earlier, modified version of the codebase". What is a fork if not that?

I don't know how divergent it is, probably quite a lot at this point. The concepts should still be very close though and that should mean that a lot of tooling is theoretically portable between the two.


Guix uses a fork of the daemon. Guix, however, is not a fork of Nix.


`nix bundle` is similar: https://nixos.org/manual/nix/stable/command-ref/new-cli/nix3...

Interestingly, it already supports Docker images:

    # nix bundle --bundler github:NixOS/bundlers#toDockerImage nixpkgs#hello


Oh great! More indirection. Now when I want to deploy my web app, I can check that my private nixery.dev deployment is properly configured in Nix to build my Docker images, so I can deploy my containers to the cloud, so someone can access my REST API. And the cost of guaranteeing that builds will probably work? Running your own Nixery service, learning Nix, and learning Docker. I would love someone to do a cost-benefit analysis of these sorts of tools against the time cost of using (and sometimes debugging) Make and/or Bash. I’m so cynical of Nix lol - no disrespect to the OP/author, I just want to work on problems, but most of my time is spent on building and deploying stuff.


Nix makes a lot of sense if you really understand build systems.

But it makes less sense if you understand operations, or management.

Real-world engineering is about trade-offs, and Nix has no wiggle room for compromise. It optimizes on one dimension: reproducible builds. But an organisation won’t succeed when it places the needs of build-system engineers on a pedestal.


It adds some effort and complexity to the "build" dimension of your multi-dimension optimization, yes.

But it also removes a whole bunch of complexity from every other dimension, by removing variability from the equation.

If you cannot rely on /what/ you are running, then what, exactly, are you testing? Do you really know?

I've found that most "managers" (in fact, most programmers) don't seem to appreciate this. The "well, I don't know what happened -- maybe reboot the system, and it'll work?" approach is insane.


I'd disagree that it's uni-dimensional: it optimizes for reproducibility and hermeticity without using virtualization (i.e., better performance).


You can always set `__noChroot = true;` on your derivations and forgo the sandbox. Then it's not more difficult than a Dockerfile really.
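
A sketch of what that looks like (opting a single derivation out of the sandbox also requires `sandbox = relaxed` in nix.conf):

```nix
with import <nixpkgs> { };
stdenv.mkDerivation {
  name = "impure-build";
  # Opt this derivation out of the build sandbox; with network access
  # available, the build script can fetch things Dockerfile-style.
  __noChroot = true;
  dontUnpack = true;
  installPhase = ''
    mkdir -p "$out"
    # e.g. curl something here, as you would in a Dockerfile RUN step
  '';
}
```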


This really isn't too different than what is already done, but it does as you say add more layers of indirection.

Consider: Prior to Docker, you'd typically go to AWS or a virtual host provider, spin up your OS of choice, install any relevant system dependencies, language runtimes, set up a CI/CD pipeline, and finally deploy.

The only real difference between what I just described and Docker / Nix / additional layers is that we (as an industry/profession) have not yet built sufficiently ergonomic tooling to make this trivial.

AWS and similar cloud providers did away with much of the server and network setup. Docker has done away with some of the application environment setup.

All that said, Nix does seem to be trying to replace something we already have a workable answer to (host/app config). Whether or not the additional overhead is worthwhile even after ergonomics have caught up will probably depend on your own use-cases.

I can see it being useful for high-trust environments (finance, medicine, anything else regulated). It could also do a lot to improve the general security of the OSS ecosystem by giving projects a path forward to truly reproducible binaries. Outside of those contexts, you probably don't care until tooling gets to the point where you can opt-in and get those guarantees "for free".


> I would love someone to do a cost-benefit analysis of these sorts of tools against the time of using (and sometimes debugging) Make and/or Bash.

Nix does essentially the same job as Make. The differences are:

- Make embeds a shell code interpreter, whilst Nix just execs a binary; given its path, a list of args and a set of env vars. (Note that almost all Nix definitions use bash as their binary!)

- Make does meta-programming with a mixture of "automatic variables" ('$<', '$^', etc.), 'eval', macros, etc. whilst Nix uses a programming language.

- Make relies on timestamps to figure out whether to re-use existing outputs; Nix relies on the hash of the definition (this works recursively, since hashes are included in filenames; hence changing a reference will alter all the hashes up the dependency tree).

- Make runs commands in the directory where 'make' was invoked, Nix runs commands in a temp folder (and optionally restricts network and filesystem access)

- Make runs commands with the same environment it was invoked with, Nix specifies the environment of commands in the build definition
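
To make the contrast concrete, here is roughly the same one-step copy expressed both ways (a sketch):

```nix
# The Makefile rule (timestamp-based):
#     out.txt: in.txt
#     	cp in.txt out.txt
# corresponds roughly to this Nix derivation, which reruns whenever the
# hash of `src` (or of this definition) changes, and executes in a temp
# dir with only the declared environment:
with import <nixpkgs> { };
runCommand "out.txt" { src = ./in.txt; } ''
  cp "$src" "$out"
''
```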

Nix also has a killer feature that Make can't do, called "import from derivation". This lets us define a build process, like 'fetch this git repo', then import and use Nix definitions from its result. In comparison, Makefiles can't (reliably) fetch and import each other; e.g. my C project's Makefile can't fetch the GCC source tarball, and depend on its Makefile's "install" rule to provide a compiler.

My hypothesis is that this deficiency of Make is the reason for a whole bunch of unneeded complexity in the software world (e.g. "package managers", "OS distributions", "configuration managers", etc.)

From a practical point of view, Nix is almost always used as a wrapper on top of something else (Make/Ant/Maven/Cabal/etc.); but that's just because most projects benefit from those "ecosystems". Note that we could just as well wrap such Ant/Maven/Cabal/etc. projects in a layer of Make instead of Nix, but nobody does since it wouldn't give us any benefit ;)

If you're happy to ignore those "ecosystems" and just have a simple "bash + Make" project, you could instead have a simple "bash + Nix" project and avoid all the layers of Make/Ant/Maven/Cabal/etc. (as well as any Docker, Ansible, Apt/RPM, etc. that others might also decide to layer on top!)


Thank you for this post. This is the kind of authoritative, insightful, contextually relevant information that makes HN so valuable.


Is the extra layer of indirection you don't like Docker or this nix -> Docker integration?

Compared to just installing all your nix packages in one Docker layer, this does introduce build complexity. But it's analogous to the complexity of a compiler for a static language... the thing that comes out is not more complex than what went in, so at least the complexity doesn't propagate. The images produced by this should be interchangeable with their single-layer counterparts; the caching will just be better when the code is rebuilt and re-distributed.

If you're all-in on nix, does the container ecosystem even add value? The author of this software thought so, at least when he posted in 2018: "Tying in to the schedulers, orchestration, and monitoring is very valuable"

Note, I have not used this; I'm just also frustrated by software development getting eaten by incidental complexity.


Do you actually use nix and have an experience to share?


Yeah, I was at a company using Nix for around 18 months where some of the DevOps team were contributors, so everything was Nix’d: CI, dev environments, deployments, cluster management. Want to add a library to your Python app? Don’t use pip or poetry, update Nix. But because DRY, this Nix isn’t even in your project, it’s in another repo somewhere. Want to update Haskell? Well, you can’t use Cabal or Stack, you need to use Nix2Cabal or whatever. It turned so much stuff that anyone could usually do into a ticket for a Nix-versed engineer to fix, and was a choke point on everything. I’ve since vetoed it very hard at two startups I’ve worked at.

I want to like Nix, I made my primary computer NixOS, but it’s just so much complexity. For small dev environments I kind of get it - I might still use it to spin up a Haskell env - but when the total Nix lines of code > 10k for a project then have fun!


At my company, I use nix to manage "external dependencies" but still just use language tools for language-specific dependencies. So for instance, I pull Ruby, plus all the common C libraries used in Ruby native extensions into a shell.nix, but otherwise the Ruby workflow is identical to any other.

Same with JVM stuff. We use Bazel for that. But I use nix to install bazel + a java toolchain.

Nix works so much better than Homebrew, since it's easy to pin to an exact commit of nixpkgs.
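
A sketch of that kind of setup: a shell.nix pinned to an exact nixpkgs commit, pulling in a language runtime plus the C libraries its native extensions need (the rev is a placeholder and the package names are examples):

```nix
let
  # Pin nixpkgs to an exact commit for reproducibility
  # ("<rev>" is a placeholder for a real commit hash).
  pkgs = import (fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz";
  }) { };
in
pkgs.mkShell {
  buildInputs = with pkgs; [
    ruby
    # C libraries commonly needed by native gems (e.g. nokogiri)
    libxml2 libxslt zlib openssl pkg-config
  ];
}
```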

I definitely wouldn't want to force Nix into well established language workflows, but I am extremely pleased with it for managing package dependencies and development environments in a reproducible way. I'd love to extend it to building Docker images, but I haven't made it that far yet. :)


> when the total Nix lines of code > 10k for a project then have fun!

I have the suspicion that you’re talking about auto-generated Nix code. For comparison, lock files for language-specific package managers can easily exceed 10k lines. But I have yet to see any hand-written Nix build description for a single software project reach nearly as much LOC.


I just checked, and without giving too much info I’ll round: there are 20 pure-Nix repos with 10K commits, 3k issues across them, and 30k references to the word nix, lol. From a random check, none of the files in these repos are generated; it’s all reasoned code, judging from the commits. These are then used across all projects, which include additional Nix stuff.

Going for the entire org and these numbers increase drastically but could include auto generated nix code.


Assuming your team consolidates Nix code in a few repos, that sounds fairly normal. In contrast, my team maintains a bunch of RPM builds and it's messier business than Nix. More boilerplate, breakage, and manual work.


Not OP, but the evangelism about "never having build problems anymore" does perplex me a bit. In the languages I have been programming in (Haskell, Ruby, some Python and elm and JS and Rust), I can't recall having any significant build problems in the last ~8 years or so anyway. What does everyone do that their build keeps breaking?


Pull in a lot of transitive dependencies without a system in place that automatically and strictly vendors or pins the versions of everything, and the probability of your build succeeding will converge to zero over time. Humans screw up semver all the time, even when they're aware of Hyrum's law and are doing their very best not to break user code.

I would say this starts becoming an issue when you're around 20-30 devs maintaining software that's a few years old.

All that's just for rebuilding when there are no changes to your code... pulling in security updates is a whole additional mess if your software is exposed to adversaries.


You were probably lucky not to run into the nokogiri compilation issue. For production codebases this usually isn't a problem, because build breakages are fixed one at a time by upgrading gems, etc. It is a big issue for personal projects: I am no longer able to run many old projects (last commit more than 5 years ago), because they depend on older library versions (like libglew) that no recent distro ships.


Those languages all have sensible package managers built in, nix adds more value with languages like C/C++, or when you have multiple binaries interacting.

For example, I had a C++ project break catastrophically when upgrading from Ubuntu 19.10 to 20.04. It built fine, but wouldn't boot. Undoubtedly the root cause was my fault, but I couldn't trace it and the timing was terrible, so I had a hack to build inside a 19.10 container. Nix would have saved me a lot of pain in that case by pinning the compiler or whatever dependency broke it.

Nix is also useful in that it pins the entire ecosystem in the case that your project isn't fully contained in one language. To take your Haskell example, if you use Stack it will pin every piece of Haskell code you might import, but I've had problems where an external tool changes its output, requiring changes to my tool. In that case nix is like Stack, but for your entire OS, building that tool in a reproducible manner.


One reason you might have avoided problems in Haskell is that it (specifically: Cabal) added "Nix-style builds" in 2016 ;)

https://cabal.readthedocs.io/en/latest/nix-local-build-overv...


> I would love someone to do a cost-benefit analysis of these sorts of tools against the time of using (and sometimes debugging) Make and/or Bash

Spot on! But HN is so biased towards new or flashy stuff...


This looks really nice; it saves writing the Dockerfile yourself. And sometimes you want an image with a bit of extra in it for debugging. If you normally pull that image directly, with this you don't need to set up a build to create and push a custom image, only to scuttle it again later.

What is interesting though is that Nix is all about reproducible builds, but I don't see a way to specify package versions here.


I believe that if you host your own Nixery instance, you can pass a nixpkgs commit hash as a docker tag
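If you control the Nix side anyway, a fully pinned equivalent can also be built locally with nixpkgs' dockerTools. A sketch, with the commit hash and package set as placeholders:

```nix
# Build a layered image from a pinned nixpkgs, so package versions are
# fixed by the commit hash rather than by whatever Nixery happens to run.
let
  pkgs = import (fetchTarball
    "https://github.com/NixOS/nixpkgs/archive/<commit-hash>.tar.gz") { };
in
pkgs.dockerTools.buildLayeredImage {
  name = "shell-git";
  tag = "pinned";
  contents = with pkgs; [ bashInteractive coreutils git ];
}
```

`nix-build` on this produces a tarball you can `docker load`, with one layer per store path up to the layer limit.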


This is amazing because it takes the promise of "caching Docker layers", which doesn't usually work out well in practice, and actually delivers on it.


This is absolutely fantastic, I can't count the times where I needed to debug some sidecar/envoy/tls thing or some network connectivity between two different systems and needed a specific tool (nmap, telnet, ..) within a docker container to debug that and couldn't (or didn't want to) rebuild the container with the missing dependency or package. Really hits a sweet spot and might make life a bit easier for me. Thanks for sharing it!


I like how they optimize for layer reuse, but at the same time, nix doesn't really fit docker layer caching because it's better. With a unique prefix per package, you don't need stacking of layers. You can just download a bunch of packages in parallel, extract them in parallel, and finally "merge" the prefixes to get those combined bin/ and lib/ dirs.


> You can just download a bunch of packages in parallel, extract them in parallel, and finally "merge" the prefixes to get those combined bin/ and lib/ dirs.

docker pull achieves the same result: layers are fetched in parallel, and they are extracted using pigz (parallel gzip). It just applies them in a pre-defined order, which does not hurt performance, but is not useful either when Nixery is used.


The point is not about parallelization, it's that nixery has to optimize for cache reuse, which is an artificial problem created by docker.

If you have two layers, each installing an individual package like /nix/store/x and /nix/store/y, stacking them as [x, y] and [y, x] would result in the same docker image contents, but docker will generate different hashes.
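A toy sketch of why ordering matters (simplified: Docker's real image IDs chain layer digests through a manifest, but the order-sensitivity is the same):

```python
import hashlib

def chain_id(layers):
    # Each ID is the hash of the parent ID plus the layer, so the same
    # set of layers in a different order yields a different final ID.
    acc = ""
    for layer in layers:
        acc = hashlib.sha256((acc + layer).encode()).hexdigest()
    return acc

x, y = "/nix/store/x", "/nix/store/y"

# Merged contents are identical, chained IDs are not.
print(chain_id([x, y]) != chain_id([y, x]))  # True: different "image IDs"
print(set([x, y]) == set([y, x]))            # True: same filesystem contents
```

This is why Nixery has to guess a stable, popularity-based layer ordering to get cache hits at all, instead of just treating layers as an unordered set.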


Thanks for clarifying your point.

> If you have two layers installing an individual packages like /nix/store/x and /nix/store/y, stacking them as [x, y] and [y, x] would result in the same docker image contents

This is an assumption that holds for Nix, but not for most package managers. Where it does hold, Dockerfiles can achieve similar results using multiple stages, though you would probably need a pre-processor to generate a stage per package. Something like an `INCLUDE` directive could help too: https://github.com/moby/moby/issues/3378.


Is there a clean way to reuse this for multistage builds?

```
FROM nixery.dev/shell/git/node14/python3.8 AS debug_extras

FROM our/production:1.2.3
COPY --from=debug_extras /nixstuff /ubuntu/stuff
RUN python -c "print('nice!')"
```


It depends on what you're trying to get out of the builder pattern. I don't think it would provide any benefit to you, but it's hard to say without knowing exactly what you want to achieve.

e.g. if you want to strip out everything you aren't using, Nix already does that with its garbage collector


You could copy over the Nix store, though your PATH wouldn't be set up correctly to find the programs you want.
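A hedged sketch of working around that (the filesystem layout is an assumption: Nixery images link the requested tools under /bin, with the real files in /nix/store; verify against your actual image):

```dockerfile
# Bring Nixery-built tools into an existing production image.
FROM nixery.dev/shell/git/nmap AS tools

FROM our/production:1.2.3
# The store holds the actual binaries and their closures...
COPY --from=tools /nix /nix
# ...and the /bin symlink forest gives us stable paths to put on PATH.
COPY --from=tools /bin /opt/tools/bin
ENV PATH="/opt/tools/bin:${PATH}"
```

The downside is that /nix in the final image carries the whole closure of the debug tools, so this suits debug variants more than production images.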


Would it be possible to implement something similar using other distributions? I am thinking of Fedora with rpm-ostree, for example.



