As someone out of the loop on this, why is Nix so popular right now? What does it do and why are people installing it on everything? Is this something to pay attention to or is it a rehash of the 'I put linux on my toaster' 2000s-era stunts?
These aren't precisely the specified goals of Nix, but things I appreciate as a consequence of its design:
(1) If a build works on any machine, it will work on yours. Package builds are isolated and reproducible. For example, setting up Plasma is as simple as `services.xserver.desktopManager.plasma5.enable = true;`, every time, in every environment.
(2) Environment configurations are declarative. You will always know what a given host or shell has installed because you have to write it down.
(3) Nixpkgs is _by far_ the largest package repository of any package manager, and packages are updated very quickly.
(4) With home-manager, software can also be configured declaratively; my preferred tmux configuration will always stay in sync across all of my machines because it's managed by Nix, not ~/.tmux.conf.
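To give a flavor of (4), here's a minimal home-manager sketch. The option names come from home-manager's `programs.tmux` module; the specific settings are just my illustration:

```nix
# home.nix -- a minimal home-manager sketch (settings are illustrative)
{ ... }:
{
  programs.tmux = {
    enable = true;    # installs tmux and generates its config for you
    prefix = "C-a";   # rebind the prefix key
    keyMode = "vi";   # vi-style copy mode
    extraConfig = ''
      set -g mouse on
    '';
  };
}
```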
Nixpkgs is the largest repository only because they let anyone throw anything they want in there with minimal accountability. Nixpkgs is the NPM of Linux.
While I love the determinism of NixOS, their refusal to mandate strict code review, code signing, and package signing makes it unsuitable for non-hobby use cases.
I do hope to see a fork of NixOS with the supply chain integrity of OpenBSD or at the very least that of Debian or Arch.
Could you expand on those points? Specifically, what do you want to see for code or package signing? Has there been opposition to this before?
It should be trivial to sign a trusted derivation, but the signed artifact is either an out-of-band signature that has to be verified separately, or a new signed derivation, which creates problems for anything consuming the unsigned derivation.
It is trivial, but too many people want Nixpkgs to be like NPM, with minimal friction, so that random inexperienced devs who do not know how to generate a keypair can contribute. That lack of friction is why it has so many more packages than most distros, and also why you cannot trust any of them.
I tried to push for fixing this in 2018 as a total outsider, but it was endless bikeshedding. Most would rather have no signing at all than use the well-supported tools every other distro uses with success.
As a security engineer I lost all interest in NixOS after that. They want it to be a hobby distro run like a wiki, and that is totally fine. It just means we need to discourage it from being used in high security applications.
NixOS is a massive step forward in Linux distro design, and a massive step backwards in supply chain trust.
Git commits? That doesn't correspond to supply-chain verification, just committer verification, so it's not as big a benefit as package-signing in other distros.
Package sources? All sources in nixpkgs are verified against their content hash, which is committed along with the source. To pull off a supply-chain attack through substituting a malicious upstream, you'd create extremely obvious breakage when the package built by Hydra doesn't have the same output hash that you asked for at build time.
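To make that concrete, here's roughly what a pinned source fetch looks like inside a package definition (the URL and hash are placeholders):

```nix
# How nixpkgs pins sources: the hash commits to the exact bytes fetched.
# If upstream (or a mirror) serves different bytes, the build aborts.
src = fetchurl {
  url = "https://example.org/foo-1.2.3.tar.gz";   # placeholder URL
  hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";   # placeholder
};
```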
Binary substitutes? Nixpkgs doesn't use a "mirrors" system that is the traditional source of distro supply-chain vulnerabilities. Packages are input-addressed, so nix knows the hash of the output it wants, and all substitutes are signed in nixpkgs. So it is difficult to alter the outputs without breaking the dependency and falling back to source builds, and still more difficult to forge a signature for the altered output.
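Concretely, which caches you trust, and whose signatures you accept, is explicit in the system config. A minimal sketch, where the extra cache and its key are hypothetical (the first key is the published cache.nixos.org one):

```nix
# configuration.nix -- substituter trust is explicit
{
  nix.settings = {
    substituters = [
      "https://cache.nixos.org"
      "https://example.cachix.org"   # hypothetical extra cache
    ];
    trusted-public-keys = [
      "cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY="
      "example.cachix.org-1:AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="   # placeholder key
    ];
  };
}
```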
I agree that distros need to focus on supply-chain security as a core competency. I disagree that NixOS (rather, nixpkgs) needs to use the same mechanisms as other distros to attain it, especially when doing so would impact another core competency: simply having the latest packages available as soon as practicable, because outdated packages are another source of vulnerabilities.
Commits are not signed, and approvers do not sign anything either. Nothing stops a malicious GitHub employee, bribed maintainer, or someone who simply phished GitHub credentials from making a fake PR as someone else and then approving it themselves, or from serving manipulated git history only to CI/CD systems.
Major supply-chain attacks like this have happened in lots of other package managers, and most OS package managers at least learned their lesson and sign everything. Most package managers are blindly used in multi-billion-dollar applications, so they are a huge target for attack.
That will come with time. Nix has to overcome enormous barriers to adoption. Gatekeeping to limit the long tail of desktop software is not going to help anybody. Better to get working software today and clean it up tomorrow.
People are using it for high-risk applications today, when anyone with phished GitHub credentials can push any code they want. I have to push back on that.
Run it on a steam deck for gaming, sure, but it is only suited for hobby use cases at this stage of development.
Not exactly a fork, and you are probably aware, but there is at least one Nix-like system with full supply chain integrity and rigid packaging standards.
As far as I can tell, the only benefit of this scheme over content hashes and signed outputs is that Guix can serve their updates over HTTP. Otherwise it's vulnerable to the exact same scenarios as Nix: the rogue Git forge employee/maintainer can replace signatures just as easily as they can package sources.
I love Nix, but point #1 is architecture-specific. A lot of packages won't build on ARM, although many of those are limited only by unnecessary restrictions imposed by the package declaration.
Because Nix is a functional package manager, and functional package managers are the future of package management (whether Nix, Guix, or something we don't have right now). They make it easier to deal with the supply chain, to roll back your software updates, to distribute the same configuration everywhere…
I find that NixOS (the OS centered around Nix) has similarities, in the way you configure things, with how you configure stuff in the networking world (with options that do some stuff behind the scenes you don't really care about). The difference is that with NixOS, you can dig in if you want to.
Also, once you've used NixOS for a little while, it's very hard to go back to, let's say, Debian, Ubuntu, or Arch Linux: manually configuring things across multiple configuration files in /etc seems really primitive.
While not designed to be a system package manager, Conan works in a very similar way for C/C++ package management, and it's great. All binary packages live in the cache folder, identified by a hash of the particular permutation of inputs used to build it. It even supports recipe revisions, so if the recipe (build script) changes, you don't have to worry about changes in the generated binary since you can use a lock file to keep using the specific revision of that build script that you know works.
pnpm for Node packages also has a similar design. Once upon a time, it cited Nix as a source of inspiration in its docs!
I hope that basic design continues to propagate. Better-behaved package management for language ecosystems is easier to work with for Linux distros, including NixOS. :)
> Also, once you've used NixOS for a little while, it's very hard to go back to, let's say, Debian, Ubuntu, or Arch Linux: manually configuring things across multiple configuration files in /etc seems really primitive.
I don't use nixos, but I use ansible to configure my desktop and homeserver environments. Completely different approach but same result in the end.
I am aware that Nix has many other benefits, but I'm talking about configuration management in particular.
> Completely different approach but same result in the end.
Well, if you lock everything down with hashes, maybe in some sense, but beyond declaring your configuration in a text file and deploying software with it, they're pretty different (an imperative approach with side effects everywhere vs. a functional, declarative approach).
* Nix is a real programming language (and thus allows far better composition and abstraction)
* You can run any configuration you want and easily jump between configurations (e.g. rollback), an advantage of being stateless (to a degree, obviously, as a lot of software creates its own state, but normally you're not jumping between multiple major versions of the same software anyway).
* Great caching, as every built derivation is cached in `/nix/store`, so only the things that actually changed get rebuilt
With Ansible you may achieve something similar, but AFAIK it requires way more setup and discipline to keep clean, and the "programming" in Ansible feels rather painful if you're used to a real (functional) programming language.
> With Ansible you may achieve something similar, but AFAIK it requires way more setup and discipline to keep clean, and the "programming" in Ansible feels rather painful if you're used to a real (functional) programming language.
Not to mention that templating a language that uses spaces for logic (YAML) is just useless amounts of pain for no good reason.
I used Ansible in the past for the exact same purpose, and it has one major flaw: Ansible is imperative. What I mean is: if I add a line to a config file and want to roll back, I have to handle the revert manually (create playbooks with a `delete` flag, etc.). With Nix you get this for free.
Also, with Nix you can trivially create an image from your configuration (even with slightly different options, e.g. only enable ssh on 0.0.0.0 for a fresh install, but disable it after the first config apply), which I find useful when working with a cloud.
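A minimal sketch of that kind of toggle, assuming a hypothetical `bootstrap` flag you'd set only when building the fresh-install image:

```nix
# sketch.nix -- one config, parameterized for first boot vs. steady state;
# `bootstrap` is a hypothetical flag, not a standard NixOS option
{ bootstrap ? false }:
{ lib, ... }:
{
  services.openssh.enable = bootstrap;   # sshd only on a fresh install
  networking.firewall.allowedTCPPorts = lib.optionals bootstrap [ 22 ];
}
```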
A few things that are nice about it once you've "nixified" your project
- Locally reproducible CI builds and tests
- Sharable developer environments/shells
- Extremely fast and efficient Docker image generation (see the sketch after this list)
- Composability with other projects that use Nix. It's trivial to add dependencies.
- Not dealing with VMs like you do with Docker
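On the Docker image point, a rough sketch of what that can look like with nixpkgs' dockerTools (the packaged app is illustrative):

```nix
# Builds a layered OCI image with no Dockerfile and no Docker daemon.
pkgs.dockerTools.buildLayeredImage {
  name = "hello";
  tag = "latest";
  contents = [ pkgs.hello ];                    # only this closure gets in
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

Because the image contains only the listed closure, it tends to be small, and unchanged layers are reused across rebuilds.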
There is a lot more if you use NixOS and make your whole operating system functional and declarative, but I think that's more of a niche.
Why is it popular now? I'm not sure. It still has a steep learning curve (and rough edges), but over the last 1-2 years it has perhaps gotten to a point where the documentation and examples are plentiful enough that more people are willing to try it out. Also, macOS support has improved (but still kind of sucks) and the new flake system is more intuitive than the old Nix files.
Nix makes it really simple and easy to manage several machines. Builds are declarative and reproducible. For my homelab I've set up a git repo with config for several VMs: Plex, Bitwarden, torrents, Samba shares, etc. Deploying is as simple as pushing a new commit and then triggering a rebuild on a specific host.
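For flavor, a host's entire service setup can be a handful of options. A sketch for a hypothetical media VM (32400 is Plex's default port):

```nix
# configuration.nix for a hypothetical media VM
{
  services.plex.enable = true;      # the whole media server, one line
  services.openssh.enable = true;
  networking.firewall.allowedTCPPorts = [ 32400 ];
}
```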
NixOS builds in “generations”. This morning I managed to screw up something and it bricked a VM. No worries, just restart the VM, boot into the previous working generation. Fix my Nix config, commit and push, rebuild again. Easy. I love it for managing my personal stuff.
What I don’t like about Nix? The Nix language kind of sucks, has a steep learning curve, and lacks decent language tooling (LSP etc). If you’re on the beaten path everything is rosy, but as soon as you need to configure something or build a package that no one else has done before, it feels like you need to reverse-engineer Nix just to figure out how to do fairly basic tasks.
Which makes it well suited for "put in effort now, to save effort later".
My favourite use case for Nix is using it to declare what tools/libraries a project needs. Nix can make a bunch of packages available on PATH without conflicting with what's already installed -- similar to tools like Node Version Manager or asdf.
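A minimal sketch of that pattern (the package choices are illustrative):

```nix
# shell.nix -- declares this project's toolchain; `nix-shell` puts these
# on PATH for the project only, without touching the system install
{ pkgs ? import <nixpkgs> { } }:
pkgs.mkShell {
  packages = [
    pkgs.nodejs_20
    pkgs.postgresql
    pkgs.jq
  ];
}
```

Pair it with direnv's `use nix` and the environment loads automatically whenever you cd into the repo.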
Another feature I like is the command "nix run ...", which is similar to "docker run ..." in that it doesn't change what's installed system-wide, but runs the command on the host (and not inside a container).
The nix-based operating system NixOS allows for declaring the system configuration, and safely rolling back from changes to the system configuration.
> Is this something to pay attention to...
Right now, the learning curve is quite steep.
It used to be that threads mentioning Nix would attract many "I tried it, but it's too hard" comments.
Popular where? On HN, because it's functional, and people around here like functional, just as they like Emacs, for instance. It's not popular in the industry, as far as I can see (also, just like Emacs).
For the record, I like Nix (and also Emacs), but I would never introduce it to my team. The learning curve is very steep, and the problems it solves are mostly handled "good enough" by Docker, which I actually dislike, but everyone and their dog knows Docker, there's pretty much zero training needed for new hires, and introducing Nix simply does not make any business sense. In a small startup team of like-minded functional programming people, sure, but in a large org? I just don't see it.
It's funny how "not popular in industry" is always overstated. I can't deny it has a very small market share, and yet I've had 4 jobs in the last 6 years that have all used Nix, none of them "small startups" either. So a technology doesn't need to be popular for people to get paid to use it. Haskell is the same way.
Also, I openly ignore people who try to evaluate "business sense" as if technology decisions could be quantified like that. The biggest benefit of these Nix and Haskell shops is that they attract enthusiasts, who in turn train and excite other hires. Which in turn adds more Nix and Haskell lifers to the world. One "bad business decision" at a time. That's my MO at least :)
I'm deploying Nix to real ROS robots right now at my day job. We're switching to it because it solves real problems associated with scaling the codebase and development team, particularly for domains adjacent to scientific computing.
Waaaay back in the day, I packaged Gazebo for Nix when I had an internship at a company that made robots. That was when I first dove in with NixOS, and decided that whatever packaging issues I ran into were just something I had to get good at dealing with and solve along the way. Nixpkgs was just a tiny fraction of its current size back then, so I ended up packaging several things! I ended up needing quite some help in IRC at the time for Gazebo, which I remember as having a very quirky and bespoke build system. Folks there were extremely kind and helpful.
I ended up abandoning the packages when the gig ended. Sorry about that! Pretty cool that a whole ROS environment is well-supported via Nix nowadays. :D
Awesome! Unfortunately the public story is not really that great, actually. There's a single maintainer doing most of it (@lopsided98), and we rolled our own based in part on his work, which we in turn open-sourced in October for a ROSCon talk but unfortunately aren't able to maintain in public long-term.
So it's doable for sure, but for most mere mortals, Ubuntu LTSes are definitely still the lowest friction path to working with and deploying ROS.
I would strongly suggest you not use Nix for mission critical computing. It is suitable only for hobby use cases unless you are prepared to review 100% of all code yourself, because no one else is.
For robots you should not need more than a Linux kernel and a minimal init shim to your own custom runtime binary anyway.
Bit of an odd take, to hold Nix to that standard when other systems happily pull the entire world down from pypi, npm, cargo, and other sites of unknown review status. I actually find it easier to audit my dependencies and their patches, build logs, etc. under Nix than I ever did under Ubuntu.
> For robots you should not need more than a Linux kernel and a minimal init shim to your own custom runtime binary anyway.
This might have been true in 2005, but it is IMO not aligned to modern realities, where:
- You have a huge list of dependencies, including painful-to-package stuff like OpenCV, PCL, CUDA, Tensorflow.
- You deal with proprietary things like TensorRT, GPU drivers, and vendor tools for flashing firmware onto sensors, PLCs, and the like.
- You rely on the fault isolation and self-monitoring/healing of a multi-process architecture.
- You need to cgroup the portions of the system that are critical vs. those that are more speculative.
- You have a bunch of asynchronous comms stuff going on, like streaming telemetry, logs, crash reports, and other assets. All of this has to be queued up and prioritized.
- You have to supply a user-ready workflow for updating the entire system down to the kernel and bootloader, with downtime measured in single-digit minutes.
None of these requirements will be met by a single binary and init shim solution.
I would never suggest trusting pypi, npm, cargo, etc. Those are all effectively remote code execution as a service. Those tools save you some time -writing- code, but you are still on the hook to review it all just as you would review code from a peer. Why would strangers be trusted more than peers?
Operating systems should have a higher standard than random dev libraries. You should be able to trust they already have had a strict cryptographically enforced review process. Distros like Debian and Arch actually have a maintainer application and review process that includes verifying the maintainers cryptographic signing keys. We can cryptographically prove who authored any given package, who approved it, and who approved the approvers.
When your threat model includes supply chain attacks, the only answer is to get really really specific about what you -need- to run your target jobs and ensure it comes from well signed and reviewed sources... then review the edge cases yourself.
As for your other points...
> - You have a huge list of dependencies, including painful-to-package stuff like OpenCV, PCL, CUDA, Tensorflow.
Those could be statically and deterministically compiled into your target application binary, or at a minimum the final build artifacts included in the cpio initramfs which in turn can be statically linked into the kernel. You do not need a full package manager, init system, or even a shell.
> - You deal with proprietary things like TensorRT, GPU drivers, and vendor tools for flashing firmware onto sensors, PLCs, and the like.
Sure. An init shim can do insmod to load custom kernel modules as needed in your initramfs.
> - You rely on the fault isolation and self-monitoring/healing of a multi-process architecture.
Nobody said you have to have a single process. Your pid1 binary can spin off any other processes or threads you need and run reapers for them. A few lines of code in most languages.
> - You need to cgroup the portions of the system that are critical vs. those that are more speculative.
cgroup operations (writes under /sys/fs/cgroup) are very simple to perform in most programming languages.
> - You have a bunch of asynchronous comms stuff going on, like streaming telemetry, logs, crash reports, and other assets. All of this has to be queued up and prioritized.
You can include any syslog binary you want for this, shipped in your initramfs, or send everything from the kernel console out over the network to something external that does the parsing. I do not know your requirements, but there are many, many ways to do that. I do not see what NixOS gives you that buildroot, busybox, or a single explicit choice of log-collecting daemon can't.
> - You have to supply a user-ready workflow for updating the entire system down to the kernel and bootloader, with downtime measured in single-digit minutes.
If the entire OS is just a lean bzImage with everything you need statically linked into it, then a new one is downloaded to /boot and you reboot or kexec-pivot into it. If boot fails, roll back. No need for a read/write filesystem other than some fixed directories you can mount in for cache/logs.
I realize a lot of this feels like handwaving, but I have been doing embedded Linux systems for over a decade and have found there is always a path to a super lean, immutable, deterministic/reproducible unikernel with nothing more than a few easily understood makefiles and dockerfiles.
If you ever want to chat about this stuff feel free to drop in #!:matrix.org
Lots of talk about embedded Linux approaches for satellites and HSMs in recent weeks.
You don't have to take on all of Nixpkgs to use Nix in that kind of context. Nix hackers have in fact spun off their own, more focused repos for such applications (e.g., NixNG, Not-OS).
It's pretty easy to audit your whole dependency tree with Nix if you want to.
> Lots of talk about embedded Linux approaches for satellites and HSMs in recent weeks.
There was a talk at this year's NixCon about migrating to NixOS for a weather satellite system: https://youtu.be/RL2xuhU9Nhk
Not taking away from the rest of your post, but the reason you'd trust strangers more than peers for random packages is that the strangers are often some of the best in the world at what they do and your peers are most likely not.
Most programming-language package maintainers are hobbyists early in their careers with no idea what they are doing when it comes to security, or worse, actively malicious.
See the dozens of serious supply chain attacks or massive security oversights in recent years. The overwhelming majority of code in open source is not reviewed by anyone.
I'm talking about the most commonly used packages, not the long tail here.
Tokio for example is clearly maintained by some of the best people in the world at writing async runtimes. It is extremely unlikely that your peers would be able to do a better job at it than the Tokio team.
Someone being good at async runtimes does not mean they are versed in security. Also, you have no easy proof that the code the Tokio team wrote is what actually made it into the binaries hosted by the Nix project. That is the nature of increasingly common supply-chain attacks. The Nix tooling and package definitions themselves carry very minimal supply-chain integrity evidence: no author or reviewer signing, etc.
As for my peers, I work with some of the best security researchers in the world, and I myself have found and filed critical CVEs in widely depended on and trusted software like gnupg and terraform. I am not an expert by any means, but just a technical person willing to actually read some of the code we all rely on.
No one bothered to carefully review OpenSSL before Heartbleed.
Everyone assumes someone else is reviewing critical code with a security lens. It is always a bad assumption and it gives dangerous people that actually -do- review code a massive advantage.
If you ship code you copied off the internet for a critical use case without ensuring it receives qualified review, then you are as responsible for any bad outcomes as a chef who failed to identify toxic ingredients.
The current industry standard on software supply chain integrity is about as negligent as the medical industry before the normalization of basic sanitation practices. Yeah, it takes a lot of extra work, but that is the job.
Most supply chain attacks are pretty orthogonal to whether there's a chain of trust on the git repo containing the package definitions, as far as stuff like poisoning cache.nixos.org with a backdoored binary that doesn't actually match the build definition given.
Anyway, as far as robotics in particular, no one worth their salt is treating the computer or ROS as "trusted" for the purposes of last-mile safety— we're using safety-rated lasers, PLCs, and motor controllers for the physical safety part of the equation. The computer is critical in the sense that it's critical to keep the robot driving and therefore critical for business operations, but it's deliberately not in the loop that keeps humans or property from being physically harmed.
Two things (not exhaustive). First, as the article notes, nixpkgs has a lot of packages. Second, Nix matches Gentoo's customizability, but with good binary caching for common builds. (Kind of; that's very approximate.)
Edit: I should amend: nixpkgs is already huge, and with flakes you can trivially get packages from anybody you trust (like Arch's AUR but distributed and easier), which covers even more packages.
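A rough sketch of what that looks like in a flake (the third-party input is a placeholder):

```nix
# flake.nix -- pulling a package straight from a third party's repo;
# `some-user/some-tool` is a placeholder
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    sometool.url = "github:some-user/some-tool";
  };
  outputs = { self, nixpkgs, sometool }: {
    # re-export their package, pinned by your flake.lock
    packages.x86_64-linux.default = sometool.packages.x86_64-linux.default;
  };
}
```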
It still doesn't have a concept akin to use-flags, which is what distinguishes Gentoo from Linux from Scratch, see https://github.com/NixOS/nixpkgs/issues/12877. The fact that Nix doesn't have these is also one reason why it's easier to cache than Gentoo packages.
Well, it doesn’t have global ones, but you can very trivially override attributes for a specific package if you want.
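For example, a sketch of a per-package tweak (the package name and flag are illustrative; real flag names vary per package):

```nix
# per-package stand-in for a USE flag: tweak one derivation's build
myPackage = pkgs.somePackage.overrideAttrs (old: {
  configureFlags = (old.configureFlags or [ ]) ++ [ "--without-gui" ];
});
```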
Flags are a great concept, but they basically increase the required build-cache size exponentially, and IMO having everything in the binary cache and optionally compiling only some specific packages is a great tradeoff.
Someone announced a fork of NixOS to add USE flags and remove Systemd ages ago on the mailing list, before the Nix community added Discourse. They called what they were starting 'The Church of Suckless NixOS'.
I never saw a sign of them after the initial announcement.
Agreed that there is nothing quite so unified and easy as USE flags for compile-time options, although Nix happily lets you e.g. override build options per-package, and I think overlays let you make large-scale changes.
I actually dropped NixOS after using it for 2 years. It's just not practical for day-to-day work unless you want to be messing around with Nix config or fighting shared-lib issues all the time. I'm using openSUSE atm and couldn't be happier - yeah, it's a lot more click-ops, but I get to work so much faster and everything just works. Fedora is another good option. (If you want to know, the breaking point for me was trying to get QtCreator working properly on NixOS, which I sunk about 2 hours into before giving up - it took me about 5 mins to install on openSUSE.)
Nix, the package manager, is _amazing_ though, despite the wonky language. I use it for providing hermetic/reproducible environments for my dev projects. You create a shell.nix, auto load it with direnv, and the development environment is 100% replicated for everyone doing dev in that repo. Hook it into your CI too and now you have an identical dev environment everywhere. It's like virtualenv on steroids for any kind of project/tooling.
It is "just" another package manager, but with quite a few cool features. E.g. easily install different package versions in parallel, got new computer->just copy over the configuration and nix takes care of the rest, want to make a docker container out of this app-> here you go. Neat, hence popular.
Well, it is “just” a package manager in the sense that none of the previous ones could really be called that, as they were fundamentally faulty.
Nix is the first package manager that actually solves dependency hell properly.
Nah, not really. Neither the first, nor properly (e.g. grafting). Not saying it is not good at that, but not even NixOS claims this as a main selling point.
The binary cache works by signing; not sure if this is what you mean? By default you get the "upstream" channels as a trusted store, but you are free to add new ones, and only the ones whose keys you've added. Pretty similar to PPAs.
Nix does blind automated signing of binaries, which only helps prevent the cache from being mutated between builds. It does not ensure the code that went into those builds was accurate.
Nix allows for composability in the OS, which is nice for automation and reproducibility.
Recently a new and even bigger advantage of composability has appeared: ChatGPT. If you can think up a configuration of an OS, you can ask ChatGPT to create it with Nix (or with other frameworks in other relevant cases), and then nearly all the work is already done for you. I find this workflow to be even better than GitHub Copilot.
Steam deck is interesting here because it leaves me wondering about what I could ask Nix to configure on it. I don't know very much about steamos.
I think a lot of people love the principles behind Nix -- I love them enough I've tried (and failed miserably) to install and use it as my regular Linux on 3 occasions.
This is a good analogy, right down to Git's painful UX. Nix is powerful and difficult to understand. Often you'll find yourself in a spot where either you need to take hours or days to understand some fundamental component, or you can just paste a magic incantation you found somewhere and move on. Good luck googling; it's almost as if they designed the Nix language to be unsearchable. When you do find an answer, it's often 3 years old and completely obsolete. Definitely recommend having a chat window open with other Nix users while working with it. When starting a meaningful project with Nix, make sure you're not on a deadline.
I like Nix, a lot. But I'd currently recommend it for a pretty narrow type of user/use case. Like Git.
I have to agree regarding the UX, though I would like to add that it is not the language per se that is complex, but its standard lib/“nixpkgs” lib.
Also, I'm not sure I see where you would not recommend git. Is that a typo? It sure has a bad UX, but it is the lingua franca of version management either way, one that even junior devs have to fight to understand to become even remotely useful. And this might well turn out to be the case with Nix as well.
I would not recommend git for organizations with monorepos, and not for companies that need to store large binaries (like games; git-lfs is a start but still a royal pain). Also I probably wouldn't recommend git for projects that are significantly more media than code. Depends on the team.
Git dominates open source, yes, but I wouldn't call it a lingua franca in general. Most companies I've consulted with were doing just fine with something else.
Time to start shilling it where you work. It has a pretty steep learning curve but it's worth it most of the time. In some industries (e.g. Haskell-adjacent industries) almost everyone uses Nix.