As much as I admire the people involved with Nix, taking the initiative to solve what they believe can be improved, and the other developers like her, toying with new things and successfully managing to figure all this stuff out... I'm actually disappointed.
The thing is, Nix is an absurdly complex piece of software, perhaps on a similar order of magnitude as Kubernetes or other such gizmos, requiring a very large time commitment to understand and use properly.
Just for fun, have any of you tried to keep up during the Docker big bang, with all these technologies popping up left and right, switching names, dying abruptly, getting superseded or hacked with vulnerabilities? That was one of the most mentally draining years for me, and I've been in the field for a while now.
See... along the way, I've learned that Computer Science is always about tradeoffs. Now, when I'm being asked to trade away my sanity learning this nonsense in return for a couple of megabytes of disk space that would otherwise have been wasted, I just don't see the value.
Seriously, are we optimizing for the right stuff here? I know Nix is doing package isolation with a programmable environment defined by text files, it's immutable and all. I get it. But we have alternatives for these problems. They do a pretty good job overall and we understand them.
Ironically, even those solutions I'm comfortable with are still struggling to get good adoption! How the heck is our field going to avoid fragmentation when it keeps growing exponentially like this?
Perhaps it's just me aging and getting grumpy, though there must be an explanation for this phenomenon. Please enlighten me; it definitely gnaws at me.
> Seriously, are we optimizing for the right stuff here? I know Nix is doing package isolation with a programmable environment defined by text files, it's immutable and all. I get it. But we have alternatives for these problems. They do a pretty good job overall and we understand them.
Setting up a development environment five years ago: here's a Word document that tells you which tools to install. Forget about updating the tools - good luck trying to get everyone to update their systems.
Setting up a development environment two years ago: here's a README that tells you which commands to run to build Docker images locally, which will build and test your code. OS X users, have fun burning RAM to the unnecessary VM gods when you have little to spare to begin with, because MacBook Pros. No reproducibility between dev and CI despite using container images, because dev will have debugging tools made available in the image that are removed from the final image. Be limited while debugging locally, because all debugging must happen over a network connection.
Today: Install Nix. Run build.sh. Run test.sh. Run nix-shell if you want more freedom.
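(To make that concrete: the build.sh in that story is typically just a thin wrapper around nix-build of an expression along these lines. This is a hypothetical sketch, not anyone's actual file; the name and import path are made up, and it assumes vendored Go dependencies.)

    # default.nix — hypothetical sketch of what build.sh might invoke
    { pkgs ? import <nixpkgs> {} }:

    pkgs.buildGoPackage {
      name = "mysite";                          # placeholder name
      goPackagePath = "github.com/you/mysite";  # placeholder import path
      src = ./.;                                # assumes vendored Go deps
    }

Running nix-shell against the same file then drops you into the exact environment the build uses.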
I'm missing something here. (Disclaimer: I've just tried to understand the OP; I don't know all the details about what is going on there.)
The text starts with "In my last post about Nix, I didn’t see the light yet." ... Then it continues with "A popular way to model this is with a Dockerfile." So I expected that the post would demonstrate how Nix can be used without Docker... But then later:
"so let’s make docker.nix:"
and continues in that way. So there is some dockering involved? To this casual reader, it doesn't seem that using Nix allows one to avoid Docker as a technology, just that some additional goal is achieved by using Nix on top of it.
I just missed what was actually achieved, other than the few megabytes saved here or there out of roughly 100 MB. I also miss the information on whether those megabytes were traded for build time or, if I understand correctly, for even more dependencies in some sense (more places from which something has to be downloaded). Can anybody explain?
I'm sure that these tradeoffs were obvious to the author, but as a reader I hoped to somehow get an idea of them, and I missed that (yes, it's hard to "see the light", especially indirectly).
As a person that uses Nix to build Docker images in production, let me explain.
Docker is a tool that lets you package a Linux distribution and roll it out on any random server cleanly and safely. But the stuff inside that distribution is still very much bespoke and unmaintainable.
Nix is a tool that lets you create a custom Linux distribution with absolutely minimal effort. (Basically, just list your packages in a file and hit 'go'.) But the packaging story for Nix is pretty bad.
To bridge that gap, Nix has code that puts a Nix package with all its dependencies into a Docker container. It works, but it's of course kind of icky; something more integrated and smarter would be preferable.
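(For the curious, the code in question is nixpkgs' dockerTools. A minimal hedged sketch of how it's used; the image name and package choice here are mine, not anything from the post:)

    # docker.nix — hypothetical sketch using nixpkgs' dockerTools
    { pkgs ? import <nixpkgs> {} }:

    pkgs.dockerTools.buildImage {
      name = "my-service";                # placeholder image name
      tag = "latest";
      contents = [ pkgs.redis ];          # the package plus its entire closure
      config.Cmd = [ "/bin/redis-server" ];
    }

nix-build docker.nix then produces a tarball that docker load understands.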
> Docker is a tool that lets you package a Linux distribution and roll it out on any random server cleanly and safely. But the stuff inside that distribution is still very much bespoke and unmaintainable.
This, a thousand times. If you are deploying the same container 10,000 times, with no modifications, it makes sense to spend time maintaining that distribution. But if you're using Docker for dev environments (for example) and each one is different... then you haven't really moved on in terms of maintainability from using Vagrant or any VM setup.
For development, using Nix or Guix is in my opinion extremely nice. Using Docker in development would mean mounting your dev directory as a volume into a dev-container and depending on your application, this might end up being a pain in the ass. Editors often depend on the same dependencies to compile your code on the go and check for errors - if you don't have these on the host, you will then have to start solving new problems.
With Nix, you can have a package definition and either install all the necessary dependencies globally on your machine or spawn a shell with all of the needed binaries in your PATH. Recently an application needed a different Node version than the one that was globally installed on my machine. Instead of having to build a Dockerfile or whatever, I just spawned a shell with the newer Node version, ran the command that I needed, and was done.
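(In case the workflow isn't obvious, it's roughly this; the exact attribute name for a given Node version depends on your nixpkgs revision, so nodejs-12_x here is an assumption:)

    # shell.nix — hypothetical sketch pinning a project to a newer Node
    { pkgs ? import <nixpkgs> {} }:

    pkgs.mkShell {
      buildInputs = [ pkgs.nodejs-12_x ];  # assumed attribute name
    }

Or, for a one-off, skip the file entirely: nix-shell -p nodejs-12_x --run 'npm test'.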
For production, you might still want to use Docker, as there's a lot of great software built on top of it (Kubernetes and other platform-specific managed services). You can turn a Nix/Guix package definition into a Docker image quite easily if you already have one. As an extra benefit, you remove the small chance of still ending up with incompatible dependencies that you get when using a traditional package manager in a Dockerfile.
Nix itself doesn't use Docker, and you can deploy it like that just fine (with NixOS, and NixOps if you have multiple machines).
But sometimes your ops team has standardized on Docker, maybe because you're using Kubernetes. In that case, Nix will happily build those images for you.
The big one that I got is that the resulting Nix image has just the Go executable in it, and so the server is safer because if anyone hacks into it they'd need to bring their own copy of any tools they wanted with them. I'm a huge fan of reducing attack surfaces wherever possible, and getting a container that will only run the program required and nothing else is a win for me.
> The big one that I got is that the resulting Nix image has just the Go executable in it
Now I miss the point of that one too: if just the Go executable alone is enough anyway, as it is statically linked, why not just copy it, instead of complicating things?
A lot of people have standardised on Docker images as the default distribution/packaging format, as Kubernetes etc. make deploying/running them more uniform across orgs.
You can build the binary and then have a Dockerfile copy it in on top of the scratch base image. However, if you are using Nix for deterministic builds anyway, you might as well add the few lines of code to have Nix build the Docker image, versus a Dockerfile with a single copy command.
If you are not building a static binary, you get the advantage that Nix will copy in only the dependencies needed. You also get the advantage that when you are building images you are not randomly downloading URLs from the internet that may be dead. Artifacts can come from a Nix build cache which is cryptographically signed based on the build inputs, so you know that building the same image every time produces the same output.
With typical Dockerfiles that is not true. Docker image tags are not immutable, so fetching the same Docker image may result in a different image being fetched. Likewise, a lot of Dockerfiles just wget / yum install random packages from places that may not exist anymore. If you maintain your own Nix build cache you will always be able to build, get a speedup from hitting the build cache versus compiling, and know the build is deterministic: running the same build multiple times will result in the exact same output.
Because you get to use the same tooling to build Docker images that need more than that. Depend on a C shared library via cgo somewhere? Have a directory full of templates and other resource files that need to ship with it? Maybe the Go program needs to shell out to something else? You don’t have to rework your tooling or hack a random shell script up.
Look up Nix Home Manager; you might be able to have Nix configure everything in your home directory.
Also, Nix makes it pretty trivial to package and patch proprietary binary vendor software. It would take at least one person on the development team feeling comfortable with Nix, but they could craft a Nix file that does everything.
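(For anyone who hasn't seen it, a Home Manager config is just another Nix file. A minimal sketch; the packages and identity values are placeholders of mine:)

    # home.nix — hypothetical Home Manager sketch; values are placeholders
    { config, pkgs, ... }:

    {
      home.packages = [ pkgs.ripgrep pkgs.jq ];
      programs.git = {
        enable = true;
        userName = "Your Name";
        userEmail = "you@example.com";
      };
    }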
If you find something that can do all that reliably I would love to hear about it! I've got some Powershell scripts but it's still a several step process.
Why should I be embarrassed for dumping some weekend coding garbage just to keep HR happy that everyone is expected to have something on Github?
Everyone here has known my stuff for a long time, or is knowledgeable enough to find it, given that my online life goes back to the BBS days. Do you think it embarrasses me, really?
Corporate pays my bills, not songs about birds, rainbows and how everyone should stick it to the man.
You skipped the "spend 3 hours figuring out why libgcc_s.so.1 won't link when trying to compile that one protobuf tool you need right now" step.
Seriously, I dread every time I have to compile C++ in NixOS and there's no existing derivation already.
Oh, and when the Omnisharp guys start requiring a new version of Mono that won't build the same way for whatever reason, you're stuck on old versions of your IDE plugins if you want C# support until someone else figures out what broke.
Well, there is a well-defined pattern using configure and make. When very complex builds like GNU Emacs and entire operating systems have done quite well with this pattern, I wonder what problem we are trying to solve.
That's a very apples-to-oranges comparison, wouldn't you agree?
GNU autotools assists in making source code packages portable across Unix-like systems, while make is for build automation and tracks fine-grained (inter-file) dependencies.
Nix is a package manager, and though it can build your software for you transparently, it is not tied to any specific software configuration and build toolset; it can manage build-time and runtime dependencies (including downloading the sources), cache build results transparently, etc.
I'd argue the opposite: it's beautiful in its simplicity. It's a tool that lets you express how to create something in a deterministic way.
A large part of what creates the learning curve that's best described as a wall is that this simplicity is due to building on a number of concepts that are themselves likely initially foreign: a pure functional language, content-addressable storage, etc. These are all good concepts to learn about in isolation, but as with any domain, learning a large number of new things at the same time (particularly things that may directly conflict with your current mental models) is hard.
What's truly nice about the Nix approach is that it is not bound to any stack. You can use it for any language, for building any software (Nix, the package manager). You can also use it to define an entire machine config (NixOS), or even an entire collection of machines (NixOps). It is not something that you need to re-learn every few months, or learn and then not use because some of your other tooling changed. Yes, the initial mental load is high, but the return on it is continuous, and you'll likely learn about some other useful things along the way.
>I'd argue the opposite: it's beautiful in its simplicity. It's a tool that lets you express how to create something in a deterministic way.
I'd take a different angle and argue that Nix is Version 1 of what to me looks like a very cool idea, and it's both beautiful and not straightforward to use.
If that matters, wait for version 5. Look at the progression before Docker came on the market - it does what other tools could do, but packaged in a nice, developer-friendly toolkit. I expect the same for Nix in a few years' time.
You have the right idea, but you'll be waiting for a long time. Nix is transitional, and a version of Nix which appropriately enforces package-capability discipline likely will not give a traditional Linux/BSD userland. Instead, use Nix today, and shape its future; Nix is currently on version 2.
I think much of your criticism against Nix is undeserved. None of your arguments against Nix have any concrete specifics, and your rant about Docker doesn't even have anything to do with Nix. It's fine if you don't want to take the time to learn every single new technology that pops up; everyone has to be selective about what they choose to learn because we can only have so much time. But why criticize what you didn't take the time to sit down and learn, especially when it's not even being forced upon you?
I'm not going to go deploying anything in this blog post to production; if I deploy a Go app, it will probably look like the first Dockerfile (and probably with an official Go image rather than a homebrewed one). That said, I really appreciate people like the author doing all this work, because in 10-20 years I think most people will be using something that combines the best features of Nix/Kubernetes/Docker (though maybe not those tools themselves) and requires none of this fiddling.
But we're not going to get there without people doing this sort of stuff that can't possibly be worth the effort today.
Except I do do that stuff? I'm currently writing a draft of how to incorporate formal design semantics into "devops" flows. I also have a half-completed set of instructions on how to make a private Homebrew repo and have my own private Alpine Linux repo.
I don't think anybody who actually read the post has the misgivings that the poster to whom you replied so liberally sprinkles into every comment. YHBT. Sorry about that.
No... that is not systems engineering. Systems engineering would be thinking about how the program to enable configuration templating with shell variables will work; how you can consistently add and remove templated, variable-expandable configuration excerpts to be able to have configuration overlays; how you could design a configuration self-assembly service that other OS configuration packages could call; writing that down, then implementing and packaging it in an OS package, so your other OS packages could depend on, call, and use it in a consistent fashion. Or thinking about how your standard database settings should be configured, writing a formal specification, then writing a tool to implement that, packaging it into an OS package, and then having your OS database configuration packages formally depend on it and call it to consistently create databases. These are just a few examples of thinking about architecture, as opposed to writing how-to documents.
We just keep trying to hide complexity and fragility with more layers of indirection and abstraction. It doesn't solve the problem, but it makes it tomorrow's problem.
In the end, excessively clever people make me nervous, because they leave complex problems in their wake, which I'm not clever enough by half to solve.
I agree with you, mostly, but I really think Nix is one of those rare abstractions that is easily worth it: it has the potential to simultaneously solve and unify a bunch of different issues: reproducible builds, package management (system & language), container creation, system config, and so on. One language across all those domains is great, and the ability to roll back an entire system is amazing. Also: the ability to add your own software, GitHub tarballs, and piles of PHP as first-class citizens (you can cleanly remove them!) isn't emphasized enough.
You can keep your wget, build script, db config, and nginx config all in the same file!
I think if you weighed the complexity of Nix, and NixOS in particular, against the tools it replaces (or anyway, could potentially replace) it's a no-brainer. I'm a convert.
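To make the "same file" point concrete, here's a minimal NixOS sketch (the vhost, paths, and packages are placeholders, not from any real config):

    # configuration.nix — hypothetical sketch: packages and services side by side
    { config, pkgs, ... }:

    {
      environment.systemPackages = [ pkgs.wget pkgs.htop ];

      services.nginx = {
        enable = true;
        virtualHosts."example.com".root = "/var/www";  # placeholder vhost
      };

      services.postgresql.enable = true;
    }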
Nix, unlike say docker, actually solves a problem correctly in a principled way: reliably and reproducibly going from source artefacts and their dependencies to installed binaries/libraries/etc and without conflicts or undesired interactions between other installed binaries/libraries/etc, even different versions of the same thing.
It represents a major innovation and step forward in the depressingly dysfunctional mainstream OS landscape, that still is stuck with all the abstractive power of 80's BASIC but with concurrency and several orders of magnitude more state to PEEK and POKE.
Nonsense. You can run Nix on basically any Linux system (not just NixOS) and macOS. Not just theoretically: I'm doing both, and am not running NixOS itself at all ATM. It's also the only sane way to build Docker images. Since Nix itself of course runs fine inside Docker, I suspect you should be able to build a Docker image on Windows that way too, but I haven't tried.
Also, even if you come up with a forward-facing solution and demonstrate it only in a particular domain, you still, IMO, deserve most of the credit for solving the problem, even if it takes others to translate it straightforwardly to other domains.
In what way does Docker on macOS interoperate better with the rest of the system than a Nix package? I'm pretty sure the answer is going to be "none" – you can build native software, including UI apps, with Nix (not that I recommend throwing away Xcode for your App Store development and switching to Nix). How do you do that with Docker?
I'm less sure about Windows; can you explain a bit more how you use Docker containers for providing "native" Windows stuff (as opposed to as a more lightweight Linux VM replacement when developing something that you really want to deploy on Linux)?
Out of curiosity, why are you using Docker containers on Windows? To emulate static linking, or to provide network/process isolation (and does Docker under Windows provide "proper", i.e. secure, isolation?), or something else entirely?
Nix, the language, is not tied to any platform. Anyone willing to make it run in a new platform, can make the appropriate pull request(s) to make it happen.
Nix, the package manager, can be installed on any Linux distribution or on macOS. It's not confined to NixOS. It runs pretty much anywhere Docker does.
Will it? Nix solves a unixy problem in a unixy way. The Nix/WSL story might get happier at some point to solve a problem for Windows based developers of Linux based servers, but that's effectively just "running nix in docker", except that the interface is WSL instead of Docker.
I know some people write Windows software from Nix, but this is a way for Linux developers to make desktop apps for Windows users, not a solution to any problem Windows developers have.
I can see three broad reasons people develop on Windows:
1. They want to create a game or a Windows end-user UI app (e.g. Overwatch or Photoshop)
2. They write some internal corp tooling or B2B software for Windows shops (e.g. gluing some SAP or Oracle garbage together)
3. They want to write a server application (which means deploying on Linux, which is basically the only server OS left), but they do not want to run Linux as their desktop OS, for corp or private reasons.
The first category probably has limited use for either Docker or Nix (in the absence of first-class Windows support). They might still be useful for tooling, though.
The second category probably has use for Docker mostly as a shitty linker, maybe also for isolation/security (I don't know the Windows Docker ecosystem).
The third category can and totally should be using Nix, and I'd guess it is at least double-digit market share (so not insignificant; e.g. before Google banned them, a large fraction of Googlers had Windows notebooks).
Yes. If you're trying to develop Linux software from Windows, Nix works to the extent that the rest of your software works. But the experience of working on WSL is not the experience of working on Linux. And it isn't likely to help you build your Windows based programs.
No, Nix is not like the other lying, leaky abstractions. Nixpkgs takes great effort not to slap another layer on top, but to actually wade through the muck and fix problems at their source.
You sound very alienated, with a realistically cynical view that accurately reflects most of the industry. But please don't assume everything has to be that way. Tools like Nix and community efforts like Nixpkgs really are the exception.
> In the end, excessively clever people make me nervous, because they leave complex problems in their wake, which I'm not clever enough by half to solve.
I absolutely love this quote and very much feel the same. Often I feel like the industry at large has a bit of a tricky problem, and then in fixing that problem forgets about a whole set of issues the thing being fixed already dealt with, so it's just traded one set of problems for another, now with more complexity. Monolith vs. microservices is probably the best example of this: monoliths have a bunch of problems that make developing with large teams hard, but microservices add a whole host of issues (e.g. transactional consistency, performance) that are much more difficult to solve for most dev teams.
I actually disagree with GP. Nix doesn't add a layer of indirection; instead, it replaces many of the existing layers (common development environment, locking dependencies, reproducible environment, package management, configuration management system, image building, CI/CD).
Nix's real power is being a language that lets you describe all of those things in a declarative way.
> excessively clever people make me nervous, because they leave complex problems in their wake, which I'm not clever enough by half to solve.
They certainly do! And quite often, the reason they leave those problems is either a) they're not clever enough either, b) there's enough actual work that solving them doesn't make them feel clever, and/or c) there are other things they could do that make them look more clever.
I have made some really good money ripping out somebody's too-clever-by-half solution and replacing it with something simple, well supported, and dreadfully unfashionable.
Same here. Then I come back to the site ten years later on another assignment and the solutions I put in are still there. I ask why and am then told that they work reliably, so why rip them out? There is nothing I can say to that but be glad.
Yes, and while they might be very clever, they are still not clever enough to make complex problems simple, as that requires extreme cleverness, experience and insight.
I know it's less of an issue for Go users, but one of the great things about Nix is how your runtime dependencies are (mostly) defined by inference from the programs you've built. So, if you build a program that links against libpq.so, the runtime requirements will automatically include that library!
Since your runtime requirements depend on how you compile a system, you usually have to be quite careful to keep your Dockerfile in sync with what you're building. This busywork just goes away.
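To illustrate the inference, here's my own toy sketch (assuming a main.c that links against libpq; this is not the parent's code, and the names are made up):

    # default.nix — hypothetical: runtime deps inferred, never declared
    { pkgs ? import <nixpkgs> {} }:

    pkgs.stdenv.mkDerivation {
      name = "pg-hello";
      src = ./.;                                   # assumes a main.c using libpq
      buildInputs = [ pkgs.postgresql ];
      buildPhase = "cc -o pg-hello main.c -lpq";
      installPhase = "mkdir -p $out/bin && cp pg-hello $out/bin/";
      # Nix scans the built binary for /nix/store paths (e.g. libpq's
      # location in the RPATH) and records them as runtime dependencies.
    }

Afterwards, nix-store -qR ./result lists postgresql's lib output in the closure even though nothing ever declared it as a runtime dependency.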
Nix involves quite a large upfront time commitment to understand it, but it solves problems that I haven't seen solved elsewhere (well, I guess guix is similar), and does it across all the languages you write for. That it can work across toolchains and languages is a unifying force, and so I think it's one of the better systems for reducing the "exponential fragmentation" referred to above.
Actually, the thing we started using nix for was reliable caching of compiled artifacts, including not just your code, but all the programs and libraries that your code depends on. It's another thing that's difficult to do in a general sense, but if you have a fairly strong notion of reproducible builds, it's possible.
I don't see your point. Kubernetes may be awfully complicated and the wrong solution all along (which, unfortunately, doesn't mean it'll die quickly; C++ was always pretty awful), and Nix may be as well, but their actual point is to help your sanity, not to save a few megabytes of space.
"There are other solutions"?
Well, look: a well-defined, easily reproducible, immutable environment configurable by text files is something that pretty much everybody wants, even if they never even thought about concepts like this. I mean, literally, your grandma would want it on her iPad, because it would mean you could buy her a new iPad and she wouldn't see the difference, aside from the fact that the glass is not broken anymore. Every programmer wants it, because it would free them from that awful system-administrator hegemony. I want it, because when getting a new laptop that I plan to use for slightly different purposes than my current hardware, instead of building up my environment from scratch, gradually installing more and more shit I forgot years ago even exists, I could review a couple of 200-line config files (which I possibly didn't even write myself; the system just managed to express in a well-readable format what I got by experimenting in the shell years ago), remove some lines, add a couple of others, and get exactly what I want on that laptop.
And we don't really want any tradeoffs here. Yeah, as real rock'n'rollers we want a fucking lot, but that's the point: we want to find some way to make all the problems caused by the complexity of making lots of software and hardware work together disappear.
Yet I don't see this ultimate dream come true on every — be it real or virtual — device, for every purpose. So be it a problem of marketing, or the "other solutions" being a bit more of a tradeoff than we are ready to accept, or something else, but these "other solutions" don't seem to do as good a job as you imply. So, my understanding is that we are just "not there yet". Hence "all these technologies popping up left and right, switching names, dying abruptly, getting superseded or hacked with vulnerabilities". There is a dream, maybe even a vision, but there is no solution. Not that I'm aware of. (And, once more, the real solution is not just something that is "possibly possible", but also includes solving the problem of user adoption.)
Thank you for making me feel like I'm not taking crazy pills.
I share the parent's frustration in trying to keep my head wrapped around what all the different players are in the space. I tend to agree that the more I look into Nix, the less it really seems like it will actually take off. But it's utterly confused to segue from that to "Therefore, they must not be solving any problems we really have."
I'm always wondering when I read one of these posts what setup they must have to get work done. Seriously, what do most people do? How is it apparently so easy to share development environments between your co-workers in a manageable way? And if the answer is "each employee starts out with the same laptop baselined in the same way, and then they work on one project which is in a monorepo, whose system dependencies are already installed on the laptop, and those dependencies never change" then... hello? You don't see that this is not a solution? I fundamentally _do not_ understand how anybody can live in the world and not be annoyed by this. What do they do? WHAT DO THEY DO?!
Creating a Docker image is one of many things Nix does. IMO, one of the killer features is actually providing a reproducible dev environment.
Besides the build, Nix can provide a shell that contains all the tooling necessary to build the application. The definition lives in the same repo as the application, and everyone who uses it will get the exact same tools with the same versions.
It's always a good idea to pin the nixpkgs version in your nix-shell.
Once you do this, it saves you from all the "it runs on my machine" problems. No more days spent following a README setting up a Vagrant/VirtualBox machine, which is then slow as a VM with limited memory on a laptop.
Instead you type one command, "nix-shell", and get everything installed locally, as required, in a deterministic way. Agreed, this has been a revelation to teams I've worked on.
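The pinning mentioned above usually looks something like this (a sketch; the URL revision, hash, and tool list are placeholders you'd fill in with a real nixpkgs commit):

    # shell.nix — hypothetical sketch with nixpkgs pinned to one revision
    let
      pkgs = import (fetchTarball {
        url = "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz";  # placeholder
        sha256 = "<sha256-of-that-tarball>";                            # placeholder
      }) {};
    in
    pkgs.mkShell {
      buildInputs = [ pkgs.go pkgs.postgresql ];  # example tools
    }

Everyone who runs nix-shell against this file gets the toolchain from that exact nixpkgs commit.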
I've had 40 years to observe and think about why this is all happening.
What I think happens: new blood enters the industry, ignores the accrued wisdom of the elder old blood, re-invents things, regurgitates, recycles (every time someone makes money with it), and then... becomes the old blood.
Like, there seriously is some sort of industry-wide amnesia effect at three and four orders of magnitude of scale, in my opinion.
There are those who read the docs, and write new ones/edit old ones. There are those who don't read the docs, but write new ones only. There are also those who read the docs, and don't write anything new, too. Then, there are those who don't read the docs and don't write new docs, either.
Alas, all these are tied up in a universal struggle to gain dominance over each other, and it's called the technology startup battle.
Indeed. You can still read Eelco Dolstra's thesis [1] from 2006 and the general principles still apply. Some details have changed (e.g. the builder.sh scripts have mostly been abstracted away), but for understanding the Nix store, the Nix language, etc. it's still completely relevant.
The problem with Docker is that it abstracts away one level too much. Somehow it's there to provide container isolation, but in an abstraction that almost feels like a VM. So caching is pretty hard; in fact, caching is more or less based on the source-code layout. Nix doesn't provide this level of isolation; it doesn't pretend to provide a separate machine. That allows caching of packages, thus eating less disk space and making rebuilds faster.
Not sure how Nix(OS) handles isolation in terms of cgroups/namespaces, though; it seems to rely on systemd for that.
> Now, when I'm being asked to trade away my sanity learning this nonsense in return for a couple of megabytes of disk space that would otherwise have been wasted, I just don't see the value.
Well said. And not only your sanity: by using this stuff in production you're also dragging everyone else's sanity with you. The benefits need to be compelling and aligned with the business needs, and saving a few MBs here and there at the cost of time is definitely not in alignment except in the most exceptional cases.
Fun to read about and toy around with, but let's leave it at that. The first example, using a multi-stage Docker build and Alpine, is fantastic.
> How the heck is our field going to avoid fragmentation when it keeps growing exponentially like this?
It will be corrected the next time we see a major bubble burst/recession. Right now we have literally millions of people around the world working in tech, and everyone has ideas.
If you like Nix, then you should probably try GNU Guix [1], and if possible use Guix System on the server side. This is one of the most modern developments in operating systems.
In the modern, security-conscious world where most companies run their workloads on virtual machines controlled by other companies, it's imperative that applications are deployed on such predictable, secure, and reproducible operating systems.
Guix supports transactional upgrades and roll-backs, unprivileged package management, and more. When used as a standalone distribution, Guix supports declarative system configuration for transparent and reproducible operating systems.
I still think Amazon, Google, and Azure will take a decade or two to build and offer something like this to their customers. Indeed, Google's Fuchsia is trying to do the same, but I feel it's at least a decade away.
It has some of the most wonderful documentation [2] to get started and explore all the details.
As a Nix user, Guix interests me but I have three concerns, namely my pain points with Nix:
1. Small package base.
My workflow includes running Julia on CUDA, editing from VS Code. Remarkably, Guix has both Julia and (in a third-party package repo) CUDA. What it's missing is VS Code. I understand that VS Code is non-free (like Chrome, it attaches trademarks to its branding), but there's no VS Codium I could find either. I also can't find Atom. Am I missing something?
2. Package index mirroring.
Nix has an issue where Python, Node, and other "third-party" packages have to be manually added to nixpkgs. If you want celery or requests, you're in luck, because those are popular. If you want something more obscure, like pywikibot, you have to add it yourself (or run pip/npm yourself and forgo the benefits of Nix/Guix). Does Guix address this?
3. Ease of scripting.
This might be a plus for Guix, since Nix design patterns are arcane (the language is easy; things like overrides and needing to replace broken URLs are hard), but how easy is it to debug failing builds? Is there something like a debugger, or at least a way to shell in and call things yourself?
Bonus #4: broken URLs. Is there a cache? What can be done?
The solution to Python packages is the 'pypi2nix' tool. This is the same style of approach you also see used in the blog post, this time with vgo2nix. These tools take a set of dependencies (e.g. requirements.txt) and generate Nix expressions to build all of them from source.
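The expressions such tools emit boil down to something like this hand-written sketch (the version, hash, and dependency list here are placeholders, not pywikibot's real metadata):

    # hypothetical sketch of a generated Python package expression
    { pkgs ? import <nixpkgs> {} }:

    pkgs.python3Packages.buildPythonPackage rec {
      pname = "pywikibot";
      version = "0.0.0";        # placeholder version
      src = pkgs.python3Packages.fetchPypi {
        inherit pname version;
        sha256 = "<sha256>";    # placeholder hash
      };
      propagatedBuildInputs = [ pkgs.python3Packages.requests ];  # illustrative deps
    }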
Guix has `guix import pypi PACKAGE`, which spits out a complete definition that you can pop into a file or the main Guix repository.
'guix import' also takes a --recursive argument, which makes it recursively consider any dependencies and produce package definitions for those not already available.
I don't think you can get away from explicit package mirroring without compromising having a nixpkgs pinned to a rev.
Luckily it's pretty automatable and painless if your language has a decent packaging system - the way Hackage is mirrored is very nice and automated. And then you can just ask for "vector version X.Y.Z", and if that version is there (it probably is, if nixpkgs is recent and the library is on Hackage), it finds it. And if you don't want to wait for nixpkgs to maintain the Hackage hashes, you can pretty easily maintain them yourself.
I'd really love to use GuixSD. I'd much prefer Lisp to Nix. But the strict licensing is just too much of a handicap. You've really got to build your system around the underlying licenses. I've tried installing it on 3 or 4 systems, and was always tripped up by some critical missing driver.
I admire the ideological purity, but it kinda renders it irrelevant to me.
If I could trouble you for the sake of the common discourse here: would you mind summarizing why one may prefer to use Guix in place of Nix? They seem to be based on the very same ideas, and Guix even admits to being inspired by Nix.
Scheme instead of a bespoke programming language, and more focus on reducing the bootstrap set that compromises the goal of reproducibility shared by both systems.
But the bespoke language has some benefits. It's concise and specifically designed for the problem. Also, Nix has more contributors, and that is very important for having a large collection of recipes to start from.
OK, I'll give another reason: the Nix community is more relaxed and doesn't downvote just because they don't like some comment or somebody disagrees with them.
Do Nix and Guix expressions not interoperate? Is that a fundamental limitation of the system (at least as fundamental as, say, dpkg and rpm), or could one write a source-level translator?
There are obviously some differences between the two, but I'm not really perceiving any real verbosity gap, one way or the other.
Here is a guix package declaration for tmux:
[Edit: fixed indentation]
    (define-module (gnu packages tmux)
      #:use-module ((guix licenses) #:prefix license:)
      #:use-module (guix packages)
      #:use-module (guix download)
      #:use-module (guix git-download)
      #:use-module (guix build-system gnu)
      #:use-module (guix build-system trivial)
      #:use-module (gnu packages)
      #:use-module (gnu packages bash)
      #:use-module (gnu packages libevent)
      #:use-module (gnu packages ncurses))

    (define-public tmux
      (package
        (name "tmux")
        (version "3.0a")
        (source (origin
                  (method url-fetch)
                  (uri (string-append
                        "https://github.com/tmux/tmux/releases/download/"
                        version "/tmux-" version ".tar.gz"))
                  (sha256
                   (base32
                    "1fcdbw77nz918f7gqc1ga7zlkp1g112in1h8kkjnkadgnhldzlaa"))))
        (build-system gnu-build-system)
        (inputs
         `(("libevent" ,libevent)
           ("ncurses" ,ncurses)))
        (home-page "https://tmux.github.io/")
        (synopsis "Terminal multiplexer")
        (description
         "tmux is a terminal multiplexer: it enables a number of terminals (or
    windows), each running a separate program, to be created, accessed, and
    controlled from a single screen.  tmux may be detached from a screen and
    continue running in the background, then later reattached.")
        (license license:isc)))
Here is (roughly) the equivalent in Nix:
    { stdenv, fetchFromGitHub, autoreconfHook, ncurses, libevent, pkgconfig, makeWrapper }:
    let
      bashCompletion = fetchFromGitHub {
        owner = "imomaliev";
        repo = "tmux-bash-completion";
        rev = "fcda450d452f07d36d2f9f27e7e863ba5241200d";
        sha256 = "092jpkhggjqspmknw7h3icm0154rg21mkhbc71j5bxfmfjdxmya8";
      };
    in
    stdenv.mkDerivation rec {
      pname = "tmux";
      version = "2.9a";
      outputs = [ "out" "man" ];
      src = fetchFromGitHub {
        owner = pname;
        repo = pname;
        rev = version;
        sha256 = "040plbgxlz14q5p0p3wapr576jbirwripmsjyq3g1nxh76jh1ipg";
      };
      nativeBuildInputs = [ pkgconfig autoreconfHook ];
      buildInputs = [ ncurses libevent makeWrapper ];
      configureFlags = [
        "--sysconfdir=/etc"
        "--localstatedir=/var"
      ];
      postInstall = ''
        mkdir -p $out/share/bash-completion/completions
        cp -v ${bashCompletion}/completions/tmux $out/share/bash-completion/completions/tmux
      '';
      meta = {
        homepage = http://tmux.github.io/;
        description = "Terminal multiplexer";
        longDescription = ''
          tmux is intended to be a modern, BSD-licensed alternative to programs such as GNU screen.  Major features include:
          * A powerful, consistent, well-documented and easily scriptable command interface.
          * A window may be split horizontally and vertically into panes.
          * Panes can be freely moved and resized, or arranged into preset layouts.
          * Support for UTF-8 and 256-colour terminals.
          * Copy and paste with multiple buffers.
          * Interactive menus to select windows, sessions or clients.
          * Change the current window by searching for text in the target.
          * Terminal locking, manually or after a timeout.
          * A clean, easily extended, BSD-licensed codebase, under active development.
        '';
        license = stdenv.lib.licenses.bsd3;
        platforms = stdenv.lib.platforms.unix;
        maintainers = with stdenv.lib.maintainers; [ thammers fpletz ];
      };
    }
I feel like (inputs `(("libevent" ,libevent) ("ncurses" ,ncurses))) is pretty bad compared to buildInputs = [ ncurses libevent makeWrapper ]; even if you go by token count. (I think it's 16 vs. 8.) A different problem is that syntactically the Scheme version looks like a call to a function called “inputs” and I don't think it is; that depends on context. In general in Lisps the interpretation of everything depends on syntactic context, so you have to do a lot of processing consciously that you can do subconsciously in languages that have a syntax.
(There's an indentation error in either your example or my browser that makes that input clause appear to belong to the origin clause rather than the package clause, btw. The extra redundancy of the different kinds of delimiters makes that error harder to make in Nix. I wrote about this more at length in http://www.paulgraham.com/redund.html )
The module imports at the top are a lot more egregious but that's because they're using Guile’s module system naked; it's not really the fault of Scheme's syntax per se and I think you could hack together some kind of macrological solution.
I think Scheme is brilliant and probably a better choice, but I think the syntactic cost is pretty heavy in your example.
When it comes to Nix and Guix, though, these are kind of minor details. More important questions are things like “does it have the software I want in it” and “how reproducible is it” and “how do I figure out what's broken”.
On the other hand you've got 'stdenv' all over the place in the Nix example, e.g. stdenv.lib.licenses.bsd3 vs license:bsd3. Also stdenv.mkDerivation is kind of an eyesore compared to define-public / package.
One is nicer than the other in a few different minor ways, but overall I think it's basically a wash. I'd not consider verbosity a factor if choosing between the two.
>(There's an indentation error in either your example or my browser that makes that input clause appear to belong to the origin clause rather than the package clause, btw.
Sorry about that, I botched the indentation when I pasted that into my scratch buffer, which had unbalanced parens in it. That's on me.
Those two package descriptions don't appear equivalent. The Nix one includes tmux-bash-completion and some extra build configuration compared to the Guix one, as well as a much more verbose description.
I meant both to be examples of the general look and feel of each DSL. They aren't precisely equivalent, but I do think they're illustrative examples of the two DSLs.
After deleting the bash completion stuff and replacing the verbose description with Guix's, it cut the package from 63 lines down to 36. Deleting the blank lines cut it down further, to 27. For comparison, the Guix package (which had no blank lines to begin with) is 35 lines.
Here's the trimmed Nix derivation:
    { stdenv, fetchFromGitHub, autoreconfHook, ncurses, libevent, pkgconfig, makeWrapper }:
    stdenv.mkDerivation rec {
      pname = "tmux";
      version = "2.9a";
      outputs = [ "out" "man" ];
      src = fetchFromGitHub {
        owner = pname;
        repo = pname;
        rev = version;
        sha256 = "040plbgxlz14q5p0p3wapr576jbirwripmsjyq3g1nxh76jh1ipg";
      };
      nativeBuildInputs = [ pkgconfig autoreconfHook ];
      buildInputs = [ ncurses libevent makeWrapper ];
      meta = {
        homepage = http://tmux.github.io/;
        description = "Terminal multiplexer";
        longDescription = ''
          tmux is a terminal multiplexer: it enables a number of terminals (or
          windows), each running a separate program, to be created, accessed, and
          controlled from a single screen.  tmux may be detached from a screen and
          continue running in the background, then later reattached.
        '';
        license = stdenv.lib.licenses.bsd3;
        platforms = stdenv.lib.platforms.unix;
        maintainers = with stdenv.lib.maintainers; [ thammers fpletz ];
      };
    }
That looks almost identical to the Scheme one, with the only real difference being foo = bar; vs (foo bar).
Hardly enough of a difference to change anything "a lot" either way.
Guix package definitions were unreadable to me initially, as I had never used Scheme/Lisp before.
I've written a couple of them now, and the definition above is extremely easy to read. A big part is just formatting & parentheses; I think my eyes just needed a little bit of adjustment time.
Guix is a GNU project, which underpins today's largest ecosystem of OS, utilities, tools, apps, and language-related work. It has a larger community as well.
So although it's inspired by Nix, I personally will choose it, as it has evolved quickly, and if you look at all three aspects (Guix, Guix System, and documentation) it's now better than Nix. Also, last but not least, I work with Emacs Lisp, so I feel at home with Guile Scheme, so I will prefer Guix over Nix.
Personally, I would like Nix to flourish as well; being a non-GNU project, it is able to provide closed-source proprietary packages, which are not part of core Guix. I think a healthy competition between the two is good, and whichever gets popular is good overall for advances in the OS ecosystem. Guix System is a whole new OS, not just a package manager.
And by making the Guix package manager available on other systems, it might move people who see the benefits toward transactional, predictable, secure OSes like NixOS or Guix System.
> Guix is a GNU project, which underpins today's largest ecosystem of OS, utilities, tools, apps, and language-related work. It has a larger community as well.
To be clear, this is talking about the entire GNU project compared to just Nix? All metrics I can find show Nix's adoption is significantly above that of Guix.
I actually tried Guix several times, and for my trouble I got my tmux sessions (and random processes) killed on a box with 32 GB of RAM. #guix on freenode was confused. I intend to wait until Guix is a bit more stable on Ubuntu.
Guix is technically interesting, but as an FSF-endorsed distro it makes running common hardware a challenge. Are there linux-antilibre packages out there for Guix?
I've been using NixOS on my laptop for a couple of years now, and I am still embarrassingly undereducated on how to do even some basic things. I feel I still haven't fully grokked "the Nix way". You need to understand it pretty well to do even basic things like running specific versions of Ruby or Python with specific native libraries. I have spent a couple of hours here and there on Nix Pills, but I think one really needs to set aside some concerted time to get into it properly. I consider myself fairly proficient with Linux in general and never had this kind of steep learning curve with other distros.
(This is not me criticizing NixOS, just noting that they do things very differently, and you really need to invest some time and effort to start being able to solve your own problems rather than relying on what's provided by others.)
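(For reference, the "specific versions with specific native libraries" case that trips people up ends up looking roughly like this once grokked; the interpreter and package choices here are just examples, not a recommendation:)

    # shell.nix — hypothetical: a specific Python plus native libraries
    { pkgs ? import <nixpkgs> {} }:

    pkgs.mkShell {
      buildInputs = [
        (pkgs.python37.withPackages (ps: [ ps.psycopg2 ]))  # example interpreter/packages
        pkgs.libxml2
        pkgs.zlib
      ];
    }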
I used NixOS for 6 months and dropped it for the same reasons. I absolutely love the idea of my entire system being configured through text files, and being easily reproducible, but the amount of work you have to put in to get anything working in that distribution is insane.
Please don't take this personally, but as somebody who programs Rust and quite likes it, comments like yours remind me of people who come to Rust with the mindset of their own language, try it for a bit, and say nothing works and things are hard to get done.
They are right — however it just shows they didn't understand it yet. A bit like if you gave a pianist a saxophone, and after they got the first decent tone out of it they tell you: "Yeah it is nice and all, but I can't see how you can make real music on it."
I think in IT we sometimes forget that new things that work sufficiently differently take time to master. If you were a Nix master and had to learn how to deal with traditional dependency management, you would probably also resign after a while and declare it too hairy to get things done.
Note: there is a chance I'm completely misrepresenting your experience; this isn't meant to be about you specifically.
I think there's a big difference between "I can't see how you can make real music on it." (didn't see anyone say something similar in this thread) versus "I'd have to practice a lot more before I can make real music like I already do on a piano".
Maybe not everyone is ready to invest the required time to see the benefits, and there's nothing wrong with that.
I agree. I still think NixOS is fantastic and didn't mean to make it sound otherwise. I just don't think the time I'd need to invest to be as comfortable with it as I am with my current setup is worth it, and I hope that can change in the future, either through the software itself or through me getting more time to devote to it.
As a side note, it's funny that you mentioned Rust. I gave up on Nix when I couldn't for the life of me get Rust to compile one of my dependencies that needed something to be installed on the OS (it's been a while, I don't remember exactly what it was).
Indeed. I stopped worrying about how I exactly set up a server, because it is all in the configuration.nix of that particular machine. If I want another machine with the same configuration, I just copy it, run morph deploy and it's done.
I bounced off it two or three years ago after trying to get it running a basic GUI desktop environment by following a guide (IIRC an official one of some sort? It seemed legit, certainly) and after an hour or so of installing and faffing about was nowhere near having it working or even feeling like I might, after another hour, get there. Nothing seemed to work right or do what it was supposed to, troubleshooting was hard, and learning some new (and, at least initially, quite unpleasant) language just to configure this one package manager... ugh, no.
For reference I ran Gentoo on an IBM Thinkpad as my main computer for a few years in the oughts, among a bunch of other Linux work, so I, by necessity, at least kind of know what I'm doing with desktop Linux installation and config, from a fairly low level on up. At least that knowledge has proven highly transferable and, over and over again, relevant and useful. I didn't get the impression I was learning anything from NixOS other than how to make NixOS work, and I wasn't even making much headway at that.
You frighten me a bit here. I looked at the article, and the ability to build a web site in a minimal, locked-down environment seemed great. I just wanted to get a taste of it, but it didn't seem to work as he said. Now I wonder.
Not sure I understand your question, but NixOS does have options to test things: "nixos-rebuild test" (to test without saving permanently, so a reboot restores the old system) or "nixos-rebuild build-vm" (which builds a VM image with the config).
I don't see a Nix example there, but the NixOS github looks to have some issues already surrounding provisioning Nix for use in containers.
Containers are a great way to explore a distro without messing with your host, you can at least check out their user space, filesystem layout, and package tooling. The main shortcoming is you don't get to see any kernel-specific features, since you're essentially chrooting into the other distro's rootfs.
I haven't used NixOS outside of VirtualBox, but plan to use it for my next Linux setup.
I believe the reason for build-vm is that systemd is a central part of NixOS, and you can't test it by starting it inside a container (I don't think systemd even allows it).
The nixos-rebuild command actually creates a path with a new configuration, so you could similarly use nspawn and go there. When you use "nixos-rebuild switch", it updates the "/run/current-system" symlink to that new location.
systemd is container-aware; you should be able to directly "boot" Nix containers using tooling like systemd-nspawn, and log in/manage them using machinectl. It will all "just work" with systemd on both sides.
It's been years since I last tried NixOS. One of the things that I really didn't like was that changing something that had a lot of dependents meant that everything that depended on it needed to be rebuilt. That's so even if I knew the change couldn't affect how those dependents would be built.
I know why NixOS requires that. I think that's really cool from a safety standpoint, but it was also really annoying to have to wait so, so long to use the system after an insignificant change. I wish there were an "I know what I'm doing; let me risk screwing up" mechanism. I don't know if there's one now, but I kind of doubt it.
Well, it depends on the change. For example, fixing a misspelling in a manpage could technically break something, but I think anyone would agree that it's fairly low risk. Likewise, there are changes one could make to code that could be easily judged to be low risk at breaking something. Being able to take the risk at your own judgment is what would be wonderful.
In any case, such "bite your face" moments wouldn't necessarily be visible at the build time of dependent packages. Rather, they'd be visible when running them. So you end up waiting for all your dependents to be built exactly as they were before, and they still exhibit the bug you would have also seen using the previous build.
Happy to see more people interested in Nix. At the end of the post there are some links to learning resources; that list is 100% the best way to get started.
If they built their app differently, such as using a temporary directory rather than building it in tree in the release path, they could've used staged builds to cache their steps in docker. What they do get from Nix package management is freedom from dependency hell.
Actually, I prefer habitat (hab) for portable Linux packaging because it supports a crazy amount of caching, it does GC on unneeded packages, it has an internet-visible package repo to publish to and has a simple DSL that I can teach to anyone. I don't use its other features that much, and I don't use it on FreeBSD but I use hab studio on macOS now and then.
Nix is pretty awesome because it almost marries package management and configuration management... if you think about it, the definition of a conventional OS package is an arbitrary one that limits configurability and customization. Gentoo, Arch and others try to work around it, but you really can't unless you can drive and hook intelligently into specification language that built the package and can accept modifications at different points without a lot of kludgy hacks or forks. Furthermore, the conventional package paradigm assumes one version and one configuration for everything, unless you want to get messy with variant packages. Nix and hab solve these issues.
The problem is I can't suggest it in a lot of usage contexts because it would be like if I recommended Rust, Elixir or Haskell to a PHP shop. ;) I really want to but I know it's probably not a good idea. Nix is awesome wherever the badassometer says "above average." Give it a try, but use what works best.
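As a tiny illustration of the variant-package point above (a sketch; whether curl's derivation exposes an http2Support flag in your nixpkgs revision is an assumption on my part):

    # hypothetical sketch: two differently-configured builds coexisting
    { pkgs ? import <nixpkgs> {} }:

    {
      plain   = pkgs.curl;                                    # stock build
      noHttp2 = pkgs.curl.override { http2Support = false; }; # assumed flag
    }

Both variants land in the store under different hashes, so neither clobbers the other.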
>If they built their app differently, such as using a temporary directory rather than building it in tree in the release path, they could've used staged builds to cache their steps in docker
Can you explain this just a bit more, please?
I was just reading the two articles you linked today, and your comment seems there is some piece of insight that's missing.
If you search the multi-stage build article for "cache" the only hits are related to not using cache.
What should I do if I want to leverage the cache as much as possible in a multi stage build?
I've tried to use Nix in the past, but I can't get past the layers of abstraction. Normally I go and consult the relevant docs for what I'm trying to configure (Postgres, systemd, etc.), but with Nix I also need to consult the Nix docs and figure out the discrepancies/map values between them. I love the idea of systems configuration like this, but I'd much rather see less abstraction, à la cloud-init with nixos-rebuild.
For those who use Nix, how do you deal with this kind of abstraction? Additionally, how soon do you see security updates land?
I have to do a bit of DevOps at work, and I absolutely hate every second of it, hence I tried Nix, and I think it is absolutely awesome. As a software developer, I find the ability to take a cryptographically consistent closure of a program's environment and deploy it in a reproducible way across your dev, loadtest and prod machines _the only reasonable way_ to do things. Not to mention that Nix community is very knowledgeable and open to patches and suggestions.
However, after a brief investigation period I abandoned this idea altogether; here's why.
In my opinion, Nix is only worth climbing its steep learning curve when you really utilize its "reproducible environment" concept. As a counterexample, take your home PC environment, which is 1) unique, 2) quickly changing. You don't need to reproduce it very often (how often do you upgrade your laptop?). You don't really need hash-perfect reproducibility there, and a simple Ansible script will work just fine. So the target of Nix should be "serious enterprise", where you deploy hundreds of machines, right?
Well, "serious enterprise" where I am at (banking) will _never_ adopt Nix. There are two main reasons why:
1) NixOS doesn't follow the standard Linux filesystem hierarchy. They get around this by patching everything, both manually and automatically; the nixpkgs repo contains thousands of patches. To put it in perspective, the OS is one of your trust anchors. I don't want to sound like the "no one got fired for choosing IBM" guy, but legal people will scream bloody murder if they see that extensive level of patching going on in the distro, for a variety of reasons, not necessarily tied to security. Some of the patching goes very deep, BTW, and straight up changes the behavior of the software.
2) Secrets management: it doesn't exist in Nix. The nix store itself is world-readable. And there are fundamental problems with cryptographic algorithms that require entropy, which runs against the "reproducible environment" concept.
YMMV, but for me Nix didn't do the job, although I studied it deeply, and contributed some patches to nixpkgs. It looks like a futuristic research project. On the outside it is brilliant, but in brutal reality you need to run the ugliest scripts to patch shebangs and what not.
I don't mean to slight you or the effort you've put into this if you're an author or maintainer, but that's a terrible rationale. Given this choice, your tool doesn't fit well with other software, and from a systems perspective, it feels slightly egotistical.
In the future, file systems will be immutable and only a few sanctioned volumes will be writable (if any). It'd be better if systems like this abstracted away the notion of the file system and operating system altogether. Everything should be virtual and that is how hermeticity and reproducibility should be achieved.
That said, I admire the work you're doing and can't really point fingers as I'm not a contributor.
For what it's worth, the prefix is configurable, but if you can keep the default /nix, then you get benefits like sharing reproducible builds and build caches with everyone else using it.
Your "everything is virtual" will also make it increasingly trivial for a process tree to elect to see some arbitrary filesytem tree at /nix, so that physical volume configuration won't matter and there is no "canonical" filesystem layout to make /nix feel intrusive.
> In the future, file systems will be immutable and only a few sanctioned volumes will be writable (if any).
If any?! Where do I put my files if not on a filesystem? In some corporate cloud? Writable filesystems for corporations but not for the common plebeian? Is that the sort of dystopian future you're envisioning?
If I've misunderstood what you're saying, then I apologize. But I'm alarmed by my perception of what you're saying.
Some IDEs allocate config directories in your homedir: .idea, .vscode, .vim
Some of them do it under .config.
Some systems write to /usr/local
Some things scope to the project you're working in. Python venv, node_modules, etc.
Build systems such as Cargo, Maven, and Pants each do their own thing.
Make, autotools...
Have you looked at an Android filesystem? I don't even know how to describe it.
Everything everywhere is horribly inconsistent.
I'm thinking in terms of workloads, of which building and linking are merely types of work that produce artifacts. There are many types: server workloads, jobs, userland software.
If you semantically declare what a workload needs, what dependencies it has, where those dependencies must exist, and what outputs it produces, everything can be wired together logically and hermetically.
I'm hoping the immutable containers of the Docker/microservice world make their way onto the desktop.
If I can reduce my entire desktop environment into a single config file and have it reliably reconstruct itself, it would be magical. I would freeze and wipe my machine constantly.
Of course writable, mutable volumes will exist for humans, but software can't play in that realm or it will lose hermeticity.
I've sort of gotten off track and am no longer speaking about package managers / build systems. I think there's a more general solution to all of these problems.
> If I can reduce my entire desktop environment into a single config file and have it reliably reconstruct itself, it would be magical. I would freeze and wipe my machine constantly.
You can do this with Nix. I do this with Nix even on non-NixOS machines. The thing is that Nix has a certain "path of enlightenment" that one needs to walk until it becomes a tool that adds certain capabilities[0] to your mental model.
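To give a flavor of it: the pinning half is small. A minimal sketch of a shell.nix (the nixpkgs revision and sha256 below are placeholders, not real values):

    # shell.nix -- sketch; replace REV and the sha256 with a real commit and hash
    let
      pkgs = import (fetchTarball {
        url = "https://github.com/NixOS/nixpkgs/archive/REV.tar.gz";
        sha256 = "<hash of that tarball>";
      }) {};
    in pkgs.mkShell {
      # every machine that imports this file gets the same tool versions
      buildInputs = [ pkgs.git pkgs.ripgrep pkgs.python3 ];
    }

Run nix-shell in that directory and you land in the pinned environment, on NixOS or not.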
All the software is inconsistent, but we can wrap it consistently by whichever means are necessary for that particular thing. Anything from flags setting paths to dynamically patching the source (and describing that operation as code) is possible.
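For example, here's a sketch of what "describing that operation as code" can look like, using nixpkgs' standard overrideAttrs; the patch file here is hypothetical:

    let
      pkgs = import <nixpkgs> {};
      # take the stock hello package, but add our (hypothetical) patch and a configure flag
      myHello = pkgs.hello.overrideAttrs (old: {
        patches = (old.patches or []) ++ [ ./fix-greeting.patch ];
        configureFlags = (old.configureFlags or []) ++ [ "--disable-nls" ];
      });
    in myHello

The override is itself just data in your config, so it gets reproduced everywhere the config goes.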
Taking a top-level directory isn't very good form IMHO. I understand wanting a hard-coded path but /opt/nix seems more in keeping with the Unix philosophy.
So, if you were to build a package assuming /opt/nix instead of /nix, the sets of packages would be entirely different: the store path is baked into every package's hash, so you'd lose the shared binary caches. The "can't" is more of a "seems impractical" than "hardcoded assumption in the nix program itself".
- /nix has been the (default) location for at least 15 years without problems, changing this now just because Apple messed with this would be very strange
- With Nix, every path in /nix/store has its own ./bin, ./share, ./lib, etc. As such, putting /nix anywhere but in / doesn't make much sense
- And honestly, it's just short, fresh and distinctly different. The path symbolizes that Nix isn't bound by the conventions of other distros
And problems with changing this now:
- It would be a huge amount of effort, there's many places where this location is assumed, not only in code, but also by people
- Finding the workforce to actually do this change will be near impossible, as the only benefit is a slightly easier installation for the latest macOS version, which is not Nix's main platform
I find it pretty stunning that Nix has decided to drop support for OS X over ideological reasons like this. There are workarounds, but the reality is that they're upset at Apple's decision so they just dropped support. We're now close to a year into these issue threads. Any software that is this anti-user does not deserve to be used. Nix might be magical, but that sucks.
This comment confuses me. It makes it sound like some authoritative figure came in and said, "Nix shall not support OS X because Apple is evil." But to my eye, this is not at all borne out by the thread.
There's one or two early comments which float the idea that Apple/Catalina's problematic technical change "may force us to drop support for macOS...". That prompted lots of -1's, and then there's a gigantic thread where people are trying to figure out how to adapt, and that's an on-going/non-trivial process.
Did I miss something? Is there actually a policy decision buried mid-thread that's hard to see?
In your comment, I pick up some fear/vulnerability. There's no reassuring voice to say, "macOS Catalina support will be fixed yesterday!" I share that sense of fear/vulnerability, but I think it's a natural trade-off when you sign up for free/open/transparent/community-driven software. For better and worse, you get to watch how the sausage is made in real-time.
> I find it pretty stunning that Nix has decided to drop support for OS X over ideological reasons like this.
What are you talking about? There’s no such decision. The first comment mentions it as a possibility, but the rest of the (very long) thread is exploring different ways to solve the issue.
I think it's a lot more complicated than that. Yes there are plenty of people that use Docker as a package manager, but I think that's a failing of traditional package managers rather than a failing of Docker. You could, for exactly the same reasons, say that Nix is Nix Containers[1] Done Right but if that were true then Nix Containers wouldn't need to exist in the first place.
I use Nix for every project once it grows enough for me to have to pin a directory. And sometimes sooner.
I don't know it that well, but I've picked up on design patterns, use the repl to figure stuff out, and write small abstractions when necessary... and it works pretty great. And the gains grow as my dependencies become polyglot (even C + another language like Haskell... but when you add compile-to-JS and static asset generation into the mix, it's amazing)
Yeah it's hard to learn. No denying it. But every other "alternative" I've found doesn't actually approach it. So I've kind of gotten sick of reading about "alternatives" because it always feels like they don't actually solve the problem Nix solves. I always end up using the alternative+Nix. Classic example of this is the Haskell stack build tool.
I could've bemoaned complexity and leveled various capitalist criticisms when I first ran into it at a job. But I'm so much better off for not having done so.
> The image age might look weird at first, but it’s part of the reproducibility Nix offers. The date an image was built is something that can change with time and is actually a part of the resulting file. This means that an image built one second after another has a different cryptographic hash. It helpfully pins all images to Unix timestamp 0, which just happens to be about 50 years ago.
This doesn’t mean builds done by Nix are bit-for-bit reproducible by default, does it?
There are a lot of other ways to introduce non-determinism in builds, like rand() or network requests (which I think could only be eliminated in a generic way by literally emulating the CPU of the machine doing the build and not allowing external communication?)
It provides the hash of files as a method of verifying input identity. As long as the build is a pure function of the build tools (gcc, make, etc), and the build tools themselves are deterministic, the build outputs should be reproducible. Though I've never seen anything like `-frandom-seed=<input-file-name>` in build files. I think bit-for-bit reproducibility is one of the problems tackled by Guix.
Nix by default runs builds in a sandbox, purposefully making certain operations unavailable (the local home directory, local configuration, network access, etc). I don't think it prevents rand(), but that's less common.
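On the pinned-timestamp point from the quoted article, nixpkgs' dockerTools exposes the tradeoff directly. A sketch, where `mysite` stands in for whatever derivation builds your site:

    pkgs.dockerTools.buildLayeredImage {
      name = "mysite";
      contents = [ mysite ];
      # `created` defaults to the Unix epoch so the image hash is stable;
      # "now" gives a human-friendly date at the cost of reproducibility
      created = "now";
    }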
I still haven't figured out the nix language, how to find which package installs a particular binary if it isn't the name of the package, or how to find config options for packages I do have.
Those three problems are what's holding it back for me. I still enjoy it so far, though.
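On the which-package-ships-this-binary question, the usual suggestion is the third-party nix-index tool; assuming you have it installed:

    $ nix-index          # one-time: build the file-to-package index
    $ nix-locate bin/rg  # then ask which packages provide a path ending in bin/rg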
Most of their complaints about Docker, specifically:
* The package manager is included in the image
* The package manager’s database is included in the image
* An entire copy of the C library is included in the image (even though the binary was statically linked to specifically avoid this)
Can be solved with Docker multi-stage builds[0]. This is essentially a separate build stage inside your single Dockerfile that sets up, builds, and creates artifacts. Those artifacts are then copied over into a later build stage. The resulting docker image is lean, yet the artifacts were built at docker build time, and it's all contained in a single file.
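For illustration, a minimal sketch of the pattern; Go is chosen arbitrarily here and the names are hypothetical:

    # build stage: full toolchain, never shipped
    FROM golang:1.13 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /out/app .

    # final stage: only the artifact
    FROM scratch
    COPY --from=build /out/app /app
    ENTRYPOINT ["/app"]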
My mistake, you're right. I'm thrown off by the idea of someone using a multi-stage build and still having those complaints. Just use/build a final base image that is lean enough for your needs. You're not forced to have any cruft you don't want.
Looks like that author prefixes the month with "M". Dunno why.
Sometimes DD-MM-YYYY and MM-DD-YYYY can be confused, so if one of those formats is used it can be helpful to clarify which field is the month. However, YYYY-MM-DD is typically pretty unambiguous, so it seems odd here.
I guess it's probably meant in analogy to, e.g., "2020-Q1" (which would be the first Quarter of 2020) or "2020-H1" (which would be the first Half of 2020).
I do it because it's visually unambiguous. It is clear what is the day number, what is the month number and what is the year number. I have no idea where Apple got this, but it's the date format used in the iOS Lojban locale. I kinda got used to it because it's also faster for me to type.
This is true, it takes quite a bit of effort to learn it. I think what Nix needs is perhaps a series of tutorials targeting the common problems that Nix is useful for solving. There are plenty of things that are abstracted in nixpkgs, so perhaps use those as building blocks and explain how they are implemented in separate articles.
After reading the web page posted above, and the comments in here, I've reached the conclusion that Docker, Nix, etc are trying to solve a problem that shouldn't exist, and that the actual problem is at the operating systems' provided mechanisms for dealing with programs and dependencies.
More specifically, it seems like the Unix filesystem layout, with its bin, etc, usr and other well-known directories is part of the problem.
Also, the environment variables mechanism is part of the problem.
Why couldn't we have a much simpler mechanism for running things? For example:
a) each application should live in its own directory.
b) each file or folder should be versioned.
c) common files for applications, for example fonts, should be installed in the application folders themselves, and the filesystem shall provide hooks for external systems to be notified when these files are created, moved or deleted. With this system, important files, for example tool executables, would automatically be found and invoked without the need for adding a path to them.
d) runtime program dependencies should be provided via a text file with parameters and not by environment variables. This text file would have to be specified each time a program runs, and the user could create specific scripts that run the program with specific parameters. Programs should have the default parameters file attached in the same folder. How's that different from environment variables? Well, with this system there is no system-wide dependency of a program on a unique string, and therefore the need to override environment variables for specific programs wouldn't exist. I.e., programs shouldn't need environments to run, just lists of dependencies.
With the above solution, making repeatable and sharable development and test environments would be extremely easy: just copy the application folders and the configuration files one needs. If something is missing, copying it from another source would be the solution.
Ultimately, having a set of programs at hand and wanting to combine them to produce some output is exactly the same as having a bunch of functions in a program and invoking them in a specific order to get some result. And while in programming we have recognized the problem of global state and have taken measures to minimize it, for example with functional programming, we haven't done so for programs.
In other words, the problem is that we are trying to compute things using global state, once more, this time at the operating system level!
For me, that's the fundamental issue. All these tools are welcome, but unless the fundamental problem is solved, no real progress will be made and there is always going to be tool fragmentation (i.e. do we use Nix? Docker? Some other solution? etc).
> the actual problem is at the operating systems' provided mechanisms for dealing with programs and dependencies.
> More specifically, it seems like the Unix filesystem layout, with its bin, etc, usr and other well-known directories is part of the problem.
>
> Also, the environment variables mechanism is part of the problem
The way I see it, Nix exactly solves the "fundamental problem" as you describe it!
I still don't see the light. I'll try and lay out my reservations, starting with the pain points of the docker image (out of order to coalesce some points):
> The package manager’s database is included in the image
> Most of the files in the docker image are unrelated to my website’s functionality and are involved with the normal functioning of Linux systems
So remove it...? Docker multi-stage builds[0] make this really easy, though I still use separate containers for different tasks (building, testing, etc), and sometimes even multiple "stages" of containers for languages that take a long time to build (Haskell).
> The package manager is included in the image
Assuming you don't want the package manager itself, why not just use a scratch[1] or distroless[2] image?
Well of course it's harder than one might expect to use those options (distroless is actually fine), because of the lie of "statically building" anything on most distros other than alpine, which brings us to the next pain point...
> An entire copy of the C library is included in the image (even though the binary was statically linked to specifically avoid this)
Using alpine gets you much closer (if not all the way there) to a proper static binary w/ musl libc. You may have to look into things like replacing the use of libnss or certain system calls with golang libraries, however[3].
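For Go specifically, the knobs involved are usually something like this (a sketch; whether you need the tags depends on what your binary actually calls):

    # with cgo disabled, the runtime uses pure-Go DNS resolution and user lookups
    # instead of dlopen-ing glibc's NSS plugins at runtime
    CGO_ENABLED=0 go build -tags 'netgo osusergo' -o app .

The result can then go straight into a scratch or distroless image.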
My biggest problem with nix is that it just isn't worth the effort -- the ~5+ paragraphs of nix tools and scripts are just not at all attractive to me personally when I know roughly a day's worth of fiddling to get a proper static build is good enough most of the time. I've heard amazing things about nix, nixops, and all the tools therein, but there just isn't enough pain IMO to warrant completely changing how I do things. Never mind the fact that disks get cheaper and cheaper, networks get faster and faster, and people are starting to look into pre-emptively sharing container images across meshes of nodes with tools like Dragonfly[4].
Another point that is somewhat related -- if nix is good, guix has to be better on some level (excluding factors like ecosystem) purely because it uses a full-blown language for the config (i.e. terraform vs. pulumi).
It doesn't seem like nix will hold enough value before unikernels "arrive", at which point the problem of taking your distribution along with the program you want to deploy disappears altogether.
> It doesn't seem like nix will hold enough value before unikernels "arrive", at which point the problem of taking your distribution along with the program you want to deploy disappears altogether.
Unikernels still don't solve the build problem, only the distribution. And you can get that workflow today, by shipping around container or VM images. And Nix can even build either for you!
Correct me if I'm wrong, but they do solve the build problem -- you can build static libraries way easier if you replace all the system stuff libc was doing for you with approaches like unikraft[0] or rumprun[1].
I don't have time to look at rumprun right now, but from the Unikraft page:
> The Unikraft build tool is in charge of compiling the application and the selected libraries together to create a binary for a specific platform and architecture (e.g., Xen on x86_64). The tool is currently inspired by Linux’s kconfig system and consists of a set of Makefiles. It allows users to select libraries, to configure them, and to warn users when library dependencies are not met. In addition, the tool can also simultaneously generate binaries for multiple platforms.
So it doesn't solve getting the makefiles (or presumably the source code depending on how it's organized), the compiler, or assembling multiple projects into one coherent build.
Using Make as the big unifier also sounds a bit scary, since it's so easy to screw up dependency lists or introduce accidental impurities, because it has no way to verify either.
> So it doesn't solve getting the makefiles (or presumably the source code depending on how it's organized), the compiler, or assembling multiple projects into one coherent build.
Ahh I see what you mean -- I was more thinking about the library issue, but yeah building is definitely still really hard.
I think good support for unikernels is a long time off, and will obviously vary by language -- maybe the usage that finally breaks through will be tight integration with some lower-level compile tools. For example, if a company were to target GraalVM (which is really pushing itself as the better-than-stock VM for a bunch of languages) and LLVM as integration points for unikernels, I think they could make a convincingly easy toolchain (without modifying developers' current toolchains).
This is so wrong on so many levels. So many complications. Monstrously huge binaries. Static linking. Docker and Nix are a mess of complex instructions, all custom. With RPM, it is one command, rpmbuild, with a standardized spec file and no need to link statically. No Docker image needed, or anything else. Just plug into a yum repository or into a Kickstart install group and you're done. No code or compose files or playbooks to write. Turn the VM on and have it automatically PXE / DHCP / TFTP boot and Kickstart install it with the desired software and configuration. Without writing a single line of code or lifting a finger.
"Custom DHCP config"?!? You do realize DHCP stands for "dynamic host configuration protocol"? That's exactly what it was designed for... And if you can't get "working PXE" (PXE is implemented in firmware, by the way), you might want to brush up on the basics of putting together an IT infrastructure, or this moment might be a good time for a career switch out of IT... There are no "Kickstart manifests", it's a ruleset. There is no "RPM registry" (this isn't Windows™️) or anything of the sort. There are no "custom spec files" as it's a formalized format with grammar and lexicography which the OS's software management subsystem understands - it's like putting a VHS tape into a video cassette recorder, modular and standardized.
Day two is where a software deployment server and a clearly defined change management process according to the Department of Defense capability maturity model come into play.
The PXE client is implemented in firmware, but for the PXE client to have anything to talk to, there's a sizable amount of moving parts to configure just right in order for it to work well.
Those moving parts often include telling your dhcpd to include some additional payloads to offer to clients when they obtain IP leases. This is in tandem with getting tftpd running, writing your ks.cfg, and writing a suitable pxelinux.cfg so that you can boot into Anaconda to install the OS.
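To make that concrete, the dhcpd side is typically only a couple of extra lines; a sketch assuming ISC dhcpd, with made-up addresses:

    subnet 10.0.0.0 netmask 255.255.255.0 {
      range 10.0.0.100 10.0.0.200;
      next-server 10.0.0.10;    # the tftpd host
      filename "pxelinux.0";    # boot file the PXE client fetches over TFTP
    }

The config itself isn't the hard part; the hard part is controlling the DHCP scope at all, as noted next.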
This is all assuming that you even have the appropriate access to do these things in our increasingly cloud-centric world. Amazon, Microsoft, and Google all won't let you use PXE to provision VMs. Additionally, networks that are IPv6-only pose a problem for netboot installations: PXE doesn't work so well there unless you make the conscious decision to use DHCPv6 over stateless autoconfiguration.
--
Now, this all said, you're on something resembling the right track with respect to OS packages. There's a common pattern (at least in my experience) in the public cloud world where you use something like Packer to build a machine image for provisioning a VM in, e.g., EC2. I've used Kickstart to great effect here in conjunction with some rules in the build system for autogenerating specfiles to hand off to rpmbuild.
Combine that with a set of functional and integration tests (the apps I wrote during this period of my life were in Ruby, so the tests were in Ruby, but the technology doesn't matter; you could just as well use ksh93 or pdksh) to ensure that the image you built is what you expect, and it's actually rather killer.
The only things that yum/rpmbuild truly lack that Nix gives us for free are (1) a package build environment, on Linux at least, that's hermetic all the way down to libc; (2) the ability to install more than one release of a particular library at a time without confusing the dynamic linker machinery; and (3) a way to signal when to rebuild certain packages based on changes to their dependencies.
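Point (2) falls straight out of the store layout: the hash of the entire dependency graph is part of every path, so two releases simply live side by side (hashes here abbreviated and invented):

    /nix/store/8is3yf2...-openssl-1.0.2u/lib/libssl.so.1.0.0
    /nix/store/bm7xn09...-openssl-1.1.1d/lib/libssl.so.1.1

Each binary's RPATH points at the exact library it was built against, so the dynamic linker never has to guess.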
Apropos deployment and running in production, the intent then is to keep it immutable (you disallow SSH as a general rule after day one, right?) and for your change management process to consider the VM as a whole deployment unit. Want to deploy updated configuration? Deploy new VMs. Want to deploy a new software release? Deploy new VMs.
SSH is not disallowed, because there are still cases where it is necessary to ssh in order to service faulty hardware. However, ssh is disallowed for making any ad hoc changes to any system, unless that is done by removing, installing, or upgrading a configuration OS package. It would be better to say that any commands which manipulate system state by hand are disallowed. In the worst-case scenario, every system is a throwaway one: since the configuration is 100% OS-packaged, I can just re-instantiate a server, be it physical or VM. I don't have AWS or hosting trouble because I didn't fall for the external hosting hype, and having everything system engineered, I run everything on-premise for peanuts. AWS or Azure just cannot compete with me on cost.
I'm left with just questions at this point. Assume production, not development or UAT or staging or integration, etc.
How do you enforce that kind of compliance on SSH sessions? How do you audit for SSH sessions that invoke commands that mutate the state of the system? How do you account for configuration drift in the event that an SSH session mutates machine state outside of those compliance and auditing mechanisms?
Why are you SSHing in to curate installed packages on machines manually instead of letting a deployment agent/service take care of that?
Furthermore, what kind of environment are you operating in where your response to a hardware fault is to SSH in instead of immediately evacuating the workloads from the faulty machine, replacing it with a new one, and taking the machine to a triage location for further investigation?
Do you operate at a small enough scale where leaving a faulty machine active on the network, installing packages by hand, and SSHing to individual machines is actually sensible?
Do you have a lot of false hardware alarms where the response is to SSH in, run a few commands, and then bail when the going looks good? What kind of monitoring practices do you employ?
SSH in and get caught messing with the system manually - get fired. Keeps everyone honest.
The environment is close to 90,000 systems. We service the hardware because it is configured redundantly, for example the / filesystem is always on a mirror. The physical systems are system engineered and configured to withstand multiple hardware faults without any loss of service.
You keep saying these things, but I'm less and less convinced that you have a significant hand in how these things are "system engineered," as you put it. I'm also concerned by how few of my questions you actually answered.
"SSH in and get caught messing with the system manually" is an extremely hand-wavey answer, especially in an environment of O(1e6) machines. I'd expect such an environment to have a rather significant degree of automated compliance and audit controls in place.
You'll also note well that I didn't say not to service machines. I'd asked why you would prefer to leave a machine with an implied potential data-destroying fault in the rack than immediately swap it out with a new machine that has been verified not to destroy data. The servicing part comes into play here in order to mark the previous-faulty machine as no longer faulty.
In particular, rack space is expensive, and certifying a machine as fit for service again can take a long time, so it's a bit of a waste of real estate to leave a machine that can't serve production traffic in place when you can quickly swap it out with one that can.
Furthermore, redundancy doesn't help when you have an HBA or an ethernet controller or a memory controller or CPU or a piece of PCIe interconnect hardware or a hard disk with a firmware bug that corrupts data. At that point, your redundancy becomes a liability, and you could wind up propagating corrupt data throughout your system.
This all said, I'll agree that the louder and more obvious hardware faults like disk crashes are relatively easy to cope with, so in those cases, I'd likely also leave the machine in the rack and just perform the respective component swaps. The place where I predict we'll disagree is whether to evacuate the machine's workloads prior to servicing.
So, again, I'll assert that you likely have less of a hand in the engineering of these things than you're letting on. That's nothing to be ashamed of, but it does make your responses seem less genuine when you try to pass off your experience working in such an environment as expertise in designing such an environment.
I have single handedly designed entire data centers, irrespective of which impression you might get from my responses, which are admittedly terse because I'm usually on a mobile phone tap-tapping them out, and that starts to severely piss me off really quickly. Like now. Which is why I'm going to stop right here.
> "Custom DHCP config"?!? You do realize DHCP stands for "dynamic host configuration protocol"? That's exactly what it was designed for...
Usually used for assigning network configuration dynamically. I'd venture a guess that most people never touch their DHCP configuration, aside from assigning DNS and IP pools.
And good luck getting your hosting provider to give you raw access to their DHCP config.
> And if you can't get "working PXE" (PXE is implemented in firmware, by the way), you might want to brush up on the basics of putting together an IT infrastructure, or this moment might be a good time for a career switch out of IT...
There is a lot more to stable PXE than having a client. Never mind the insanity of effectively giving anyone who can broadcast IP packets root access to all new servers.
> There are no "Kickstart manifests", it's a ruleset.
Different words, same thing.
> There is no "RPM registry" (this isn't Windows™️) or anything of the sort.
Correct, guess I've done too much Docker recently. Meant RPM repository. Unless you're shipping around your RPMs manually? In which case, I refer back to the day two question marks.
> There are no "custom spec files" as it's a formalized format with grammar and lexicography which the OS's software management subsystem understands - it's like putting a VHS tape into a video cassette recorder, modular and standardized.
So.. the same as Nix definitions then? Except with Nix you don't have to worry about whether the system was set up for VHS or Betamax.
> Day two is where a software deployment server and clearly defined change management process according to the department of defense capability maturity model come into play.
So.. completely unrelated to the process you've advocated so far?
Not unrelated, in tandem. Once the systems are provisioned, the software deployment server is used to mass deploy components and bundles (OS packages) and is integrated into the change management process (for example, no deployment to production without an approved change request identifier).
Yes - but then their final stage is built on top of an Alpine container, and they complain about Alpine's package manager + OS files being included in the image.
If their final stage was based on scratch or distroless, the Docker image size would have come out to the minimal ~90MB result too.
Yeah, I don't understand that part either. If you want to be able to shell into the container and poke around, use Alpine by all means, but then don't complain about a C runtime being included. If you want a minimal container for your static executable, use scratch.
You don’t typically include the entire toolchain in a container with docker because of the size. You also want to construct things a bit carefully so you get proper build caching, but it’s doable.
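The careful construction is mostly layer ordering: copy the slow-changing dependency manifests first so their cached layers survive source edits. A sketch for a Go project (filenames hypothetical):

    FROM golang:1.13 AS build
    WORKDIR /src
    # this layer is only rebuilt when go.mod/go.sum actually change
    COPY go.mod go.sum ./
    RUN go mod download
    # source edits only invalidate the layers from here down
    COPY . .
    RUN go build -o /out/app .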
Yes, but it goes to the trouble of making a static, dependency-free executable in the first stage, and then builds the second stage from alpine anyway.
This seems like the real answer to this problem. I really like Nix conceptually, but the problem it's being used to solve in this case is already solved by Docker, and using a multistage build with a Distroless stage relies on fewer dependencies, fewer tools, and many fewer lines of config.