Nix 2.0 Released (nixos.org)
278 points by jack_jennings on Feb 23, 2018 | 120 comments



I love the core concepts behind Nix, and have great respect for their engineering abilities. However I am skeptical of their ability to achieve broad adoption beyond their current community of passionate experts.

There are two reasons for my skepticism:

1. User experience. Unless you're one of the "passionate experts", the Nix user experience is pretty terrible. The learning curve is punishing compared to competing systems.

2. Elitist culture. In my experience, the Nix community is too smart for its own good. Their technical foundation is so far ahead of mainstream systems, and their technical design so satisfying to passionate experts, that they've forgotten how to live a day in the shoes of a mere mortal. Try pointing out flaws in the user experience, or the need to offer more pragmatic ways to migrate existing systems, and you will be met mostly with derision and reminders of Nix's superior engineering. But superior engineering is not everything. If you want to spread the amazing potential of Nix to everyone, then you need to compromise with a flawed, imperfect world. You need to meet users half-way, and guide them to the promised land, instead of waiting for them to show up on their own. Otherwise someone will come along who will do it for you.

All this is eerily similar to what happened to functional programming communities.


This, 100%.

Pain points I've run into:

1. Lack of clarity as to how language/application package managers interact with Nix (pip, stack, Vundle). Pretty much every time I've asked about this, I've been told to go use `nix-shell` or to install things through Nix. Increasingly, when I get odd behavior with applications I install through Nix, my first resort is to uninstall the Nix version and install it from apt; it might be a bit older, but I'm sure it'll work as expected. I've gone through the apt package -> Nix package -> apt package cycle three times from what I remember off the top of my head, with python/ipython, Haskell tooling, and Vim.

2. Nix on Ubuntu feels like a second-class citizen. Things that interact with graphics drivers often don't work properly, e.g. video players. (I understand that there are technical reasons for this, but there's no warning that it's the case.) There only appears to be online package search for NixOS (https://nixos.org/nixos/packages.html#) and not for Nix on other platforms. Nox helps, but it doesn't seem to be equivalent feature-wise (no ability to see the info you get by clicking the package name) and is also slow.

3. In general package quality is not great for less frequently used packages. Inkscape was missing potrace for a while. Rarely used packages go unmaintained.

4. Poor CLI. Needing to pass `-A` to a lot of commands to use them the "right way" smells of a poorly thought out design. No feedback or suggestions if you type the wrong thing to `nix-env -e`. It looks like there are major changes to this in 2.0, so this might have been improved.

Despite being someone who's gotten their toes wet contributing to nixpkgs, I'm likely not going to be installing Nix when I upgrade from Ubuntu 16.04 to 18.04.


> Needing to pass `-A` to a lot of commands to use them the "right way" smells of a poorly thought out design.

That stems from the namespace problem that Nix devs seem to think doesn't exist.

Packages that have the same name, or belong in a different namespace (hackage, pip, etc.) are confusing to find from a user's perspective. The real problem is that every package in a channel has to go into the same global namespace. Practically every package manager has this problem, but, in my opinion, Nix deals with it the most poorly.

To make matters worse, it isn't obvious which namespace .nix files are supposed to reference, or what functions exist globally. Nix really does need a lot of UX work, and I'm glad to hear it is getting some.
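For the unfamiliar, the practical difference looks something like this (the channel attribute prefix, `nixos` here, varies by setup):

  nix-env -i firefox         # name-based lookup: scans every package name in the channel
  nix-env -iA nixos.firefox  # attribute path: exact and fast, but you must know the namespace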


> Rarely used packages go unmaintained.

I have a related pain-point. I want to maintain nix packages on their own long-lived branches. I also want to base these branches on the latest stable release rather than unstable.

This would require upstream nixpkgs to treat my package branches as equal peers and to pull their changes with 'git merge' like in the Linux kernel workflow.

Status quo seems to be that upstream expects people to work directly on the unstable branch and to make contributions using short-lived topic branches that can be rebased at any time.

I spent a lot of time packaging Pharo Smalltalk for nixpkgs but I found it too complicated to stay in sync with upstream and so I ended up orphaning that package and redoing it on a downstream overlay repo that insulates me from the upstream nixpkgs workflow.

This was discussed a bit over at https://github.com/NixOS/nixpkgs/issues/27312 but perhaps didn't get enough visibility because I also stubbornly prefer to work on Github and don't follow the mailing list.


I built Darch to give me what I crave from NixOS.

Immutability.

https://godarch.com/


What does... how is this relevant to anything I said?


If I may rush to the commenter's rescue: Darch addresses your issues with NixOS by combining a familiar set of tools (Arch Linux) with stateless architecture. It's a fantastic welding of extreme package availability and best-in-class documentation with declarative dependability.


It's not clear to me that this really solves the same problems that Nix does.

I'd imagine this solution inherits any problems that pacman has. The Arch wiki states that "if two packages depend on the same library, upgrading only one package might also upgrade the library (as a dependency), which might then break the other package which depends on an older version of the library." (https://wiki.archlinux.org/index.php/System_maintenance) This is one of the problems Nix does not have by design; in fact Nix lets you mix and match packages painlessly.

This also doesn't seem to allow unprivileged users to install packages, which is kind of a side benefit of using Nix.

Docker-esque solutions' issues with reproducibility are well known; I look at https://godarch.com/concepts/images/, particularly how packages are installed:

  #!/usr/bin/env bash
  pacman -S steam --noconfirm
and I see the same issues that https://blog.wearewizards.io/why-docker-is-not-the-answer-to... warns of. Heck the author even admits this is the case in an adjacent comment: "I want a machine that can be declared and rebuilt deterministically (at least semi-deterministically, rolling distro and all)" (emphasis mine).

Also, frankly, I don't want to run my personal computer like a server, with complete immutability and the need to build fat image files every time I want to try out an additional program. That seems to me like a workflow that's better suited for servers where spending a few minutes for a deploy isn't a blocking operation, and where stateless service design is considered best practice.


> if two packages depend on the same library, upgrading only one package might also upgrade the library

Yes, you are inheriting the nuances of whatever package manager you choose. Maybe another distro can give you truly fixed package versions? You can also use apt to pin versions. This isn't something that Darch introduced, but it also isn't something it solves. So yes, if you need 100% deterministic, Nix is your guy. I don't think this is a big issue though on Ubuntu systems, or most non-rolling distros. Their apt updates are typically well tested and don't bump major versions.

> need to build fat image files every time I want to try out an additional program

You can install your packages and use them when you like, without requiring a rebuild. Hell, that is even part of my workflow for some applications like docker. I have dotfiles with "install-docker" and "install-vmware" aliases that install these whenever I need them, instead of baking them into images.

> That seems to me like a workflow that's better suited for servers where spending a few minutes for a deploy isn't a blocking operation, and where stateless service design is considered best practice.

I disagree. If that were the case, then why Nix? Obviously, stateless is valuable. I have multiple machines that I perform common tasks on. I hate having to always manage the updates when I get back to each one of them. With Darch, I can deploy one single image to all devices, and be confident they will never drift in installed packages/configuration. I never again have to ask myself "what machine am I on again?". Stateless may be the definitive way to run servers, but that by no means restricts it to only servers. I have been running Darch for a few months now, and I find it incredibly useful and calming.


> So yes, if you need 100% deterministic, Nix is your guy.

There are two orthogonal problems here: a lack of package isolation and nondeterminism.

Nix isolates packages, such that updating one package has no impact on any other packages (with unavoidable exceptions like the graphics driver, presumably).

You're right that package isolation isn't much of a problem on non-rolling distros. One of the benefits of Nix is that you get some of the stability and predictability of an LTS distro with the freshness of a rolling distro when desired, without having to deal with package conflicts.

Incidentally, Nix doesn't need to be used in a deterministic manner. In fact, I don't think most desktop users of Nix care too much about determinism for most packages they run. I certainly don't; I'm happy to follow along with whatever arrives in my channel. Nix has features that support determinism, and I'm certainly glad they exist for when I end up needing them, but they're not necessarily why people use Nix.

> Obviously, stateless is valuable.

When I said "stateless", I was referring to the whole "cattle, not pets" view of servers, where the running state of any particular server is unimportant, with nothing in the filesystem being of value. I was arguing that needing to build a new image and reboot in order to change which packages are installed is a poor fit for the desktop use case, where frequent reboots are much more inconvenient than for the server use case.

I'm not sure what taohansen meant when they used the word "stateless"; they seem to mean something different when they say that.

Anyways, this point is not really applicable anymore, since you've stated:

> You can install your packages and use them when you like, without requiring a rebuild.

Presumably if you install additional packages uncontrolled by your tooling, then your systems can start to drift away from each other.

Nix does not have this compromise; there's no build step. At any given point in time you can reproduce whatever configuration you have on one machine on another machine, regardless of how piecemeal you arrived at that configuration.


Do you have more of the 'why' than the 'what' described there?

The samples seem to be targeted at endusers/personal computers (with images like 'gaming' and Steam mentioned), while most immutable systems I see are deployment/server environments.

I've dabbled in Nix before and Darch _could_ be interesting. Can I read more somewhere?


You can take a look at my personal recipes to see it in action.

https://github.com/pauldotknopf/darch-recipes

I think the documentation does a good job at giving you an idea of how it works. It doesn't take long to get through.

https://godarch.com/concepts/

The "why" is exactly the same reason NixOS exists. I want a machine that can be declared and rebuilt deterministically (at least semi-deterministically, rolling distro and all). I looked into NixOS, but the DSL was too much, and the npm/pip/etc stuff was a mess. I am a fan of Arch because of it's package availability and documentation, so I figured out a way to combine the two, using a "Docker-ish" approach.

My machines are built entirely on Travis-CI and pushed to Docker Hub. Once I make a change to my recipes, ~20 minutes later, I can pull a fresh image and boot again.

Another thing I didn't like with non-declarative OSs (non-NixOS) was that if I wanted to just test a package out, after removing it, it would leave shards of config/package dependencies still on my system. With Darch, each boot has a tmpfs overlay, which means I can uninstall/install to my heart's desire, knowing that only things I commit to my recipes will be persisted. For example, I was trying to set up Ruby, and I had to try many ruby environment managers before I found one I liked. After a reboot, I was certain that the other ruby packages I tried were 100% scrubbed from my machine.

I also like the Docker approach, because using layers, I can quickly switch out the "desktop environment" layer to i3/plasma/gnome/etc, or my base image from Ubuntu/Arch/VoidLinux. This makes distro and DE hopping a breeze.

As for using Darch as a server, I would wait until I get the Ubuntu image done. That way, the builds will be more deterministic (instead of using rolling distros). I can see using that for servers, or IoT devices. I also intend to add PXE support to boot these images from the network, making it easy to manage the operating system on a fleet of devices. In summary, it is really up to your recipes and what operating system you choose.


> Poor CLI.

Did you even read the OP? Nix 2.0 has begun to fix this as one of its highest priorities.


Did you even read my comment? I acknowledged that.


My bad. It is at the end of that.


Your second point is exactly what happened to me here: https://github.com/NixOS/nixpkgs/issues/9682

Sure, pinning is great, and the ability to ignore versions seems to be a great design goal. Unfortunately, that design goal is shoved down the throats of users, and makes a very messy - and difficult to traverse or use - namespace where different versions are just thrown in the name without another thought.

NixOS is my favorite Linux distro, and the only OS currently installed on my laptop. It's also frustrating in unnecessary ways.


Thanks for linking that discussion. I'm surprised that you didn't get acknowledgment of the UX issues there, and I'm pretty sure there are people who would like to see the UX situation improve. It just takes work... this release represents a large step forward in UX. I think they'd welcome your input in making 3.0 better! I made a couple small docs fixes the other day.


> Your second point is exactly what happened to me here:

Perhaps I'm wrong, but I not only agree with the Nix dev in that thread; I also think it's kind of the opposite case to what the gp was describing in their second point.

From the gp's post:

> Their technical foundation is so far ahead of mainstream systems [...] they've forgotten how to live a day in the shoes of a mere mortal. Try pointing out flaws in the user experience [...] If you want to spread the amazing potential of Nix to everyone, then you need to compromise with a flawed, imperfect world. You need to meet users [...]

The definition of "user" can be vague, but if you're talking about "mainstream" and "mere mortals", I don't think setting up Haskell dev environments is the primary use-case. As was commented in that thread:

> It's what most distros do. (Only gentoo diverges from the big ones [...]

Favouring simplicity and focusing on supporting only production environments seems exactly the kind of user-focused pragmatism the gp would prefer.

I'm not on Nix (yet), I use Debian; I use Apt to maintain my kernel, browser, video player. I don't use it to install dev dependencies.


> setting up Haskell dev environments

What I was complaining about was the difficulty installing a specific package version. That's trivial with other build systems, but not Nix.

> I don't use [package managers] to install dev dependencies.

I'm under the impression that most people do. Either way, I see the main advantage of using Nix to be setting up dev environments without installing a bunch of global packages, or dealing with a mess of files. Unfortunately, as a user, I find it to be much more of a hassle than it needs to be.


When quoting me, you changed my wording to suit your own argument, which seems to imply you get my point but are choosing to ignore it. (You swapped out "Apt", my distribution package manager, for "[package managers]", which would include both distro and dev language package managers. The latter are for devs, the former for users)


It certainly reads to me like "it" meant "the aptitude package manager".

If you like to use a separate package manager for libraries/etc. that is fine. That is not something I enjoy doing.

Using Debian, I would frequently install a build dependency and forget I had cluttered my install doing so, leaving behind a package I didn't care about or remember, which would keep being updated and possibly break future installs. I read about nix-shell, and have been using NixOS ever since.

NixOS has clear room for improvement, but, unfortunately, some of the devs are adamant that it cannot - worse: should not - be done.


> It certainly reads like "it" was to mean "the aptitude package manager" to me.

The aptitude package manager is the package manager for the Debian operating system. It is for managing packages that make your OS run. Unless you're defining multiple repo versions in your sources.list and using apt pinning, it typically gives you one version of the latest supported software. Other distro/OS package managers do the same. Even Homebrew for macOS deletes older versions and makes them inaccessible. NixOS' package manager does the same here. This is very typical. Portage is an exception.

Separately, Cabal is a package manager for development dependencies for the Haskell programming ecosystem. This is what you should use for setting up such a dev environment. It is specifically developed with this in mind.

Asking your OS package manager to additionally manage the ecosystem of every programming language you may want to develop in seems a bit out of scope, no?

> NixOS has clear room for improvement, but, unfortunately, some of the devs are adamant that it cannot - worse: should not - be done.

Your definition of "improvement" seems to be to handle use-cases it isn't intended for and for which there are specialised package managers, likely introducing huge complexity and spreading resources thin. This seems like something that could also be a disimprovement.


> Separately, Cabal is a package manager for development dependencies for the Haskell programming ecosystem.

And cargo/rustup, go for golang, pip for python, gem/bundle for ruby...

But they all tend to depend on parts of the OS - or on their own C toolchain to compile their own versions of C libraries.

Which leads to the question of how you most easily write a tool in haskell that you can distribute as a deb package.

Saying that "for haskell, playing nice with Perl's ssl-dependencies is hard, so we won't bother" is one approach. It certainly is one way to use up the additional resources provided by Moore's law: no shared libraries, just thousands of copies of statically linked strlen-functions, in thousands of chroots. (also, hello there, docker).

Snapshots and isolation are great tools, but I don't think it's clear cut what's best yet. Especially when you want to ensure you're not running code with well-known vulnerabilities - be that in bash, ssl keygen, or malloc.


> Asking your OS package manager to additionally manage the ecosystem of every programming language you may want to develop in seems a bit out of scope, no?

For apt? Maybe. For Nix? No.

Nix doesn't install packages, it creates environments. NixOS considers one environment to be the main operating system. That is the main feature of Nix: self-contained reproducible environments.

Because it is so well suited to the task, nix provides all the packages provided by Hackage, sorted into separate namespaces for each supported version of GHC. Each namespace, however, supports only one package version per package; so any packages where it makes sense to provide multiple versions have loosely adopted the name_version naming scheme as a workaround.
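For instance, something like this (the GHC version set and package named here are only illustrative):

  nix-env -iA nixpkgs.haskell.packages.ghc822.pandoc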

Out of frustration, I searched for an answer why this is the case, and found an issue someone else had already discussed at length, only to find a dead end: the issue was discarded (not honestly considered). I then commented at length to explain my line of thinking, only to hear excuses for keeping the status quo, rather than actual help or discussion. The developers commenting on the issue were hung up on their preference for "pinning", and didn't want to consider the namespace mess I have such issue with.

It seems you share the same attitude that changes cannot be done - not because you are certain that is the case, but because you don't believe it should be. There is no reason behind this attitude, and I frankly have no patience for it.

If you want to discuss this problem and any related nuances, I am happy to do so. Please do not naively tell me that my perspective doesn't exist, or cannot make sense.


> It seems you share the same attitude that changes cannot be done - not because you are certain that is the case, but because you don't believe it should be. There is no reason behind this attitude, and I frankly have no patience for it.

I don't share this attitude in any objective sense. Changes can always be done. Whether they should is a pros-vs-cons consideration, many of which are subjective, and both conclusions are therefore valid.

I wasn't arguing that this should not be done, I was challenging the perspective that this should be expected. There's a difference between saying devs should follow the status quo, and asking devs to consider extending the scope of their project and adopt a new approach. And calling devs "elitist" for refusing to do the latter seems pretty rich.

As has been mentioned, Portage already does this, and it's a much older, more traditional package manager without the concept of setting up environments, so no one is questioning that this can be done.

> Please do not naively tell me that my perspective doesn't exist, or cannot make sense.

If your perspective is that it would be nice for Nix to support arbitrary package versions, that's great. If your perspective is that any dev who refuses to implement such a non-normative and complexity-increasing feature is elitist, then I don't believe your perspective makes sense.

Minor note: I do actually agree with zapita's original comment, so this lengthy thread underneath may seem like pedantry, but I just found it odd that you felt your example fit with zapita's point, as it seemed like the opposite to me.


It seems like Nix could benefit from a first-class concept for pinning the channel and managing updates to the channel. To achieve build reproducibility, the default outcome of checking out a project and building it should be to get the same result as everyone else who does so at the same project revision. I'd expect there to be a way to take a snapshot of the channel and lock those versions in the project until you're ready to upgrade.

I have tinkered a lot with Nix, but have not used it for any production system. I'd be reluctant to use Nix in production if there wasn't a good way to pin the channel, so that everyone who is developing on a project sees precisely the same versions as each other, and everyone who is patching a specific version of a project sees the same version as what's currently deployed. In my opinion, this should be the default behavior and not something that you have to go out of your way to achieve.

Maybe you could model this as something like a "channel fork" or "user channel". When you create a new project, you also create a channel for it that inherits from an existing channel, except that the versions of all packages are pinned and you can control the way in which changes make their way into your user channel from the upstream.

My company has a proprietary technology that's similar to Nix, and we tackle the build reproducibility problem in this way, by making it easy to branch off the main channel and control how updates make their way into it (which is like a merge). The default behavior when creating a project is to also create a project-specific channel, derived from the upstream, company-wide channel. Dependencies are declared at the major version level only, and versioning beneath that level happens implicitly via the project channel.

In this system you think of all channels as having revisions. Developers can control how their channel merges in changes from its upstream channel, which provides stability and reproducibility: until you merge and release a new version, `my-channel` always references exactly the same package versions for all developers. The default behavior of channels is to merge new versions from the upstream channel periodically, but developers can decide what's right for them, whether staying on the bleeding edge or prizing stability.

Software builds, releases, and deployments always happen in the context of a specific channel revision (`my-channel@revision`), which can be named and referenced, so if necessary you can inspect or replicate exactly the code that's part of a colleague's configuration or a production system. Not just approximately but exactly, for every package involved down to `gcc` and `glibc`. It's easy to create a workspace with the package versions specified by `my-channel@12345`.

By further tracking which source code commit hash corresponds to each package release in the channel, you can trivially look up the exact source of all packages (my-channel@12345 -> FooPackage@a1b2c3d4). Extremely useful! When you work and version things in this way, you hardly need version numbers beneath the major version level.

I suppose it's reasonable that the Nix project doesn't want to host binary versions of lots of old packages to make this use-case fast. In that case, I'd want it to be easy to clone the binary version of those dependencies and source them locally. Maybe this could operate as something like a caching proxy in front of Nixpkgs, that will store my own permanent copy of any package that I access or build. On the other hand, keeping builds forever is expensive, so perhaps this is an opportunity for Nix to provide a commercial offering with private channel hosting and a package cache with infinite TTL.


From a technological standpoint, it's easy to pin the channel. It's as simple as creating a file like this in your project: https://github.com/catern/nix-utils/blob/master/pkgs.nix

And importing that file whenever you want to use packages from nixpkgs: https://github.com/catern/nix-utils/blob/master/default.nix

If you want to have a separate channel which you can update on your own, just fork nixpkgs on Github and point your pkgs.nix at your fork.
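For reference, a minimal pkgs.nix of that shape might look like this (revision and hash are placeholders):

  # pkgs.nix -- pin nixpkgs to a single revision;
  # Nix 2.0's fetchTarball accepts a sha256
  import (builtins.fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz";
    sha256 = "<hash>";
  })

Then `import ./pkgs.nix {}` evaluates to the same package set for everyone on the project.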

The Nix binary cache keeps every version of every binary in nixpkgs that's ever been built on the Nix project's build servers. (Note that that's not absolutely everything in nixpkgs, since some things aren't built centrally on the build servers for various reasons, but it's most things.)

However, I might be misunderstanding. Was there some other feature or ergonomics benefit that you were looking for?


> I suppose it's reasonable that the Nix project doesn't want to host binary versions of lots of old packages to make this use-case fast. In that case, I'd want it to be easy to clone the binary version of those dependencies and source them locally. Maybe this could operate as something like a caching proxy in front of Nixpkgs, that will store my own permanent copy of any package that I access or build.

I don't know when it will be ready, but some work has gone into backing Nix caches with IPFS, and it looks like it could be a really cool solution for making it easy to share a cache for 'extra' or 'old' packages without much infrastructure work on the parts of users.

https://github.com/NixOS/nix/issues/859


I agree, but I'm not so negative about its future.

As a starting point, I think Nix needs to become a bit more predictable. I like Arch Linux or Slackware because they are minimal (all packages are mostly vanilla upstream, nothing installed if you don't explicitly do it yourself) plus all operations have obvious side effects. Nix is also minimal, but sometimes side effects are really non-obvious. For example, when you update your system and you are subscribed to some channels, it's not obvious whether you will get binary substitutes (packages) or you will trigger a lot of local compilation. Things are getting better, e.g. with dry-run capabilities, but a friendlier UI should be the default.
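For example, a dry run reports that split up front (package name illustrative):

  nix-build --dry-run '<nixpkgs>' -A hello
  # lists which paths will be fetched from the binary cache
  # and which derivations will be built locally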

Furthermore, there should be better-documented ways to run software that hasn't been packaged for Nix. There are tons of options: Docker, systemd containers, FHS chroots. But the right choice should be obvious, too.
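For instance, nixpkgs ships buildFHSUserEnv, which drops you into an FHS-style chroot where unpackaged binaries often just run; a minimal sketch (package list illustrative):

  # shell.nix
  { pkgs ? import <nixpkgs> {} }:
  (pkgs.buildFHSUserEnv {
    name = "generic-fhs";
    targetPkgs = pkgs: [ pkgs.zlib pkgs.glibc ];
    runScript = "bash";
  }).env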

On the other hand, I can honestly say that if you want a really hassle-free machine that never breaks and you are not installing stuff that isn't packaged, NixOS is arguably one of the best distros out there. I have a few remote machines overseas in my dad's house for some of his simulations. He knows no Linux. I can update Nix machines and tweak configuration remotely knowing that nothing will break. I can rollback anytime. Plus the whole configuration is a single declarative file.


If you like Arch, but still want the immutability and reliability of NixOS, you should check out a project I developed, Darch.

https://godarch.com/


I understand and very much sympathize with your desire to share a project you've worked on. However, after four such comments, this is starting to look like spam.


On the other hand, to me, this is basically the same story as `git` (sans the cultural gravity of Linus). Incredibly well thought-out internal architecture, extremely talented developers, functional-style immutable datastructures, idiosyncratic frontend that’s a low-level façade over the internals, painful learning curve, often difficult (but possible!) to recover from real-world corruption, etc.

I say this as someone who has loved `git` for a long time and appreciates that knowing the internals and how the commands map onto them leads to an incredibly powerful tool.


git at least had a good command-line interface and documentation from the start.

Nix is getting better, especially with this release, but it's still far from friendly.


The command-line interface was about the same as it is today. Which is to say, it’s great if you understand the underlying architecture and how the commands map natively to it, but it’s a huge learning curve if you’re coming from something else, have preconceived notions of what commands like “commit”, “checkout”, “fetch” et al do, and don’t have the time or the desire to learn about what a DAG is.

The cogito interface used to be a thing, but I don’t think it ever gained widespread adoption and was pretty quickly abandoned.


> don’t have the time or the desire to learn about what a DAG is.

rooted DAG = tree

What programmer doesn't have time for trees?


rooted DAG != tree, so please don't introduce people to a concept by explicitly misleading them.

All nodes in a tree must have exactly 1 parent, except for the root that has exactly 0 parents. All nodes in a (single-)rooted DAG must have at least one parent, except for the root that has exactly 0 parents. (In git terms: a merge commit has more than one parent, so history is a rooted DAG but not, in general, a tree.)


:-( You're right.

What I said was wrong. It was also doomed to be sloppy at best, because I intended to compare trees as programmers know them (i.e., typically directed and rooted, which is more structure than mathematicians' 'trees' have) to DAGs, which have a precise mathematical definition.

But I hardly meant to introduce anyone to the concept of either DAGs or trees by that comment. If there's a chance of it seriously misleading anyone, I would be happy to see it moderated out of sight.

The point I intended to make, more carefully stated, is this:

DAGs are closely related to structures that programmers already deal with all the time, and so it strikes me as odd when programmers protest learning to work with DAGs.


> and don’t have the time or the desire to learn about what a DAG is.

That's the point people are complaining about with git. It's fine to consider that an issue, but that doesn't mean the interface or documentation are bad; it's that people want it to feel familiar.

I, for one, was very comfortable with git's command line from the start. I came in wanting something different, and found the distributed model was just what was needed, and frankly, more straightforward than the popular centralized approach. It also helps that you can play with and break things locally, rather than having to worry about a server someone else relies on.


cough

Excuse me? The git command line is notorious for being completely unusable.

The point is git's internals are good, the usability is appalling.


As a former hg user, I second that. The fact that git has so many users is a testament to our incredible adaptability as a species. If we can adapt to the git ui, surely we will have no problem colonizing alien planets.


Where have you run into derision/assholes with nix or FP? Absolutely there are assholes (or perhaps people who just don't realize how rude their words/actions are) in the community - especially on Twitter and reddit in my experience - but I've never felt unwelcome or been talked down to when I asked for help in an IRC/discord channel.

I see this get mentioned all the time in respect to FP communities in particular but I really can't relate at all. And it's not like I haven't asked my fair share of stupid questions in these communities or anything.


I didn't encounter any assholes (or rather I didn't encounter any more than usual... I have yet to find a completely asshole-free open-source project).

Maybe "derision" was the wrong term. Nobody was rude or disrespectful. Just oddly patronizing. They just didn't feel there was anything wrong with the user experience, or any need to improve Nix to better meet the needs of non-experts. The main sentiment I observed was that Nix was pretty close to perfect, but also misunderstood - and that it was mostly up to the rest of the world to better understand it. That struck me as the wrong way to go about making your project successful.


I've experienced the opposite.

The fact that this release highlights a new 'nix' command and deprecates a bunch of ugly and verbose baggage shows the commitment to UX.


That's encouraging to hear. I will give it another go soon.

The area that IMO needs the most work is the packaging experience, including the DSL itself... and the "I just want to build this source without having to become a black belt functional packager first" experience.


My biggest gripe with Nix packaging is that the tooling varies so widely per ecosystem. Aside from that, though, I generally find packaging for Nixpkgs to be much, much easier than packaging for most distros.

What kind of packaging experience are you used to?


Most of what we can do with Nix is both a consequence of its model and constrained by it. Part of what this means is that things users want may not have direct replacements in the Nix world, even when they do have _alternatives_.

When we have a close but significantly different analogue of something a (potential) user seems to be asking for, I think that's when you tend to see this type of response the most. ‘Once you understand the model, you'll see that this slightly different thing will probably work better.’

I think:

1. Sometimes it's really true-- understanding the model can inspire you to take a different path, make a different choice than you naively might, and ultimately be happy with it.

2. There's a selection bias working against the community here: as long as the Nix UI is clunky, the people we'll find in the community will be those who are drawn in by its design principles and can overlook the UX warts. The improvements that come with the Nix 2.0 release make me hopeful that we can attract more contributors for whom such things are very important, and who will be inclined to further refine the UX as they work with Nix.

I think most Nixers have known that the UX is horrible in some respects for a long time, and also that improvements in that regard were in the pipeline for Nix 2.0 (which was to be called '1.12' until recently). Heavy Nix users have largely been focused on our own use cases, because we only have so much time and energy. I hope that as our userbase grows and UX improves, the composition of our community can change so that users can feel better heard and represented when they raise concerns like you have here.


I've found that within other ecosystems, people are increasingly considering replacing existing tooling with Nix. For example, I've heard some rumblings in the Elixir space considering using Nix for managing SDK versions, for native/binary package dependencies, and for generating release packages.

To accelerate the adoption of Nix, the Nix maintainers would need to reach out and help other ecosystems adopt Nix or Nix-like approaches. But I think, as it stands, other ecosystems will eventually look toward Nix on their own, as they grow and attempt, one by one, to solve all the same problems Nix has already 100% solved.


Except that Docker et al are already that compromised solution. No need to replicate.

And for what it's worth, the #nixos community is one of the kindest, most helpful communities I've ever encountered. Stop by and ask your questions!


> Except that Docker et al are already that compromised solution. No need to replicate.

I think you're right that Docker has embraced the "meet half-way" school of thought (maybe too much?). But that doesn't mean they have exclusivity over the approach.

Docker lacks many of the features that make Nix so powerful. On the other hand Docker is much more pleasant to use, for me at least.

If a tool came along that combined the simplicity of Docker with the power of Nix, I would probably stop using both, and just use that. I think Nix itself could be that tool... if only they fixed the issues I described above.


But when you need to rebuild your container image, how do you make sure it builds in the same way? Nix imho is perfect for building reproducible containers, which you can then run on a minimal install of an OS with a larger ecosystem (or just GKE/EKS) to find & fix crippling bugs before you run into them.


I also greatly appreciate what the Nix and NixOS projects are doing.

In this latest release, it looks like they're taking steps toward making Nix easier to use, in standardizing and simplifying the CLI porcelain.


My thoughts exactly.

Some time ago, I asked how to integrate NixOS in an existing infrastructure managed by Chef/Ansible.

I was told I was wrong to even think about that and that Nix is a better system.

Well, wrong. If you have a few thousand servers managed with one system, you cannot ask to migrate them all to a new system without some integration with the old one first.


Instead of deploying individual configuration files, you would generate `/etc/nixos/configuration.nix` (plus perhaps some modules), followed by a `nixos-rebuild switch`. No special support is required in Chef/Ansible for this. Both Chef and Ansible are available in nixpkgs.
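The generated file can be tiny; a sketch of what your existing tooling would template out (contents illustrative):

  # /etc/nixos/configuration.nix -- written by Chef/Ansible,
  # then applied with `nixos-rebuild switch`
  { config, pkgs, ... }:
  {
    imports = [ ./hardware-configuration.nix ];
    services.openssh.enable = true;
    environment.systemPackages = [ pkgs.vim ];
  }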


You may be right here. I like to think I am not a complete idiot, but I just read their 'about' page and now I know Nix has a folder naming convention and something about functional programming, but I have no idea how it is going to solve the issues I have with pip/apt/conda etc. In fact I am not really sure if it is supposed to, apart from seeing that it is a package manager.

Now I have filed it mentally under 'too much trouble to learn'


What are the issues you have with pip/apt/conda etc.?


I've found kind, experienced users who supported me on webchat.freenode.net, #nixos. I always make sure to say thanks when they help!


What kind of user experience are you talking about? As someone who uses it as just a package manager, I have had no complaints around using `nox`, `nix-env -i`, and `nix-env -e`.

It does seem to have a sprawling cli and many additional purposes, so I can imagine the argument, but am curious as to what part of UX you're talking about in particular.


I've been kinda waiting for something like what Ubuntu is/was to Debian. Approachable. Last time I tried nix, the learning curve was too steep. This update looks like they've started to work towards approachability a bit, so I might give it another go.


I have been using nix for a while to build binary packages for crashcart[1] and I really love the premise of isolated source-based builds.

Unfortunately, over time I've become quite frustrated with the pull-everything-from-the-internet model. If you are building packages from scratch each time instead of pulling down cached versions using the nix command, the build breaks quite often.

Mostly it is silly stuff like the source package disappearing from the net. A particularly egregious offender is the named.root[2] file being updated, which will cause everything to fail to build until the sha is updated in the build def.
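The pattern that breaks is a fixed-output fetch whose upstream mutates in place; roughly (hash elided):

  src = fetchurl {
    url = "https://www.internic.net/domain/named.root";
    sha256 = "...";  # upstream republishes this file, so the pinned hash goes stale
  };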

I don't know that there is a great solution for this problem. Maybe there needs to be a CI system that does a from-scratch build of all of the packages every day and automatically files bugs. Alternatively, a cache of sources and not just built packages could ease the pain. This issue probably affects very few nix users, but it has demoted my love of nix down to "still likes but is somewhat annoyed by".

[1]: https://github.com/oracle/crashcart [2]: https://www.internic.net/domain/named.root


Regarding disappearing sources: Nix offers a content-addressed mirror for sources downloaded by the Hydra CI system. As a random example, here is the latest Chromium source tarball:

http://tarballs.nixos.org/sha256/3dfa02e873ff51a11ee02b9ca39...

So disappearing sources is not a huge problem in my experience. Obviously if you have package declarations outside of Nixpkgs proper things are different.

This problem is also something the Software Heritage project[0] aims to solve, but I don't think they have a good API yet.

[0] https://www.softwareheritage.org/


Yeah I'm aware of the build mirror, but crashcart builds things under a different prefix so it must build things from source each time. I realize this doesn't affect most users (which is why it takes a while for disappearing sources to be found and fixed). A content addressed mirror for sources as well would solve the problem nicely.


I'm sorry, the mirror is actually for sources, not build artifacts. I've updated my comment to clarify.


When was that added? Will fetchurl automatically poke it if it can't find the sources otherwise?



Interesting, I wasn't aware of that. I wonder why my builds are not using it.


It might be related to using a different Nix prefix for your builds, which is a little poorly supported. Just curious: Why are you using a different prefix?


The point of crashcart is to be able to side load a filesystem with utilities into a running container. It is very important that the location we pick doesn't conflict with any path already in the container. If we used /nix as the mount path it would conflict with any container that uses nix. In order to prevent this (probably rare) conflict, we build our utilities in /dev/crashcart/ instead.


Hmmmm, interesting... That does seem like a pretty good reason, though maybe you could bind mount over /nix? That should be fine from a Nix perspective, not sure how well it works with container technologies.


we could bind mount over /nix or /nix/store, but that means any existing nix packages from the container would not be available. The whole point is to have the whole container file system available along with the utilities. We could alternatively find each package that we need for our debugging utilities and bind mount each directory from the store individually. This would work due to unique paths in the store, but that means potentially hundreds of bind mounts and is an orchestration nightmare.


Sorry, I didn't mean bind mount, I mean union mount, like with OverlayFS or whatever the most-used one is.


we didn't look at using overlay. Might be possible, although that would introduce a dependency on kernel version and/or module. A custom fuse might be an option here as well but fuse in containers is a bit sketchy at the moment.


Try Nix 2.0; it can place your derivations in a custom folder by clever use of chroot. Read about the --store argument.


This looks like a neat feature. Reading up on it briefly, it looks like it changes the mount namespace / chroots when running the binary in order to accomplish this. Unfortunately that defeats the purpose of crashcart, since the whole idea is to have access to the container file system while running your sideloaded binaries.

Unless there is some magic that I'm missing, I don't think this option helps for our use case.


Ok. I will check it out. Thanks.


What about running your own caching HTTP proxy for your build's external dependencies or else pulling static copies of these dependencies into your own repo? It seems like the problem isn't building everything from source but rather that the sources of truth for the inputs are unreliable. You'd have the same problem trying to build e.g. Debian from scratch if you couldn't reliably pull down all the sources for things.


A caching http proxy would help me build the same things reliably, but it wouldn't help anyone else who cloned my repository and attempted to do their own build unless I also gave them access to the proxy. And it hides the fact that the standard from-scratch build doesn't actually work. I think the cached nix packages is why it takes so long for some of these issues to be discovered.

The difference with Debian (and other linuxes) is that the source code for the build is also provided. The upstream source might be from random place on the net, but Debian provides a source package that you can use to rebuild the binary package. Maybe the solution is for nix/nixos to provide something similar.


I've not tried Nix but this seems like a huge oversight if there is no local cache?

Not everyone is blessed with unlimited gigabit fiber.

Edit: Seems like I was wrong, and I'm happy about it. :)


Generally everything is cached that you build/download. But only on the machine you do it on. That's why you usually want a collective cache inside your org additionally, because not all machines will have everything yet.


If you use a content-addressable scheme (fetchurl with sha256, for example), it will retain the source archives until you run garbage collect.

If you use builtins.fetchTarball, I don't think this is the case.

Since this all uses CAS, you can use the nix prefetch scripts to import an arbitrary file:// or other URI into the nix store.
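e.g., something like:

  nix-prefetch-url file:///path/to/mirrored-source.tar.gz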


fetchTarball now also supports an optional sha256 argument. The result will then be used indefinitely, without re-checking for changes once the TTL expires.
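Something like this (URL and hash are placeholders):

  builtins.fetchTarball {
    url = "https://example.org/source.tar.gz";
    sha256 = "<expected hash>";
  }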


Hurray!

I love Nix and I have been looking forward to this new makeover of the UI with the 'nix' command. It seems like the original command line usages developed incrementally over time, making them quirky and inconsistent, and so it is really welcome to have them redone based on long experience. (Thanks, Nix hackers!)

Seeing this released as Nix 2.0 is a really lovely surprise for me this morning :).


I guess this is a good place to plug the fairly new bash completion support for Nix (including Nix 2.0)[1]. For those like me who can't stand using a cli without completion.

[1] https://github.com/hedning/nix-bash-completions


> It introduces a new command named nix, which is intended to eventually replace all nix-* commands with a more consistent and better designed user interface

This is pretty nice. I've been using nix on my Mac for more than a year now, and it works well there. But the separate commands are not easy to remember, and the help documentation is also fragmented. This change really improves the command-line UX.


Heh - after a many-year hiatus from running any kind of UNIX/Linux at home, I was thinking about installing a Linux-based distro, and NixOS was near the top of my list to try. How does this Nix 2.0 release affect a new install of NixOS - should I wait a bit for a corresponding overhaul of NixOS to come out? I suspect theoretically it's not necessary, but I'm wondering if NixOS will be tracking this Nix release in some way shortly... Anyone know?


I upgraded my personal laptop to NixOS 18.03 (currently still nixos-unstable; the release branch hasn't been cut) a week ago and I've had no problems.

You can switch which version of Nix you're using whenever you want, and you can install multiple versions of Nix side-by-side if you want to, just like with any other package. To install Nix 2.0 on an older release of Nixpkgs/NixOS, just use the package `nixUnstable`. You can install it with `nix-env` if you want to try it out right now.
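e.g. (the channel attribute prefix may differ on your system):

  nix-env -iA nixpkgs.nixUnstable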

The only real differences between a typical upgrade and upgrading between releases are:

1. You'll need more storage space for the course of the upgrade, because most of your dependencies have been updated.

2. You might have to change your config up a little, because a few packages have been removed, renamed, or are out of date.

You still get all the nice rollback features that you're used to, and if Nix 1.x is part of your old system profile and 2.0 part of your new one, when you roll back, Nix will roll back, too.

To more directly answer your question: NixOS 18.03 will indeed include Nix 2.0.


Thanks.

Is there any idea about the release date for 18.03 (or 18.0X, whatever is 2.0 based)?


.03 means March, and that's as much as I know.



Yes, NixOS 18.03 will be released soon, featuring Nix 2.0. You can still install the current 17.09 or unstable and upgrade it later by simply switching your channel.


Great- thanks.


It looks like they're at least attempting to work towards user friendliness, which is by far my biggest complaint.

My other big complaint is that when something goes wrong it's damn near impossible to figure out. Trying to figure out why I couldn't get postgres working with postgis was nightmarish sometime last year when I last tried nixos.

I'm still really optimistic that this will someday replace arch for me. I just don't know when


Finally, a better command-line interface!

Hopefully, this means I can stop using the website search for packages.


Have you tried the 'nox' package?


As a nix user for several years this is pretty exciting. I hadn’t been following the 2.0 development, but I was really hoping to see a mention of support for something like a .gitignore equivalent when hashing directories from the filesystem. Seems that that isn’t in this release :(


If what you're dealing with is actually a git repository, in 2.0 you can just use "src = fetchGit ./.;"-- this is what the expression for building Nix itself does :).

Otherwise you can use filterSource (documented in the linked article, the Nix manual) to roll your own filtering.
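A hand-rolled filter looks something like this (the predicate is illustrative):

  src = builtins.filterSource
    (path: type: baseNameOf path != ".git" && baseNameOf path != "result")
    ./.;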

If you have any problems with either of these I encourage you to join #nixos on freenode and ask. Hope this helps! :)


Have you seen the library function cleanSource() and related routines? I use these for excluding files (e.g. by filename extension) from being visible in builds. This might suit your use case too.

https://github.com/NixOS/nixpkgs/blob/master/lib/sources.nix
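Minimal usage looks like:

  src = pkgs.lib.cleanSource ./.;
  # filters out VCS metadata, editor backups, and nix-build result symlinks
  # (see the linked file for the exact predicate)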


Yeah, and I use cleanSource, but I find it to be pretty obtuse and full of edge cases.


That's a really impressive list of contributors. 99 if I've counted correctly.


It appears that by using a CoW filesystem like Btrfs I can have some of the same advantages of using Nix:

- Parallel installation of multiple OSes sharing the same storage pool

- Snapshot/rollback

How does using Nix compare to using a CoW filesystem such as Btrfs or Zfs?


The main difference is that functional package management lets you declare the system state. Free rollbacks are a consequence of being able to fully describe the system state.

With a file system the storage pool is dumb and all meaning of higher-order abstractions (such as packages) is lost.

You don't use functional package management just for deduplication, but in order to be able to declare and reason about state.

(I work on Guix, which is an implementation of functional package management, but with different abstractions.)


What are the advantages of guix vs nix?


As one of the co-maintainers of GNU Guix I'm obviously biased, but here's what I consider some important unique features of Guix:

- Guix is all written in Guile Scheme (with the exception of parts of the inherited daemon, which hasn't yet been completely implemented in Guile); this extends to development tools like importers, updaters, to user tools like "guix environment", and even bleeds into other projects that are used by GuixSD (the GNU system distribution built around Guix), such as the shepherd init system. There is a lot of code reuse across the stack, which makes hacking on Guix really fun and smooth.

- Packages are first class citizens in Guix. In Nix the idea of functional package management is very obvious in the way that packages are defined, namely as functions. These functions take their concrete inputs from an enormous mapping. In Guix you define first-class package values as Scheme variables. These package values reference other package values, which leads to a lazily constructed graph of packages. This emergent graph can be used as a library to trivially build other tools like "guix graph" (for visualising the graph in various ways) or "guix web" (for a web interface to installing and searching packages), "guix refresh" (for updating package definitions), a lovely feature-rich Emacs interface etc.

- Embedded DSL. Since Guix is written in Scheme---a language for writing languages---it was an obvious choice to embed the package DSL in the host language Scheme instead of implementing a separate language that needs a custom interpreter. This is great for hacking on Guix, because you can use all the tools you'd use for Scheme hacking. There's a REPL, great Emacs support, a debugger, etc. With its support for hygienic macros, Scheme is also a perfect vehicle to implement features like monads (we use a monadic interface for talking to the daemon) and to implement other convenient abstractions.

- Graph rewriting. Having everything defined as regular Scheme values means that you can almost trivially go through the package graph and rewrite things, e.g. to replace one variant of a package with a different one. Your software environment is just a Scheme value and can be inspected or precisely modified with a simple Scheme API.

- Code staging. Thanks to different ways of quoting code (plain S-expressions and package-aware G-expressions), we use Scheme at all stages: on the "host side" as well as on the "build side". Instead of gluing together shell snippets to be run by the daemon we work with the AST of Scheme code at all stages. If you're interested in code staging I recommend reading this paper: https://hal.inria.fr/hal-01580582/en

- Bootstrapping. Some of us are very active in the "bootstrappable builds" community (see http://bootstrappable.org) and are working towards full bootstrap paths for self-hosting compilers and build systems. One result is a working bootstrap path of the JDK from C (using jikes, sablevm, GNU classpath, jamvm, icedtea, etc). In Guix we take bootstrapping problems seriously and prefer to take the longer way to build things fully from source instead of just adding more binary blobs. This means that we cannot always package as many things as quickly as others (e.g. Java libraries are hard to build recursively from source). I'm currently working on bootstrapping GHC without GHC and without the generated C code, but via interpreting a variant of GHC with Hugs. Others are working on bootstrapping GCC via Scheme.

- GuixSD, the GNU system distribution built around Guix. GuixSD has many features that are very different from NixOS. The declarative configuration in Scheme includes system facilities, which also form a graph that can be inspected and extended; this allows for the definition of complex system facilities that abstract over co-dependent services and service configurations. GuixSD provides more Scheme APIs that apply to the whole system, turning your operating system into a Scheme library.

- I like the UI of Guix a lot more than that of Nix. With Nix 2.0 many perceived problems with the UI have been addressed, of course, but hey, I still prefer the Guix way. I also really like the Emacs interface, which is absolutely gorgeous. (What can I say, I live in Emacs and prefer rich 2D buffers over 1D command line strings.)

- It's GNU. I'm a GNU hacker and to me Guix is a representative of a modern and innovative GNU. It's great to see more GNU projects acting as one within the context of Guix and GuixSD to provide an experience that is greater than the sum of its parts. Work on Guix has fed back into other GNU packages such as the Hurd and Guile, produced cool Guile libraries, and led to a bunch of new GNU packages, such as a workflow language for scientific computing.

On the other hand, although Guix has a lot of regular contributors and is very active, Nix currently has more contributors than Guix. Guix is a younger project. The tendency to take bootstrapping problems very seriously means that sometimes difficult packages require more work. Oddly, Guix seems to attract more Lispers than Haskellers (I'm a recovering Haskeller who fell in love with Scheme after reading SICP); it seems to be the other way around with Nix.

Having said all that: Nix and Guix are both implementations of functional package management. Both projects solve similar problems and both are active in the reproducible builds effort. Solutions found by Nix devs sometimes make their way into Guix and vice versa. The projects are not competing with one another (there are orders of magnitude more people out there who use neither Guix nor Nix than there are users of functional package managers, so there's no point in trying to get people who use Nix to switch to Guix). At our recent Guix fringe event before FOSDEM, Eelco Dolstra (who invented functional package management and Nix) gave a talk on the future of Nix surrounded by Guix hackers --- there is no rivalry between these two projects.

Let me end with a comment that's actually on topic for the original post: Congratulations on the Nix 2.0 release! Long live functional package management!


Very exciting release notes. Nix is great and this makes it even better.


How do I upgrade my existing Nix installation? After updating the nixpkgs-unstable channel, nixpkgs.nix is still 1.11.16 (and nixpkgs.nixUnstable is a 2.0 pre-release).


Look up channel status here:

http://howoldis.herokuapp.com

It will probably take around 5 hours to be updated. Run nix-channel --update, then reinstall Nix.


Would it be safe to run `curl https://nixos.org/nix/install | sh` (safe as in it won't screw with my existing installed packages), and if so, will that get me Nix 2.0? Or do I still have to wait for nixpkgs-unstable to rebuild?


The easiest way is probably to do the following:

  git clone https://github.com/nixos/nixpkgs
  cd nixpkgs
  git checkout origin/nix-2.0
  # build the 'nix' attribute and install the result into your profile
  nix-env -i $(nix-build --no-out-link . -A nix)


If you already have Nix installed, what you want to do is change the channel, and do an upgrade.


Change the channel to what? It's already on nixpkgs-unstable.


nix.package = pkgs.nixUnstable;


nixpkgs-unstable was updated 4 hours ago. Still doesn't have Nix 2.0.

This strikes me as very strange. Why was Nix 2.0 released without updating nixpkgs?


It looks like Nix 2.0 is tagged[1] but nixpkgs isn't updated yet[2]

[1] https://github.com/NixOS/nix/releases/tag/2.0

[2] https://github.com/NixOS/nixpkgs/blob/472dd33ea4905d562d9380...


You can upgrade right now with `nix.package = pkgs.nixUnstable;` in your `~/.nixpkgs/config.nix` [1], or on the command line: `nix-channel --add https://nixos.org/channels/nixos-unstable nixos` followed by `nix-channel --update` (on any system but NixOS) or `nixos-rebuild switch` (on NixOS).

[1] https://ebzzry.io/en/nix/#nixpkgsconfiguration


As I mentioned in my comment, nixpkgs.nixUnstable shows up as a 2.0 pre-release (specifically, nix-2.0pre5968_a6c0b773).


This looks really nice.

For basic users who run a web server on a DigitalOcean droplet with Ubuntu - how does this compare?


To do a car analogy (this is Slashdot, right?):

With Ubuntu, every time you want to fix something with your car, you roll it into the garage, pop open the hood and get to work. It's intensive labour, results will vary, and undoing a change can be really difficult.

With NixOS, it's like 3D printing a new car every time. You'll design a model, press a button, and the car gets built from scratch. If you don't like it, tweak the design a bit, and print a new car. If the new car breaks, just go back to the previous known-good one, which is already in your garage. You can even take the design documents to your friend and generate an exactly identical model.


> With Ubuntu, every time you want to fix something with your car, you roll it into the garage, pop open the hood and get to work. It's intensive labour, results will vary, and undoing a change can be really difficult.

You can do it that way, but I wouldn't recommend it. If your Ubuntu system has reached that point, it has become unmaintainable.

All modern server deployment methods describe the deployment in code so you do "print a new car" every time you change something. This includes Ubuntu.

On the desktop, you largely don't need to pop open the hood at all. If you find yourself doing that, you have an experimental system, not a production system.


> On the desktop, you largely don't need to pop open the hood at all.

So you're not installing updates on desktop machines at all? That sounds incredibly dangerous.


Huh? No. Updates ship as part of Ubuntu. From your perspective, updates happen. You don't need to pop open the hood at all to get them.


Unfortunately, sometimes you do need to pop open the hood to see what's going on. Regarding Ubuntu, or rather its derivative Mint: I had to fiddle with xorg.conf to manually set the fan speed on a graphics card, because the desktop was overheating even with reasonable cooling, in a small apartment with no AC in the middle of summer.

In case you didn't know, the Nvidia driver doesn't let you manually set the fan without enabling this in xorg.conf or dropping a file in /etc/X11/xorg.conf.d. Not knowing about xorg.conf.d at the time, I merely edited xorg.conf, and was very confused to find that the machine continued to overheat, and further that my file was not modified but gone. This happened periodically, seemingly at random.

It turns out that Driver Manager, Mint's GUI for installing proprietary drivers, had installed the Optimus package (meant to let a laptop with dual GPUs work properly) on a desktop, and that the post-install script for this package was helpfully removing /etc/X11/xorg.conf every time said useless package was updated.

Moving the snippet to xorg.conf.d helped, as did finding and removing the useless package, but we are still looking at an issue on a relatively recent machine that couldn't be fixed without grep and an Xorg config file, in a recent version of an Ubuntu derivative.



