Reflections on NixOS (zenhack.net)
164 points by ikhthiandor on Aug 22, 2016 | 99 comments



People need to stop thinking of NixOS as another Unix-alike, because it isn't, in very much the same way that MacOS used to be a Unix-alike, but isn't any more, either.

NixOS is rebuilding important pieces of the foundation. Why? To address a basket of problems we have been complaining about with Unix for a very long time. Is it worth the hassle? Well, have you ever lived through a kitchen remodel in your house? That's a hassle, too, but you end up with a better kitchen.

IMHO NixOS is pointing the way forward, but is probably not the last word in next-generation operating systems. The existence of NixOS and Guix (and similar) is exciting, because they are part of the ferment creating the thing on which we will eventually converge.


It's worth checking out at least the introductory parts of the theses written about Nix and NixOS.

Eelco Dolstra's Ph.D. thesis describing Nix, "The Purely Functional Software Deployment Model": https://nixos.org/~eelco/pubs/phd-thesis.pdf

Armijn Hemel's master's thesis describing NixOS: https://nixos.org/docs/SCR-2005-091.pdf

Or this more recent journal article about Nix/NixOS which is a good introduction: https://nixos.org/~eelco/pubs/nixos-jfp-final.pdf ("NixOS: A Purely Functional Linux Distribution").


"The same way that MacOS used to be a Unix-alike, but isn't any more, either."

The Open Group would disagree.


Assuming you mean the SUS, I'd argue there's a big difference between "meets the standard" and "feels like an actual unix".

Personally, I'd call OS X unix-like but not unix-alike, which perhaps sounds like hair-splitting, but having admined multiple unices, there's definitely a difference, even if I can't figure out exactly how to articulate it beyond "it doesn't feel the same".


Why isn't macOS a Unix-like anymore?


macOS is BSD Unix under the hood with the Mach kernel [1]. I'm tired of hearing this nonsensical talking point that macOS has strayed too far to be considered Unix. What does that even mean? Perhaps someone should inform The Open Group who controls the official Unix specification that they are wrong. Even though it shells like a unix, users like a unix, signals like a unix, files like a unix, networks like a unix, streams like a unix, and threads like a unix, because of reasons never enumerated, macOS is not Unix. /endrant

[1] https://developer.apple.com/library/mac/documentation/MacOSX...


A kernel is not an OS.

I'm thinking perhaps some measure of porting effort is the appropriate metric. There is a reason Mac users can't simply point to a bsdports repo.


Uh, yes it is. It is a very core part of an OS. Without it, the OS does not function.


Kernel is necessary, not sufficient. Nobody ships just a kernel and calls it an OS.


Right, linux never just ships as a kernel for others to consume.


Because, with the popularity of Linux, people have forgotten how varied unix-like OSes once were. I'm sure the grandparent poster will mention things like how it stores configuration differently, etc., but that's only not-unix-like to someone who never used a handful of different commercial Unices back in the 1990s.


For me, the key pieces are: 1) frameworks: when traditional search paths don't find executables and dynamic libs the way you expect, it isn't Unix-alike. 1a) when the execution environment has changed enough that the build needs to be modified beyond a few simple config variables, it isn't Unix-alike. 2) an unconventional init system, although BSD is moving to something similar to launchd, so I think that idea is evolving toward the mainstream. 3) an unconventional cron; see launchd/BSD.

So DLL dependencies are crazy-making, and Nix is attacking that. Traditional init is getting creaky; launchd and systemd are attacking that.

With respect to systemd, I love how it has motivated people to try to come up with something better than both traditional init and systemd.


I've been using NixOS since last spring, and I adore it so much that I almost credit it with making me like computers again.

It's a kind of bumpy ride sometimes but for me there's no other distribution that's even close. Packaging the Nix way does require patching upstream and that's probably inevitable.

It would be nice to help with the documentation situation, though I'd note that the NixOS manual and the Nixpkgs manual are seriously helpful and quite comprehensive; it's just that there's another type of guide that would really help.


GuixSD is close. It's the internal DSL (using Guile Scheme as host) version of Nix (which is an external DSL).

Both have their own advantages. For example, I really like some of the reproducible binary efforts Guix has made [1].

And I prefer the way Guix packages some things. In Nix, I often found installing a seemingly innocent piece of software like mutt ended up installing python (!) by pulling all possible dependencies of mutt (like gpg), and in turn pulling all possible dependencies of gpg, which brought some X11 bindings and finally python.

But Nix has some amazing stuff like NixOps. And it might be more mature. I know some corps like Logicblox use it for their deployments.

[1] https://www.gnu.org/software/guix/manual/html_node/Invoking-...


Your first sentence is very close to something I've said to others. NixOS is the first time I've been excited about an OS since first discovering Linux back in 2003.

The OP mentions "the ability to do controlled per-user changes in an ad-hoc fashion". I assume they are referring to the use of "nix-env", which some members of the community frown upon. I've adopted the policy of not using it. My system is defined entirely by various ".nix" files, and for launching simple applications, such as firefox, I just have my xmonad configuration bind "C-SHIFT-F" to "nix-shell -p firefox --run firefox".

For programs where I have more complicated environments, I have a "shell.nix" file in $HOME which contains a bunch of derivations that I commonly use, and I use "nix-shell -A" to enter them.

For development, all of my projects now include a "default.nix" file. Typing "nix-shell" in the project directory then brings in my project's dependencies as well as the tools I want when developing on that project, such as a suitable Emacs with the appropriate packages already installed.
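
For anyone curious what such a file looks like, here is a minimal sketch of a project "default.nix" along these lines; the specific packages (python, requests, emacs) are placeholders for whatever a project actually needs:

    with import <nixpkgs> {};

    stdenv.mkDerivation {
      name = "my-project-env";
      # Everything listed here ends up on PATH (and in the relevant
      # search paths) when you run `nix-shell` in the project directory.
      buildInputs = [
        python3
        python3Packages.requests
        emacs
      ];
    }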

I find I move these ".nix" files frequently between different machines, and I really appreciate being put into reproducible development environments wherever they go.

To reiterate what others have said below: the Nix package manager makes sure to share build inputs. If I start a new project and include a bunch of stuff in my "default.nix" that I use elsewhere, it costs nothing to enter the new environment.


One fun thing I'm doing recently with NixOS is automatically deploying my system setup to a Hetzner server.

NixOps deployment takes the server from scratch to finished clone of my laptop -- with a bit of server-specific configuration -- with zero manual intervention.
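
For reference, a NixOps network description is itself just a Nix expression. A rough sketch using the generic "deploy to an existing machine over SSH" style (the Hetzner-specific backend has its own options); the address and ./common.nix are placeholders:

    {
      network.description = "laptop clone on a remote server";

      server = { config, pkgs, ... }: {
        # Deploy to a machine that already exists and is reachable via SSH.
        deployment.targetEnv = "none";
        deployment.targetHost = "203.0.113.10";

        # The configuration shared with the laptop, plus server extras.
        imports = [ ./common.nix ];
        services.openssh.enable = true;
      };
    }

After that, `nixops create` and `nixops deploy` take it from scratch to the finished system.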


> Packaging the Nix way does require patching upstream and that's probably inevitable.

Not too familiar with Nix, can you explain why? Anything that can't be handled simply by applying small patches to the upstream during packaging?


"patching upstream" in this context means "applying patches to the upstream during packaging".

Nix and Guix require programs to work with a directory layout that differs significantly from the standard. Simple things like `#!/bin/python` don't work anymore, because they aren't explicit enough.


Yeah, thanks. To add to that, the Nixpkgs "standard library" includes lots of convenient tools for patching like this, and much of the common stuff is done automatically by the standard package function. The #nixos IRC channel is a good place to ask if you run into trouble when packaging something.

Generally, making NixOS packages is really delightful. Even the patching stuff is straightforward and easy to use. Now that I've learned the basics, I routinely make small packages.

With many typical packages the only thing you need to do is specify how to fetch the source, plus the list of build dependencies, and there are built-in functions for fetching from HTTPS or Git.
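
To illustrate, a typical minimal package is roughly the following sketch (the URL and hash are placeholders you'd fill in, e.g. via nix-prefetch-url):

    with import <nixpkgs> {};

    stdenv.mkDerivation rec {
      name = "sometool-1.0";
      src = fetchurl {
        url = "https://example.org/${name}.tar.gz";   # placeholder URL
        sha256 = "0000000000000000000000000000000000000000000000000000";  # fill in
      };
      # Build-time dependencies; the generic configure/make/make install
      # phases come from stdenv for autotools-style projects.
      buildInputs = [ zlib ];
    }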


Ah ok, that clarifies things. Thanks.


Avid NixOSer here. While it's sad that we're not there yet in terms of user-friendliness, I still think that, given the radical approach NixOS takes, it's incredibly feature-complete. I find it easy to tweak, too, but that might be due to previous experience with functional systems.

For the specific suggestions in the article, I'm not convinced that they are possible, while retaining the same benefits:

- Filesystem snapshots are not enough to deal with, e.g. slightly different builds of the same shared library

- Namespaces, however, are. In fact, there is buildFHSEnv in NixOS to take advantage of those.

- Hiding of runtime dependencies is done with the intent to minimize "accidental success", because that leads to unstable systems.

- Also, a package could, e.g., depend on a patched version of Python 2.7 that you'd never want to accidentally end up on the system path.

Having packaged some things myself, I do feel the author's pain, though. I wonder if we couldn't get rid of a lot of upstream grind by encouraging the use of FHS-env namespaces (sketched below).
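
For the curious, a sketch of what that looks like, assuming the buildFHSUserEnv interface from Nixpkgs (the package choices are just examples):

    with import <nixpkgs> {};

    buildFHSUserEnv {
      name = "fhs-shell";
      # These packages are made visible at conventional FHS locations
      # (/usr/lib, /usr/bin, ...) inside the sandboxed environment.
      targetPkgs = pkgs: [ pkgs.zlib pkgs.openssl pkgs.gcc ];
      runScript = "bash";
    }

nix-build that and run ./result/bin/fhs-shell to get a shell in which prebuilt binaries expecting a standard filesystem layout can find their libraries.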


I think the right term here, instead of "filesystem snapshots", is "btrfs-like filesystem subvolumes", which are reflinked CoW volumes based on a snapshot.


Disclaimer: I'm a core NixOS developer.

It's great to see articles like this, as they dive deep into the psychology of adopting new technology. It's really hard to unlearn.

It takes effort to realize that the promised gain is bigger than the pain that comes when doing what we're used to no longer works.


> It takes effort to realize that the promised gain is bigger than the pain that comes when doing what we're used to no longer works.

This could almost be the subtitle for "Functional Languages: Those Things Your Manager Won't Allow You To Use Yet"

I think a day will come when everyone will see the value of, and we'll have the tools to do, "functional" all the way to the metal. I like to quote John Carmack from http://www.gamasutra.com/view/news/169296/Indepth_Functional...,

"A large fraction of the flaws in software development are due to programmers not fully understanding all the possible states their code may execute in. In a multithreaded environment, the lack of understanding and the resulting problems are greatly amplified, almost to the point of panic if you are paying attention. Programming in a functional style makes the state presented to your code explicit, which makes it much easier to reason about, and, in a completely pure system, makes thread race conditions impossible... No matter what language you work in, programming in a functional style provides benefits. You should do it whenever it is convenient, and you should think hard about the decision when it isn't convenient."

I think this paradigm applies at the OS and filesystem levels as well (ideally). The better control you (and your code) have over the states of everything in your system, the more deterministic your system behaves, the fewer unexpected bugs you have to chase down, the better you can reason about the computing you're doing and the programming you're responsible for making work.

I mean, what's the big advantage of Docker and the whole "container" movement? You are guaranteed (more or less) to have a known state. Why does resetting hardware fix problems? Because it resets things to a known state. Managing state is the problem, and functional paradigms are the solution.


> I mean, what's the big advantage of Docker and the whole "container" movement? You are guaranteed (more or less) to have a known state. Why does resetting hardware fix problems? Because it resets things to a known state. Managing state is the problem, and functional paradigms are the solution.

I would argue that, with Docker, you may or may not have a known state. You certainly have a fixed state, but whether or not the state is "known" depends on how you generated it. If you use something like Nix (or Guix) to create the image, then you have a chance of knowing the state and being able to understand it. But if you build the image in some other way, there's a good chance that you won't be able to understand the relationships between the components, and that you won't be able to change the relationships safely and effectively later on.


Agreed (and that is why I qualified it with "more or less"). Docker is sort of a halfway solution, but it's after the same end goal as functional paradigms: deterministic behavior that can be understood by a programmer (thus leading to a reduction in bugs) thanks to its known state(s).


I use Nix to build my Docker images and love it for that.

http://nixos.org/nixpkgs/manual/#sec-pkgs-dockerTools
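
For the curious, the manual section above boils down to expressions along these lines (a sketch modeled on the redis example in that section):

    with import <nixpkgs> {};

    dockerTools.buildImage {
      name = "redis-on-nix";
      contents = [ redis ];            # the package's closure goes into the image
      config = {
        Cmd = [ "/bin/redis-server" ];
        ExposedPorts = { "6379/tcp" = {}; };
      };
    }

nix-build produces an image tarball that `docker load` accepts.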


It sounds like you're blaming the users for not "unlearning" enough to be able to use NixOS.

As long as you have that attitude, you'll be driving away users. How do you think you can improve your software to make it easier for users to "unlearn" stuff you think is bad?


As a near-fanatic NixOS user (now!), I agree with both you and GP. The learning curve is brutal, so it took me a while to get started, but it's paid off several times over.

It's unfortunate that humans can't trust each other enough that saying "You'll like it after a few months" is good enough, but it's also an inevitable fact of life; for starters, the speaker might be mistaken. I don't think NixOS is for everyone, at least not yet.

And it's hard enough to run arbitrary software that, yes, you do need to learn the Nix language. In most cases it's as easy or easier to package the application for NixOS than it is to run it without packaging, which on the one hand is a benefit to the Nix ecosystem, but on the other hand can be a little annoying. There are workarounds...

But the workarounds require some unusual understanding to begin with.

I think one of NixOS' major flaws at the moment is documentation. There's the wiki, but it's both somewhat outdated and very much not comprehensive. As for Nix, it's been best explained in a series of blog posts, and as good as the Nix Pills series is, you have to find it before you can read it. It's also somewhat short on HOWTOs.

Most people don't like to spend hours writing documentation, though.

The existence of NixOS, and work done on it, is a massive positive externality. Even if it doesn't catch on, it's going to inspire half a dozen clones in the next decade or two. Unfortunately I don't see a good way to get more effort put into it, especially the boring bits (such as documentation)...

I suppose this comment is my way of trying to change that.


Would you suggest the Nix Pills series now? I notice it's 2 years old.


As a "common user", I'd say very much yes. It goes quite deep into the internals, and touches underlying ideas and basic mechanisms of Nix, which I don't think are expected to change [ever].

That said, note that it starts to be relevant when you're starting to write Nix expressions (meaning, write something in Nix language). Sometimes also when you want to read them and understand them. Which... is probably quite soon after starting to use Nix/NixOS, if you'll want to tweak some configs or add custom new packages. If you're really, really just doing first steps with Nix/NixOS, and/or just use the default packages with only simple tweaks, I'd say you don't need to read it yet (I personally wouldn't grasp enough basics then to understand WTF the articles are talking about at that point).


I would suggest it; not much has changed about the core of Nix/Nixpkgs. The Nix language is quite stable, and most changes have been additions rather than changes to existing features.


The Nix language hasn't really changed, so sure. Note that it doesn't say much about NixOS.


Because that's fundamental to Nix. We're giving up some current habits to get the freedom we want.

This won't ever change, but people might.

I'm not blaming them, I feel their pain.

This is common in technology all around, think about driving a car for 25 years with manual transmission and then trying automatic transmission.


While some things are fundamental to Nix's architecture, user interfaces are not. You don't even have to force people to learn the language; you could get an apt-get-ish experience for generating .nix files with git-like commits or whatever. There are no limits to making it user-friendly and respecting people's habits and familiarity (which is actually a core concept of interface design that you seem to be conflating with the new technology itself).


And that's exactly when shit hits the fan. Now they expect everything will work out with commands managing a language.

At one point, sooner or later, you have to tell them they just need to learn the language.

I'd rather spend time explaining the language than teaching them workarounds.

Having said this, we already do have ways to dodge the language, for example using imperative package management. But that's exactly how another such blog post could be written, about how it doesn't always work due to the "freedom" that Nix tries to provide.

Lesson learned here is: pick your fights carefully.


> I'd rather spend time explaining the language than teaching them workarounds.

Good user interface is not a workaround.

I don't think you understand the magnitude of the problem, though. By not caring about user experience and forcing people to learn the language, you are limiting your target audience to a fraction of professional programmers and pretty much closing the door to everyone else. This will not get Nix anywhere, really. Don't get me wrong, I think what Nix brings us is great and pretty much the only future for package management, but it's very obvious that it's not going to be Nix itself.


Faking destructive package management on top does exist, and we do plan on making it better, but it's a lot harder to get right than you might think.


I have some UI ideas of my own and could've helped, but I don't think investing time into Nix would be smart at this point. I feel like Nix lacks a clear vision of its future: it focuses on desktop systems, where package management is not a big problem, while on servers, where it is, no systems other than Linux are officially supported. No FreeBSD, OpenBSD, or NetBSD.


Overall, I don't think the situation is as dire as you say it is, but I do understand why you think these things.

> I feel like Nix lacks a clear vision of its future.

There are various high-level plans that I think have a decent consensus, but not enough people have authority to make things happen. This means that more interesting PRs often rot.

> focuses on desktop systems

Nobody actually prioritizes desktops over servers, and the core design is quite agnostic, but because Nix is over a decade old, lots of old code and documentation do make it seem this way.

> yet apart from Linux no other server systems are officially supported

Other unices do work for userland Nix, and Darwin has official binaries, which is the moral equivalent of official support. Also check out https://github.com/triton/triton, a fork of nixpkgs which aims to make NixOS work with the FreeBSD kernel, and https://github.com/cleverca22/not-os for making better immutable OS images.


This statement is completely wrong and ignorant.

Please don't say things if you don't know about them.

Just to prove my point, we've recently merged a branch for hardened compiler flags. This is mainly for server security.

https://github.com/NixOS/nixpkgs/pull/12895


Write a horrible hacky script that emulates 'npm install --save', see what users do with it?


We're not running a startup here, OK? It's not "grow or die". Priority #1 is to make sure all the proper abstractions are in place: foundations before furniture. And while we're pretty good on that front, there are a few things left.

That said, we've got some CLI stuff being rewritten and other work that will help new users.


It's really a collective problem caused by user requirements (e.g. needing packages X, Y and Z), upstream practices (e.g. assumptions made by developers of X, Y and Z) and NixOS's unconventional approach (almost everything living in /nix/store/<hash>-<name>).

There's no need to blame anyone, we're just caught in an accident of history. If there is a cause of problems, it's Nix's unconventional approach; but if Nix didn't take that approach it would be just another run-of-the-mill distro with no distinguishing features.

I'd rather live in a world where I can choose whether or not to use NixOS's features, at the expense of its rough edges.


I would love to hear your thoughts on Guix.


Guix comes up on every Nix thread on HN and it's a bit frustrating. Guix runs on Nix technology with a new config language for packages & a different philosophy about what can go into its core package set (it's a GNU project).

My personal take is that the difference isn't large enough to warrant a new distribution, and if you want to choose, you should go with the momentum, which is with core Nix (10-30x the contributors, 10-100x the packages, depending on how you count).

People also think scheme is better than Nix as a language, but if you reason backwards from the requirements that's not at all clear to me. I think Nix-the-language would be better off with types, but the lazy, functional & pure parts are necessary for the concept.


>Guix runs on Nix technology with a new config language for packages & a different philosophy about what can go into its core package set (it's a GNU project).

This is a common misunderstanding. Guix is not just a new config layer on top of Nix, it's a new implementation of the concepts pioneered by Nix. The only component that is shared between Nix and Guix is the daemon (written in C++), and they have diverged quite a bit at this point. Everything else about the core implementation (UI, the client for the daemon, recipe->derivation conversion, initial ram disk, init system, etc.) is implemented in Scheme and has pretty significant differences with Nix/NixOS. Even the Guix daemon will be written in Scheme eventually.

>People also think scheme is better than Nix as a language, but if you reason backwards from the requirements that's not at all clear to me. I think Nix-the-language would be better off with types, but the lazy, functional & pure parts are necessary for the concept.

This is another common misunderstanding. The part that is necessary for the concept is that the derivations, the things that the package recipes are transformed into to be built by the daemon, are computed lazily and act as pure functions. Guix certainly retains this quality. Also, if you look at the Scheme code in Guix you'll find that it's written in a purely functional style. I think Scheme's multi-paradigm nature is a strength, not a weakness.


Apologies for misrepresenting - my Guix knowledge is clearly out of date!

OTOH, I disagree with many of your choices and the assumptions you based them on. But I also think that this would be easier to discuss in a civilized face-to-face than over the lossy, easy-to-misinterpret medium of text on the Internet!


I agree that the endless Nix vs Guix discussions are getting tiring.

As a contributor to both, I'll try to explain why Guix won't make Nix obsolete any time soon:

Guix is an official GNU project, meaning they won't ever support any proprietary code. The Linux kernel is "Linux-libre", which is a Linux fork with all proprietary drivers removed, so it probably won't run well (if at all) on your laptop. Whereas on NixOS you'll have things like Steam working out of the box.

Guix also has an unfair advantage in that they started "from scratch", while Nix contains a lot of legacy cruft from "paving the way". The `guix` tool/frontend is a lot better than the various "nix-*" commands, but there is work underway for a similar unified interface to Nix.

Finally, the language: Scheme is more pleasant to work with, but I find system configuration a lot more intuitive in Nix. NixOS also uses systemd/udev which is more familiar for most people than the maturing Shepherd service manager in GuixSD.

TL;DR: Nix and Guix serve different purposes through similar means. They are both excellent distributions and clearly represent a new era of operating systems. Most people would want to go with NixOS if they want a minimal-hassle functional operating system; but if you are just looking for a decent package manager for your favourite distro, then Guix may be a better choice (as long as you can live without any non-free software).


> Guix comes up on every Nix thread on HN

Sorry, and thanks for taking the time answering.


Not OP, but I would go with Guix for two simple reasons.

1. Both the packaging format and the init format are Scheme.

2. Because of the second part of 1, no systemd.

That said, I am already a happy user of a similar(ish) distro, GoboLinux.


I love that Guix chose a nice, clean (i.e., S-expression-based) representation for all of its code and data, but it's unfortunate that they chose Scheme instead of Lisp.


I much prefer Scheme (and Racket) to the other Lisps I've tried (Common Lisp and Emacs Lisp), since it emphasises functional programming.

Since the ideas of Nix come from functional programming (e.g. conceptually, the Nix store contains all packages, the "install" commands just force their evaluation) Scheme seems like a closer fit.


Functional programming is cool, and it's possible in Lisp, but I prefer multi-paradigm. Sometimes one wants functions, sometimes procedures, sometimes objects & methods; sometimes one just wants to declare stuff.


Actually, iElectric was one of the people who encouraged me to learn the Nix Expression language. Now we're doing stuff over at github.com/fractalide/fractalide that I could never have imagined. The promised gain iElectric hints at can only be earned by learning the Nix Expression language.


As a NixOS user I've certainly encountered some of these issues. While Nix and NixOS are really nice, the first thing I'd recommend to a new user would be to learn the Nix language, as it's pretty much a requirement for getting anything done.

I agree with the author that per-user configuration is a bit overkill for the common use-case of a single user machine, but you have to go out of your way to use it (via the "nix-env" command), so I think it's a bit of a non-issue. Since Nix can be used from any user's home directory anyway (this is often how it's used on non-NixOS systems), it's not necessary for Nix/NixOS to contain this per-user feature; however, by doing so they gain a huge space optimisation by re-using the same store. The same can be said for NixOS containers, which I don't use but I see why it's there.

As for making packages available at runtime, that's what the "propagatedBuildInputs" attribute is for. Sometimes Nix can actually automagically figure out runtime dependencies, since it looks through files as they're installed to see if they contain references to other store paths; if so, those paths are added to the runtime dependencies.


Love NixOS. I spent about a month in it. While it did feel very complete, I ended up reverting to Arch just for lack of time. I agree that you have to learn the Nix packaging language in order to make good use of the system. When I have more time, I will get back to it.

Another issue I had was around config methodology - specifically for the desktop use case. For example, in Gnome I wanted to tune my brightness and mouse sensitivity. In most desktop cases you want to adjust those things every session and have that remembered between reboots. But that violates the Nix config system. Perhaps there is a middle ground where those session specific configs can be stored instead of putting them in the global config.

Also, the Nix package manager is great - mostly because it is platform-agnostic - but the real power is in NixOS's global config file, which is essential for SERVER stability.

I will say that it was an enlightening experience as a whole, and some of these concepts are being taken very seriously in other projects. I can't wait for non-deterministic build systems like Puppet and Salt to be a thing of the past. Nix, Guix, Snap packages (and the like), and Docker are much safer and more dependable.


No, that is not what `propagatedBuildInputs` is for. `propagatedBuildInputs` should be used very sparingly, in most cases wrappers are the better solution.

For example, consider if every package that contained bash scripts would put bash in `propagatedBuildInputs`. Then I could not use a different version of bash for my user environment than those packages I installed used!

Also, `propagatedBuildInputs` are not even propagated into the user environment; the attribute that does that is `propagatedUserEnvPkgs` (not sure if that name is fully correct), and it is used very sparingly within nixpkgs.


> `propagatedBuildInputs` should be used very sparingly, in most cases wrappers are the better solution.

propagatedBuildInputs is a widely-applicable hammer, and the article's tone made it sound like the author wanted a quick fix. Wrappers certainly provide a better outcome, but require more thought and case-by-case caveats; i.e. they're more appropriate for widespread adoption in nixpkgs, but not so much when throwing something into packageOverrides to stop a build script complaining.

> For example, consider if every package that contained bash scripts would put bash in `propagatedBuildInputs`. Then I could not use a different version of bash for my user environment than those packages I installed used!

Well, wrappers aren't the answer here either; fixing up the shebangs is. Functions like stdenv.mkDerivation will do that automatically.

I've not come across propagatedUserEnvPkgs before, but I also don't make much use of the user environment. I spend most of my time in nix-shells :)


Perhaps that was a feature added after the article was written. It dates from January.


The Nix language itself is very small and simple; much of the functionality comes from the libraries which come bundled in the "nixpkgs" repository. The normal way to define a package is via the stdenv.mkDerivation function, which has accepted propagatedBuildInputs for at least the last few years (as long as I've been using Nix and NixOS).

To make things even easier, there are functions targeted at specific languages, like buildPythonPackage. That's taken propagatedBuildInputs for as long as I can remember, and since Python is interpreted that's basically the default way to use it (there are very few Python packages which are only required during build/installation time).

If the worst comes to the worst, you can always just define a "buildCommand" attribute containing arbitrary bash code. It'll be executed in a temporary directory, and inherit a bunch of environment variables (the most important being $out, which is the path which Nix will install).
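
As a concrete (and entirely hypothetical) sketch of the buildPythonPackage case mentioned above:

    with import <nixpkgs> {};

    pythonPackages.buildPythonPackage rec {
      name = "mytool-1.0";        # hypothetical package
      src = ./.;                  # build from the local checkout
      # Runtime dependencies of an interpreted package go here,
      # as described above.
      propagatedBuildInputs = [ pythonPackages.requests ];
    }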


The "wrapProgram" function used in builders takes a --path (IIRC) parameter that helps with this, useful in conjunction with the "makeBinPath" function which takes a list of packages and concatenates their path entries.


I used arch for a long time, but got tired of having to fix things every time I upgraded the box. And got tired of having to upgrade the box every time I wanted to install a new package, and found that my installed version of arch was too old.

Nix's language is a giant PITA. But the box runs quite solidly. Hell even my nvidia optimus works - for the first time on any distribution.

For things that don't work well, I shove them into a docker image. Like the arduino IDE.


I used nixos for a few weeks and then went back to Debian.

* I share the author's concern about the symlink farm. It is scary! I would like it to be dealt with at the filesystem layer (Plan 9 had a snapshot-based filesystem - fossil - years ago). Symlinks have all sorts of weird semantics on different Unix machines.

* Another of my gripes with NixOS is that it makes Unix a single-user machine! Sure, packages need not be installed in a user-local way. I may be ignorant of other possibilities here.

* More care for licenses. I still use Debian because they really care about licenses. Last I looked, NixOS was in no way close to Debian in terms of documenting the various copyrights and licenses of files pertaining to a package.

Otherwise, NixOS is a great idea and a huge step forward.


Regarding licenses, most (all?) packages in the official nixpkgs repo have license metadata, and you can set your configuration to forbid/allow proprietary packages (the "allowUnfree" option), or use a whitelist (e.g. forbid everything except Flash).

Guix is a GNU project, and doesn't allow proprietary packages in their collection at all.
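
For reference, a sketch of how that looks in a NixOS configuration (the "flashplayer" name in the whitelist is an assumption on my part):

    {
      # Allow all unfree packages:
      nixpkgs.config.allowUnfree = true;

      # ...or instead whitelist specific ones, e.g. only Flash:
      # nixpkgs.config.allowUnfreePredicate = pkg:
      #   builtins.elem (builtins.parseDrvName pkg.name).name [ "flashplayer" ];
    }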


Yes. But the amount of information in the debian/copyright file is a lot more than just one scalar flag. I think that level of detail on the licenses is extremely important.


Why do you believe it makes Unix a single-user machine??? I'm not aware of any reason for such a statement; can you elaborate?


I think I should have been clearer. It does not make Unix a single-user machine, but it is more useful in cases where a single user is the main user of a machine, like a laptop.

My reasoning was very simple. NixOS makes it easy to install packages on a per-user basis. That would mean there is a lot of redundancy if another user also needs the same package. A snapshot/dedup filesystem would easily solve the problem.


Multi-user Nix systems use a "nix daemon", which users' commands send requests to ("please build this version of Firefox with this version of GCC", etc.); all of the results go in the main "nix store" (usually /nix/store).

Nix doesn't store duplicates; a hash is calculated, based on the inputs (source code, compilers, libraries, etc.) and if an output with that hash already exists, it will be used. If not, the configured binary caches will be queried, to see if a pre-built binary can be downloaded. If not, the inputs are fetched (following the same process) and the build is performed.
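
The "configured binary caches" are themselves just an ordinary option; as far as I remember, the default is something like:

    {
      nix.binaryCaches = [ "https://cache.nixos.org/" ];
    }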


Thanks for educating me. I was totally mistaken. Thanks.


When a user installs a package, it goes into the system-global package store, and the user's symlinks are updated. When a second user installs the same package, they just get a symlink to the original. There isn't any duplication.


Thank you.


Nix profiles link to the same global store, so user packages are not duplicated.


Thanks.


> Symlinks have all sorts of weird semantics on different Unix machines.

Is there any other implementation than posix in the wild these days?



Well, are there any advanced filesystems that provide better functionality and which work on Linux and OS X? Or what would be the alternative?


btrfs perhaps? or zfs?


Plan 9 namespaces are a perfect solution for this kind of package isolation. I can imagine a perfect NixOS/Plan 9 hybrid, if only APE were good enough. Perhaps people(TM) should really pool resources together and make that happen.


It's very early in the project lifecycle, but that's along the lines of what I'm planning for my hobby OS[1]. Right now I'm planning on using Genode/seL4 for the kernel and doing a plan 9-style userspace in Rust. Genode has a Linux virtualization/compatibility subsystem that I hope to use to bootstrap Linux "flavored" namespaces.

[1]: https://www.heartcore-os.org/


Funnily enough, I tried the same sort of switcheroo a few weeks ago and also ran into too many issues for it to be useful for me at the moment. I tried to contribute back fixes for the problems I ran into, but just couldn't figure out how to fix all the problems I had with the way things were being done :( PostGIS and a few Gnome 3 desktop issues basically drove me back to Arch, where stuff just works.

I'm sure I'll try nix again, and again, until maybe one day it just works as well. I really do love the features it brings not just on paper but in practice.


Yup, same. Nix was awesome, and I was convinced that for my laptop (OSX) and my home server, it was all going to be NixOS and Nixpkgs from here on out.

Then I started running into external dependencies... one after another, having to patch them to make them work with Nix... it was just such a headache, and honestly I was in way over my head. I am not that well-trained a Linux user, but Nix sort of required me to be. So I had to back off for now.

I really hope I can use it again soon.


Correct me if I'm wrong but isn't the better solution to this containerization? The application will in a literal sense own the system.

You also get inherent security (unless someone figures out how to escape their vm).

I remember someone had a website talking about this: the use of a filesystem with revision tracking, containerization for security, and sending files between VMs to make things work.


Containers solve a related, but different problem. They help you black box some application so that you can split out the application state (on volumes) from the application itself – and you don't have to worry about things trampling on each other.

Nix gets you a reproducible environment – which is just as applicable to building containers. Most containers on Docker Hub would be very hard to reproduce exactly; if you build them with Nix, you don't have that issue.


The great thing about NixOS is that you can describe the entire state of your system in configuration files, and then after running `rebuild` know that what's on your computer _exactly_ matches that configuration.

Can you do that with containers?

For instance, I'm on NixOS `16.09.git.20f009d (Flounder)`. Here's my configuration: https://github.com/seagreen/vivaine. I know my machine exactly matches the combination of that NixOS version + those config files. My impression is that this is messier with containers because while you can make a snapshot and know the snapshot is frozen, as you modify snapshots over time they'll begin accumulating cruft due to things like installing programs, uninstalling them, snapshotting, but still having some leftovers from the install. I'd like to hear from someone with more container experience here though.
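
To make that concrete, a configuration.nix is just a (usually short) Nix module. A minimal sketch (hostname, user, and package choices are placeholders, not taken from the repo above):

    { config, pkgs, ... }:

    {
      imports = [ ./hardware-configuration.nix ];

      boot.loader.grub.device = "/dev/sda";
      networking.hostName = "example";

      services.openssh.enable = true;
      environment.systemPackages = with pkgs; [ git firefox ];

      users.extraUsers.alice = {
        isNormalUser = true;
        extraGroups = [ "wheel" ];
      };
    }

`nixos-rebuild switch` then makes the running system match this file (plus the pinned nixpkgs version).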


Is there any real advantage to that as opposed to, say, running this on my Arch machine...

   yaourt -Qe|grep -v "(base"|grep -v "(xorg"|grep -v "lib32"|cut -f 2 -d "/"|cut -f 1 -d " " > package_list.txt

Then to reinstall...

   pacman -S - < package_list.txt

And copy in my home directory. After a reboot, everything should work the same way I have it set up (I keep every change I possibly can in my home directory; it's everything aside from two scripts that aren't actually needed).

The difference is I generate my config via usage, not via definition. I don't actually know what I need to use my computer, I've just always used it. I can back everything up and create essentially the same machine (since my distro Manjaro is rolling) every time I need it.

For me, this is a band-aid fix for a completely different problem: the lack of rolling distributions and the lack of storing old versions. These are both package manager and OS developer problems. Once we leave the space of OS revisions, rather than having "The OS" and having it be up to date or out of sync with the main OS, we will fix these problems.

I don't know, am I seeing something wrong?


You don't get features like transactional rollback to prior revisions of your system definition, which is never an important feature (until it suddenly is). This also seamlessly allows non-privileged multi-user package management without duplication of programs or anything like that, so you don't need to poke your system admin or alternatively pollute $HOME (and in turn pollute your bash/zsh config for library paths, require fixing up include paths for anything you compile using $HOME stuff, etc). This feature is actually particularly important, because it allows you to set up individual, 'hermetic' (or 'pure') build environments for every project, using 'nix-shell', even on multi-user machines.

If the project supports Nix, you can just go in and run 'nix-shell', and your machine is magically populated with the needed dependencies to build it (or using already existing ones), and you drop into a new `bash` with a cleaned environment and custom $PATH for that project. You can then leave that shell (after you submit your patch to an upstream project or whatever) and garbage collect the installed dependencies.

The model also permits things like transparent, remote multi-system/architecture builds (why build the Linux kernel in my chosen configuration on my laptop when I have a 16 core server somewhere else, why SSH into my OSX machine to run a build when Nix can do it with the same description, etc,) -- and in the future when binary determinism is worked out, a single checkout of the Nixpkgs repository will (hopefully) be able to serve as completely-reproducible build chain, allowing anyone to produce identical binaries and identical Nix packages. (And since Nix works on any Linux system, you can run that build anywhere).

NixOS of course supports general container technology as you mentioned in your first post, so you can also use that to your hearts desire.

In practice NixOS is quite powerful but there are still many glitches and user-facing UX issues that are problematic. It is definitely not for everyone.


I don't think so, that looks like the setup I'd like to be using if I was on a more traditional distro.

One issue might be the handling of dependencies: does `yaourt -Qe` list everything you have installed or only programs you've given a specific command to install?


Containers are but one type of virtualization. What manages the host machine running the container? Nix (and Guix, the project I work on) can be used for everything: bare metal, virtual machine, container. Furthermore, the packaging strategy also allows for granular virtualization of environments. Sometimes all you need is to set a few environment variables ($PATH and such) to point at a specific set of software, and this is made easy because Nix doesn't use a global /usr directory and has a notion of "profiles" that allow you to pick and choose the software you'd like for any given project. If all you have is containers, everything looks like a crate.


Containers don't handle the case where two different versions (or otherwise conflicting variants) occur in the whole dependency tree. For instance, consider program A with direct dependencies B and C, where B needs version X of /usr/bin/D but C needs version Y of /usr/bin/D.


I just remembered that the thing I was talking about was SubUser [0]. You can read their manifesto since it better describes everything that they are looking for. Their solution is to keep file revisions so if you need an older version of a file it is still there.

[0] - http://subuser.org/


Trying to set up NixOS on an armv7 board (NVIDIA Jetson Tegra K1). There is a customized Linux 3.10 kernel that comes with the board, and I'd like to know how to integrate that into NixOS. Currently I managed to boot the board using a generic armv7 image running from an SD card, which I got from this page: https://nixos.org/wiki/NixOS_on_ARM . But I cannot activate any services like ssh, or install drivers like nvidia, cuda, iwlwifi, and bluetooth on Nix. Any support will be appreciated. The NixOS manual doesn't distinguish between different architectures, and I think it's mainly written for running NixOS on amd64 PCs or similar architectures, not ARM. What am I missing here? Where should I look or what should I read to set up a properly running system on my ARM board? Thank you!


So this probably just saved me a trip down NixOS road, which was on my (ever too long) todolist for some time now.

Last year, I finally put my private systems under configuration management, but just like the author, I was concerned that both the configuration management tool and the system package manager like to exert complete authority over the system without knowing about each other, a recipe for disaster.

In the end, I wound up writing my own configuration management tool, Holo [1], which relies on system packages to install applications and deliver configuration files, and implements the additional logic to provision things that the package manager does not understand (users, groups, SSH keys, etc.).

One thing that's missing from Holo right now is a validation pass that the whole system matches the expectation described by the configuration packages. (Right now, only files that were touched by Holo can be monitored for changes, but not files installed by the package manager.)

But it turns out that that's a hard problem. A normal Linux system has a lot of moving parts that are not accounted for by the system package manager. See for example the positively huge list of exceptions in [2]. Of course, the bulk of these quickly-changing files are data, which can be skipped easily by just ignoring /run, /tmp, /var/cache, /home and so on. The question remains what to do with the rest: implement some sort of parsing for auto-generated configuration like /etc/passwd, /etc/ssl/certs and so on, or just rely on snapshots of known-good states designated by the administrator? I'm still undecided.

[1] http://holocm.org (the site seriously needs an update)

[2] https://github.com/graysky2/lostfiles/blob/master/lostfiles


Not sure if I'm reinventing the wheel, but incidentally I've been working on this lately:

https://github.com/CyberShadow/aconfmgr

It's like basic traditional configuration management, but it's two-way: it takes the difference between the user-managed configuration and the current system state, and rectifies this difference by either editing the configuration or the system. A second invocation is a no-op. If your configuration is in version control, this allows easily tracking any unaccounted changes to the system.


I used NixOS for about six months, and while I think the core of NixOS is great, I never liked KDE and how much NixOS kind of forces it on you.

Gnome support is kind of horrible, and it felt like I either had to commit to something incredibly low-level like XMonad, or KDE.


I don't think many NixOS users use desktop environments, generally speaking, and NixOS users tend to wind up being NixOS contributors as it stands right now. XFCE's available, though, for something lightweight.


I mean, the Live-CD has KDE running by default, so I kind of assumed that was the standard.


It had to have something, right?

There's no 'standard', but KDE is used more often and therefore has fewer bugs. XFCE works fine. If you want Gnome, well, I haven't tried it, but they always welcome patches.


Am I the only one that caught the homestarrunner reference?



