Revisiting How We Put Together Linux Systems (0pointer.net)
110 points by vezzy-fnord on Sept 1, 2014 | 58 comments



So the proposal is another layer of indirection, à la namespaces, à la where-have-I-seen-this-before.

I'd appreciate it if they'd stop thinking that what they're doing is for the benefit of everyone instead of in their particular interests.

I wish someone, anyone, would officially pull udev out of systemd for the sake of all Linuces.

But in reality, I'm betting on hypervisor-based micro OSes as the way to go. We don't need another "app" format, like what Docker/libcontainer and this proposal are gravitating toward. We already have one, it's called HTML5/REST/HTTPS. Get that in your skulls people! A tar-ball is an app that serves the directory tree, okay. A POST virtualizes this, like a copy-on-write fork(). Now you have to determine how you virtualize on sessions/user logins/etc. There is nothing in this decade that needs to be done any other way!


> So the proposal is another layer of indirection, à la namespaces, à la where-have-I-seen-this-before.

Every year or two another group of people gets it into their heads that there is "something wrong" with the way distributions do things, and they have "the answer". (Often the answer involves a lot of new names for things.) These groups always make a lot of noise, launch a bunch of new projects to reinvent a lot of wheels "the right way", and draw a bunch of media attention.

All of the users who like this sort of thing immediately desert the previous group and leap to this new one. Last year's group somehow manages to become even louder as their userbase dwindles.

Meanwhile, the established distros dig into what this latest group is doing, and finally manage to cut away enough of the rhetoric to identify what real advantages it has. This usually boils down to a couple of features buried behind several gigabytes of blogs, mailing lists, wikis, and enthusiastic-but-content-free rants. The established distros usually say "yeah, that could be handy" and implement these new features, without rewriting the world.

Sound familiar?

This one looks like some folks decided docker wasn't complicated enough and renaming everything would let them make a new thing. If you look through their summary proposal, it's a list of things you can already do - albeit with no single distro you can install today that has all of them working out of the box (largely because cryptographic verification and docker are still works-in-progress - they'll get there).


That's normally harmless... But this time it's systemd developers doing the dance, and they have enough leverage to hold several distros hostage to their new ways.

Or maybe that sort of thing is what is necessary to make somebody replace systemd for once.


> Every year or two another group of people gets it into their heads that there is "something wrong" with the way distributions do things, and they have "the answer".

Even the Nix package manager, which I think is doing something usefully different?

Having just spent too much time trying to compile PHP 5.5, 5.4 and 5.3 + extensions on the same Ubuntu 14.04 box, and failing due to small-but-incompatible underlying library changes (especially FreeType), I'd really appreciate it if established distros learned from their model.
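
With Nix, that kind of side-by-side setup is roughly a one-liner. A minimal sketch, assuming nixpkgs exposes attributes for those PHP releases (the attribute and channel names below are illustrative):

    # drop into a shell with two PHP versions at once; each carries its own
    # closure of libraries, so the FreeType mismatch stops being your problem
    nix-shell -p php54 php55

    # or install one of them into a per-user profile
    nix-env -iA nixpkgs.php54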


But in reality, I'm betting on hypervisor-based micro OSes as the way to go. We don't need another "app" format, like what Docker/libcontainer and this proposal are gravitating toward.

That approach works well for web apps, but not for desktop applications, of which there are (luckily) still thousands.

I agree that this seems over-engineered and a move to tie everything into the systemd ecosystem.

Frankly, I don't see why an approach like OS X's application bundles would not work. Sure, since Linux libraries are generally less ABI-stable, you'd end up packaging more libraries in the bundle, but disk space is cheap. Slap on some MAC (mandatory access control) and you'll also have sandboxing.

PC-BSD's PBI seems to go more in the right direction:

http://www.pcbsd.org/en/package-management/


The problem with "application bundles" is that a lot of libraries are involved in interactions between programs. Can libgnome from 3.10 interact with libgnome from 3.12? What happens when you try? How about when you've got 20+ applications each using a different version? I'm not sure anybody really knows the answer to these questions, because we normally go to great lengths to avoid finding out.

OSX "solves" this problem by not having it: all the major libraries are supplied by Apple and are part of the system, not part of the bundle. So in practical terms, OSX does it the same way as the linux distros: on the vendor's timetable, and upstream authors have to follow along.

You can bundle things like zlib, which have a very well-defined API and never really change... but then you have a horrible mess when there is a security flaw in zlib and you need to update every application installed everywhere. But stable libraries like zlib are exactly the sort of thing that regular shared libraries work great for, and it's the fast-moving unstable APIs like libgnome that cause all the grief.


Nothing in that proposal is systemd specific. It only requires btrfs and a bit of support in the initrd.


No, it doesn't require systemd, and that's the annoying bit.

    I now want to take the opportunity to explain a bit where we want to take this with systemd in the longer run

    The systemd cabal (Kay Sievers, Harald Hoyer, Daniel Mack, Tom Gundersen, David Herrmann, and yours truly) 
    recently met in Berlin about all these things, and tried to come up with a scheme that is somewhat simple, 
    but tries to solve the issues generically, for all use-cases, as part of the systemd project
If this doesn't require systemd, why should it be part of the systemd project?


Because Kay Sievers SAID SO! <insert boilerplate insults>

:-)


I wish someone, anyone, would officially pull udev out of systemd for the sake of all Linuces.

Already happened a while ago, much to the collective and highly amusing derision of many pro-Freedesktop developers: http://www.gentoo.org/proj/en/eudev/

I'm curious how they'll handle the libudev migration from netlink to kdbus, though.


For simpler systems, there's also mdev from busybox and toybox (no 3d engine included). It's usually enough for headless servers or VMs and fits in neatly with making these "micro OSes" even more lightweight.


> But in reality, I'm betting on hypervisor-based micro OSes as the way to go. We don't need another "app" format, like what Docker/libcontainer and this proposal are gravitating toward. We already have one, it's called HTML5/REST/HTTPS. Get that in your skulls people! A tar-ball is an app that serves the directory tree, okay. A POST virtualizes this, like a copy-on-write fork(). Now you have to determine how you virtualize on sessions/user logins/etc. There is nothing in this decade that needs to be done any other way!

What you're describing sounds interesting. Is there a way I can read about what you said in more detail? I mean the following:

- HTML5/REST/HTTPS can be thought of as an app container format?

- tarball is an app that serves the directory tree.

- POST virtualizes it like a copy-on-write fork()

I'm trying to connect it to what I've been thinking: that Javascript is permeating everything and is like the assembly language of the future, because everything would be done inside the browser and the OS would only be there to provide hardware (based on Gary Bernhardt's talk: https://youtu.be/tMEH9OMYmsQ).


Is this a joke?

The standard node.js joke but more disguised?


The beautiful thing about it is that you have no way of telling, and that there are a lot of people who earnestly believe it.

It's also quite plausible, given the up-and-coming category of web-based mobile OSes foreshadowing it.


>But in reality, I'm betting on hypervisor-based micro OSes as the way to go.

For all this talk about "the sake of Linuces," how does it help to spread a bunch of crap around multiple virtual machines?


Hidden in these sorts of proposals is an underlying assumption that being able to install the newest version of some arbitrary program the day it's released and have it Just Work is something that (a) people actually need, and (b) this is a solvable problem that doesn't involve significant tradeoffs in usability, cognition, third-party integration (which is more important in server environments than people realize), or security.

I've been working with Linux systems for over 20 years now. Despite the grumblings of wet-behind-the-ears programmers to the contrary, having the newest version of package Foo is almost never necessary and one can generally get things done on a 1, 2, even 5 year old version of some operating environment. In fact, I'd argue that allowing time for software to mature is a Good Thing -- in the meantime, bugs get fixed, regressions in libraries are reverted, semantic versioning violations are repaired, etc, which reduces the amount of needless headache you have to deal with when you just want to Get Work Done.

Of course there are exceptions (I would never ask anyone to write server code under Ruby MRI pre-2.1), but those conditions are readily dealt with: you compile what you need (perhaps statically), put it in a tarball in /opt, make some symlinks to /usr/local (or update your PATH and other environment variables) and move on to the next actual problem.
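
A minimal sketch of that workflow, with a self-built Ruby as the stand-in (paths and versions here are made up):

    # build the version you actually need, isolated under /opt
    ./configure --prefix=/opt/ruby-2.1.2 && make && sudo make install

    # expose it via symlinks...
    sudo ln -s /opt/ruby-2.1.2/bin/ruby /usr/local/bin/ruby

    # ...or via the environment, per user or per service
    export PATH=/opt/ruby-2.1.2/bin:$PATH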

Yes, standards and "old packages" can be annoying if you're used to a greater level of control on your personal equipment. But fences and constraints can be good: they keep your eye on the problem you're trying to solve; they keep the goalposts from moving; they minimize combinatorial compatibility problems; and they allow people to leverage their zones of expertise at the appropriate layers.


> Of course there are exceptions {...} put it in a tarball in /opt, make some symlinks to /usr/local

This becomes unmanageable once you need to do that for multiple applications. Different applications need different versions. I'm not that familiar with Ruby, but you can imagine different versions of Ruby itself needing different versions of system libraries. An upgrade of your OS could become incompatible with the Ruby version you just compiled yourself.

It's good that some people are looking for solutions to this problem. It's a worthwhile effort, even though it might not be directly applicable to everyone. The same was true for systemd a number of years ago.


In practice it hasn't really been much of a manageability problem at any of the (reasonably large) companies whose servers I've maintained. There are perhaps five or six mission-critical packages that need to be kept up, and since we only support two target distributions, the order of complexity is quite small.

The key is to define your constraints clearly and not to deviate from them unless you have to.


The article is completely beside the point. He's basically ranting about how they can't test systemd on all distros, because every distro does things differently. How to fix it? Virtualize distros, yay. That may fix your problem, but no one else's.

The real point is that, due to distro fragmentation, Linux gives the casual user an inconvenient shared-library linkage experience. At the same time, the distros are really good at packaging software, which gives us the "one-click install" experience we're so used to. But they do so at the price of compatibility, so we have the current distro lock-in situation.

Anyway, as an upstream software developer, I see the library-linkage side of this coin: when my program starts, the dynamic linker resolves the big list of symbols my program contains and finds me matching libraries. You can extend this paradigm to icons, locale data, and other support files as well.

So who tells the dynamic linker which symbols to match? Well, the contents of the shared libraries, and these already contain versioned symbols (e.g. this memcpy@1.0 does this, memcpy@1.1 doesn't do that anymore). The easiest solution to the matching problem is static linking. But that's not what we really want, because we may actually want to replace symbols with better versions (due to security fixes, faster implementations, a superset of features, etc).
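
You can see that machinery on any glibc-based system; a quick sketch (library path and versions vary by distro, output abbreviated):

    # the versioned symbols a library exports
    objdump -T /lib/x86_64-linux-gnu/libc.so.6 | grep ' memcpy'
    #   ... memcpy@GLIBC_2.2.5      (old behaviour, kept for old binaries)
    #   ... memcpy@@GLIBC_2.14      (default version new binaries bind to)

    # the versioned symbols a binary actually requires
    objdump -T /usr/bin/ssh | grep GLIBC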

And who versions symbols? Well, usually the upstream developers of the important libraries do. And they already do that remarkably well, but it could be better. If your program uses an upstream that doesn't version things - well, bad choice; static linking is probably the best solution there.

So what I would propose is to move the problem down to a linkage problem, with better consensus between binary creators and upstream authors on how to version symbols. And if my system currently doesn't have a matching symbol installed? Just download it from the distro's well-versioned repositories - maybe not the whole gtk+ library, but a diff of it against another version.

We would then end up not with a package installer, but a fetcher-of-missing-versioned-symbols-for-a-binary installer. But this model is pretty far-fetched, since the current method of compiling code into monolithic shared libraries is much simpler. On the other hand, the Linux dynamic linker is already very intelligent; maybe it's time for the large distros to cooperate on this level.


I usually try not to be blatantly negative, but this is just stupid. Most users do not care how quickly new versions of software get to them. The current distro packaging system is just fine. Leave it to Poettering to come up with something complicated for no good reason.

The funniest part is that he's using systemd as an example. That's exactly the kind of software that I want to change very rarely, and I want those changes to be vetted and integrated by someone intimately familiar with the distro I'm running. The idea that I'd run an upstream-packaged version of something like systemd is absurd.

The proposed packaging system is just another system incompatible with all the others out there.

Want a standardized system image? Build one from your distro of choice (guess what: your users don't care about that either), uninstall dpkg/rpm/whatever, make a disk image. Automate the process so it's reproducible. Done.
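
A rough sketch of that image-building step with Debian tooling (image size, mount point and mirror are placeholders):

    # create and format an empty image
    truncate -s 2G system.img
    mkfs.ext4 -F system.img

    # install a minimal Debian into it, then seal it up
    sudo mount -o loop system.img /mnt/image
    sudo debootstrap stable /mnt/image http://ftp.debian.org/debian
    sudo umount /mnt/image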


>Most users do not care how quickly new versions of software get to them.

Users do indeed care about getting bugfixes and new features in a timely manner.

I'm not sure if this is the solution to that problem, but it would be good to have a way to make pushing packages downstream easier - decoupling the configuration/integration, testing and security code review steps.

This is a problem that the distros themselves need to fix, though, not Poettering.


Sadly, this is only true if they are affected by the bugfixes. And, honestly, most users need to be taught to avoid upgrades that include massive feature creep that actually has the potential to pose security and stability problems.


The current packaging systems have problems. You cannot install just any version of any application on your system, but a lot of people want that.

A nice example is games. You want to install a game on your system without working around the package manager. That was very hard to do if you weren't on the distro that the game was built for. For instance, if the game was just released, there was little chance it would work on Debian stable...

Steam has solved this problem by using their own package manager and their own set of 'approved' libraries that other games must link to. Steam always ships with the set of 'approved' libraries, just so that it can side-step the libraries on your system.
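
The mechanism behind that side-stepping is mostly just the loader's search path; a simplified sketch (the real Steam runtime scripts are more elaborate, and the paths here are invented):

    # resolve against the bundled 'approved' libraries before the system ones
    LD_LIBRARY_PATH=/opt/some-game/lib:$LD_LIBRARY_PATH /opt/some-game/bin/game

    # check which copies actually get picked up
    LD_LIBRARY_PATH=/opt/some-game/lib ldd /opt/some-game/bin/game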

The same is true if you want to build a piece of software that seems to be incompatible with your system: it needs a different GCC, it might need a different version of Gnome libraries.

Also, disk images aren't as simple as you make it seem. If you have a system and want to upgrade one of your applications (but leave the rest as they are!), you aren't going to like the current package management tools.

These kinds of problems happen often for a lot of people and a solution is highly appreciated. That said, the solution in the article probably isn't the best.


> You cannot install just any version of any application on your system, but a lot of people want that.

That's the price of shared libraries (which you're trading off for easy security updates).

If you don't like this, you're still free to run statically-linked programs.

> Steam has solved this problem by using their own package manager and their own set of 'approved' libraries that other games must link to. Steam always ships with the set of 'approved' libraries, just so that it can side-step the libraries on your system.

So it's OK for Steam to do this, but not the OS vendor? I don't follow your logic.


This could have been solved at a different level. If the right architecture had been in place, Steam wouldn't have needed to solve it.

You can have multiple shared libraries of the same name, but different major versions. Applications that need the same version can use that same version. Applications that need different versions can use different versions. The package repository shouldn't be the conflicting factor.


Most distributions have provisions for installing multiple major versions of a shared library (they change the package name to libfooX to accommodate this).
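
Debian's libssl packages are the classic example of that convention; something like this is perfectly normal on a single box (versions shown are illustrative):

    # two major versions installed side by side under different package names
    dpkg -l 'libssl*' | grep '^ii'
    # ii  libssl0.9.8  ...
    # ii  libssl1.0.0  ...

    # the sonames keep them apart at runtime
    ls /usr/lib/x86_64-linux-gnu/libssl.so.*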


You cannot install just any version of any application on your system, but a lot of people want that.

My thesis is that no, a lot of people don't want that. Who are we talking about here? The "self compile from source tarball" crowd grows smaller and smaller. The kind of user that is going to want a pre-installed OS image or the like does not in general care about updates to the software they have running.

Most people just install and forget, and if it weren't for in-app auto-update, most people would never upgrade.


I agree with most[1] of the issues he is aiming to solve. But the approach would have been far more useful had he built it on top of a superior existing system, namely Nix & NixOS. Docker is increasingly popular while missing most of the problems NixOS (and this new systemd vision) solves, so I'd say it is not wrong to assume there is definitely solid demand for this kind of app "packaging" (virtualization).

Tying this to systemd is an "interestingly" bold move, to put it mildly. People don't switch to btrfs just for the sake of an initrd replacement, after all, and forcing them is openly asking for replacements. So I would be very careful here if I wanted to see that vision materialize.

[1]: One obvious point of discussion is declaring exact versions of library dependencies, which in most systems leads to highly undesirable results: just compare the horrible world of Maven packaging (i.e., the Java world, mostly) to the efficient (but curated) world of Debian packages. But when you have exact declarable dependencies (where any dependency dictates exact versions of its own dependencies), reliably reproducible builds (the game-changing factor), and hopefully some level of automated testing, you will be fine: you can quickly upgrade your whole package portfolio to the newest version of the "security fixed" library, have the world rebuilt against the new one, and get everything verified (using the tests that you have. You have some, don't you?). The problem of transferring only what actually changed is non-trivial but completely solvable in various ways (example: only send diffs, and issue new signatures/hashes for the diffs through an automated trusted system that does the diffing).


I totally agree. I was very surprised Nix wasn't mentioned once in the article. They seem to be solving the same problem, but in a way that should work across different setups (no btrfs requirement for instance).

Systemd has established a number of good (imo) standards for distros to adhere to (most notably service management and logging), but the standard the article describes doesn't seem like the way to go.

Whatever the case, I'm very interested in seeing how things will progress with Nix, Systemd and others.


"We want to allow app vendors to write their programs against very specific frameworks, under the knowledge that they will end up being executed with the exact same set of libraries chosen."

Do we? So if lazy vendor X doesn't get around to updating their application which "depends" on a version of zlib/openssl/libpng that turns out to be horribly insecure, we can't swap in a patched version?

I don't see how this is any better than the library bundling problem.


Don't we? I thought this was just called "static linking."


It is, or as I like to call it, "keeping that old insecure library around forever".
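
Easy to demonstrate, too: a statically linked binary gives the distro's updates nothing to hook into. A small sketch (assuming the static archives are installed; file names are made up):

    # build a trivial zlib user both ways
    gcc -o dyn hello.c -lz
    gcc -static -o stat hello.c -lz

    # the dynamic one picks up whatever libz the distro ships (and patches)
    ldd ./dyn | grep libz

    # the static one carries its own copy forever
    ldd ./stat
    #   not a dynamic executable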


This could be done by freezing the ABI of a runtime, allowing compatible patched versions to be swapped in. (Now you rely on the runtime's vendor to do this.)


Okay. Something I'm not sure I see clearly: does that mean you'll have as many runtimes as there are ABIs?

The way I see it right now is that I can have a Gnome 3 runtime at version 3.20.1, compiled with, let's say, OpenSSL, for all my Fedora binaries depending on Gnome, and another complete runtime at the same version because the Firefox I've downloaded from mozilla.com has been compiled on an openSUSE that relies on GnuTLS? The example is made up, but as far as I understand it could happen, right? It feels like a lot of wasted space for one app.

And the scheme is supposed to cater to embedded? I'm puzzled. Did I miss something?


How is this not Embrace Extend Extinguish?

1. Systemd as an alternative to Upstart and init.

2. Systemd does everything.

3. Systemd becomes more important than the distributions themselves.

I actually like their vision - I just wish my linux boxes were not part of it.


A very apt summary.

Can anyone recommend something from The Other Side of the fence for desktop/laptop use? One of the BSDs? A spinoff from Solaris? Or, something else?

I currently use archlinux, and value the rolling release approach with precompiled binaries.


PC-BSD is the obvious option targeted at the desktop. Plain FreeBSD, NetBSD or OpenBSD are also options. The Solaris spinoffs are pretty much server-targeted, so not really recommended.


FreeBSD, and the very desktop-friendly PC-BSD.


I run FreeBSD on most things that I care about running long-term these days, though PC-BSD probably makes more sense for end-user use. I'll caveat that by saying *BSD isn't Linux, and it does a lot of stuff differently, so expect a learning curve and a need to embrace certain new ways of doing things.


I think this is awesome. The ability to deploy new OS versions, and have a full trustable roll-back, is particularly great.

I'm not totally convinced about encoding a user database into the volumes, mainly thinking about network directories, but I guess the disk users will only be local static accounts anyway. Solving the network user database 'problem' properly would be a huge win (and I don't count LDAP as a solution, although a well-tuned LDAP configuration does work pretty decently).


The largest failing of the article is the "Existing Approaches To Fix These Problems" section, where it's acknowledged that this is attempt number one zillion at fixing some old problems, and here is a slightly different approach... The problem, specifically, is the lack of analysis of past failures to improve the new proposal. We're just going to end up with POSIX / LSB again, except maybe less successful, because it's obviously going to be immensely more complicated. It's possible that analysis was done but not documented.

The tone of the article is very strange, like someone with no experience with distros or sysadmin work came in with a blank slate from an unrelated field, perhaps sorting-algorithm research. I simply can't identify with most of his postulates, so it's no surprise I was all WTF at the proposed solution. Or rephrased: it's an academically interesting idea in an abstract sense, but it seems unrelated to, and not useful for, the existing real world. (Edited to emphasize that this is a particular instance of the general observation that new opinions from uninformed outsiders are often interesting to look at, sometimes even insightful as a fresh pair of eyes, but are usually a truly awful source of advice.)


> systemd cabal

That phrase isn't doing them any favours.

Software packaging sucks. It will continue to suck until there is a concerted effort from a quorum of stakeholders to fix it. I'm not holding my breath.


> The classic Linux distribution scheme is frequently not what end users want, either.

THAT'S THE WHOLE POINT YOU GODDAMN MANIAC. It's not what they want, it's what I want.

> Many users are used to app markets like Android, Windows or iOS/Mac have.

STILL THE FUCKING POINT. Making desktop linux more like windows is like selling beef steak and lamenting you're having a hard time getting vegetarians to buy it.

Fuck, this made me mad. I couldn't get past the "Users" section, he might redeem himself farther on, but I doubt it.


[deleted]


There's this mantra: "Antivirus is dead". Unsigned shit will ruin your day. It's creating sadness every day. Blindly booting any old garbage, happily taking any old firmware update, executing whatever has an execute bit set is what is broken about computer security today.

Your computer's TPM is actually completely customizable. By taking ownership of mine I presume I can no longer boot the Win8 image that shipped with it, but I see now it's a very useful tool. I also started off in the anti-DRM, anti-Microsoft crowd with a deep distrust for this technology, but I think these days it actually is seeing useful application in enterprise security. Things like Intel's TXT and AMD's SVM are definitely on the right trajectory.

Having said all that, I dislike systemd. They still don't support keyscript in crypttab; from what I can glean of Lennart's comments on this, it's because custom scripting seems to be an abhorrent concept among systemd developers. This strikes me as the wrong culture to have for such a critical piece of infrastructure in our ecosystem.


He actually mentions ChromeOS as an inspiration.


You'll probably get downvoted, but I gave you an upvote since it made me laugh.


I beg to differ. IMHO package management in Debian and derivatives is nothing short of brilliant. If a security issue arises in a library, or a new feature is implemented, you update it and all of the software using that library will benefit.

OTOH it would be great if apt could automatically take advantage of btrfs snapshots to better ensure atomic updates, or to allow rolling back from updates that might break something. This would be much better than just choosing a different kernel at boot.
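
Nothing stops you from doing it by hand today, assuming / is already a btrfs subvolume (the snapshot path is made up; tools like apt-btrfs-snapshot try to automate exactly this):

    # read-only snapshot of the root subvolume before upgrading
    sudo btrfs subvolume snapshot -r / /.snapshots/pre-upgrade
    sudo apt-get upgrade

    # if something breaks, the old tree is still there to roll back to
    sudo btrfs subvolume list /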


It would, unfortunately, tie package management to a particular filesystem.

As an optional item, I like it; letting it connect to zfs, btrfs, LVM, or what-have-you on a plugin basis seems plausible.


Two words: "Use NixOS".


It's astonishing that he fails to even mention Nix (and various other distributions like Bedrock) in the section on existing solutions, or in his previous post on reproducible software and rollbacks. It's almost like he hasn't heard of it from inside the Red Hat bubble - or perhaps because it's not "Certified cgroup container systemd trending cloud Enterprise Linux (TM)", he isn't interested.

The way he presents this post, it's almost like he thinks he has discovered problems nobody else has encountered before. Nix solves the majority of the problems he mentions, and it still has plenty of room for improvement to fill the gaps where it doesn't. It should really raise flags about the mentality behind this project and what their intentions really are. They probably want an "integrated solution" (read: tightly coupled to Red Hat components), so that they continue to be in the driving seat.

> We want our images to be trustable (i.e. signed).

Signing images does not make anything trustworthy at all - you still need to trust the signer. It's shocking to hear him mention a "post-Snowden" world, yet completely fail to recognize that one should absolutely consider Red Hat to be a potential malicious party in this - especially considering their dubious customer base and large contracts with US government bodies.

On the other hand, Nix, Guix and Debian are trying to create an actual solution to the trust problem - by developing a system where one can perform bit-identical reproductions from the same source code, such that several independent parties can build the same software and you can opt-in to trust a consensus of parties, rather than a single one (and if you don't trust that, you can rebuild packages yourself from source). This is how to create trust in the post-Snowden world - you decentralize it.
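
The end state is pleasantly boring to verify: independent builders publish hashes of what they produced, and anyone can compare or rebuild. A toy sketch (file names invented; in practice build paths, timestamps, etc. have to be normalized first):

    # each builder publishes the hash of its artifact
    sha256sum foo_1.2-1_amd64.deb > foo_1.2-1_amd64.deb.sha256

    # anyone else rebuilds from the same source and checks for a match
    sha256sum -c foo_1.2-1_amd64.deb.sha256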

> We want ...

We want a lot of things; but Mr Poettering, you have not told us why you want to NIH solutions to every problem you identify, rather than building upon the existing solutions to (mostly) the same problems. How about some justification as to why Nix should not be considered as the framework to build on, or why, say, Bedrock is inadequate for running packages from different distributions on the same OS? Must we really throw away 7 years of effort by the Nix community to support your next toy?


How does this help me, from the perspective of a "desktop app for tracking stars" or "video game" application developer, who wants to build once, publish once, and have non-technical users be able to install it simply on (Debian, Ubuntu, Fedora, CentOS, NixOS, ...) without depending on distro packagers?

Like I can do for OSX or Windows?


Users would have to install NixOS (or GuixOS) as the base operating system; there is no getting around that.

But, after that, it seems to solve many of the versioning and dependency issues.


Not necessarily: Nix the package manager runs fine on other distros, as well as FreeBSD, OSX, Windows, SmartOS, ...

Nix doesn't need fancy filesystem shenanigans to do its job: build isolated packages with complete runtime dependencies.

NixOS is built on top of it to provide atomic system-wide upgrades and rollbacks.


Or perhaps OSTree (https://wiki.gnome.org/action/show/Projects/OSTree), "git for operating system binaries".

I think there's a bit of both. There's obviously a lot of common thought regarding requirements and a bit of overlap in solutions.

(aside: I really don't get how these discussions get people so animated.)


So if, just to post hypotheticals, I actually wanted systemd but didn't want a crazily overcomplicated snapshot system forcing me into btrfs' less well-tested areas, would that be possible in this brave new world?


So, union mounts? This looks a lot like what Plan 9 used to propose:

http://www.cs.bell-labs.com/magic/man2html?man=mount&sect=1

with the added feature that each sub-fs is independently distributable, which is very good. I fear tying it to btrfs (or any specific fs) might be a bit much though.

I don't know if that could be possible by mounting upstream tarballs as read-only fs in userspace... maybe a crazy idea.
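
Not that crazy; most of the pieces exist already. A rough sketch with squashfs (paths invented):

    # turn an upstream tree into a compressed, read-only image
    mksquashfs ./myapp-1.0/ myapp-1.0.squashfs

    # mount it read-only wherever you want it to live
    sudo mount -o loop,ro myapp-1.0.squashfs /opt/myapp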


Slax did something similar with aufs + squashfs. This looks very similar, but with a standardized way of handling dependencies and paths (something that Slax modules did not have, at least the last time I used them). The idea is very powerful and interesting.


OMG - finally.



