Why systemd? (jorgenschaefer.de)
235 points by jamesog on Sept 22, 2014 | 257 comments



I think part of systemd's problem, as much as Poettering et al will try to deny it, is that it is full of NIH. One of the things this post criticizes, and Poettering criticizes, is the BSD-inherited daemon() function. Being curious, I looked at the function's implementation in both FreeBSD and glibc. FreeBSD's implementation handles pretty much everything the daemon writer themselves would want -- it sets the signal handlers and masks appropriately, double forks, creates a session, sets PIDs unless you tell it not to, and changes to the root directory unless you tell it not to. Glibc's misses important steps, like the signal manipulations, tries too hard to create a typical null device, and otherwise completely misses the point.
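
For concreteness, here's a minimal sketch of that classic recipe -- a hypothetical daemonize() helper with abbreviated error handling, not FreeBSD's or glibc's actual code:

    /* Sketch of the classic daemonization recipe discussed above.
     * Hypothetical helper; error handling abbreviated. */
    #include <fcntl.h>
    #include <signal.h>
    #include <unistd.h>

    int daemonize(int nochdir, int noclose)
    {
        sigset_t set, oset;

        /* Keep a SIGHUP from the dying session from killing us. */
        sigemptyset(&set);
        sigaddset(&set, SIGHUP);
        sigprocmask(SIG_BLOCK, &set, &oset);

        switch (fork()) {              /* first fork: shed our parent */
        case -1: return -1;
        case 0:  break;
        default: _exit(0);
        }

        if (setsid() == -1)            /* new session, no controlling tty */
            return -1;

        switch (fork()) {              /* second fork: no longer a session
                                        * leader, so we can never reacquire
                                        * a controlling terminal */
        case -1: return -1;
        case 0:  break;
        default: _exit(0);
        }

        sigprocmask(SIG_SETMASK, &oset, NULL);

        if (!nochdir)
            chdir("/");                /* don't pin any mount point */

        if (!noclose) {
            int fd = open("/dev/null", O_RDWR);
            if (fd >= 0) {
                dup2(fd, STDIN_FILENO);
                dup2(fd, STDOUT_FILENO);
                dup2(fd, STDERR_FILENO);
                if (fd > STDERR_FILENO)
                    close(fd);
            }
        }
        return 0;
    }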

The biggest problem I see with systemd is that the developers don't play well with others. Instead of working with various parties, like the glibc maintainers, to fix deficiencies elsewhere, they expect developers everywhere to drop what they're doing to redesign how their projects work, when they work just fine for the many, many other unix architectures out there. Too much of systemd is based on magical pixie dust, compatibility be damned, and not enough on actually making things better.


Well, the problem is, you've only moved the problem. Now instead of depending on a specific init system, you're depending on a specific implementation of the daemon() function (which I imagine is not really standard, since if it were, glibc would implement it correctly). Maybe you could create a daemon_correctly() function that was guaranteed to implement daemon() as it should, but you would risk a repeat of the strlcpy() case (glibc didn't add that function because the maintainer said it was messy and bloated, so almost no software used it).

And anyway, correctly daemonizing is only part of what systemd got right. I think the fact that you don't need to track services with random programs anymore (since systemd knows which service each and every process belongs to thanks to cgroups) is something much more interesting than just getting daemon() right.
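
To make that concrete: a process's cgroup membership is visible in /proc, so a supervisor can always tell which service a process belongs to, no matter how many times it forked. A sketch (hypothetical helper, minimal error handling):

    /* Sketch: print the control groups a process belongs to -- the
     * bookkeeping that lets systemd track every process a service
     * spawns. Hypothetical helper; cgroup-v1 era format. */
    #include <stdio.h>
    #include <sys/types.h>

    void print_cgroups(pid_t pid)
    {
        char path[64], line[512];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/%d/cgroup", (int)pid);
        f = fopen(path, "r");
        if (!f)
            return;
        /* Each line: "hierarchy-ID:controller-list:/cgroup/path".
         * A daemon can double fork all it likes; it stays in its
         * service's cgroup unless explicitly moved. */
        while (fgets(line, sizeof(line), f))
            fputs(line, stdout);
        fclose(f);
    }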


> double forks

That's actually not something you want. It turns out that it makes process management unnecessarily hard. That said, the glibc implementation isn't terribly good either. The conventional wisdom is to use neither.

> I think part of systemd's problem, as much as Poettering et al will try to deny it, is that it is full of NIH.

The most exasperated criticism I see of systemd is over its use of dbus as a communications infrastructure, because dbus is both a system bus and a desktop session bus, and everyone associates it with the latter. If they'd just done the NIH thing and rolled their own communications protocol (like various other parties), they'd be deflecting a lot of that criticism.

Honestly, the NIH syndrome seems at least as prevalent among systemd's critics.


The systemd developers are planning on replacing dbus with their own NIH reimplementation, kdbus, soon - complete with its own serialisation format, its own in-kernel IPC framework, and a hard dependency on systemd. (Oh, and it requires everyone to rewrite their code to use systemd's dbus library rather than libdbus too.[1] That library currently supports old-style dbus but they're planning to drop support for that, at which point every dbus-using application will require systemd to function. They're also planning to make udev dependent on kdbus[2] in a way that can't be worked around by forking or using old versions, since apps will use the kdbus API to call it directly[3] )

[1] http://lists.freedesktop.org/pipermail/dbus/2013-July/015726... [2] http://lists.freedesktop.org/archives/systemd-devel/2014-May... [3] http://lists.freedesktop.org/archives/systemd-devel/2014-May...
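
For a sense of what "systemd's dbus library" looks like from the caller's side, here is a sketch against the sd-bus API (which was still an internal, unstable surface at the time, so take the details as illustrative):

    /* Sketch: read a property from systemd's manager object via
     * sd-bus. Illustrative only; link against libsystemd. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <systemd/sd-bus.h>

    int main(void)
    {
        sd_bus *bus = NULL;
        sd_bus_error error = SD_BUS_ERROR_NULL;
        char *version = NULL;
        int r;

        r = sd_bus_open_system(&bus);   /* connect to the system bus */
        if (r < 0)
            return 1;

        r = sd_bus_get_property_string(bus,
                "org.freedesktop.systemd1",          /* destination */
                "/org/freedesktop/systemd1",         /* object path */
                "org.freedesktop.systemd1.Manager",  /* interface   */
                "Version", &error, &version);
        if (r >= 0)
            printf("systemd version: %s\n", version);

        free(version);
        sd_bus_error_free(&error);
        sd_bus_unref(bus);
        return r < 0;
    }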


Oh, and it requires everyone to rewrite their code to use systemd's dbus library rather than libdbus too.

Have you actually read the mail you're giving as "evidence"?

The last paragraph begins with: "The current idea is that systemd will provide a bridge service, that offers the current D-Bus socket, and an unmodified libdbus (or an alternative implementation) can talk to that socket like it talks today to the dbus-daemon."

Edit:

kdbus is also not a NIH-implementation, in any meaning of the word. It was "invented there", and it also fixes things: it will have much lower latency and overhead than the current userspace D-Bus.

http://kroah.com/log/blog/2014/01/15/kdbus-details/


Okay, correction/clarification: it requires everyone to rewrite their code if they want to use kdbus rather than doing old-style dbus calls to the compatibility daemon, which may not be installed by default and is a fairly ugly hack that's probably going away in the future. It needs a bunch of special support code in systemd that's not used for anything else, and the general expectation seems to be that it's a stopgap and most or all dbus consumers will move to kdbus.

Also, technically developers don't have to use systemd's dbus library; apparently Gnome's doing direct calls to the kdbus kernel API instead. That makes it even harder to support systems that don't have kdbus+systemd, of course, but this is Gnome we're talking about.


> it requires everyone to rewrite their code if they want to use kdbus rather than doing old-style dbus calls to the compatibility daemon

Umm... the mere existence of the compatibility daemon makes it pretty clear that one could easily build a system which interfaced with kdbus without changing much of your existing code at all. Your code wouldn't have to be that modular to pull it off.

Honestly, the Linux kernel has for the longest time been kind of filled with these ad-hoc, efficient IPC mechanisms like netlink. It has sorely needed SOMETHING like kdbus, and you can see that the key pain points in Linux have already been addressed by other systems using their own proprietary or semi-proprietary mechanisms (which invariably happens if you are late to a party addressing something people need immediately).


So basically the horror you're campaigning against would be... change that provides full backward compatibility?


Sorry, I meant to say full forward compatibility.


You need to double fork() in order to ensure that the daemon won't be the controlling owner of the tty once it calls setsid(); i.e. that it will be re-parented to init.

So personally as a sysadmin that occasionally runs daemons from the command line I much prefer a double fork(); I don't want the daemon to exit when I log out.


Both forking and double-forking are completely wrong behavior for supervised daemons run as part of any automated system. Forking makes sense for small systems which lack any automated supervision and where a human admin is responsible for all process starting and stopping, but it's a bug anywhere else. Modern Linux has various methods for working around this bug (like the method of assigning an alternate process for orphaned descendants to be reparented to, which systemd uses, AFAIK), but really any production-quality daemon should provide a non-forking way to start it.
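
For illustration, the non-forking style looks like this under systemd: the daemon stays in the foreground and, optionally, tells its supervisor when it's ready instead of abusing fork() as a readiness protocol (a sketch; do_work() is a stand-in):

    /* Sketch of a supervision-friendly daemon: no fork(), just run
     * in the foreground. sd_notify() is a no-op when not started by
     * systemd, so the same binary still works elsewhere. Link with
     * libsystemd; do_work() is a stand-in. */
    #include <systemd/sd-daemon.h>
    #include <unistd.h>

    int main(void)
    {
        /* ... open sockets, read config, etc. ... */

        sd_notify(0, "READY=1");    /* tell the supervisor we're up */

        for (;;) {
            /* do_work(); */
            sleep(1);
        }
    }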


You have to fork; it's the only way to make a new process. Now exactly which process does the fork()ing is somewhat academic. Some might argue about the band-aid approach of the requirements of parenting and process groups, but that would require leaving POSIX (and hence source compatibility) behind. Allowing any process to set its own parent, process group, or controlling terminal without very strong bounds is dangerous; kind of like having Erlang with mutability - it's possible, but it breaks the model (and all the cuspyness that comes with it) and leads to bad juju.

Requiring init to spawn the final daemon process (and hence getting the pid from fork()) is a red herring; any process can get a list of its children and query their proc structures (modulo permissions, but most sane people run init as root).
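
A sketch of that: field 4 of /proc/<pid>/stat is the parent PID, so enumerating children is one directory scan (hypothetical helper, abbreviated error handling):

    /* Sketch: list direct children of a pid by scanning /proc.
     * Hypothetical helper; error handling abbreviated. */
    #include <dirent.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    void list_children(int parent)
    {
        DIR *proc = opendir("/proc");
        struct dirent *d;

        if (!proc)
            return;
        while ((d = readdir(proc))) {
            int pid = atoi(d->d_name);
            char path[64], buf[512];
            FILE *f;

            if (pid <= 0)
                continue;
            snprintf(path, sizeof(path), "/proc/%d/stat", pid);
            f = fopen(path, "r");
            if (!f)
                continue;
            if (fgets(buf, sizeof(buf), f)) {
                /* Format: "pid (comm) state ppid ..."; scan from the
                 * last ')' since comm may itself contain spaces. */
                char *p = strrchr(buf, ')');
                int ppid = 0;
                if (p && sscanf(p + 1, " %*c %d", &ppid) == 1
                      && ppid == parent)
                    printf("child: %d\n", pid);
            }
            fclose(f);
        }
        closedir(proc);
    }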

init should be as small as possible to do only what it is required to do; this is simple anti-fragility at work. The less code and state the better. Feel free to stick your process manager, dependency calculators, and helper daemons in other processes. init should be able to handle something as catastrophic as a SEGFAULT and just execve(/sbin/init) and just carry on (while logging locally and remotely of course).


> You have to fork, it's the only way to make a new process.

I think he was presuming you already have all the processes you need, and this is just about getting them architected right.


Yeah, I think we all know why the double fork model exists. It turns out, if you fork a daemon from a different process than your login shell, you don't have a problem and you can more reliably track the process...


The reason people use UNIX-like systems is because they work reliably. In order to make a complex system work reliably, it needs to be easily fixed. In order to fix a system, a person needs to understand it as well as be able to make a change in it. And in order to understand a system, it helps very much if that system is straightforward and lucidly verbose.

I hope systemd will live or die on its merits; I fear that it will take over via politicking.


It already has taken over despite being mediocre at best.

Mostly I feel that it got an undue jumpstart thanks to RedHat trying too hard to be bleeding edge, and Upstart/OpenRC not having as high-profile a backer.


To be fair, Upstart was pretty awful in the early goings. It seems to be a lot more reliable and predictable now, but that's a fairly recent development for us (when we upgraded to Ubuntu 14.04 LTS).


The golden rule of technology hype: once people start abandoning a technology is when it starts being functional.


The problem is that some projects have started depending on systemd behaviour now, and the transition hasn't been smooth at all in Debian even if you want to stay with sysvinit. I'm still on sysvinit with systemd-shim, and things have started to break in KDE already, mostly related to authentication (can't mount USB devices, can't manage VPN connections, etc.). In the end I'm afraid Debian users might not have much choice: either use systemd as PID 1, or use another distribution (like Gentoo, etc.) that works without systemd.


The fact that upstart is covered under Canonical's developer contribution agreement and copyright license counted materially against it in the Debian debate.

Canonical's insistence on control and ownership ended up torpedoing its project. Which is really quite sad.


It is interesting to look at how different Linux distributions approach this problem, as well as things like connection management. Many do seem to be moving towards things like systemd. Angstrom Linux for the BeagleBone even bundled something called connman for managing network connections. Attempting to navigate the path of systemd plus connman to get networking to work the way I wanted was a pain. Some of the more interesting "solutions" I found when searching recommended essentially wedging SysV-type init scripts into the systemd framework.

I wised up and moved over to using the Debian distribution in this case. Fewer moving parts trying to make things work the way they thought they should.


Connman has a lot of promise, and I like the theory and design of it, but found it very frustrating and incomplete in practice. Basic functionality like "connect to my wireless on startup, and keep on trying to connect if you don't succeed right away" is missing.

Also, it's being rapidly iterated and there isn't a PPA for Ubuntu, which sucks.


Potential, yes, but with devices like beagleboard where you are often also using a WiFi adapter that is not super strong, the frustrations grow.

Fewer moving parts means easier debugging.


> In order to fix a system, a person needs to understand it as well as be able to make a change in it.

So if I follow you, it would be easier to grok the whole system if all the code was in different places?


Opaque C code spread across lots of binaries, plus some end-user documentation in man pages, doesn't help you a lot if you have to debug an issue. It's okay from a user perspective, but with systemd (or other complex systems that rely on this principle) you need to start reading C code, start gdb, deal with dbus... If it's a toolbox of scripts, a `grep -r <error>` is often the first step on the way to understanding and learning something and fixing the problem.

This is more difficult if you have a lot of abstractions and binaries lying around. You need to start reading (often nonexistent) documentation and abstract C code...

It likely does not matter for 95% of users, and I've rarely had to do something like this, but you are losing some control as a developer/sysadmin. For some it's important, as their productivity and job depend on solving such issues fast; others will never have this problem...

I've never had a problem with systemd myself, though. But if you do, it's difficult to fix it on your own.


> Opaque C code that sits in 20 binaries and some man pages doesn't help you a lot if you have to debug an issue. It's okay from a user perspective but with systemd (or other systems that rely on this principle) you need to start reading C code, start gdb, deal with dbus... if it's a toolbox of scripts a `grep -r <error>` is often the first step on the way to understand and learn something and fix the problem.

What's opaque about Free and Open Source C code?

Some might argue (this guy included) that statically typed, statically analyzed C code will result in fewer people having to debug their system code than the equivalent code written in a particular variant of shell code.


> What's opaque about Free and Open Source C code?

Everything, if you are a sysadmin/developer trying to fix an issue. First you need debugging symbols to pinpoint the problem, then you need to read the source... it all takes time. E.g. you need to learn about dbus-monitor and dbus calls and need to grasp some internal concepts of systemd if something goes wrong. It takes time and patience you usually don't have or don't want to spend on such details. For comparison, FreeBSD's rc is only shell scripts: http://www.freebsd.org/cgi/man.cgi?rc(8)

I don't want to say that one is better than the other, but the latter is for most folks far easier to debug and modify than the former. However, as I've said, it only really matters for a few people. But I can understand that they are not particularly happy about this new complexity. And bugs happen.


> I don't want to say that one is better than the other, but the latter is for most folks far easier to debug and modify than the former.

Generally bad form to make claims on behalf of "most folks" since you are in fact a single person. It's totally a valid argument if you say this on your own behalf.

And yes, bugs happen. But fixing the C code is, in my experience, much easier than tracking them down in `bash -x`. Especially when dealing with race conditions between services/triggers/device initialization.


>And yes, bugs happen. But fixing the C code is, in my experience, much easier than tracking them down in `bash -x`. Especially when dealing with race conditions between services/triggers/device initialization.

You are being stubborn just for the sake of winning an argument. Interpreted languages are easier to debug. They have many flaws; debuggability isn't one of them. Heck, my servers don't even have gcc/gdb. Good luck trying to debug systemd in my production environments...


Yes. Shell scripts are their own unique kind of hell. I'm really speaking on my own behalf here: I can read shell well enough to follow and debug issues in it. However, digging into the internals of systemd and its interactions with dbus and other binaries is opaque to me.

Maybe it's just a different perspective - as a developer, systemd likely eases a lot of pains and makes otherwise problematic and error-prone things easy, but as a sysadmin who mostly deals with servers, it sometimes feels like forced, unnecessary complexity that can introduce difficult-to-debug issues.


As a sysadmin, I'll take systemd units over SysV init scripts any day. They tend to be shorter and simpler to read, and I don't have to worry about race conditions or services not restarting correctly due to varying daemonization techniques.


Yes, I didn't intend to argue about that. For that, systemd is perfect and I like using it too. I mean problems such as a hanging boot in an lxc container, where I once got only a red error message that something went wrong (and not a lot on Google about it at the time). How do you go on from there? It's certainly possible, but it's a lot of work.

I don't say that's the norm, and I don't say this happens often, but if you build custom stuff and do "strange" things, it's easier to know what's going on if you grasp the complete system. This is more difficult with systemd.

I believe it's a valid criticism and I realize 95% of users never need to care about this. However it's still a valid point if you build complex systems that are not "off the shelf".


> fixing the C code, is in my experience, much easier

Really?

Did you account for the many very subtle ways you can run into what the C language defines as "undefined behavior"? I have met only a few programmers who truly understand that can of worms. Way too many don't even know that compilers exploit these parts of the spec, despite programming in C for many years.

http://blog.regehr.org/archives/213

http://blog.llvm.org/2011/05/what-every-c-programmer-should-...

That's just the cases where it is totally legal for the compiler to output random noise - or output nothing at all - instead of what the C code says locally. These are some of the nastiest "gotchas" I've seen in any language.[1] Even the best C programmers are occasionally bitten by this class of bug.

I still like C (a lot), but it is not easy. It's just so very annoying and time-consuming to track down a bug happening in "foo.c" that is actually caused by a variable in "bar.c" not getting updated waaaaaay earlier because some bit of code in "quux.c" was skipped over due to undefined behavior. Especially when it becomes a heisenbug due to that particular optimization being turned off in debug builds.[2]
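
A tiny example of the kind of thing meant here -- signed overflow is undefined, so a check that reads perfectly sensibly can legally vanish at -O2 and reappear at -O0:

    /* Signed overflow is undefined behavior, so an optimizing
     * compiler may fold this "did it wrap?" test to a constant 0
     * and delete the caller's overflow handling with it. At -O0
     * (a typical debug build) the wrap actually happens and the
     * function returns 1 for INT_MAX -- a ready-made heisenbug. */
    #include <limits.h>
    #include <stdio.h>

    static int increment_would_wrap(int x)
    {
        return x + 1 < x;   /* looks like a runtime check... */
    }

    int main(void)
    {
        /* Often prints 0 at -O2 and 1 at -O0. */
        printf("%d\n", increment_would_wrap(INT_MAX));
        return 0;
    }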

Bourne [Again] shell has its own share of quirks and "gotchas", but they are usually easy to investigate, and they are generally easy to avoid once you've written a couple scripts.

[1] There are other important classes of bug; I'm just using undefined behavior as an example because of how amazingly subtle it can be and how many serious security bugs it has caused.

[2] Before anybody complains that behavior involving 3 files like that is bad design, consider that A) this happens all the time in real world C, and B) I agree. Which is why many of us are against systemd, which adds complicated interactions like this on purpose as a way to force vertical integration.


> At first you need debugging symbols to pinpoint the problem

These days there is really no reason you can't have the debug symbols already around. But you've got a fine point there, the Bourne shell debugger is much more convenient and easy to come by, and it makes postmortem analysis with core files trivial... ;-)

> then you need to read the source..

As much as you need to do that with any system, you need to do that with them all.

> E.g. you need to learn about dbus-monitor and dbus calls

Yes... and if not you have to learn about whatever other mechanism is being used to provide encapsulation and separation of concerns between the components of the system...

> It takes time and patience you usally don't have or don't want to spend on such details.

This really boils down to, "I'm already really familiar with this other system...". It's a legit argument for why you might not use systemd. It's not a terribly legit argument for why systemd is bad.

The rc system doesn't address a fraction of the problem and actually makes a number of things worse. Heck, the rc man page you linked to links to four or more other components of the system, including the voluminous "simple because it is shell" rc.conf.


I completely agree with you.


My machine with systemd (FC20) doesn't boot up at all unless systemd loglevel is set to debug on the command line. Even with it it takes about 5 minutes to boot. Luckily I don't need to reboot often, but every single time there's a small fear that some upgrade has made systemd crap up even worse, and the system won't boot at all.

How do you debug a complete black box, where turning on debugging partly fixes the problem? You really don't. This is literally the worst debugging experience I've had in 20 years of using and maintaining Linux systems -- and that includes trying to do these things with much less knowledge and only limited internet access back in the early days.

I love the ideas behind systemd. It's too bad, even if not surprising, that the implementation is a flaming pile of garbage.


Reminds me of the joyful early days of moving from grub to lilo. Grub was (is) more fragile due to being more complex -- but at least the grub shell gives more information than lilo failing at "LI"...

Grub always seemed like the improved features made up for the added complexity; I'm not convinced about systemd.


Even Grub isn't all flowery. I have a background of using what I think is now called grub-legacy. Then I ended up running Debian, with the newfangled Grub. I needed to change some kernel parameter (IIRC), but all I could find was a mess of undocumented scripts which say "don't touch this". I don't know where the documentation went, and it seemed needlessly complicated to configure. Why couldn't I type `man grub` and learn all I needed about it? I had other issues with Debian, but the last straw was when, during a routine package update, it decided to install a new version of Grub... and the next time, it wouldn't boot anymore. Why was it so complicated in the first place? Why did Debian have to fix it if it wasn't broken? Why did it fail to do it right? I don't know, I don't really care... all I know is that needless complexity and churn caused trouble, again.

So I'm no longer using Grub or Debian. And my bootloader is simple. I've installed it once, and never touched it afterwards. It's possible to configure it a little, but there's no need for it. So what if it has fewer features. It only needs to load the damn kernel... and it works. I'm happy.


The config files you want are /etc/default/grub and possibly also /etc/grub.d/ - though you're right, this doesn't seem to be documented anywhere obvious like in the man page. Gentoo installs its own version of /etc/default/grub with comments and examples but Debian may not be so helpful.


I'm pretty sure I poked in those very files, and some of them gave me the impression that they are generated by a script (hence "don't touch this"). Some of them gave me the impression they are read by some undocumented script. I still don't know which script.

But it's been a while.


It was messy (to a certain extent it still is). The reasons for GRUB(2) are mostly UEFI and multiboot support (e.g. BSDs, Windows NT derivatives like modern Windows). Grub fails the test of making simple things (as) simple (as possible). But it does support booting into Space Invaders. So there's a trade-off there, and I agree, it's not entirely clear that much was gained from moving off of Lilo...


    info grub


Might be obvious, but that should be "moving from lilo to grub", not the other way around...


> What's opaque about Free and Open Source C code?

Opaque code is opaque. What does it have to do with Free or Open Source?


Actually, the nice thing about systemd being built around dbus is that you can track most of what is going on with it simply by tracking the flow of messages on dbus.

If you find C code "opaque", you're already kind of screwed in the Unix world...


> So if I follow you, it would be easier to grok the whole system if all the code was in different places?

Yes. Separated into documented modules that are self-contained, with well-defined behavior. When your logger breaks, you fix your logger, not your init process. When your devices are not discovered, you debug udev, not muddle through code riddled with NTP syncing and so on.


The code is necessarily in different places, whether it's within a project or between projects. The difference is having an API that can be found without having to read all the code, and is well-defined and somewhat stable.

It's very "UNIX" to implement things as communicating processes rather than RPC or procedure-call-within-monolith.


Yes. See "decoupling", "big ball of mud".


yep


We often criticise systemd for being too bloated and making it hard to write a drop-in replacement. I totally agree with this line of thought.

However, to my mind it has made several awesome things possible. My boot time got dramatically shorter when I adopted it, thanks to parallelization. Besides, daemons now have simple and robust service definitions. SysV had become a mess!
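
For comparison, a minimal sketch of what such a service definition looks like (names and paths illustrative):

    # Sketch of a systemd unit (illustrative names/paths). Note the
    # absence of PID files and daemonization boilerplate.
    [Unit]
    Description=Example daemon
    After=network.target

    [Service]
    Type=simple
    ExecStart=/usr/local/bin/exampled --foreground
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target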

Lastly, lightweight containers are the real-deal for small development tasks (not for production!). Just one command: systemd-nspawn, and you're ready to go. Docker is currently a bit more complicated to set up.

Arguably, many features, including containers, should be moved out of systemd. Right now, more than a monolithic architecture, I think systemd is rather shipping too many things under the same project umbrella.


> However, to my mind it has made several awesome things possible. My boot time got dramatically shorter when I adopted it, thanks to parallelization. Besides, daemons now have simple and robust service definitions. SysV had become a mess!

Writing daemon startup files was something I always dreaded, and never really did well.

Before systemd, if I needed to run services I'd try to use daemontools (for auto-restart, and logging), but then I had two service-starting services running my system. Upstart had some of the features, but was still finicky (and the versions I had available didn't consistently have good service supervision support).

systemd just fixes that.

Also, with systemd, for the first time I feel like I'm really using Linux, not just a random *Nix that has adequate drivers.


so you're saying without systemd linux isn't linux.


Not quite.

I'm saying that systemd makes the Linux kernel's feature set and capabilities visibly usable from user-space. For (nearly) the first time, it feels like it matters that I'm using Linux.

Linux is still Linux without systemd, it just doesn't provide as much benefit (aside from device support and compatibility) over, say, FreeBSD without software that takes advantage of its feature set.


What stopped you from using Linux-specific features before systemd? They were accessible from userspace well before systemd came along.


The lack of documented software that used them to enable useful (to me) functionality.

I was using some of them, such as kvm for my virtualization and lvm for disk management. But systemd still had a substantial 'oh, wow, Linux lets process management be this easy and powerful?' factor, showing me something new that I hadn't seen in my use of any other system (FreeBSD, OpenBSD, Windows, a touch of Mac).


the documentation directory of the kernel source is actually pretty nice. there's a bunch of utilities for things like cgroups, namespaces, etc. they're not well known but they work perfectly fine.

I suspect it's not well known because there was no commercial marketing drive behind them. Nowadays at least one of those seems to be needed to even gain visibility. People don't go looking for what's cool/good where it lives; they wait for HN or some other news website to tell them.

Just like the regular news really. Turns out it doesn't work all that great.


Setting up and running services the way the OS does, but as a non-root user, is kind of a big deal for my use case.

Perhaps I'm ignorant and this was always possible.
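
It is possible with systemd's per-user instance; a sketch, with the unit name illustrative:

    # Sketch: a per-user service, managed without root. The unit
    # file (name illustrative) lives in:
    #   ~/.config/systemd/user/mydaemon.service
    # and is controlled with the user instance:
    systemctl --user enable mydaemon.service
    systemctl --user start mydaemon.service
    systemctl --user status mydaemon.service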


systemd makes Linux not Unix.


There are so many things that make GNU/Linux not UNIX...


Boot time again.

In a server environment, the time it takes for the various DRAC/BIOS(es) to initialize and reach the bootloader is often far longer (several minutes sometimes) than the boot time of sysvinit. So optimizing boot time in the Linux part of a server is probably moot for me.

On the laptop you can suspend/hibernate, as others have said, if you care about startup time. I have full-disk encryption and need to type in a password to boot, so a few seconds more or less doesn't matter anyway.

So that leaves the desktop, where I might care about boot times (the UEFI/BIOS is actually saner than on servers and reaches the bootloader very fast). It turns out that my desktop already boots faster than my router with sysvinit, so faster boot times on my desktop would get me nowhere; I still wouldn't be able to use the internet until the router has booted.

So, faster boot times... I didn't need all the pain systemd is causing just for that. Debian has had makefile/startpar-based concurrent boot already; I don't think systemd would improve on that much...

Meanwhile, not using systemd breaks things that used to work on a KDE desktop (USB mounting, VPN config, etc.), so an app supporting systemd is a net negative for me.


Boot time most definitely impacts my cloud computing setup. If your servers are pets then, no, boot time is not important to you. However, my cattle are constantly being brought to life and killed again. When I need additional capacity to handle a spike in load, I want it right now, not in 5 minutes.


Holy heck, I get being defensive, but you've let logic totally fall by the wayside here. Where do I start...

The fact that BIOS/DRAC/RAID initialisation is slow on some servers is irrelevant. Linux's init and the firmware initialisation don't run concurrently, therefore if init takes longer the whole boot takes longer. Additionally, many server manufacturers have improved boot times in the last few years (down from 10+ minutes, to 5+ minutes, to less).

Most routers don't take as long to boot as you claim. The entire OS is about 8 MB (uncompressed), RAM is only 32 MB, and the medium the OS is stored on is faster than a computer's hard drive. So just looking at IO should tell you your supposition is flawed. In my experience most Linux-based routers bring up the RJ-45 interface (LAN side) in under 20 seconds unless allocation on the WAN interface is slow (e.g. unable to get an IP, etc.). If you set a static WAN IP/gateway/etc., boot time comes down substantially.

Additionally, the whole concept that every time your PC turns off you'll also turn off your router at the mains is, uhh, strange. Sure there are power cuts, but that isn't the only time you shut down your PC throughout the year.

The concept that your PC needs to wait for the server is equally flawed. Again, yes, power cuts. But PCs get shut down significantly more often than servers and if we're playing that game then wouldn't a "server" have a UPS anyway?

So overall your argument for why boot times don't matter lacks any kind of substance. It is also purely based on a PC->Server->Router infrastructure where nothing is on a UPS and everything suffers from a power cut (then "races" to all come back up).

In the real world my phone has Linux, our "Tivo" has Linux, our printer has Linux, our car's entertainment system has Linux, etc. So bad Linux boot times will be noticed day to day. It matters to a lot of people and while I don't know if systemd is the solution, I do know that progress is needed relative to the classic UNIX init system (per the article).


I think my point was that boot times are already fast enough on desktop/laptop, and systemd's improvements over that do not justify the costs for me. It would be nice to improve boot times on routers, but I don't know whether systemd would improve much there.

Boot time on desktop with sysvinit: 8-9s; boot time on desktop with systemd: ~6s. Boot time of router (until network is up and usable, maybe made longer by having to set up WiFi): 1m+.

PC waiting for router is my usual use-case when I power off everything and then power them back on at another time. I haven't said anything about PC waiting for server, and I agree it wouldn't make sense.

I don't have a server with systemd to check, but assuming a similar improvement, 4s out of the 5m+ you mention is barely 1%.


> Lastly, lightweight containers are the real-deal for small development tasks (not for production!). Just one command: systemd-nspawn, and you're ready to go.

You might be interested in firejail [1]. It makes finer-grained use of Linux namespaces, and doesn't depend on systemd (or much of anything, for that matter).

[1] http://l3net.wordpress.com/projects/firejail/


> Lastly, lightweight containers are the real-deal for small development tasks (not for production!).

I've come across this sentiment a few times in the last month, but I haven't yet heard an explanation other than "VMs are battle-tested and containers might leak data to each other". Is there something more that I'm missing? Why aren't containers a good idea to use in production?


If we (fairly or unfairly) group Linux's LXC (e.g. Docker) and the BSDs' jails together, the main contrast with "proper" hypervisors (Xen/KVM/VMware/bhyve - that new thing in FreeBSD 10?) is (the possibility of) full resource accounting/limitation. Go ahead, run your pi-digit-finder at "100%" CPU, pipe /dev/zero over an ssh pipe to /dev/null on some box and pipe it to a local file as well: no other VM or the host will notice. You only get 1 Mbps, x cycles of CPU and x MB of disk.

Secondly, assuming a bug in the kernel, one might assume root in a container can lead to root on the host. BSD jails have been pretty solid for the last few years AFAIK - but hardware support for virtualization might still give more of both separation/safety and speed. There have been some bad bugs in (as I recall) the IO system in Xen, leading to similar issues... but again, the last time I saw anything on that was years ago.

YMMV - generally Docker doesn't have "run untrusted code, safely, as root" as a design goal (yet, AFAIK) (not entirely sure about LXC, née vserver -- the underlying technology) -- so don't expect it to do that. Isolation and security (especially without sacrificing performance) are very hard to get right. Or so a long series of privilege escalation exploits across many different OSes seems to indicate.


> lxc, née vserver

Just to be a little pedantic: LXC has definitely been inspired by the pre-existing VServer and OpenVZ, but it's a different implementation.

A lot of things that are viewed as innovations from Docker really already existed in 2006-2007. Maybe a bit cruder, but not that much. OpenVZ was very close to that. AUFS is the only real innovation as far as I know.

Anyway, the Docker guys were smart enough to ride the cloud wave and hype the thing. I'm pretty sure Parallels missed the boat because they went the open-core way (OpenVZ/Virtuozzo).


I'm just familiarizing myself with Docker at the moment (specifically Docker, not "containers"). I'm finding that there's a lot of glitz and glamour around it that's good for devs, but us ops guys like mundane things like logs and status messages. For example, I get the same message whether I start or stop a container: the arg I used to refer to the container. No information. I've run into a few shortcuts like this. It's pretty magical, don't get me wrong, but it's still in adolescence. I've heard some banks are using it in production (no idea what for, though) - which is a feather in Docker's cap - but there are still some things that need to be polished.


> Why aren't containers a good idea to use in production?

See Dan Walsh articles here: http://www.projectatomic.io/blog/2014/09/yet-another-reason-...


That's about security but people run LXC containers for other reasons.


If your boot time was bad before and decent now, that's not thanks to the goodness of systemd but rather to the badness of whatever hideous system your distro was using before, and/or because your distro was starting a bunch of useless junk that shouldn't have been running in the first place. I've been using a flat /etc/rc (all commands in one file, & at the end of anything non-essential) for years (decades?) now, and have always had a login prompt faster than the display can synchronize to the video mode change.


Lots of people do not seem to understand the criticism of systemd.

systemd = init system + a whole lot of other things.

When people complain about systemd, they usually do not complain about what it does or how it does it in the init-system part. That part is pretty solid as far as functionality is concerned.

When people complain about systemd, they usually complain about the "whole lot of other things" part. Lots of people have different complaints; my biggest one is udev.

udev is a core component of any modern Linux system, and I see systemd absorbing it as nothing but a political move and a power grab. They could have left udev as an independent project and just created a dependency on it.

The "whole lot of other things" part will, by definition, make any other project that is just an init system seem very much deficient in functionality when compared to systemd.


I think the main criticism is the political grabs from systemd. Every bad technical decision has been made for political reasons.

It's not like they were stupid and made a bad decision because they didn't know any better. They made the bad decisions consciously, for these power and political grabs.

That's not how people want Linux distros to be. They want technological innovation to be the driver.

And being in the same camp: fuck you, systemd, for successfully adding to that model where political profit is more important than technological innovation.


This init process debate has brought out one of the worst elements of people in the FOSS community: treating some FOSS technology as an extension of their identity.

Let's keep some perspective here. We are literally just talking about an init system. There are many others you can use. systemd is not taking away your freedom in any meaningful sense of the word "freedom". Debates should be about the technical merits of systemd, not baseless accusations that its developers are making a power grab.


> systemd is not taking away your freedom in any meaningful sense of the word "freedom"

This is absolutely incorrect. We're at the point where certain software packages (GNOME comes to mind) have hard dependencies on it. I was just today reading about some incompatibility that arises if your kernel is built with no IPv6 support, which is explicitly caused by systemd. (To which the response from the systemd folks was something along the lines of "You shouldn't be turning it off anyway".)

(Great, now our software takes philosophical positions...)

Sure, you're "free" to use something else, in the same way that you're "free" to patch and recompile every program that touches it to stop touching it. So "free" in the FOSS sense that nobody but developers care about.

Meanwhile, in the real world, populated by end users and sysadmins, the most important people when it comes to a computing environment, the ones that all of this crap is being done for at the end of the day... not so much.

sigh

I'm annoyed that systemd is taking over for political reasons and not purely on its technical merits, and that there is no way this is not going to lead to a monoculture. There will be others, but they will be relegated to the position of marginalized, niche players that nobody outside of /g/ troll threads care about.

I'm annoyed that the rest of the world is going to have to adapt to this software, rather than the other way around.

I'm annoyed that this software is doing 5000 things where one would do.


> We're to the point where certain software packages (GNOME comes to mind) are requiring hard dependencies on it.

Wow... we've come a long way baby! Now freely available software that you can modify as needed without interference is taking away your freedom! ;-)

> I was just today reading about some incompatibility that arises if your kernel is set up with no IPv6 support which is explicitly caused by systemd.

Actually, the problem is that if you load IPv6 support after a socket was created, there's no efficient way to make that existing socket compatible with IPv6, which of course creates a nasty little integration problem. That wasn't a choice of the systemd folks; that was a choice of how the kernel folks organized their network subsystem and modules.

Systemd runs fine on my system with no IPv6 support.
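
To illustrate the constraint being described: whether a socket can speak IPv6 at all is fixed at socket() time, so a listener created before the ipv6 module exists can't be upgraded later; it has to be created as AF_INET6 from the start (a sketch, abbreviated error handling):

    /* Sketch: a dual-stack listener must be AF_INET6 from creation
     * (with IPV6_V6ONLY off to also accept v4-mapped clients). An
     * existing AF_INET socket can't be "upgraded" when the ipv6
     * module loads later. Error handling abbreviated. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int make_dual_stack_listener(unsigned short port)
    {
        struct sockaddr_in6 addr;
        int off = 0;
        int fd = socket(AF_INET6, SOCK_STREAM, 0); /* fails w/o IPv6 */

        if (fd < 0)
            return -1;
        setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof(off));

        memset(&addr, 0, sizeof(addr));
        addr.sin6_family = AF_INET6;
        addr.sin6_addr = in6addr_any;
        addr.sin6_port = htons(port);

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(fd, 128) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }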

> Sure, you're "free" to use something else, in the same way that you're "free" to patch and recompile every program that touches it to stop touching it. So "free" in the FOSS sense that nobody but developers care about.

For all the complaining about NIH syndrome and absorbing other projects for political purposes, systemd actually builds on top of a lot of very well established components (dbus, udev, etc.). To the extent that software gets tightly coupled with it, it wouldn't be hard to change it so that it used those standard components without systemd... unless systemd is actually providing some unique advantages for that software that you can't live without. In which case... go out and do a better job of it!

> I'm annoyed that this software is doing 5000 things where one would do.

I know, Unix is annoying that way. ;-)


Call it flexibility, then. I don't think you can reasonably deny that a package that requires major surgery to work with a different init system is less flexible than one that simply doesn't care, even if said surgery is perfectly legal with the source available. That flexibility is a large part of why a lot of us came to Linux, and regardless of whether it's technically "freedom", losing it feels like losing freedom.


> I don't think you can reasonably deny that a package that requires major surgery to work with a different init system is less flexible than one that simply doesn't care, even if said surgery is perfectly legal with the source available.

I think you'll find that it isn't nearly that difficult a problem to address. For all kinds of reasons, just like with every other system design that has come before it, there will need to be accommodations made to work with other stuff. It's not like the old code just disappears overnight. You can't succeed as a new platform without a way of working with the old (and again, you've mostly been sold a bill of goods... most of systemd's architecture is the old system).

> That flexibility is a large part of why a lot of us came to Linux, and regardless of whether it's technically "freedom", losing it feels like losing freedom.

If it feels like losing freedom to you, you don't know what that is about. You're losing someone writing code the way you wanted them to. That's not losing freedom. That's getting it.


According to your definition of "freedom", Microsoft Windows is just as Free as Linux. After all, Microsoft gives you the "source code" on the CD itself! All you have to do is start flipping bits here and there, and...

Wait, what's that? Editing compiler-generated assembly code is too hard for you? Well, that's clearly your problem, since you can't expect Microsoft to bend over backwards and write code for you the way you wanted it!


> According to your definition of "freedom", Microsoft Windows is just as Free as Linux. After all, Microsoft gives you the "source code" on the CD itself! All you have to do is start flipping bits here and there, and...

Wha?

> Wait, what's that? Editing compiler-generated assembly code is too hard for you? Well, that's clearly your problem, since you can't expect Microsoft to bend over backwards and write code for you the way you wanted it!

Umm. no. That's Microsoft bending over backwards NOT to give you the source code, as defined very clearly by the FSF and the OSD.

More importantly, in the case of Microsoft's proprietary software (they do actually have some open source stuff which isn't encumbered like this), it's literally a violation of their license agreement for you to edit that code yourself.

Am I through the looking glass or am I being trolled?

Free software means you can't restrict anyone from going ahead and making whatever changes they might want to to software, distributing it to the world, and potentially garnering mindshare in the process.


If there's one thing I've learned about FOSS, it's that everyone is looking for an excuse to fork things. Linux will never develop a monoculture - because someone can and will fork it. How many distributions are there? Desktop environments? Package managers? Text editors and IDEs? And even zooming out from Linux, there's OpenBSD, FreeBSD, etc. If you start yelling about monoculture, isn't SysV init the worst offender for that? We have fewer monocultures than ever before.

If you think freedom is the ability to run only the code you want, the only way you're ever going to get freedom is by writing everything from scratch. Software will never be exactly the way you want it to be unless you write it yourself.

Most software depends on other software. That's just how things work. GNOME is a particularly bad offender there - it installed Apache for some reason the last time I used it. But GNOME also does a lot of things I don't really need it to do. Maybe someone else uses those features, and that's OK. You can switch DEs if you really don't like systemd that much. But if you don't, that's not denying you your freedom - you made a decision about the benefits and drawbacks of a product, and decided to use that product.

Nobody really explains how systemd took over only for political reasons; there are just random posts on forums that make claims and link to some dude's podcast. There's really no reason to believe that the systemd people are malicious.

I think systemd is the wrong choice for debian/ubuntu. But there's no reason to badmouth people about it and say hurtful things. Just use something else.


> There's really no reason to believe that the systemd people are malicious.

You're kidding, right?

http://lists.freedesktop.org/archives/systemd-devel/2014-May...

http://linuxfr.org/nodes/86687/comments/1249943

http://lkml.iu.edu/hypermail/linux/kernel/1404.0/01327.html

The systemd people and Lennart in particular are very open about their contempt for anything that isn't Linux + systemd and their intent to shove whatever they want down everybody's throats regardless of bugs or breakage, and blame everyone but themselves for what their shit breaks.

This can't be dressed up as anything but malicious.


That last thread in particular is a case study in why people don't like systemd. Thank gods that the BDFL of Linux is a sane man.


GNU/Linux wars have replaced the UNIX wars....


Let's keep some perspective here. We are literally just talking about an init system.

Systemd does more than just init; it has the features to replace everything from network manager to fstab. It is not just an init system, it is an invasion of Linux userspace.

Don't take my word for it, just read the developers blog:

http://0pointer.net/blog/projects/fudcon-gnomeasia.html


> systemd is not taking away your freedom in any meaningful sense of the word "freedom"

Systemd has made it impossible for me to run an up-to-date Gnome on FreeBSD. That feels a lot like taking away my freedom.


You're blaming the wrong party here.

The systemd project has no control over what Gnome relies on. They independently rely on systemd because it provides functionality that makes their lives easier, and it's their right to do that. If you want "up-to-date Gnome" to work on FreeBSD, go and write some code to help make it happen.


What are my real options?

I could write patches that reimplement the functionality that Gnome gets from systemd using lower-level functionality, in a cross-platform way. But those patches would be rejected; the Gnome project has decided to use systemd and would not want to duplicate its code. If I were to maintain my own gnome fork I would have to convince distributions to adopt it.

I could write patches that add FreeBSD support to systemd. But those patches would be rejected - again, as a policy decision, systemd doesn't want to support FreeBSD. Thankfully in this case there is a fork, uselessd, but again, we need to convince distributions to adopt it or it's meaningless.

The claim that Gnome "independently" relies on systemd is specious; there was a lot of lobbying and politics from the systemd side. My only practical option is to counter at the same level, and lobby Gnome (and linux distributions) to make the political decision to move away from systemd.


It's even more specious than you think. Red Hat employees are the largest "contributors" to GNOME. It's effectively a Red Hat project.

So, Red Hat project GNOME relies on Red Hat project systemd, and Red Hat employees won't allow systemd to be portable to competing platforms. Convenient.


The real option actually is to bring up FreeBSD to the level that it can provide the same functionality (specifically, dbus interfaces) as systemd and Linux. Gnome depends on systemd features because they're useful and solve real problems. As it is, FreeBSD can't provide those features.

You can of course make the case that the interfaces provided by systemd are substandard, but so long as you don't have an alternative to offer, it's just talk.

As far as I know, Gnome does currently have fallbacks (with reduced functionality) on non-systemd systems, so it's not the case that they just ignore things. However, I perfectly understand why they would not bother with duplicating code just to support systems that aren't good enough.

EDIT: I'm apparently not able to respond to the reply below me for whatever reason, but... It's been said many times that making systemd portable makes no sense, which is why it provides interfaces. And regarding the interfaces, which part of them, exactly, are not stable? There's a very reasonable interface stability promise, which to my knowledge has held, so far.


FreeBSD can and does provide these interfaces. We've had e.g. cgroups-equivalents for years. The reason systemd doesn't run on FreeBSD is pure politics, not technical - after all, if it were technically impossible to implement systemd on FreeBSD, there would be no need for a policy of refusing patches.

> You can of course make the case that the interfaces provided by systemd are substandard, but so long as you don't have an alternative to offer, it's just talk.

The problem isn't that the interfaces are particularly bad, it's that they're not standardized. If systemd would offer standardized interfaces that let me offer a compatible alternative that would be fine. But they don't. Trying to remain compatible with software that will make no effort to provide compatibility from its side is a mug's game.


http://0pointer.de/blog/projects/the-biggest-myths.html

"That is simply not true. Porting systemd to other kernel is not feasible. We just use too many Linux-specific interfaces. For a few one might find replacements on other kernels, some features one might want to turn off, but for most this is nor really possible. Here's a small, very incomprehensive list: cgroups, fanotify, umount2(), /proc/self/mountinfo (including notification), /dev/swaps (same), udev, netlink, the structure of /sys, /proc/$PID/comm, /proc/$PID/cmdline, /proc/$PID/loginuid, /proc/$PID/stat, /proc/$PID/session, /proc/$PID/exe, /proc/$PID/fd, tmpfs, devtmpfs, capabilities, namespaces of all kinds, various prctl()s, numerous ioctls, the mount() system call and its semantics, selinux, audit, inotify, statfs, O_DIRECTORY, O_NOATIME, /proc/$PID/root, waitid(), SCM_CREDENTIALS, SCM_RIGHTS, mkostemp(), /dev/input, ..."

It's not just cgroups.


I guess uselessd is fundamentally impossible then, along with other systems that provide the same functionality. Again, if it were impossible there wouldn't need to be a policy against it.


Is uselessd possible? From its own docs[1]:

"uselessd is planned to work as a primitive stage 2 init (process manager) on FreeBSD. Stage 1 is inherently unportable requires a total overhaul in regards to low-level system logic (with systemd assuming lots of mount points and virtual file systems that aren’t present, is designed with an initramfs in mind and many other things). Stage 3 can always be achieved by having a sloppy shim around the standard tools like shutdown, halt, poweroff, etc.

So far, uselessd compiles on BSD libc with a kiloton of warnings, with lots of gaps and comments in the code, and macros/substitutions in other places. All in all, it is an eldritch abomination. A slightly patched version of Canonical’s systemd-shim is provided and works well enough to emulate the org.freedesktop.systemd1 D-Bus interface. Some of the binaries provide diagnostic information, but at present we are trying to find ways to bring up the Manager interface in the whole buggy affair, in order for systemctl to send method calls. Nonetheless, you are absolutely welcome and encouraged to play around with it in its present state, and we hope to get somewhere eventually."

1) http://uselessd.darknedgy.net/


Except that Lennart Poettering (at the least) was directly involved in negotiating the dependency on systemd's libraries for the GNOME stack: https://mail.gnome.org/archives/desktop-devel-list/2011-May/...

Considering GNOME is part of the new school design philosophy in general and largely developed by Red Hat employees, it's inevitable that it would have happened anyway, but the systemd developers were directly complicit in speeding it up.


That's a proposal. And it's not like Lennart was being somehow subversive. He explicitly states the following:

> systemd is Linux-only. That means if we still care for those non-Linux platforms replacements have to be written. In case of the timezone/time/locale/hostname mechanisms this should be relatively easy as we kept the D-Bus interface very much independent from systemd, and they are easy to reimplement. Also, just leaving out support for this on those archs should be workable too. The hostname interface is documented in a lot of detail here: http://www.freedesktop.org/wiki/Software/systemd/hostnamed -- we plan to offer similar documentation for the other mechanisms.

I'm really not seeing any foul play anywhere; RH tried Upstart (they even used it in RHEL6), found it lacking, and out of that came systemd.

The thing is, the systemd project is far more ambitious (which is good) and not content with just providing an init system. I personally don't see anything wrong with that (a well-integrated core userland for all Linux distros? Yes please), but you obviously do.

I think your project is ultimately not going to gain much traction because it's simply ignoring most of the goals of the systemd project. It might have side-effects on how systemd develops, though, but I can't really say.

It seems to me that Lennart's personal goal is to make the perfect OS as he visualizes it. He's doing work to make it happen, and he's gaining support because the code is useful to other people. If people outside of Linux circles want to get involved in standardizing core DBUS interfaces (which they should, because pretty much everyone seems to use DBUS) and things like daemon startup notification, they should get involved with the systemd project and discuss the interfaces, not just tell people not to use them... That ship has already sailed. Systemd is rapidly becoming the de facto standard, and that progress is not suddenly going to stop because minorities complain too loudly. :)


>Systemd has made it impossible for me to run an up-to-date Gnome on FreeBSD. That feels a lot like taking away my freedom.

It sounds to me like Gnome is the reason why you can't run Gnome without systemd, and that you should probably direct your complaints against them.


It seems to me that Gnome's position of "we want to use this useful library that exists on our biggest platform" is much more reasonable than systemd's position of "we won't accept patches to add cross-platform compatibility, and we won't provide stable interfaces".


There's legal freedom, and there's practical freedom. Sadly, the latter is seldom talked about.

You can be stuck in a maze, in a pit or under a tree trunk. You're legally free, since no law or copyright license says you may not get out. Yet you're stuck, and if you can't get out, you're not free at all.

When it comes to software, it is the size, complexity and complicated interdependencies that make the maze. As the system grows, an individual's practical freedom erodes. For instance, I complain about the web a lot. Even with a browser's source code and a permissive free license, there's close to nothing I can change about it in practice. It would be far too much work for me to maintain millions of lines of code and remain compatible and interoperable with a huge, fast-changing stack of technology... and the more you diverge, the harder it becomes. It's an uphill battle, and at some point you have to give up if you're not the giant. So the four software freedoms are reduced to two (or less). In practice I don't have the freedom to do what I want.

I think the FSF's stance of reducing freedom to a merely ethical issue is alarming. How would Dr. Stallman have felt if he had gotten his printer driver with a free license but so much code and complexity and dependency that it would've been impossible for him and a small team of hackers to actually port it and make it run on his system?

Of course, systemd alone is not approaching that level of complexity. Except that it's not only systemd... the trend seems to be that all aspects of a modern OS are getting larger and more complicated. A little here, a little there, it all adds up and becomes a lot, everywhere. It's a sad trend.

There was a time when you could've picked up a book that pretty much described all of your system's hardware at a low enough level that you could start writing your own bootloader and OS from scratch, knowing you could interface with all the logical hardware devices. And it didn't take hundreds of thousands of lines of code. Now the amount of accumulated cruft we depend on is so large that the idea of writing your own not-a-toy OS is laughable...

Standing on the shoulders of a giant is necessary and helpful, but when you have to do too much of it, it stifles innovation and encourages monocultures.

I don't have anything against systemd per se; it doesn't represent my ideals, doesn't bring me features I want, and so I don't want to use it. If Ubuntu and Debian want to use it, they're of course free to do so. However, I am concerned that with the notion that "systemd has won", its proponents are going to assume it to be everywhere and build future software with the attitude that it is okay to depend on it -- who cares about the people who would prefer not to use it, let them suffer for disagreeing with the king!

Before the systemd rage, sysvinit might have been "the winner" on Linux in the sense that it was most widely used and supported. But we didn't have this sort of polarizing "sysvinit has won, fuck everyone and everything else" notion. Other distros with other init systems have happily coexisted all along, and these other distros haven't had to constantly fight a growing dependency on one specific init system.


> How would Dr. Stallman have felt if he had gotten his printer driver with a free license but so much code ... that it would've been impossible ... to actually port it and make it run on his system?

While I am not an expert on legal language and how it can be used by lawyers, I believe RMS and the FSF at least attempt to address this in the GPL Version 2 (there is probably a similar requirement in the GPL Version 3, but it is more complicated, so I'm not sure which requirement corresponds to this quote from v2):

"The source code for a work means the preferred form of the work for making modifications to it."

Code that is so large and complex that it is not practical to understand or port wasn't written by a human. Such code is probably a template or macro expansion of the real source, and the "preferred form" would be the pre-expansion source.

This may not cover all ways of obfuscating the code, but "preferred form" is trying to be as inclusive as possible.


> Code that is so large and complex that it is not practical to understand or port wasn't written by a human. Such code is probably a template or macro expansion of the real source

Or it was written by thousands of people over twenty years.

The GPL doesn't protect you from accumulated cruft, complexity, and snarled design that make a system hard to understand, let alone modify.


> "the trend seems to be that all aspects of a modern OS are getting larger and more complicated. A little here, a little there, it all adds up and becomes a lot, everywhere. It's a sad trend."

Some hypervisor-based systems are moving in the opposite direction, with unikernels that reduce or eliminate the OS to run directly on virtual hardware: Cloudius OSv, HalVM, OpenMirage, Erlang on Xen.


> There are many others you can use.

I understand your good motivations, but just saying "it will be ok" doesn't make the problem go away. Yes, I can write my own init system; that's good. But can I uninstall systemd from most systems that install and depend on it by default? Write my own distro? You can't just easily plug and play it.

It is a bit like the kernel. Just swapping out the Linux kernel for a FreeBSD one doesn't quite work.

> system not baseless accusations that they are making a power grab.

With that I agree. Maybe the way to get the conversation back on track is to present a few valid technical points in response. Or just ignore it. Saying "hold up people, no fighting please" doesn't work as well in such forums.


> It is a bit like the kernel. Just swapping out the Linux kernel for a FreeBSD one doesn't quite work.

Well, actually that's the heart of the problem right there. Systemd mangles and complects things. You can replace the kernel (see Debian/kfreebsd and, to a lesser extent, Illumos). Or you can make a "distro" like cygwin/mingw et al for Windows and Homebrew for OS X. That works because there are some more or less well-defined interfaces between userland and kernel space -- not just "shit that systemd does that makes sense on recent Linux kernels" (afaik there are no plans for supporting something like Linux 2.4 on small embedded systems, for example).


> udev is a core component in any modern linux system. I see systemd absorbing it as nothing but a political move and a power grab.

I think this is a bit of a stretch. It's not like they did a hostile takeover of udev. The maintainers also thought systemd was the right place for that code to live.

As for bloat, there certainly have been some new features, but so much of the systemd code (from what I can tell) was existing code that now lives in one place. That means it's free to consolidate the utility code it uses (every project has helpers for what (g)libc does not provide). In the grand scheme of things, less duplicated code is a good thing.


> It's not like they did a hostile takeover of udev.

http://lists.freedesktop.org/archives/systemd-devel/2014-May... (via http://redd.it/2a2tz5):

> Also note that at that point we intend to move udev onto kdbus as transport, and get rid of the userspace-to-userspace netlink-based transport udev used so far. Unless the systemd-haters prepare another kdbus userspace until then this will effectively also mean that we will not support non-systemd systems with udev anymore starting at that point. Gentoo folks, this is your wakeup call.


It's this kind of crap that scares me away from systemd. I have no problems working with current init structures, so I gain nothing from systemd. It basically just removes options from me, and gives me nothing in return that I actually want.

What we ultimately lose with systemd is modularity. If we cannot upgrade systemd without also upgrading the kernel, then systemd might as well be considered part of the kernel.


Modularity and isolation are at the core of reliability. I think it is a worthy discussion whether trading a 30-second boot for a 15-second one is worth the risk that your boot process sometimes locks up.

I think there was already at least one visible problem with systemd stepping on kernel developers' toes (so to speak) by reusing one of the kernel's debug flags.

Heck, the kernel is monolithic too. But thinking about it, I trust the kernel developers a bit more than the systemd guys. Maybe it is just a new project and it will stabilize at some point in the future. Right now they are kind of shooting from the hip (adding NTP, udev, network socket pools, logging, ...). That tells me "hello, lockups and freezes" and takes me back to the mid-90s, restarting Windows every day.


On the plus side, maybe HURD will get more love and be pushed to production-ready status...


The choice isn't between 30sec or 15sec boot times. It's between 30sec, 15sec, or 1sec boot times, the latter coming from dropping all of the crap and writing a flat linear /etc/rc file.


This is just disgusting; I hope a lot of people see this. Poettering doesn't even have the shame to hide that he doesn't want anyone using a non-systemd GNU/Linux system.


People already did see it. It was a hot thread on /r/linux and Phoronix. No one really cared.

Gentoo's eudev is more relevant now than it ever has been before.


> udev is a core component in any modern linux system. I see systemd absorbing it as nothing but a political move and a power grab.

So you're complaining about politics, while your only substantive criticism is a purely political one?


Am I the only one who's disgusted with this bloated, convoluted, dbus-dependent pile of crap? I mean, c'mon, binary log files? I'll pass, thanks. It replaces way more than it needed to.

I prefer the BSD-style philosophy: a nice, simple rc.conf. I used to run Arch till it got infected with this garbage too; it slowly progressed away from its BSD-style roots. So recently I just gave up and moved to FreeBSD. Not a single regret so far.


I think I'm mostly fine with journald (as a concept). At least I can explain the reasoning for it to myself. A switch from unstructured to structured data provides a significant advantage, and indexes are useful; I've spent too much time grepping multi-gigabyte log files. Sure, the relatively modern (RFC 5424) syslog protocol has structured data too, but in my experience most software has never bothered to use it. So, forcing a switch by introducing another protocol that has structured data baked in isn't too terrible an idea.

My only issue with journald's binary log files is that they're in a homebrewed custom format that isn't accessible by any standard means. Plain text files aren't directly readable by humans either, but we have cat, less, and similar tools to pass such data to the terminal (sometimes an iconv is required, say, if log entries contain filenames with characters outside the ASCII range), and those tools are available on every modern OS out there.

Personally, a compromise that'd satisfy me (YMMV) would be either an industry-standard log format (like, maybe, sqlite: it's fairly simple, universal, and omnipresent nowadays) or, even better, storing the data in text files but keeping accompanying binary index and metadata files that store the non-human-readable stuff (like the hash chain; I bet no sane human would ever check cryptographic log integrity by hand) and provide additional information for faster machine access.
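To make that concrete, here's a minimal, hypothetical sketch of the text-plus-sidecar idea. Everything here is invented for illustration (none of these names or formats are real journald or syslog interfaces): records stay greppable plain text, while a binary sidecar records each record's byte offset for fast seeking.

    /* Hypothetical sketch: plain-text log plus a binary offset index.
     * Nothing here is a real journald or syslog interface. */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    static FILE *logf, *idxf;

    static int log_append(const char *msg)
    {
        fseek(logf, 0, SEEK_END);
        uint64_t off = (uint64_t)ftell(logf);   /* where this record starts */

        /* the log itself stays ordinary '\n'-terminated text */
        if (fprintf(logf, "%ld %s\n", (long)time(NULL), msg) < 0)
            return -1;
        fflush(logf);

        /* the sidecar stores fixed-size binary entries; a real one could
         * also hold hashes, priorities, or per-field offsets */
        fwrite(&off, sizeof off, 1, idxf);
        fflush(idxf);
        return 0;
    }

    int main(void)
    {
        logf = fopen("demo.log", "a");
        idxf = fopen("demo.log.idx", "ab");
        if (!logf || !idxf)
            return 1;
        return log_append("service started") ? 1 : 0;
    }
Lose or corrupt the .idx file and the text log is still fully readable with cat and grep, which is the whole point of the compromise.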

But why the heck journald is a tightly coupled part of systemd instead of a separate project is beyond me. I can't deny that systemd has some good things about it too, but it's far too monolithic and unhackable compared to mostly-scripted init systems. And such negative points easily outweigh the positive ones.


The biggest problem with dropping the syslog protocol altogether is that there's a huge amount of other stuff that speaks syslog. Things like networking gear use it for the same purposes as a normal *nix box, and getting them to switch is going to be like pulling teeth. For your particular case, I'd say you should look at rsyslog and/or syslog-ng. Both of them have backends that talk to actual databases, so you can have all your tools readily available, and they can additionally dump to plain text and/or email messages at the same time. As to the why of journald, it seems very much like the rest of Poettering's NIH MO. He doesn't seem too capable of working with other projects' maintainers to get his goals handled, so he just does everything himself, to the detriment of the overall community.


Well, journald has a syslog compatibility layer and can talk syslog, so supporting existing software is not an issue. It doesn't speak syslog by itself, so it probably can't forward logs to another networked syslog server, but I don't see anything that prevents implementing this, if necessary.

The point is, journald also introduces a new protocol that's oriented toward logging structured data. This way it not only provides a feature, but forces developers to think about structuring their log output in a machine-readable manner. I think that's journald's raison d'être, and it's a rationale I personally accept.

Just my opinion, though.


The problem with the journal format is that Poettering used a completely new format, instead of one of the many existing formats out there that have an excellent design for such things. He could have used something like Berkeley DB or sqlite, which have been used extensively to store machine-readable data, are fault-tolerant, and have small memory footprints, and which would have let developers and administrators use the many pre-existing tools that exist for both database types for log analysis.

However, my philosophical problem is that there's no escape from it. I'd be perfectly happy with it existing if there was a way to turn it off, and let me use my own syslogd program in peace. Instead, I have yet another binary on my system that's running, with all of the problems that can bring, wasting cycles while I hand off log data for actual processing.


Compatibility isn't the problem. The problem is the use of structured binary data itself for logs. Logging binary structures instead of raw text makes it very difficult to recover from crashes or misbehavior, since the logging facility (journald or otherwise) must ensure that the log's structure on disk is consistent at all times.

I've seen more than my fair share of journald corrupting its own log due to unclean shutdowns. If I'm going to be grepping the journald log file anyway to reconstruct it (possible, but not easy, since journalctl is useless here), then why bother using it at all? It fails at the very task it was built for.
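To illustrate the structural fragility being described, consider a reader for a made-up length-prefixed binary record format (this is not journald's actual on-disk layout, just a sketch): one torn or corrupted length field desynchronizes the reader for everything after it, whereas with '\n'-separated text you just skip to the next newline.

    /* Sketch of why torn writes hurt binary logs more than text ones.
     * Illustrative length-prefixed format only. */
    #include <stdint.h>
    #include <stdio.h>

    static int read_records(FILE *f)
    {
        uint32_t len;
        char payload[4096];

        while (fread(&len, sizeof len, 1, f) == 1) {
            if (len == 0 || len > sizeof payload)
                return -1;          /* corrupt length: stuck here, with no
                                     * delimiter to resynchronize on */
            if (fread(payload, 1, len, f) != len)
                return -1;          /* truncated record */
            fwrite(payload, 1, len, stdout);
            putchar('\n');
        }
        return 0;
    }

    int main(void)
    {
        FILE *f = fopen("demo.journal", "rb");
        return f ? (read_records(f) < 0) : 1;
    }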


> journald corrupting its own log due to unclean shutdowns

Exactly! What the binary log advocates seem to be missing is that those unclean shutdowns (often called "crashes") are probably the very thing you are going to want to search for in the log. In general, few people care how (or if) the log records that the cron daemon yet again ran the hourly maintenance without any errors. What everybody who has had to search a system log cares about is "what happened right before the crash?"

The data that needs to be committed to the log successfully and immediately almost by definition arrives at a time when you do not have time for the complexity of an atomic addition to a database. Often there is barely time for any disk write at all.

The only way to make such a system log useful would be to make adding events synchronous. As nobody wants to deal with a syslog that is 10,000x slower (or worse), the only sane option is what we always did: make the writes simple and immediate, and defer any fancier features.
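For illustration, this is roughly what "simple and immediate" means at the code level (a sketch, not any particular syslogd's implementation): one write(2) per record to an O_APPEND descriptor, so a crash can at worst truncate the final line, never scramble the structure of everything before it.

    /* Sketch: append-only logging with one write() per record.
     * No header, no index, no repair pass needed after a crash. */
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tmp/demo.log",
                      O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0)
            return 1;

        const char *rec = "daemon[42]: something went wrong\n";
        /* O_APPEND makes concurrent appends land at the end atomically */
        if (write(fd, rec, strlen(rec)) < 0)
            return 1;
        return 0;
    }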

Have they never heard of "log parsers" before? If you want the data in a searchable DB (which can be very useful), you build that from the original log, either with an async daemon or deferred via cron or similar.


I'm not sure it was built for the task we expect a log facility to fulfill. I mean, the corruption issue is a built-in feature of journald: https://bugs.freedesktop.org/show_bug.cgi?id=64116


I think I disagree.

Classic plain text log files are structured too - they're files of '\n'-separated records, without much else to it. It doesn't really matter (integrity-wise) whether one's writing, say, JSON or mostly-unstructured plain English records.

I'm unaware of journald's particular internal implementation quirks and issues. Maybe it's badly coded and has lots of bugs that corrupt data. That would be an implementation issue. But the overall idea of using "binary" logs doesn't seem that bad to me.


It seems highly disturbing to me how much pull he seems to have obtained by writing what have, in my experience, been complicated and crash-prone tools. Especially when such tools seem essentially in direct conflict with the Unix philosophy.

I'm just glad there are sane options available still.


> Sure, the relatively modern (RFC 5424) syslog protocol has structured data too, but in my experience most software has never bothered to use it. So, forcing a switch by introducing another protocol that has structured data baked in isn't too terrible an idea.

I agree. There is a strong need for common interface(s), and that's a big part of the motivation behind Fluentd/Kafka/etc. www.fluentd.org/blog/unified-logging-layer


sqlite (or other "real databases") needs an fsync for every transaction, or at least for many of them, to be data-safe. Without that it is far more prone to losing data than journald, because with a write-ahead log the window for losing data is far bigger.

And a journal that has another journal inside it would be somewhat silly. A simple write log can be done better.

I don't think an fsync on every log commit is a good idea. That would practically invite DoS attacks.

That said I'm somewhat troubled by the cavalier attitude to log data safety in journald too.
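As a toy illustration of that tradeoff (assuming nothing about journald's actual write path), here is what per-record durability costs:

    /* Sketch: the durability/throughput tradeoff for log writes.
     * fsync() after every record guarantees it reached the disk,
     * but turns each log line into a full device flush. */
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    static int log_record(int fd, const char *line, int durable)
    {
        if (write(fd, line, strlen(line)) < 0)
            return -1;
        /* durable: survives power loss, but costs milliseconds per call
         * on spinning disks -- and invites DoS if any client can log */
        return durable ? fsync(fd) : 0;
    }

    int main(void)
    {
        int fd = open("/tmp/demo.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0)
            return 1;
        for (int i = 0; i < 1000; i++)
            log_record(fd, "another event\n", 0);  /* fast; the last few
                                                    * records may be lost */
        return 0;
    }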


I have some difficulty figuring out the use cases for journald over (some) plainish text log files. Our disks are getting faster and bigger. If you need to consolidate logs you have many options (starting with rsyslogd). If you need to log gigs of data, that can't possibly be kernel/init logs?! And if it's application logs, why not pipe them through something for indexing? By definition, if I need kernel/local/init log files it's because something went very wrong; in such cases problems with clean shutdown/startup seem very likely, and in such use cases plain text to file/serial wins hands down.

I'm not against having a wrapper that magically slurps stderr/stdout to timestamped logs -- but if that can't be written cleanly with the APIs we have, then surely what we need is to make the minimal improvement to our (probably kernel) API that makes writing such a program a trivial exercise? Nothing I've seen of systemd has me convinced the project cares one whit about finding the simplest, least coupled solution to any problem.


Systemd is more like busybox than sysvinit. People bitch about systemd because they're not fooled. The systemd folks (mostly Red Hat) have their agenda. I don't mind that. Heck, I even agree with some of it. But pushing the whole bloated thing down nearly everyone's throat while disguising it as just a simple init replacement is a bit too much.


> dbus-dependent pile of crap?

This seems to come up frequently. I'm curious: how, alternatively, would a local process communicate with the init daemon?


I'm curious:

Why is the init daemon what a local process should communicate with?

A daemon that manages other daemons doesn't need to be PID 1, even to reap zombies.

Secondly, daemons shouldn't care what process is managing them; a principled approach to communication between the daemon manager and a daemon would probably include handing off sockets/ports/fds, but probably not much else.
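For the curious, the fd handoff mentioned here is a long-established mechanism, not something new. The sender side of the classic SCM_RIGHTS dance looks roughly like this (receiver side omitted, error handling trimmed):

    /* Sketch: pass an already-open fd (e.g. a bound listening socket)
     * to another process over an AF_UNIX socket using SCM_RIGHTS. */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    int send_fd(int chan, int fd_to_pass)
    {
        char dummy = 'F';
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        char ctrl[CMSG_SPACE(sizeof(int))] = {0};
        struct msghdr msg = {0};

        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof ctrl;

        struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
        cm->cmsg_level = SOL_SOCKET;
        cm->cmsg_type = SCM_RIGHTS;            /* kernel dups the fd across */
        cm->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cm), &fd_to_pass, sizeof(int));

        return sendmsg(chan, &msg, 0) == 1 ? 0 : -1;
    }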


>A daemon that manages other daemons doesn't need to be PID 1, even to reap zombies.

And get this: prctl(PR_SET_CHILD_SUBREAPER) has existed since May 2012 (the original patch was created and submitted by Poettering himself), and yet we're still told that service management needs to run as PID 1 in order to see all double-forked, detached, daemonized processes.
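Concretely, the mechanism is a one-liner: a non-PID-1 supervisor marks itself a subreaper, and orphaned descendants of double-forking daemons reparent to it instead of to PID 1. A minimal sketch (Linux 3.4 or later):

    /* Sketch: a service manager that is not PID 1 but still sees
     * orphaned descendants of double-forking daemons. */
    #include <stdio.h>
    #include <sys/prctl.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        if (prctl(PR_SET_CHILD_SUBREAPER, 1) < 0)
            return 1;

        /* ... spawn services here; when they double-fork, their orphaned
         * children now reparent to us instead of to PID 1 ... */

        int status;
        pid_t pid;
        while ((pid = wait(&status)) > 0)      /* reap everything we inherit */
            printf("reaped %d\n", (int)pid);
        return 0;
    }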


> Why is the init daemon what a local process should communicate with?

Because there could, and I know this is a crazy thought, be some benefit in having more meaningful information flow between the master of processes and the processes it manages?


> how alternatively [to dbus] would a local process communicate with the Init daemon?

The answer to that is easy: It's still signals, like it has been for decades, and systemd doesn't change that any more than anything else does. systemd defines some more signals, but that's about it.

Notice that you said init daemon. The errors here are in thinking that (a) dbus is the communications system, even in systemd, for the part of the system that does overall system state management and the stuff that the kernel requires of process #1; and (b) the part of the system that supervises daemons runs in process #1, the "init daemon", in every package.

systemd, the package, uses dbus in a number of places between a number of components, most notably logind, hostnamed, timedated, machined, and localed. But don't get that confused with systemd the program that runs as process #1, which is constrained by the kernel (and others) into using the same signals as always, just as any other system manager is. There's an AF_LOCAL socket (/run/systemd/private) for systemctl to talk to PID #1 using a private undocumented protocol, and a public documented D-Bus API for units, but those are for the service management part of systemd the program.

The system manager and service manager in the nosh package are in the same boat. The system manager's API is that same set of signals again (augmented with some of the extra systemd-defined signals that fit the model). There's no other API because the system manager is not what the world talks to about service management. The control/status API for individual services is the filesystem, the service manager (that is not in PID 1) presenting the same suite of pipes and files as daemontools-encore. And there's an AF_LOCAL socket for system-control (and indeed anything else, such as service-dt-scanner a.k.a. svscan) to talk to the service manager for loading and unloading service bundles in the first place, and plumbing them together.


Named pipes, POSIX IPC, signals, etc. There are quite a few existing communication methods that can be used for this purpose.


OK, and how do you format the data you send across, or find out which functions the daemon supports and trigger them?


Classic sysvinit or BSD-style init doesn't rely upon this sort of functionality at all. Just some simple scripts. No need to complicate it any further. A few signals are all that are needed.
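For illustration, the entire "control interface" of a traditional init boils down to something like the following sketch (SIGHUP-means-reload is just the common convention; exact signal sets vary between inits):

    /* Sketch: a signal-driven init-style control loop. */
    #include <signal.h>
    #include <unistd.h>

    static volatile sig_atomic_t want_reload, want_shutdown;

    static void on_hup(int sig)  { (void)sig; want_reload = 1; }
    static void on_term(int sig) { (void)sig; want_shutdown = 1; }

    int main(void)
    {
        struct sigaction sa = {0};
        sigemptyset(&sa.sa_mask);
        sa.sa_handler = on_hup;
        sigaction(SIGHUP, &sa, NULL);          /* conventionally: reload config */
        sa.sa_handler = on_term;
        sigaction(SIGTERM, &sa, NULL);         /* conventionally: shut down */

        while (!want_shutdown) {
            pause();                           /* sleep until a signal arrives */
            if (want_reload) {
                /* re-read configuration here */
                want_reload = 0;
            }
        }
        /* stop services, then exit */
        return 0;
    }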


Simple scripts? Have you looked at the shit in /etc/init.d in a modern linux distribution?

  $ wc -l * | sort -n | tail
   274 exim4
   286 apache2
   290 dnsmasq
   298 nfs-common
   350 clamav-freshclam
   364 udev
   386 checkroot.sh
   420 clamav-daemon
   465 clamav-milter
  9893 total
If you look into these, you'll find tons of near-duplicate code between scripts, and frequently every script reinvents the wheel in one way or another.


Using sysvinit doesn't require a domain-specific configuration language to be known and understood. So I think "simpler" is quite valid.

Less verbose does _not_ mean simpler! In many cases it means quite the opposite.

UNIX to me is about simplicity. We don't need crap like binary logs and heavy RPC mechanisms to be polluting beautifully simple and minimal systems.

As several others have noted, the code duplication issue is solved in FreeBSD's init(8) with rc.subr.


  /etc/init.d $ for I in * ; do printf "%s %s \n" $(cat $I | grep -v -e '#' | wc -l) $I; done | sort -n | grep -v net.e | tail

  142 mysql
  142 sysfs
  157 kexec
  172 udev
  172 xdm
  182 apache2
  195 bootmisc
  232 named
  297 dmcrypt
  736 net.lo
This is on an OpenRC Gentoo system. There's very little duplicated code in the init scripts.

You might well complain that many of these init scripts are substantially longer than the equivalent systemd unit files. Your complaint would be valid. Thing is, many of these init scripts do so much more than the equivalent systemd unit files. For instance, the postgres and mysql unit files that I've seen permit no user configuration (such as altering the daemon's listen port, config file location, and the like). [0] They also don't do any sort of housekeeping, such as verifying the validity of the service's configuration file, checking and repairing the mode and ownership of the same, or verifying the existence of the service's data directory.

I understand that OpenRC wasn't being considered for Debian Jessie, but it does a lot of things right, and is (IMO) head and shoulders above SysV init. (But then, isn't even bringing up SysV init kind of beating a dead horse? We all agree that it really needs improvement.)

If you're interested, Gentoo's apache2 init script is here: http://pastebin.ca/2845519 . For reference, the apache2 systemd service file is here: http://sources.gentoo.org/cgi-bin/viewvc.cgi/gentoo-x86/www-... .

For a look at a simpler init script, check out the script for dnsmasq: http://pastebin.ca/2845520 . Looks pretty simple, no? At least as simple as the systemd service files we've been seeing, yes? The config file for dnsmasq is a single line: DNSMASQ_OPTS="--user=dnsmasq --group=dnsmasq".

OpenRC provides complexity when you need it, and gets out of your way when you don't.

[0] I don't have systemd installed, so I might be missing the user configuration facilities. If they exist, and the configuration files have any appreciable complexity, then their line count must be counted against the unit file's line length.


Yeah, I've got OpenRC on one of my Funtoo systems too... Let's try some different shell magic...

~ # for file in `equery files openrc`; do [ -f $file ] && echo $file; done | xargs wc | tail -1

  23729 196573 7432232 total

Oh, and that's with lzma compressed man pages.


If duplicate code bothers you this much, then what stops you from abstracting the common routines into something like BSD's /etc/rc.subr, and then sourcing them in the service-specific init script?


You see crap. I see time-polished scripts.

I also suspect you're running RH. Under Debian script lengths are typically quite short:

n: 104 sum: 13553 min: 8 max: 1246 mean: 130.317308 median: 99 sd: 138.595750

That outlier, by the way, is xprint, part of CUPS. Never had to touch it myself.

Quite a few of those lines are comments, and the basic structure is a set of start / stop / restart blocks.


The comment makes more sense if you substitute "familiar" for "simple," a tactic I use a lot while reading arguments like these.


And on my Debian testing system with systemd, the longest service definition is 46 lines! (/lib/systemd/system/getty@.service)


Yes, modern Linux and sysvinit are pigs compared to some alternatives (though systemd is worse). But the choice isn't a binary one between sysvinit and systemd. How about this? init + rc + all the scripts combined are smaller than your handful of daemon-specific scripts. And I understand exactly how all of this works (the kernel parts included). I can easily change any part of it, I can easily debug any part of it, and I can easily extend any of it if I need special features. With systems that comprise hundreds of thousands of lines of code, just beginning to understand how all of it works together takes much more time...

Simplicity really does buy me something.

    $ wc -l /usr/src/sbin/init/init.c
        1450 /usr/src/sbin/init/init.c

    $ wc -l /etc/rc
        537 /etc/rc

    $ wc -l /etc/rc.d/*

      21 /etc/rc.d/amd
      11 /etc/rc.d/apmd
      21 /etc/rc.d/avahi_daemon
      21 /etc/rc.d/avahi_dnsconfd
      11 /etc/rc.d/bgpd
      15 /etc/rc.d/bootparamd
       9 /etc/rc.d/cron
      14 /etc/rc.d/cvsyncd
      16 /etc/rc.d/dbus_daemon
      15 /etc/rc.d/dhcpd
      11 /etc/rc.d/dhcrelay
      12 /etc/rc.d/dvmrpd
      11 /etc/rc.d/ftpd
      11 /etc/rc.d/ftpproxy
       9 /etc/rc.d/hostapd
       9 /etc/rc.d/hotplugd
      11 /etc/rc.d/httpd
      13 /etc/rc.d/identd
       9 /etc/rc.d/ifstated
      17 /etc/rc.d/iked
       9 /etc/rc.d/inetd
      17 /etc/rc.d/isakmpd
      17 /etc/rc.d/iscsid
      11 /etc/rc.d/ldapd
      15 /etc/rc.d/ldattach
      11 /etc/rc.d/ldomd
      11 /etc/rc.d/ldpd
      11 /etc/rc.d/lockd
       9 /etc/rc.d/lpd
      16 /etc/rc.d/mopd
      17 /etc/rc.d/mountd
       9 /etc/rc.d/mrouted
      18 /etc/rc.d/nfsd
      11 /etc/rc.d/npppd
      38 /etc/rc.d/nsd
      12 /etc/rc.d/ntpd
      11 /etc/rc.d/ospf6d
      11 /etc/rc.d/ospfd
      24 /etc/rc.d/pflogd
      11 /etc/rc.d/popa3d
      11 /etc/rc.d/portmap
      16 /etc/rc.d/rarpd
       9 /etc/rc.d/rbootd
     289 /etc/rc.d/rc.subr
      11 /etc/rc.d/relayd
      11 /etc/rc.d/ripd
       9 /etc/rc.d/route6d
      11 /etc/rc.d/rsyncd
      11 /etc/rc.d/rtadvd
      11 /etc/rc.d/rtsold
       9 /etc/rc.d/rwhod
      11 /etc/rc.d/sasyncd
      13 /etc/rc.d/sendmail
       9 /etc/rc.d/sensorsd
      11 /etc/rc.d/slowcgi
      13 /etc/rc.d/smtpd
      11 /etc/rc.d/sndiod
      12 /etc/rc.d/snmpd
      26 /etc/rc.d/spamd
      25 /etc/rc.d/spamlogd
      13 /etc/rc.d/sshd
      11 /etc/rc.d/statd
      15 /etc/rc.d/syslogd
      12 /etc/rc.d/tftpd
      11 /etc/rc.d/tftpproxy
       9 /etc/rc.d/tor
      32 /etc/rc.d/unbound
       9 /etc/rc.d/watchdogd
      11 /etc/rc.d/wsmoused
       9 /etc/rc.d/xdm
      16 /etc/rc.d/ypbind
      11 /etc/rc.d/ypldap
      28 /etc/rc.d/yppasswdd
      13 /etc/rc.d/ypserv
    1285 total


I think you also need to count the Bash source code there.


You cannot assume that /bin/sh always refers to Bash. There's a lot more to Unix than just Linux and Mac OS X. On FreeBSD /bin/sh is a 1989 rewrite of the SVR4 Bourne shell. As of 10.0-RELEASE it comes to 21167 lines, including its Makefile, ancillary shell scripts, and documentation.


That's not really fair, since the shell is an independent part of the system and could be swapped out for another implementation. Plus, it's not there just for the init system; it'd be there regardless of the init system. It's an already existing, independent component that the init system merely leverages.

Otherwise should we also count the C compiler, libc, all the CLI utilities and the kernel?

But here you go. Still smaller than systemd. Sysvinit needs a shell too...

    $ wc -l /usr/src/bin/ksh/*.[ch]
     127 /usr/src/bin/ksh/alloc.c
    1412 /usr/src/bin/ksh/c_ksh.c
     935 /usr/src/bin/ksh/c_sh.c
     560 /usr/src/bin/ksh/c_test.c
      53 /usr/src/bin/ksh/c_test.h
     201 /usr/src/bin/ksh/c_ulimit.c
      60 /usr/src/bin/ksh/config.h
     829 /usr/src/bin/ksh/edit.c
      86 /usr/src/bin/ksh/edit.h
    2163 /usr/src/bin/ksh/emacs.c
    1333 /usr/src/bin/ksh/eval.c
    1433 /usr/src/bin/ksh/exec.c
     107 /usr/src/bin/ksh/expand.h
     593 /usr/src/bin/ksh/expr.c
     984 /usr/src/bin/ksh/history.c
     438 /usr/src/bin/ksh/io.c
    1653 /usr/src/bin/ksh/jobs.c
      13 /usr/src/bin/ksh/ksh_limval.h
    1643 /usr/src/bin/ksh/lex.c
     132 /usr/src/bin/ksh/lex.h
     196 /usr/src/bin/ksh/mail.c
     786 /usr/src/bin/ksh/main.c
    1149 /usr/src/bin/ksh/misc.c
      91 /usr/src/bin/ksh/mknod.c
     285 /usr/src/bin/ksh/path.c
     267 /usr/src/bin/ksh/proto.h
     420 /usr/src/bin/ksh/sh.h
    1161 /usr/src/bin/ksh/shf.c
      82 /usr/src/bin/ksh/shf.h
     897 /usr/src/bin/ksh/syn.c
     231 /usr/src/bin/ksh/table.c
     183 /usr/src/bin/ksh/table.h
     420 /usr/src/bin/ksh/trap.c
     708 /usr/src/bin/ksh/tree.c
     141 /usr/src/bin/ksh/tree.h
      57 /usr/src/bin/ksh/tty.c
      37 /usr/src/bin/ksh/tty.h
    1210 /usr/src/bin/ksh/var.c
      10 /usr/src/bin/ksh/version.c
    2128 /usr/src/bin/ksh/vi.c
   25214 total


What's wrong with good old Unix pipes?


They don't solve race conditions in peers trying to locate each other (surprisingly difficult). They don't provide a standardized marshaling format. They don't come with an implementation that integrates with main loops for event polling. They don't handle authentication (well, sort of). They have an inherent vulnerability in FD passing where you can cause the peer to lock up. You can get into deadlock situations in your messaging code if you aren't really careful about message sizes and about when you order poll-in/poll-out. They aren't introspectable to see what the peer supports. They make it super easy to not maintain an ABI.

I could go on.
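The deadlock point in particular is easy to demonstrate. The following self-contained sketch intentionally hangs: each peer writes its full message before reading, so once both kernel pipe buffers (64 KiB by default on Linux) fill, both sides block forever.

    /* Demonstration: two peers that both write-then-read over a pipe
     * pair deadlock once the pipe buffers fill. Intentionally hangs. */
    #include <unistd.h>

    #define BIG (1 << 20)              /* far larger than a pipe buffer */

    static char buf[BIG];

    int main(void)
    {
        int a[2], b[2];                /* a: parent->child, b: child->parent */
        pipe(a);
        pipe(b);

        if (fork() == 0) {
            write(b[1], buf, BIG);     /* child blocks once b fills...       */
            read(a[0], buf, BIG);
            return 0;
        }
        write(a[1], buf, BIG);         /* ...parent blocks once a fills too  */
        read(b[0], buf, BIG);          /* never reached: deadlock */
        return 0;
    }
Careful protocol design (read before writing more than PIPE_BUF, or poll with bounded chunks) avoids this, but the point stands: the pipe itself won't save you.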


You seem to want pipes to bend over backwards to solve each and every one of your applications' problems.

> The don't solve race conditions in peers trying to locate each other (surprisingly difficult).

Not even sure what you mean here. Are you talking about peer discovery? Because DBus won't help you there either--peers have to be aware of each other's DBus object paths before they can rendezvous. Similarly, two peers need to know where the common pipe is to rendezvous.

> They don't solve a standardized marshaling format.

Nor should they. There are a ton of ways to skin this cat in userspace, depending on what your application needs. Protobufs come to mind, for example, but there are others.

Why do you want the pipe to enforce a particular marshaling format? Does the pipe know what's best for every single application that will ever use it?

> They don't come with an implementation to integrate with main loops for event polling.

It's not the kernel's responsibility to implement the application's main loop. That's what libevent and friends are for today, if you need them.

> They have an inherent vulnerability in FD passing where you can cause the peer to lock up.

Last I checked, you pass file descriptors via UNIX sockets, not pipes.

> They don't handle authentication (well, sort of).

Depends on your application's threat model. The kernel provides some basic primitives that can be used to address common security-related problems (capabilities, permission bits, users, groups, and ACLs). If they're not enough, you're free to perform whatever authentication you need in userspace to secure your application against your threat model's adversaries.

It is unreasonable to expect the pipe to be aware of every single threat model an application expects, especially since they change over time.

> You can get into deadlock situations in your messaging code if you aren't really careful about message sizes and when you order poll in/poll out.

It's not the pipe's fault if you don't use it correctly.

> They aren't introspect-able to see what the peer supports.

Peer A could use the pipe to ask peer B what it can do for peer A. Why do you want the pipe to do peer B's job?

> They make it super easy to not maintain ABI.

Nor does DBus. Nothing stops an application from willy-nilly changing the data it serves back.


> Not even sure what you mean here. Are you talking about peer discovery? Because DBus won't help you there either--peers have to be aware of each other's DBus object paths before they can rendezvous. Similarly, two peers need to know where the common pipe is to rendezvous.

I suspect he was referring to socket activation and how that simplifies these kinds of messes.
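For reference, the daemon side of systemd-style socket activation is small: the manager binds the socket itself, then hands it over as fd 3 and sets two environment variables. A sketch without libsystemd (sd_listen_fds(3) performs the same checks with more care):

    /* Sketch: detect systemd socket activation by hand.
     * Protocol: LISTEN_PID and LISTEN_FDS env vars; fds start at 3. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define SD_LISTEN_FDS_START 3

    int main(void)
    {
        const char *pid  = getenv("LISTEN_PID");
        const char *nfds = getenv("LISTEN_FDS");

        if (pid && nfds && atoi(pid) == (int)getpid() && atoi(nfds) >= 1) {
            int sock = SD_LISTEN_FDS_START;    /* already bound and listening */
            printf("socket-activated: serving on inherited fd %d\n", sock);
            /* accept(sock, ...) and serve */
        } else {
            printf("started manually: bind our own socket\n");
            /* socket() / bind() / listen() as usual */
        }
        return 0;
    }
The rendezvous race goes away because the socket exists before either peer runs: clients can connect while the daemon is still starting.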

> Nor should they. There are a ton of ways to skin this cat in userspace, depending on what your application needs. Protobufs come to mind, for example, but there are others.

Right... so that's exactly what systemd did. It used Dbus, which provides that standard serialization format. Not my favourite format, but very well established and tested and focused on systemd's problem domain.

The point is, in order to have loose coupling between components, something like unix pipes is just a starting point.

> Does the pipe know what's best for every single application that will ever use it?

Ah, now I understand the problem with systemd. I never realized it was trying to take over every application's communications protocol! ;-)

Seriously, I think it is perfectly reasonable (and necessary) to define a standard protocol for system event notifications... I say this because it has already been done... by the standard components like udev & dbus that systemd is building on top of...

> Last I checked, you pass file descriptors via UNIX sockets, not pipes.

Correct. People have a tendency to mess up their semantics, though. If the original poster wasn't referring to Unix domain sockets, then it is an even sillier question.

> It is unreasonable to expect the pipe to be aware of every single threat model an applications expects, especially since they change over time.

Yes, but you do need something more sophisticated than a pipe to manage secure communications between your systems components.

> Nor does DBus. Nothing stops an application from willy-nilly changing the data it serves back.

? D-Bus will drop you like a hot potato the moment you fire off invalid messages. You could send valid messages with fraudulent/misleading data payloads, I guess, but at least a whole host of problems are addressed by tightening that up.


> The point is, in order to have loose coupling between components, something like unix pipes is just a starting point.

It's also an ending point. If each application gets to define its own IPC primitives, then there are as many app-to-app communication protocols as there are app-to-app pairs. This does not make for a loosely-coupled ecosystem.

> Seriously, I think it is perfectly reasonable (and necessary) to define a standard protocol for system event notifications... I say this because it has already been done... by the standard components like udev & dbus that systemd is building on top of...

This is circular reasoning. You're saying "we should use systemd's notification protocol, because systemd uses it." This says nothing about the technical merits of its protocol.

> Yes, but you do need something more sophisticated than a pipe to manage secure communications between your systems components.

Um, data within a pipe is visible to only the endpoint processes, the root user (via procfs and /dev/kmem), and the kernel. If you don't trust an endpoint, you should stop communicating with it. If you don't trust the root user or the kernel, you can't really do anything securely at all in the first place. My point is, data within a pipe is about as secure as it's going to get.

> You could send valid messages with fraudulent/misleading data payloads I guess, but at least a whole host of problems are addressed by tightening that up.

I think you will find that the bulk of IPC problems will come from dealing with data you didn't expect--that is, processes sending "fraudulent/misleading" data and other processes acting on it. DBus won't help you there--you always always ALWAYS have to validate data you receive from untrusted parties, no matter what transport or wire format you're using.


> It's also an ending point. If each application gets to define its own IPC primitives, then there are as many app-to-app communication protocols as there are app-to-app pairs. This does not make for a loosely-coupled ecosystem.

That's exactly why you need a more systemic approach to the IPC mechanism...

You can pretend that "oh, this is just a stream, so there isn't tight coupling", but the information that is communicated is the same. If you haven't imposed some structure and consistency on it, that's exactly how you end up with a ball of mud.

> This is circular reasoning. You're saying "we should use systemd's notification protocol, because systemd uses it." This says nothing about the technical merits of its protocol.

You misunderstood my point. I'm not justifying it on the basis that systemd is using it. I'm saying the fact that all the other systems have arrived at a similar, and in many cases the exact same mechanism, is pretty strong evidence that it is a reasonable design choice.

Basically all the Linux systems out there are already using udev & dbus. Most of the non-Linux systems do as well. Everyone's done it and made it work. That systemd is adopting arguably the most entrenched one in the Linux sphere is hardly as controversial as people seem to think it is.

> My point is, data within a pipe is about as secure as it's going to get.

I wasn't trying to suggest it wasn't a secure point-to-point communication mechanism (it has issues, but it's fine enough). The issue is that you need more integration with the security model to avoid having a rat's nest of security logic on top of it.

> I think you will find that the bulk of IPC problems will come from dealing with data you didn't expect--that is, processes sending "fraudlent/misleading" data and other processes acting on it.

There's a very long and glorious history of malformed and misleading IPC causing problems. Not that it is the only thing, but life becomes a lot easier when that problem is off the table.

> DBus won't help you there--you always always ALWAYS have to validate data you receive from untrusted parties, no matter what transport or wire format you're using.

Yes you will. However, DBus ensures that you don't have to write a ton of redundant code just verifying you are getting validly structured data and dealing with the nasty ways someone might try to exploit that.

Imagine having to write a secure REST service where the only thing you had to worry about in the entire network protocol stack was the validity of the data expressed in the payload.


> That's exactly why you need a more systemic approach to the IPC mechanism... You can pretend that "oh, this is just a stream, so there isn't tight coupling", but the information that is communicated is the same. If you haven't imposed some structure and consistency on it, that's exactly how you end up with a ball of mud.

So, you basically want to turn IPC into CORBA. It's a bad idea to have the OS impose too much structure on your IPC, in the same way that it's a bad idea to have the base class in an object hierarchy try to take on too many subclass-specific responsibilities. This is because over-specialization of a component needlessly constrains the designs of systems that use it.

That said, you are correct in that byte streams alone do not make for loosely coupled systems. Programs must additionally emit data such that other unrelated programs can operate on it without modification. But we already have this universally-parsable data format: it's called human-readable text. It's why you can "grep" and "awk" and "sed" the outputs of "ls" and "find" and "cat", for example.

Take a second and imagine what the world would be like if you had to write "grep" such that it had to be specifically designed to interact with "find," instead of simply expecting a stream of human-readable text. Imagine if "awk" had to be specifically designed to interact with "ls." This is the world that CORBA-like IPC creates, where programs not only need to be intrinsically aware of the higher-level RPC methods each other program exposes, but also intrinsically aware of the access and consistency semantics that go along with it. No thank you; I'll stick with pipes and human-readable text, where the data format, data access, and consistency semantics are universally applicable.

> Basically all the Linux systems out there are already using udev & dbus. Most of the non-Linux systems do as well. Everyone's done it and made it work. That systemd is adopting arguably the most entrenched one in the Linux sphere is hardly as controversial as people seem to think it is.

First, udev is Linux-specific--it uses netlink sockets to listen for Linux-specific hardware events. Second, not everyone uses udev and dbus. mdev, smdev, eudev, and static /dev are also widely used and have well-defined use-cases that udev does not serve, and plenty of servers (and even my laptop) get along just fine without dbus.

Trying to justify udev and dbus because "everyone uses them so you should too!" is not only an example of the bandwagon logical fallacy, but also reveals your ignorance of and insensitivity to other users' requirements.

> The issue is that you need more integration with the security model to avoid having a rat's nest of security logic on top of it.

I said it above and I'll say it again here. The IPC layer does not know and cannot anticipate the security needs of every single application. If you try to design your IPC system to do this, you will fail to encompass every possible case. This is because threat models are not only specific to the application, but also specific to the context in which the application runs.

For example, you do not send your bank account password over an out-bound socket unless it has first been encrypted using a secret key known only to you and your bank. Your reasoning implies that the IPC system should be tasked with automatically enforcing this constraint, among others. Nevermind the fact that the IPC system will see only the ciphertext and thus will not know that the data it's about to send contains your password.

> However, DBus ensures that you don't have to write a ton of redundant code just verifying you are getting validly structured data and dealing with the nasty ways someone might try to exploit that.

So do plenty of stub RPC compilers and serialization libraries that have been around longer, are more widely used, and are better tested than DBus. However, neither DBus nor any of these solutions will help you with well-formatted but invalid data. Your application has to deal with that, since the validity of data is both application-specific and context-specific (so, not something the IPC system can anticipate).

Again, what's so special about DBus, besides the fact that it's the New Shiny?

> Imagine having to write a secure REST service where the only thing you had to worry about in the entire network protocol stack was the validity of the data expressed in the payload.

If I'm writing a secure service of any kind, you can be damn certain I'm thinking about a LOT more than input validation! Security encompasses waaaaaaaaaay more than that.

Even if security wasn't an issue, there is still a LOT more to worry about than input validation. Things like scalability, fault tolerance, concurrency, and consistency come to mind. There is no silver bullet for any of these, let alone an IPC system that solves them all at once!


> First, udev is Linux-specific--it uses netlink sockets to listen for Linux-specific hardware events.

I was thinking about this comment, and I realized this is probably the source of most of your angst, which leaves some great solutions on the table.

systemd isn't really creating a much more significant break with the systems you like, because it's building on top of Linux, which for the most part has already made the break.

The problem is, projects like GNOME, which have software you want to use, are integrating more tightly with Linux and specifically bits of systemd.

I think the obvious solution is a bridge/better interface. The contracts that GNOME is going to rely upon are at least going to be pretty well defined, and if you've got another system that works better, it shouldn't be hard for it to provide an equivalent, even compatible, interface.

If it really is demonstrably better, GNOME and other projects will likely adopt your interface/abstraction, and systemd will end up having to communicate through your interface. Even if they don't, it is a comparatively simpler effort for a software community to support a relatively small set of touch points that they want GNOME to be aware of, and maintaining a fork or compatibility layer is a perfectly reasonable solution (indeed, BSD already does this for Linux runtimes).

I can understand why it'd not be a perfect solution from your perspective, but if a bunch of developers contributing to a work you care about are going a direction you don't like, it's about as good an outcome as one could hope for.


> I was thinking about this comment, and I realized this is probably the source of most of your angst

No, what gives me the most angst is the arrogance of a certain segment of Linux+systemd users who think that being able to apt-get install systemd and write some minimal unit files for some trivial services somehow makes them domain experts on OS design. And these people seem to think that other users' requirements don't matter, since if they're not using systemd too, they're clearly doing it wrong.


That's a lot of angst for someone that is clearly going to crash and burn in short order and has zero impact on the design and architecture of the systems you and I work with...


This is a pretty good example of how a lot of the contempt for systemd seems to stem from one or both of ignorance of how systemd works and/or ignorance about what a good solution might look like: https://plus.google.com/+LennartPoetteringTheOneAndOnly/post...


> So, you basically want to turn IPC into CORBA.

No, very much not, because we don't really need an RPC mechanism here. We want something in the event/messaging space.

But it isn't even a want. If you don't have it, you end up with each of your components actually being very tightly coupled to all the other components it talks to, and you've got a truly monolithic mess on your hands.

> It's a bad idea to have the OS impose too much structure on your IPC, in the same way that it's a bad idea to have the base class in an object hierarchy try to take on too many subclass-specific responsibilities. This is because over-specialization of a component needlessly constrains the designs of systems that use it.

This isn't exactly a new concept or a new problem. There are plenty of existing cases where this is happening (basically every platform I can think of right this moment, though I'm sure there are plenty of exceptions), including in the current Linux udev mechanism.

> But we already have this universally-parsable data format: it's called human-readable text. It's why you can "grep" and "awk" and "sed" the outputs of "ls" and "find" and "cat", for example.

/me falls out of chair.

Yeah, that's worked out great. Never had a problem with init scripts not extracting the right column or handling a new variant in how the output comes out (or even better still, the dreaded "value with embedded whitespace").

But you know what? DBus is basically human-readable with a bit more imposed structure than generic streams. So, I think you are arguing in support of the systemd approach without realizing it! ;-)

> First, udev is Linux-specific--it uses netlink sockets to listen for Linux-specific hardware events. Second, not everyone uses udev and dbus. mdev, smdev, eudev, and static /dev are also widely used and have well-defined use-cases that udev does not serve, and plenty of servers (and even my laptop) get along just fine without dbus.

Very true. I was speaking in generalities. Point being, udev is out there and very thoroughly established as something that people seem to generally want.

> The IPC layer does not know and cannot anticipate the security needs of every single application. If you try to design your IPC system to do this, you will fail to encompass every possible case.

You have some ambitious notions for the systemd project that go well beyond the goals that critics say are overly broad in scope. This is for addressing a relatively narrow set of problems that wouldn't even come close to defining 1% of the IPC on a Linux system. I'm not suggesting we replace the entire Unix toolset with a complete new set of interfaces and programs (nor are the systemd guys). This is specifically for managing the interactions between devices & daemons... It's a well-established problem domain with some well-established roles & responsibilities and some pretty well-understood data message/event structures.

While it might use systemd/dbus/whatever to get notifications about various services and system events, YOUR BANKING SOFTWARE IS NOT SUPPOSED TO USE SYSTEMD TO MOVE MONEY BETWEEN YOUR ACCOUNTS!

> Again, what's so special about DBus, besides the fact that it's the New Shiny?

DBus isn't the new shiny. It's the old shiny. The new shiny would probably be 0mq or some of the new datagram protocols that people are experimenting with, along with various extensible binary protocols like MessagePack and Cap'nProto.

What's special about DBus is that it is already being used very broadly on Unix platforms for this kind of function and is well integrated into the system security model. The one bit of additional coolness it brings to the table is the support for socket activation, which tremendously simplifies start ordering and discovery -- indeed a VERY nice benefit, but one that could no doubt have been NIH'd independently.

> If I'm writing a secure service of any kind, you can be damn certain I'm thinking about a LOT more than input validation! Security encompasses waaaaaaaaaay more than that.

Yes, but the point is one derives substantial benefit from trusted components in the system that take care of their part of the problem. You don't benefit from having to reimplement an entire security apparatus with each component. This is basic security compartmentalization 101.

> Things like scalability, fault tolerance, concurrency, and consistency come to mind. There is no silver bullet for any of these, let alone an IPC system that solves them all at once!

You appear to be simultaneously claiming there is no silver bullet and being terribly upset that systemd isn't one.

Yes, it is no silver bullet. It's not even the huge sea change that some people seem to think it is. Rather, it is an incremental improvement over existing practices that gets rid of a bit more of the cruft and stupidity in the existing infrastructure. Doing that kind of thing right can really make a big difference for the system as a whole, but it isn't the apocalypse.


Somehow you think I'm talking about systemd and init scripts and the things they do. I'm not. The original question I replied to was about why pipes (or any OS-level IPC) shouldn't try to solve application-level problems.

My arguments are that the OS's IPC should not enforce an IPC record structure, but should enforce a consistent set of IPC access methods (i.e. pipes, sockets, shared memory, message queues, etc.) defined independently of applications. I think we're in agreement about the latter--if the OS were to let each application have its own IPC access methods, then there would be as many access methods as there are applications (leading to a tightly-coupled "truly monolithic mess").

I don't think we've reached agreement on the former. I claimed that there is no "best record structure" for all applications, so the OS shouldn't try to enforce one. I also mentioned that human-readable text is the universal data format, which is both a manifestation of this principle (i.e. the OS imposes no constraints on the structure of bytes passed between programs) and a desirable outcome since parsing text is super-simple to implement (by contrast, take a look at the examples in dbus-send(1) to see how painful the alternative can be). You disagree--you think the IPC system should also handle things like serialization and validation.

The problem is that serialization and validation are both application-specific (and even context-specific) concerns, and for the IPC system to address them, it has to gain knowledge from the application. But this lets the application set IPC access methods, which we've already agreed is a bad idea! My (extreme) example to prove this point was that pushing validation responsibility from the application into the IPC system would require it to handle ridiculous application-specific corner cases, like defining a socket class that makes sure that your bank account password won't be sent to the wrong host (still not sure how you concluded that that remark was about systemd). The point is, if you want your IPC system to handle validation for you, you're just asking for trouble.

The same type of problem occurs when you put serialization into the IPC system. The serializer has to know whether or not a string of bytes represents a valid application-defined record. If you make serialization the IPC system's responsibility, it needs application-level knowledge on whether or not an inbound message represents a valid message (which also leads to ridiculous corner cases).

DBus not only enforces structured records (bad), but also lets applications define their own IPC access methods (worse). The RPC-like nature of DBus means that both peers must not only agree on the interpretation of bytes in advance, but also agree on the semantics of accessing them. Unlike reading from a pipe, accessing the value of a DBus object by name can have arbitrary side-effects which the requester must be aware of. In the limit, this puts us into the undesirable situation of having each application-to-application pair agree on an IPC access method, leading to the tight coupling nightmare.

Don't get me wrong--DBus has its use-cases. OS-level IPC isn't one of them. I wish the systemd folks took some time to think about this, but they're too busy trying to make DBus into OS-level IPC with no regard for the consequences. See kdbus and the SOCK_BUS socket class it exports.

Now, nitpicks:

> But you know what? DBus is basically human readable with a bit more imposed structure than generic streams

/me falls out of chair too.

Now you're just being daft :) The more structure you impose on bytes, the less human-readable it gets. For example, I don't think I have to explain to you why this comment is more legible as rendered in your browser (unstructured text) than as raw HTML (structured records).

> DBus isn't the new shiny. It's the old shiny

CORBA is the old shiny ;) See also: https://en.wikipedia.org/wiki/Remote_procedure_call#Other_RP...

> The one bit of additional coolness it brings to the table is the support for socket activation

Not the IPC system's responsibility. See also: https://en.wikipedia.org/wiki/Xinetd

> You don't benefit from having to reimplement an entire security apparatus with each component

Of course--you use a library and an RPC stub generator for this. Not really part of the "design principles of IPC" discussion we've got going, though.


> The original question I replied to was about why pipes (or any OS-level IPC) shouldn't try to solve application-level problems.

That may be what you read, but the context of drdaemon's statement was specifically in response to a question about communications with the init daemon, and of course everything I said after was as well... Glad we got that settled.

> Not the IPC system's responsibility.

Hmm... IPC systems need ways of matching up the parties in a conversation, and one where you don't have to enforce who calls whom first, and where parties don't have to mutually agree upon the specific endpoints in advance, sure seems like something an IPC system might want to have... particularly one employed in an init system...

> See also: https://en.wikipedia.org/wiki/Xinetd

As discussed here: http://0pointer.de/blog/projects/systemd.html

There absolutely is a ton of overlap between what systemd does with socket activation and what Xinetd has evolved to... but as with everyone else doing OS design, there comes a point where you leave Xinetd behind and let the full potential of that trick work in your favour.

> Now you're just being daft :)

Me and the folks at Wikipedia: http://en.wikipedia.org/wiki/Comparison_of_data_serializatio...

Don't get me wrong, I think a lot of the Wikipedians are pretty daft, but they are as reasonable a judge of human readability as I can imagine, given what they do.

> CORBA is the old shiny ;)

CORBA is the old shiny-my-god-we-dont-need-nearly-all-of-that-and-it-really-benefits-a-bootstrapped-system-so-there-is-a-chicken-and-egg-problem-here. But yeah, close. I don't think anyone has seriously considered that since the OS/2 & Workplace OS days... and even then.

That said, I would say that THESE DAYS (unlike in its heyday), CORBA is a pretty awesome, robust, feature-rich _general purpose_ distributed IPC system.

> Of course--you use a library and an RPC stub generator for this.

Ah, so it is much more modularized if it runs as an executable piece of code in process than a piece of executable code out of process. Got it. ;-)

> Not really part of the "design principles of IPC" discussion we've got going, though.

Well, that's the discussion you're having. I'm trying to talk about the design constraints and appropriate solutions for the problem domain...


> As discussed here: http://0pointer.de/blog/projects/systemd.html

Lennart Poettering claims that you should use his software instead of someone else's software! I'm SHOCKED! Full story at 11.

Seriously now, did you honestly think that he would say to use xinetd over systemd? Do you honestly believe a developer will advocate the use of a competing piece of software over something (s)he produced?

> There absolutely is a ton of overlap between what systemd does with socket activation and what Xinetd has evolved to... but as with everyone else doing OS design, there comes a point where you leave Xinetd behind and let the full potential of that trick work in your favour.

Unless you don't feel like replacing small, simple, easy-to-use, well-tested xinetd with the 200K-line pile of C code that is systemd.

Besides, I've got your socket activation right here: Start the daemon, have the daemon open a port, and let the kernel swap it to disk. The kernel will swap it back in when it receives a connection for it.

Benefits:

* the daemon preserves state between "activations" for free

* the kernel gives you this feature for free

Security:

* the daemon doesn't have to trust another userspace program with anything

* the daemon can use mlock() to prevent sensitive pages from getting swapped

* if this isn't enough, you can encrypt the swap partition to resist offline attacks

Resources:

* If disk is too expensive, disk is read-only, you have no swap, you have no CAP_IPC_LOCK, the daemon would need to mlock() too much RAM, and you can't encrypt your swap, there's xinetd.

* Need to apply filters or QoS controls on connections before waking up the daemon? That's what the firewall is for.

Trivia:

* You can have xinetd trigger whatever event you want, since all it does is fire up a program and run it. This includes alerting other programs, like a service manager, that it got a connection, and maybe even sending along the message (or the file descriptor) if you want. There is no need for systemd to subsume this responsibility.

As you can see, "socket activation" is by and large a marketing gimmick.

> Me and the folks at Wikipedia:...

You think an article that compares data serialization protocols somehow proves your ludicrous claim that human readable text is less readable than marked-up text? Maybe daft was too nice a word...

> Ah, so it is much more modularized if it runs as an executable piece of code in process than a piece of executable code out of process. Got it. ;-)

Sir/madam, have you ever written an Internet-facing daemon? Obviously the bulk of the RPC logic lives in a shared library. You know, a logically distinct module that can be independently installed, loaded once, and independently maintained.

Besides, procedurally-generated RPC-handling code adds no technical debt to your project, any more than the compiler's generated assembler output does.

You seem to want to replace the RPC shared library with a separate process. Not only will this create a performance bottleneck, but it also makes it a single point of failure. If it crashes, all your daemons lose their connections. This is obviously highly undesirable, especially on servers.

> Well, that's the discussion you're having. I'm trying to talk about the design constraints and appropriate solutions for the problem domain...

I think I'm done with you. You deserve everything systemd will ever do for you.


> Seriously now, did you honestly think that he would say to use xinetd over systemd?

No... but I thought he might be able to pretty adequately explain how systemd exploits socket activation and contrast it with xinetd....

> Do you honestly believe a developer will advocate the use of a competing piece of software over something (s)he produced?

Well, I've certainly done it, so it is possible, but I wasn't referencing him as a persuasive voice... Even if I was, that'd be such a flawed and pathetic argument...

> Unless you don't feel like replacing small, simple, easy-to-use, well-tested xinetd with the 200K-line pile of C code that is systemd.

You might want to look at the code. The socket activation logic is a pretty clean & tight ~90K chunk of code in a handful of files... and for the record, xinetd isn't that slim, with nearly 25K lines of code spread over well over a hundred files, and that's if you only count the C source files.

> As you can see, "socket activation" is by and large a marketing gimmick.

Sigh... I can see you didn't read the article. The implementations aren't terribly different, and Lennart already made your points for you... Systemd does have some little tweaks that open up a bunch of different worlds of advantages.
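To be concrete about how small the daemon-side change is: the documented convention is just two environment variables (LISTEN_PID and LISTEN_FDS) and passed descriptors starting at fd 3. A sketch without the sd-daemon helper library (error handling elided):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/socket.h>

    /* Returns the listening fd handed over by the activator, or -1. */
    static int activated_fd(void) {
        const char *pid = getenv("LISTEN_PID");
        const char *fds = getenv("LISTEN_FDS");
        if (pid && fds && atoi(pid) == (int)getpid() && atoi(fds) >= 1)
            return 3;               /* passed fds start at 3 by convention */
        return -1;                  /* not socket-activated */
    }

    int main(void) {
        int lfd = activated_fd();
        if (lfd < 0) {
            fprintf(stderr, "not socket-activated; bind a socket yourself\n");
            return 1;
        }
        for (;;) {
            int c = accept(lfd, NULL, NULL);  /* the socket arrives ready */
            if (c < 0)
                continue;
            /* ... serve the connection ... */
            close(c);
        }
    }

Nothing systemd-specific beyond those two variables, which is rather the point: the same binary can still bind its own socket when started standalone.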

> Sir/madam, have you ever written an Internet-facing daemon?

Yes, but of course, in this context we're primarily focused on AF_UNIX sockets...

> Obviously the bulk of the RPC logic lives in a shared library. You know, a logically distinct module that can be independently installed, loaded once, and independently maintained.

It's very common, for example, for web apps to have a separate process that parses and validates inbound RESTful HTTP requests before passing them on to the main application process. You can and do run web apps that are directly exposed to the Internet, but nobody suggests this is to make the request processing logic more modular...

> You seem to want to replace the RPC shared library with a separate process. Not only will this create a performance bottleneck, but it also makes it a single point of failure. If it crashes, all your daemons lose their connections. This is obviously highly undesirable, especially on servers.

I see you are familiar with Erlang. ;-)

You raise a good point. Often, to reduce failure rates, people employ load balancers that work with various HA protocols to avoid losing connections. What do load balancers do again? Oh yeah, they are separate processes that receive inbound RPC requests, parse and validate them, and attempt to mitigate any inbound attacks before routing and forwarding them to the application itself...

And of course, a lot of web applications are largely front ends to a database, which means they themselves are processing RPC requests, formatting, validating and transforming them before forwarding them to a database for execution...

..and let's not get started about middleware... ;-)

> You seem to want to replace the RPC shared library with a separate process.

No. I really don't. I'm just pointing out that if you are looking for small, modular and loosely coupled components that are fairly resilient, nobody is going to critique moving a component from a shared library to a separate process on the basis that it intrinsically makes for more tightly coupled code.

Or wait, are you suggesting that systems where all these libraries are rolled up into one process would be more modular? [looks at critique of how systemd puts too much stuff into one process...]


> They don't solve race conditions in peers trying to locate each other (surprisingly difficult).

There's only one PID 1 - it would not be hard to locate its UNIX domain socket.

> They don't solve a standardized marshaling format.

Fair point.

> They don't come with an implementation to integrate with main loops for event polling.

They're file descriptors so they work with select, poll, etc.

> They have an inherent vulnerability in FD passing where you can cause the peer to lock up.

Please elaborate. There's nothing inherently vulnerable with FD passing. In fact, dbus relies on it so if you can't make FD passing secure then you can't make dbus secure either.
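For anyone following along, the mechanism being debated is the kernel's SCM_RIGHTS control message over an AF_UNIX socket. A sketch of the sending side (error handling elided; `sock` is assumed to be a connected AF_UNIX socket):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send one open fd to the peer; it arrives there as a fresh descriptor. */
    static int send_fd(int sock, int fd) {
        char byte = 0;
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        union {
            struct cmsghdr hdr;
            char buf[CMSG_SPACE(sizeof(int))];
        } u;
        struct msghdr msg = { 0 };
        msg.msg_iov        = &iov;
        msg.msg_iovlen     = 1;
        msg.msg_control    = u.buf;
        msg.msg_controllen = sizeof(u.buf);

        struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
        cm->cmsg_level = SOL_SOCKET;
        cm->cmsg_type  = SCM_RIGHTS;          /* "pass these fds along" */
        cm->cmsg_len   = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cm), &fd, sizeof(int));

        return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
    }

Whatever lockup trouble exists lives in how a receiver handles these messages, not in the syscall itself - and as noted, dbus is built on exactly this mechanism.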

> You can get into deadlock situations in your messaging code if you aren't really careful about message sizes and when you order poll in/poll out.

I think you're overstating the difficulty of doing this correctly.

> They aren't introspect-able to see what the peer supports.

I for one am OK with PID 1 not being introspectable.

> They make it super easy to not maintain ABI.

It's quite possible to maintain ABI compatibility, but still, how much of a moving target should PID 1's ABI be anyways?

dbus is a great solution for normal applications, but PID 1 is special. The generality provided by dbus is unnecessary. PID 1 should not be servicing requests from ordinary users, so any security concerns with using UNIX domain sockets directly are moot. If there are certain actions, such as shutdown, that need to be triggered by non-root users, then there should be a separate, unprivileged process that listens on dbus, implements authorization logic, and then relays the command to PID 1 over a UNIX domain socket using a very simple and easily-audited interface. That's good security and reliability engineering.
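A sketch of what I mean by "a very simple and easily-audited interface": one newline-terminated verb over a UNIX domain socket. The socket path and the command vocabulary here are made up for illustration:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void) {
        int s = socket(AF_UNIX, SOCK_STREAM, 0);
        if (s < 0)
            return 1;

        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/run/init.sock",   /* hypothetical path */
                sizeof(addr.sun_path) - 1);

        if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }
        write(s, "shutdown\n", 9);  /* the entire protocol: one verb */
        close(s);
        return 0;
    }

The privileged surface fits on one screen; the dbus-facing policy logic lives in the unprivileged relay, where a bug can't take down PID 1.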


> They have an inherent vulnerability in FD passing where you can cause the peer to lock up.

(It's kind of unfair but I could not resist): Just like dbus today? http://www.ubuntu.com/usn/usn-2352-1/


Exactly (since userspace DBus has to use sockets, kdbus uses memfd). Imagine if you had to fix that in every program that ever wanted to do IPC instead of just one place. Sounds like a security and maintenance nightmare.


Arguably dbus is basically UNIX domain sockets with a pub/sub routing mechanism (and much better security) built on top.


> So recently, I just gave up and moved to FreeBSD. Not a single regret so far.

It all depends on the use cases.

For me, FreeBSD is a no go given my desktop usage requirements.


What about the desktop-oriented BSDs such as DragonFly or PC-BSD?


Also no, since I require access to specific 3D programming SDKs (vendor-specific), modeling and video editing tools, and .NET (for work).

This currently makes me a Windows/Mac OS X guy in what concerns desktop usage.


There are a lot of people who agree with you. I feel a bit torn on some of the issues myself, but I'm in a position where I'll be dealing with the enterprise Linux distributions that are adopting systemd quite a lot, so regardless of what I run on my laptop (another BSD fan here), I appreciate the people who are building good resources for getting up to speed on systemd.


Viewing the comments, no, it appears you are not the only one disgusted.

I'm just glad for Slackware at this point.


Disclaimer: I develop uselessd, probably have a warped mindset from being a Luddite who values transparency, and evil stuff like that.

The author of this piece makes the classic mistake of equating the init system with the process manager and process supervisor. These are, in fact, all separate stages. The init system runs as PID 1 and strictly speaking, the sole responsibility is to daemonize, reap its children, set the session and process group IDs, and optionally exec the process manager. The process manager then defines a basic framework for stopping, starting, restarting and checking status for services, at a minimum. The process supervisor then applies resource limits (or even has those as separate tools, like perp does with its runtools), process monitoring (whether through ptrace(2), cgroups, PID files, jails or whatnot), autorestart, inotify(7)/kqueue handlers, system load diagnostics and so forth. The shutdown stage is another separate part, often handled either in the initd or the process manager. Often, it just hooks to the argv[0] of standard tools like halt, reboot, poweroff, shutdown to execute killall routines, detach mount points, etc.
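To show how small that strict PID 1 role can be, here is a minimal sketch (signal handling and error paths elided; /sbin/procmgr is a stand-in for whatever process manager gets exec'd):

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static pid_t spawn_manager(void) {
        pid_t pid = fork();
        if (pid == 0) {
            setsid();                       /* new session for the manager */
            execl("/sbin/procmgr", "procmgr", (char *)0);
            _exit(127);                     /* exec failed */
        }
        return pid;
    }

    int main(void) {
        pid_t mgr = spawn_manager();
        for (;;) {
            /* reap every orphan the kernel re-parents to us */
            pid_t dead = waitpid(-1, NULL, 0);
            if (dead == mgr)
                mgr = spawn_manager();      /* respawn the manager */
        }
    }

Everything else - service definitions, supervision, shutdown - belongs to the later stages.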

To stuff everything in the init system, I'd argue, is bad design. One must delegate, whether to auxiliary daemons, shell scripts, configuration syntax (in turn read and processed by daemons) or what have you.

sysvinit is certainly inadequate. The inittab is cryptic and clunky, and runlevels are a needlessly restrictive concept to express what is essentially a named service group that can be isolated/overlayed.

Of course, to start services on socket connections, you either use (x)inetd, or you reimplement a subset or (partial or otherwise) superset of it. There's no way around this, it's choosing to handle more on your own rather than delegate. In systemd's case, they do this to support socket families like AF_NETLINK.

As for systemd being documented, I'd say it's quite mediocre. The manpages proved to be inconsistent and incomplete, and for anyone but an end user or a minimally invested sysadmin, of little use whatsoever. Quantity is nice, but the quality department is lacking.

sysvinit's baroque and arduous shell scripts are not the fault of using shell scripts as a service medium; they stem from sysvinit's aforementioned cruft (inittab and runlevels) and the historical lack of any standard modules. BSD init has the latter in the form of /etc/rc.subr, which implements essential functions like rc_cmd and wait_for_pids. Exact functions vary from BSD to BSD, but more often than not, BSD init services are even shorter than systemd services: averaging 3-4 lines of code.

A unified logging sink is nothing novel, it's just that systemd is the first of its kind that gained momentum, but with its own unique set of issues. syslogd and kmsg were still passable, and the former also seamlessly integrated itself with databases.

Once again, changing the execution environment is a separate stage and has multiple ways of being done. Init-agnostic tools that wrap around syscalls are probably my favorite, but YMMV.

As for containers, it's about time Linux caught up to Solaris and FreeBSD.


> The init system runs as PID 1 and strictly speaking, the sole responsibility is to daemonize, reap its children, set the session and process group IDs, and optionally exec the process manager.

The process manager gets killed. How do you recover?

If you have respawn logic for it in PID 1, how do you log information about a failure to respawn the process manager?

Perhaps you build in some basic logic for logging. Where do you store the data? What if the user level syslog the user wants you to feed data to can't be brought up yet, because it depends on a file system that is not yet mounted?

There may very well be alternatives to the systemd design, but I've yet to see any that are remotely convincing, in that most of them fail to recognise substantial aspects of why systemd was designed the way it is, and just tear out stuff without proper consideration of the implications.

Most proposed alternative stacks to systemd fall down on the very first question above.

I agree with you that it doesn't seem like a great idea to stuff everything in the init system, but I don't agree that "one must delegate" unless the delegation reduces complexity, and I've not seen any convincing demonstrations that it does.

I'd love it if someone came up with something that provided the capabilities and guarantees that systemd does with independent, less coupled components, though.

But there's no way I'm giving up the capabilities systemd is providing and going back.


Wait what? What happens if the process manager crashes if you're running non-systemd: you might respawn it but not be able to log the fact that you did so. Worst case, you fail to respawn it and your system crashes.

What happens if the process manager crashes if you're running systemd: the process manager is in PID1 (or, equivalently, in a tightly coupled process that PID1 depends on - because the whole point of your post was that you can never get to a state where PID1 is working but logging isn't working), so your system crashes, every time. How is that better? And if that's really what you want, it's easy to configure a decoupled init system to do that.

Hey, some people like their logs to be sent as email. Maybe we should move sendmail into PID1 as well.


I think the ideas presented in the 's6' init system address most of these issues; I don't know why none of the distributions picked it up as an alternative: http://skarnet.org/software/s6/why.html http://skarnet.org/software/s6/s6-svscan-1.html


It doesn't address the logging issue, as far as I can tell. It appears to rely on the same logging solution as the original daemontools. I used daemontools extensively for a while, and it was great, and I like Bernstein's design philosophy, which appears to have been largely carried forward into s6, but it was simplistic, and suffers from a number of the same problems as a "raw" SysV-init, such as putting us back at the mercy of badly written start/stop scripts, and no dependency management.

If someone could come up with a systemd replacement which manages to keep the systemd features while using a design philosophy more in line with that of Daemontools, that would be fantastic, but it'd end up looking very different to s6. Some stuff could certainly be cleanly layered on top (such as using a wrapper to avoid the start/stop problem using the same method of cgroup containment as systemd; see the sketch below). Other things, such as explicit or implicit (via socket activation etc.) dependency management, I'm not so sure how you'd fit into that model easily.

I'd love it if someone tried, though. It would certainly make it easier to experiment with replacing specific subsets of functionality.
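For what it's worth, the cgroup containment wrapper mentioned above is quite approachable. A sketch against the cgroup v1 layout (the group directory is hypothetical and assumed to have been created beforehand):

    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        if (argc < 2)
            return 1;

        /* join a pre-created cgroup; every child we exec stays in it,
           so "stop" can later find all descendants reliably */
        FILE *f = fopen("/sys/fs/cgroup/example.service/tasks", "w");
        if (!f) {
            perror("cgroup");
            return 1;
        }
        fprintf(f, "%d\n", (int)getpid());
        fclose(f);

        execvp(argv[1], argv + 1);          /* run the real service */
        perror("execvp");
        return 127;
    }

That gets you the reliable process tracking without pid-files; the harder part, as noted above, is the dependency management.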


People actually _want_ the logging behavior of systemd? My impression is that it's the most widely hated part; I've heard endless stories of journald thrashing the filesystem forever, losing logs completely on corruption, etc. And even operating properly, its performance is comparable to grepping a flat text log, since despite having a "more efficient" format, it increased the actual data size by something like 4-10x.

Logs are essentially write-once, write-often, read-rarely data. As such, the optimal format is always going to be a flat, append-only file.
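And a "flat, append-only file" needs nothing exotic: O_APPEND alone guarantees every write lands at the current end of the file, even with multiple writers. A sketch (the path and message are made up):

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        /* O_APPEND: the offset moves to EOF atomically on each write */
        int fd = open("/var/log/example.log",
                      O_WRONLY | O_CREAT | O_APPEND, 0640);
        if (fd < 0)
            return 1;

        const char *line = "Sep 22 12:00:00 myhost exampled: started\n";
        write(fd, line, strlen(line));
        close(fd);
        return 0;
    }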


Also, coredumps don't really belong in the journal; I'd turn that off.


> If someone could come up with a systemd replacement which manages to keep the systemd features while using a design philosophy more in line with that of Daemontools, that would be ...

... in the indicative rather than in the subjunctive, and in fact already mentioned here once. http://homepage.ntlworld.com./jonathan.deboynepollard/Softwa...

> The process manager gets killed. How do you recover?

In nosh terminology, this is the service manager. If it gets killed, the thing that spawned it starts another copy. This could be systemd, if one were running the service manager under systemd. It could be the nosh system manager. Of course, recovery is imperfect. If one designs a system like the nosh package, one makes an engineering tradeoff in the design; the same as one does when one designs a package like systemd. The system manager and the service manager are separate, but the underlying operating system kernel will re-parent orphaned service daemon processes if the service manager dies. One trades the risk of that for the greater separation of the twain, and greater simplicity of the twain. The program that one runs as process #1 is a lot simpler, being concerned only with system state, but there's no recovery in a very rare failure mode. Indeed, the simplicity makes that rarity even greater, if anything. systemd makes the tradeoff differently: there's recovery in a very rare failure mode (which I've yet to see occur in either system outwith me, with superuser privileges, sending signals by hand) at the expense of all of the logic for tracking service states, and for trying to recover them (in circumstances where one knows that the process has failed somehow and might possess corrupted service tracking data), all in that one program that runs as process #1.

> If you have respawn logic for it in PID 1, how do you log information about a failure to respawn the process manager?

In the log that is there for the system manager. See the manual page for system-manager, which explains the details of the (comparatively) small log directory and the (one) logging daemon that is directly controlled by the system-manager, both intended to be dedicated to logging only the stuff that is directly from the system manager and service manager.

> Perhaps you build in some basic logic for logging. Where do you store the data?

In a tmpfs, just like systemd-journald does in the same situation. /run/system-manager/log/ in this particular case. Strictly speaking, this "basic logging" isn't built-in. In theory, it is replaceable with whatever logging program one likes, as the system-manager just spawns a child process running cyclog and that name could be fairly simply made configurable. In practice, difficulties with the C++ runtime library on BSDs being placed on the /usr volume rather than the / volume, and indeed the cyclog program itself living on the /usr volume when it has to be under /usr/local/, have made it necessary to couple more tightly than wanted here, so far. But those problems could go away in the future; if the BSD people were persuaded to put the C++ runtime library in the same place as the C runtime library, for example.

> Most proposed alternative stacks to systemd falls down on the very first question above.

In many ways, that's because it's a poor question that focusses on a very rare circumstance. As I said, I've yet to see either system exhibit this failure mode in real-world use absent my deliberately triggering it. (Nor indeed have I ever seen it occur with upstart or launchd.) Much better questions are ones like "Where are inter-service dependencies and start/stop orderings recorded?", "Is there an XML parser in the program for process #1?", "What makes up a service bundle?", "How do system startup and shutdown operate?", "How does the system cope with service bundles that are on the /var volume when /var hasn't been mounted yet?", "How does the system handle service bundles in /etc when the / volume is read-only?", and "What does the system manager do?". Those are all answered in the package's manual pages and Guide, of course.


Isn't that a pretty narrow corner case? I can count the number of times the process manager has been killed on one hand.


Add enough machines, and "narrow corner cases" happens all the time and at all the wrong moments.

The bigger point is that there are lots of these "narrow corner cases" all over a typical SysV-init setup, not least due to tons of badly written init scripts. The number of times services have failed to start because of them is beyond counting.

To produce a systemd alternative, creating something that competes favorably with SysV-init is insufficient. Today you also need to demonstrate how you deal with those corner cases, or why they don't matter - many of us have no intention of going back to the bad old days.


Also, you depend every day on another process that is special in much the same sense as the process manager: Xorg. If Xorg dies, all your desktop applications die. By your line of reasoning, Xorg should be moved into PID 1 too, which is definitely not a good idea.

I'm not saying that Xorg hasn't crashed; it did, rarely, when running RC code or proprietary drivers. In fact I've probably had as many Xorg crashes as kernel panics, which says something about how stable Xorg is. Still, I wouldn't want to run it as PID 1, where a crash would really bring down everything.


That is a pretty bizarre argument. I would conclude from init and Xorg rarely crashing that it is possible to write a reasonably stable daemon, and that perhaps it's not a good trade-off to introduce a lot of complexity into those daemons to be able to recover from crashes.


I don't understand how you come to the conclusion that putting Xorg in pid 1 would be even a remotely fitting comparison.

For starters, as an example, I have 100 times as many servers as I have desktops to deal with - for a lot of us Xorg is not an important factor. But the process manager is vital to all of them, server and desktop alike, if you want to keep them running. If the process manager fails, it doesn't matter that it wasn't Xorg that took things down.

Secondly, that X clients fail if the server fails is not a good argument for moving Xorg into pid 1 too, because it would not solve anything. If pid 1 crashes, you're out of luck - the best case fallback is to try to trigger a reboot.

Having (at least minimal) process management in pid 1 on the other hand serves the specific purpose of always retaining the ability to respawn required services - including X if needed. (Note that it is certainly not necessary to have as complicated respawn capabilities in pid 1 as Systemd does).

Having Xorg in pid 1 would not serve a comparable purpose at all: if it crashes, the process manager can respawn Xorg. If you then need to respawn X clients and be able to recover from an Xorg crash, there are a number of ways to achieve that which can work fine as long as your process manager survives, including running the clients under a process manager and having them interface with X via a solution like Xpra, or writing an Xlib replacement to do state tracking in the client and allow reconnects to the X server.

Desktop recoverability is also a lot less important for most people: every one of our desktops has a human in front of it when it needs to be usable. Most of them are also rebooted regularly in "controlled" ways. Most applications running on them get restarted regularly. People see my usage as a bit weird when I keep my terminals and browsers open for a month or two at a time.

On the other hand, our servers are in separate data centres and need to be available 24x7, and many have not been rebooted for years; outside of Android and various embedded systems, this is where you find most Linux installs.

While we can remote-reboot or power cycle most of them, with enough machines there is a substantial likelihood of complications if you reboot or, shudder, power cycle (last time we lost power to a rack, we lost 8 drives when it was restarted). Even with "just" reboots there is a substantial chance of problems that require manual intervention to get the server functional again (disk checks running into problems; human error the last time something was updated, etc.)

That makes it a big deal to increase the odds of the machines being resilient against becoming totally non-responsive.


I think you raised an interesting point here: 'for a lot of us Xorg is not an important factor'. I agree. The same could be said about some of the features that systemd provides that cause a lot of flames (binary logs). It has been said before that systemd is monolithic, and this is probably what makes switching so hard.

It is all-or-nothing, whereas if you could gradually replace the old sysvinit/policykit/consolekit/etc. stuff with systemd/logind then problems during that transition could be debugged more easily. You could also choose to not replace some components where the systemd/non-systemd replacement is broken.


   > The author of this piece makes the classic mistake 
   > of equating the init system as the process manager 
   > and process supervisor. 
I think it is a bit more subtle than that. The author makes the mistake of inferring an architecture from observed behavior and fails to ascertain where the warts come from, the architecture or the implementation. They aren't the only one; it's a common problem. The result, though, is kind of like playing 'architecture telephone', where each person implements what they think is the implied architecture and ends up with something subtly different than intended. The result is a hodgepodge of features around various parts of the system.

In the interest of full disclosure I must admit I was on duty when AT&T and Sun were creating the unholy love child of System V and BSD, I'm sorry.

The architecture, as bespoke by AT&T system engineers, was that process 1 was a pseudo process which configured the necessary services and devices which were appropriate for an administrator defined level of operation. Aka a 'run level.' I think they would have liked the systemd proposal, but they would no doubt take it completely out of the process space. I am sure they would have wanted it to be some sort of named stream into the inner consciousness of the kernel which could configure the events system so that the desired running configuration was made manifest. They always hated the BSD notion that init was just the first 'shell process' which happened to kick off various processes that made for a multi-user experience.

Originally users were just like init, in that you logged in and everything you did was a subprocess of your original login shell. It was a very elegant system, root's primal shell spawned getty, and getty would spawn a shell for a user when they logged in, everything from that point on would be owned by the user just like everything that happened before was owned by root. The user's login shell logged out and everything they had done got taken down and resources reclaimed. When the root shell (init) logged out all resources got reclaimed and the system halted.

But Linux, like SunOS before it, serves two masters. The server which has some pretty well defined semantics and the "desktop user" which has been influenced a whole bunch by microcomputer operating systems like Windows.

I wasn't the owner of the init requirements document, I think Livsey was, the important thing was that it was written in the context of a bigger systems picture, and frankly systemd doesn't have that same context. I think that is what comes across as confusion.


Getting tired of all the systemd hate. If you don't like it, don't use it. Instead of complaining and making useless-by-design wrappers and/or dumbed-down versions, why not focus your efforts on making a new, better init system and convincing people they should use it instead. systemd isn't final - it's software, and will come and go.

Not to mention, most of the systemd hate seems to be spread by only two main sources now, and both cite each other as sources (a little ironic).[1]

[1] http://www.jupiterbroadcasting.com/66417/systemd-haters-bust...

systemd was really designed with servers in mind, and really does bring a lot to the table for server admins.


The "new better init system" already exists. Several of them, in fact. The only difference? They have no intention of engaging in any shady realpolitik, or consolidating functionality unrelated to their core purpose.

Jupiter Broadcasting are an unreliable source, to say the least. I did watch that episode. When you use such pristine arguments as "Someone reimplemented systemd's D-Bus APIs, therefore systemd is portable!" (much like the Windows API is portable, because Wine exists) and claim that systemd is a "manufactured controversy" while responding to easy straw man arguments, there is a term for that kind of person: a shill.

I was also very amused by the Linux Action Show's coverage of uselessd. They spent the entire time whining about the name, thinking it makes fun of the systemd developers, when in fact it's making fun of ourselves. They also got mad over the use of the word "cruft" and later called us "butthurt BSD users".

Good to see that you bring some new insights, however. Very mature and enlightening.


Quite frankly, if this is the attitude one can expect from the uselessd developers... then I think this conversation is moot.

If a truly better init system already exists, then people who care strongly and/or have very specific use-cases where that init system excels will use it. Nobody is married to systemd.

One must also look at how many industry heavyweights are behind systemd now (even Canonical). I'm certain they have considered the pros and cons of systemd much more extensively than all of the armchair quarterbacks appearing in this thread. Perhaps you personally dislike systemd for what you think are good reasons, but know you are in the minority now (you weren't always).

Bottom line -- systemd is targeting servers, everything else is tertiary. Don't like it, then don't use it. But quit using every possible chance to spread needless hate. systemd is not an assault on you personally. No matter how loud you scream -- systemd is not going anywhere for the time being.


You're not even addressing any argument against systemd at all. You're just presenting a consolation:

"Hey, everybody, look at all the people using systemd! They must know better than you, so shut the fuck up and use whatever you want - no one is stopping you! By the way, systemd is meant for servers, even though the developers have never said anything like that and have made it clear that it's meant for all use cases."

In this regard, you are little more than a troll. Or a person who thinks popularity means quality. Both, even.


> systemd was really designed with servers in mind, and really does bring a lot to the table for server admins.

Which is totally ironic in that server admins hate it. (Speaking just for myself here =) )

I am a sysadmin of a medium-sized data center. I am in charge of 100-150 servers at any given point. None of the changes that systemd 'fixes' benefit me or my systems. Boot times? What's the point when it takes 10 minutes for the drive arrays to spin up? Logging? I pray a system never dies and I have to access those rotten binary log files from a live CD. Network changes/configuration? Nope, every server is configured with static network configs. Power management? Ha! That's funny. Downtime in minutes costs more than electricity does in a month.

I could go on. But there is one major caveat: As a laptop user, systemd is fantastic.

As my Debian servers get updated and start requiring systemd, I will just migrate them to OpenBSD. This process has already begun.

Systemd is changing things for the wrong group of people. Mobile/desktop users have a lot of wiggle room and areas that need improvement. Server admins need stability: in software, hardware, (script) syntaxes, and interfaces. Users need everything that systemd offers.

I will concede that systemd might be a good fit with Docker, and I am looking into that too; but I guarantee you it will be on its own box and not homogeneous with the rest of my network.


All of Poettering's projects seem to be lifted straight from OSX.

Ran into a recent interview where he kept referring back to the OSX sound system when talking about Pulseaudio, and Avahi is zeroconf/bonjour. And with Systemd he constantly makes references to Launchd, the OSX "init".

BTW, Red Hat just now announced that the future of the company would be Openstack and the cloud. Fits perfectly with the push for containerization in Systemd.

More and more I get the impression that the "developers" mentioned as benefiting from Systemd are the likes of the Reddit crew. Reddit pretty much could not exist without Amazon's cloud services.

Meaning that for Poettering the future is two things: cloud computing and cloning OSX. And given the number of web monkeys that seem to sport a Mac, I am not surprised at all.

I just wish that they could avoid infecting the rest of the Linux environment...


I realize you were speaking in generalities but to be specific I don't hate systemd. I do dislike "emergent" architectures but that is more of a OCD systems analysis curse I have to deal with.

This statement, "systemd isn't final - it's software, and will come and go.", is the one that most captures my angst. And you can replace 'systemd' with 'linux' or 'gstreamer' or 'webkit' or 'gcc' or 'fsck' for that matter. Not only are they not 'final' but what they would be able to do if they were 'final' is left unspecified. That puts the system on the DAG equivalent of a drunken walk. And users don't seem to like it when their systems are evolving randomly.

I really enjoyed the early RFC process of the IETF because we could argue over what was and was not the responsibility for a protocol, what it had to do and what was optional, and what it would achieve when it was 'done.' Then people compared what they had coded up. When the architecture is the code and the code is the spec, my experience is that sometimes we lose track of where it was we were going in the first place.


To avoid using systemd in practice basically means switching distributions, or switching away from Linux entirely. Depending on your setup, this may be far from trivial.

I think systemd has a lot going for it, and it's been pretty stable on my Arch notebook, but I'm not too thrilled with the way it takes over so many tasks at once and eschews text log files. What's frustrating is that I didn't have much choice in the matter. Yeah, I could switch to another distro, but since Red Hat, Suse, and now Debian and Ubuntu are switching to systemd, that leaves Gentoo or BSD or something. Which are perfectly fine in their own right, but that's pretty drastic if I just want to avoid systemd.


> but since Red Hat, Suse, and now Debian and Ubuntu are switching to systemd

With so many heavyweight Linux enterprise companies jumping on systemd, one must wonder what consideration they have given the issue? I'd wager, a lot. Also, note that systemd is really designed with servers in mind, so it's not surprising for a desktop/laptop distro user to find it bothersome (it wasn't designed with your use-case in mind). With that said, the beauty of Arch is you can yank systemd out and go with whatever init system you desire.


RH just announced that their future will be cloud computing (Openstack). I think Ubuntu is following right behind. Suse I can't comment on, as I haven't followed that distro in ages. Debian is more of a puzzle, but I suspect it was a case of "don't have the resources to be contrarian".

As for the Systemd design, I think it started with Poettering drooling over OSX Launchd (his other projects also seem to be straight OSX feature clones), and it has since been hitched to the cloud computing push within RH.

In essence, the kind of server that Systemd seems to favor are cloud computing instances where storage and networking can come and go as the back end gets configured for new needs.

Traditional static big iron and clusters don't really benefit much from the "adaptive" nature of Systemd. If those break, they usually have a hot reserve taking over while the admins get to work figuring out what broke.


Try reading the actual discussion when systemd was being proposed as the default. It wasn't because they "don't have the resources to be contrarian".


systemd is designed with all use cases in mind. I have yet to see any sentiment that it's specifically for servers, desktops or embedded. Lennart's "Biggest Myths" would have your statement decried as an utter falsehood.


Characterizing criticism as "hate" is fallacious and serves the opposite function of what you wish. People see support of systemd as being just ignorance and whining. If you want to support systemd, then do it with actual arguments.


> The author of this piece makes the classic mistake of equating the init system with the process manager and process supervisor.
> To stuff everything in the init system, I'd argue, is bad design.

The author is not making any mistake at all, or no more so than you are.

I'm sure you both value engineering principles like separation of concerns and a single source of truth.

The author believes that by removing the redundancy between initd / xinetd / supervisord / syslog the system is improved.

You disagree, and believe that these are separate concerns.

That's fine, you have different values / judgements in this matter. But saying he's `mistaken` for not agreeing with you is childish.


Well said. I recently migrated to FreeBSD after trying systemd on Arch and seeing that Debian and Ubuntu are planning to move too.

The dead-simple rc.conf file seems so much nicer than the stuff I was dealing with in the entire world of Linux-based systems; it's like going back to the way Arch used to be when I really liked it.


This. The FreeBSD rc system just works, is well documented and is small enough to understand by one person without too much effort.


As an init system it works fine, but you do end up having to find or invent a bunch of additional stuff if you want similar functionality to what's driving some of the systemd use-cases. The result might still be better (I haven't done a detailed architectural comparison), but you do need something. For example one of the things I find useful about the "systemd way" of things is that it provides, finally, a story about how to apply cgroups to services in a sane way. The kernel provides the APIs, but actually using them from userspace was not fun previously, with multiple incompatible systems, largely based on tangles of shell scripts that had broken corner cases.

With FreeBSD, my impression is that manual shell scripting is still the norm. Integrating RCTL (FreeBSD's resource-limiting facility) with service management basically consists of manually writing in a bunch of imperative calls to RCTL into scripts. There's no way to configure services with limits declaratively, ensure the right thing happens when services are started/stopped, etc., precisely because there's no integration between the RCTL facility and the process-management or init facilities. Or at least I haven't found a way. The closest is that if you need such integration only for jails, you do have the option of third-party "monolithic" management systems, such as CBSD.


RCTL is a stateful database. It's not there yet, but the right solution for managing this declaratively as with anything else on a Unix platform is Ansible/salt/cfengine or something like that, not building those tools into a superset service that manages everything.

I will also add that, in my experience, managing disparate platforms is never a reality. There are perhaps two core platforms at a company, and they are migrated together in blocks, all together. For us, we have a couple of legacy Ubuntu machines that are being canned this month. Everything else is Windows 2012 R2 and FreeBSD 10.

The "systemd way" is to provide a monolithic abstraction over many things with a DBus API. It's the equivalent of adding WMI and a registry to a Unix platform i.e. it's against the fundamental tenets of the operating system. Having managed windows systems for years, this is really not something I want to see. Time will tell, but if I'm not right about that then I'll eat all three of my hats.

And yes I have experience with systemd as well through evaluation of RHEL7. Within two hours, I'd hit a wall with timedatectl enabling NTP on the machine. The steps to debug the mess were horrible and the issue eventually just spontaneously disappeared.

That's reminiscent of the stateful nature of Windows, which brings back many years of pain from the 1990s and '00s for me.


don't you just have to turn off chrony to fix that?


I've been tinkering with NetBSD the last couple of weeks in a VMWare Fusion virtual machine, and the RC system it uses is very nice. OpenBSD's is nice as well.


NetBSD can run as a Xen dom0 host too, though normally I use Alpine Linux/OpenRC as dom0 because it's so small.


>NetBSD can run as a Xen dom0 host too

Isn't there a bunch of fine print that goes along with that?

>Alpine Linux

How? The installer doesn't even work.


Are there reasons why Linux couldn't just adopt it?


Feel free. Most people won't, as systemd solves very real problems that people care a great deal about, whether or not you like the way it has solved them.


15-year Linux user here: systemd is pushing me hard towards leaving Linux. Please tell me what very real problems that people care a great deal about were solved by turning log files from text to binary.

Also, I care about being able to use my computer, and for the first time in 15 years a systemd update caused my computer to needlessly drop into systemd emergency mode at boot. With this emergency mode being broken, I was effectively locked out of my computer, because an optional external USB drive that had been defined in fstab with no issue for a couple of years now required a nofail option. Now consider that this computer is located in a remote location 1000 km away from where I live.

To me, systemd has already caused way more very real problems I care a great deal about than it has solved. Reducing boot time by a few seconds is not something I care that much about.


For me Linux is pretty dead already, because having survived the Unix wars of the 1990s, I can't entirely trust the direction it's going in. There are so many parallels to that era at the moment, it's not funny. There are large vendors pulling it in separate directions (Canonical, Redhat, Google). At the end of the day, much like back then, customers will suffer from terrible support, fragmentation and political battles.

I just want to get shit done and solve problems and anything that risks that gets outed now.

FreeBSD hits the sweet spot, probably followed by NetBSD.


"There are large vendors pulling it in separate directions (Canonical, Redhat, Google)."

It's pretty clear how that's going to shake out, isn't it? Google is pretty much a non-issue here; yes, Android and ChromeOS use a Linux kernel base, but they have no impact on any mainline distros, and there's no indication Google wants them to. So it reduces down to two parties fighting for control: Canonical and Red Hat. And Red Hat is going to win. Canonical doesn't have the resources to go its own way on more than a handful of fronts (this is why when Debian switched to systemd Upstart was killed off; Canonical is far too reliant on Debian as an upstream to fight every issue), and their requirement for a CLA to accept anyone else's code means they are entirely reliant on their own coders, as nobody wants to sign Canonical's CLAs. We'll see how long they can stick it out on Mir, but they don't have the resources to fight a war with Red Hat on two fronts, so that's the only issue I expect to see them fighting over.


Yes and RedHat is IBM circa 1997 and Canonical is HP circa 1997. The Sun of 1997 is Oracle (again).

Creeping up on their arses is Microsoft (again) with Azure and incredibly cheap commercial offerings.


I've not claimed that Systemd gets everything right. I've claimed it gets enough right that a lot of people will be entirely unwilling to give up those advantages and return to something that for many of us is now an inferior solution, just because there are things about Systemd we may not agree with.

For my part, I agree that binary logs was not necessary, though I've yet to encounter any issues with it, and journald certainly does provide a lot of functionality that makes it more pleasant to deal with logs than before. All of that could have been achieved while retaining text logs, though. But at the same time, it is still trivial to log to text files by telling journald to log to syslog if that matters to you.

Other things I do care about include getting rid of init scripts - that is a persistent source of problems. I'm inclined to believe not a single one of them is bug-free, though that's probably a bit uncharitable. Unit files help. So does cgroup containment, to rid us of the abomination that is the need to rely on pid-files and hope that works reliably (it doesn't, since pretty much nobody is thorough enough when writing init scripts). Other things include better recoverability in cases where critical processes get killed, and well thought out handling of early stage logging. And things like systemd-cgtop and systemd-cgls are nice.
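For comparison with a typical init script, a minimal unit file covering the common case looks like this (the names are placeholders):

    [Unit]
    Description=Example daemon

    [Service]
    ExecStart=/usr/local/bin/exampled
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

No pid-file handling, no start-stop-daemon incantations; the cgroup tracking and restart logic come from the manager.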

I'm sure we'll eventually get solutions that split more of this functionality out into more cleanly separated components, and that'll be great, but until then I'm happy to stay with systemd.

As for the problems you ran into, that sucks, but any large change like this will have painful teething problems and they're not a good basis for judging whether it's a good long term solution - I've had plenty of boot failures caused by problems with init scripts as well.

Boot time is a long way down the list of benefits for me too - most of our servers have uptimes measured in years, and even my home laptop usually goes a month or two between reboots.


I couldn't agree more. If it becomes impossible to use Linux without systemd then I won't be using Linux any more.


There's nothing stopping a Linux distro doing this.... but

It would be a step backwards: it is simpler, and does less stuff, so booting would be slower and some features are missing.


How often do you reboot your kit? Boot time is such a stupid metric even on laptops and stuff where you just suspend/hibernate.

My BSD systems (not front-facing and therefore on a lesser patch cycle) rarely get rebooted and neither do the processes so this is indeed moot for me.

Proof:

http://i.imgur.com/tZsM82Q.png

Yes that's a memcached uptime on a host that has had 10,185,367,932 cache hits...


Suspend/hibernate on free unixes is a nightmare of incomplete support and buggy drivers, so not much of a solution. I don't know a single person who has a working laptop suspend/resume setup on FreeBSD (though it's theoretically possible), and it's not usually recommended to rely on it even if you could get it working. Linux has somewhat more complete support, but it's still very hit or miss, and it's common for stuff to be wonky after a resume, even when it does work.


I'll give you that to a degree. It does suck on FreeBSD with my X201. Nothing works but I'm being cheeky now and running it in VirtualBox on top of windows (which I need for other work).

OpenBSD however works wonderfully.


The main question is: would it work? For me, systemd - or applications starting to support systemd only - breaks things that used to work (something wrong with policykit/consolekit, for example, with sysvinit+systemd-shim).

Also there are some peculiarities in the way the LSB init script compatibility is implemented in systemd: it tries to be 'smart' and remember the scripts' state. So you start an init script, and it fails for some reason, perhaps even exits with an error code; perhaps you are still developing that init script. Now, after fixing the problem, running the init script / systemctl start doesn't even try to run the script, because systemd thinks it is already running. You first have to tell it to stop the script (which fails), and only then can you run it again.


Why would booting be slower?

My BSD systems boot quickly enough for me.


My FreeBSD system has a 30-second timeout during which the entire boot process is halted because it waits for a default route to the internet... which it won't get, because I haven't configured one.

It's pretty dumb, and not enough of a problem for me that I'd figure out how to work around it, but it's a pretty good example.


Just set in rc.conf:

   defaultrouter="NO"
It shouldn't hang.


I don't think this 30-second timeout is a bug in FreeBSD or in rc. You may want your server to wait for the network to become available. Ubuntu Server has the same "waiting for network" timeout:

http://askubuntu.com/questions/63456/waiting-for-network-con...


But the only reason this is necessary is that the boot system isn't smart enough to start whatever can safely be started through proper dependencies.

My experience is that a substantial amount of time is wasted weeding out undesired timeouts in startup scripts, because they increase downtime.


Slackware uses exactly this system.


The really sad thing is that Arch Linux used to pride itself on being BSD-like. It used a similar rc.conf system. Each service had exactly one init script, not the weird multiple-file init script system Debian has. Before systemd, Arch was the closest you could get to BSD simplicity on Linux.


Slackware and gentoo have both been closer for a very long time.


I appreciate your work on uselessd. Nothing demonstrates a counterpoint quite like written code. The last thing we need is another ranting systemd blog post.

We always need strong alternatives, even if they face the risk of being taken as simply a political statement, the effects of the statement will be seen in the decision-making down the road.


Yes! What the systemd "discussion" has been missing is viable alternatives that are somewhat comparable in features. Most of the flamewar is focused on people who consider the problems that systemd tries to solve non-issues.

There are some real issues being pointed out (particularly regarding monolithic design) but no-one has attempted to actually fix that in any way (in code, that is).

While it is unlikely that I will end up using uselessd (unless it "wins" in some way, e.g. in embedded space with uclibc and musl), I very much welcome the effort to bring out alternatives that address the same problems as systemd, yet trying to fix some of the issues there are.


There are plenty of viable alternatives: s6, runit, OpenRC, and so on. I'm not really convinced uselessd is a viable alternative - it keeps way too much of the badness of systemd, but I guess that makes it viable if you were willing to consider systemd to begin with.

A much better solution for the problem of user-facing applications (e.g. "desktop environment" software) depending on systemd's public dbus interfaces is to provide a fake service that gives them fake data - the same way you would sandbox Android apps for privacy by giving them a fake Contacts list, etc.

As for the other main "public interface" of systemd that things are starting to depend on, the systemd service file format, it would be easy to add support for this file format to any other process supervision system.
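
For context, the file format in question is a simple declarative ini-style file, so supporting it elsewhere is largely a parsing exercise. A minimal sketch (all names hypothetical):

    [Unit]
    Description=Example daemon
    After=network.target

    [Service]
    ExecStart=/usr/sbin/exampled --foreground
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target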


Hey, Rich. First of all, thanks for musl libc.

At the moment, yes. We do keep much of the internal systemd architecture intact, but we do eventually aim to partially decouple it, or at the very least expand the breadth of configure prefixes for tuning its behavior. We are a pretty early-stage project, after all.

Indeed, the systembsd and systemd-shim projects are working on the D-Bus interface reimplementation part.

Our goal right now is to be a minimal systemd base that can be plugged in interchangeably and have the vast majority of unit options be respected.

There already are systems that offer primitives to reuse systemd units. nosh is one of them, and there also exist scripts that can convert systemd services to SysV initscripts, and even the opposite (dshimv).


> To stuff everything in the init system, I'd argue, is bad design.

You've slain the straw man... ;-)

systemd doesn't put everything in pid 1. It defines some mechanisms to orchestrate the whole thing that include pid 1.


Whether it's all in pid 1 or not is irrelevant. What matters is that it has a monolithic architecture, whereby breakage in any one part or their communication channels can bring down the whole system. This is not just a theoretical concern; it has REPEATEDLY happened.


> Whether it's all in pid 1 or not is irrelevant.

All of the existing mechanisms are also a "system" that comprises a ton of processes... If systemd is monolithic on these grounds, then so are they.

> What matters is that it has a monolithic architecture, whereby breakage in any one part or their communication channels can bring down the whole system.

Uh-huh... I think you are speaking to branding more than technology. Keep in mind that systemd is using existing components in much the same fashion they were already being used (hence the accusations about them "absorbing" udev).

If you look at the architecture, it has very clear points of encapsulation and is much more structured than the loosey-goosey stuff that came before it.

> This is not just a theoretical concern; it has REPEATEDLY happened.

Yeah... with existing systems. There's any number of points of failure that are the stuff of legends in Unix system administration. Obviously, it will take time to get systemd thoroughly cleaned up, but it's not hard to look at the design and see how it provides plumbing to simplify and avoid a whole host of these scenarios.


Systems which do not use systemd simply do not have these problems because there is no analogous component. If syslogd goes down, the worst that happens is you don't get logs. Init doesn't go down because it essentially has no inputs. Individual services can go down if they're poorly written, but they won't bring the system down with them. Traditional systems (the hideousness that is "sysvinit") have plenty of other different problems (e.g. race conditions in process supervision), but deadlock or bringing down the whole system is not one of them.

With systemd on the other hand, all of the components under the systemd banner are tightly interconnected and communicating. In particular pid 1 has ongoing communication with multiple other components, and misbehavior from them can, both in theory and in practice, deadlock the whole system. In case you missed it, this is roughly what "monolithic architecture" means: even though the components are modular, they're designed for use in a tightly interwoven manner that's fragile. It's completely the opposite type of "monolithic" from the kernel, which has everything running in one address space, but with architectural modularity, where interdependency between components is kept fairly low.


> In particular pid 1 has ongoing communication with multiple other components, and misbehavior from them can, both in theory and in practice, deadlock the whole system. In case you missed it, this is roughly what "monolithic architecture" means: even though the components are modular, they're designed for use in a tightly interwoven manner that's fragile.

You mean like how, if even one of my SysV init startup scripts hung indefinitely, all subsequent components would never get started? Or are you referring to how the whole system would hang when the root filesystem device was temporarily unmounted (really fun with network filesystems, although to be fair, NFS implementations eventually became robust enough that this wouldn't be a complete disaster)? Or are you referring to fork bombs, or those race conditions you mentioned, that would bring my system to a complete standstill? Or are you referring to how a race condition with date formatting in syslog actually hung my entire system time and again? Or perhaps you mean how a lot of init scripts had little (if any) retry logic, such that you'd often end up with a critical component of your system not running... often in ways where you'd not find out about it, or worse still, not be able to do anything about it without some really intrusive intervention? Or maybe you are referring to how, if you got your init startup order wrong for one of many critical components, you'd have a deadlock before you ever got a chance to actually fail... Or maybe you're referring to how the right kind of getty failure, triggered by a weird byte in a config file, could turn your system into a paperweight?

It's so hard to tell which scenario you are referring to. ;-)


> If you look at the architecture, it has very clear points of encapsulation and is much more structured than the loosey-goosey stuff that came before it.

Then why can't it offer a stable interface that lets me swap out e.g. udev with eudev, like I could before?

That's what makes it monolithic - not the implementation details but the absence of well-defined interfaces between the pieces.


> Then why can't it offer a stable interface that lets me swap out e.g. udev with eudev, like I could before?

I'm not sure it can't... To the extent it _doesn't_, I imagine it is not much of a priority, since eudev is a fork of udev, and is lacking the enhancements the systemd project has been making to udev.


From experience with Linux init scripts, I'm far less concerned about systemd than about SysV-init-style boot processes, to be honest. Years ago I lost track of the number of boot issues I dealt with that were caused by poorly written init scripts.


I have an anecdote from a short while ago. We had a server with several database instances (each with its own init script) running on it.

The scripts were buggy in such a way that starting the database would bring it up okay, but prevent the rest of the instances from starting. Also, using the "stop" directive would successfully stop the database... and all the others, as well.

The bug probably occurred because the init scripts were horrible to begin with and had been copied (ugh) to accommodate more instances, without the necessary modifications to not screw things up.
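
A hypothetical sketch of how a copied init script can produce exactly this failure mode (paths and names invented here, not the actual scripts in question):

    #!/bin/sh
    # every copied per-instance script kept this hard-coded path,
    # so only the first instance can ever record its pid...
    PIDFILE=/var/run/db.pid

    start() {
        [ -f "$PIDFILE" ] && return 1       # later instances refuse to start
        /usr/bin/dbserver --instance "$1" &
        echo $! > "$PIDFILE"
    }

    stop() {
        # ...and "stop" matches every running instance, not just this one
        pkill -f dbserver
        rm -f "$PIDFILE"
    }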


Sounds familiar..

One of my "favourite" problems with init scripts for service stop/start is that way too many of them basically throw their hands up if the contents of the pid-file don't match what they expect. Never mind that 90% of the time when I actually want to run stop/start/restart, it is because something has crashed or is misbehaving, and there's a high likelihood the pid-file does not reflect reality.

So a far too common scenario is: the process dies. You try to run "start". Nothing happens, because the pid-file exists and the script doesn't verify that the pid actually matches a running process (or it checks that the pid matches a running process, but not that the process with that pid is actually the one we want).

OK, so we try "restart" or "stop". We get an error, because the pid-file content does not match a running process, and rather than cleaning out the pid-file and starting the process, the script just bails.

Basically I don't trust init scripts from anyone but distro maintainers themselves, and even then there are often plenty of edge cases that cause problems.

Regardless of systemd as a whole, I really like its solution to this: using cgroups to ensure it can keep proper track of exactly which processes belong to a service, without resorting to brittle pid-files, which rarely seem to be properly implemented. Of course the cgroups approach could be implemented as a separate tool, but pid-files badly need to die.
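
For comparison, here is roughly the pile of checks a pid-file-based "status" needs before it can be even halfway trusted (the pidfile path and process name are made up):

    PIDFILE=/var/run/mydb.pid
    if [ -f "$PIDFILE" ]; then
        PID=$(cat "$PIDFILE")
        # the pid must exist AND still belong to our daemon,
        # not to an unrelated process that recycled the pid
        if kill -0 "$PID" 2>/dev/null &&
           grep -q '^mydb' "/proc/$PID/comm" 2>/dev/null; then
            echo "running (pid $PID)"
        else
            echo "stale pid-file; cleaning up"
            rm -f "$PIDFILE"
        fi
    fi

Even this misses daemons whose worker children outlive the pid on record, which is precisely the gap the kernel's cgroup tracking closes.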


I wasn't talking about systemd, in particular. I was using a hypothetical example to counter the OP's point. That said, systemd's main.c still does have significantly more baggage than most other systems I've seen (never looked into Solaris SMF internals, for instance).


Case in point .. today .. rebuilding an X11 desktop system on Gentoo, some weird set of dependencies around GNOME beneath the window manager wants to pull in systemd. I finally worked out a way around it, but it wasted half an hour of my time.

My take: containers are not well managed by general, daemon-oriented process supervisors with a localhost-oriented purview. However, those supervisors would do well to use container-related features to better secure and manage daemons as appropriate. In the future, processes will more likely be managed across clusters by parallel-capable supervisory systems with high-availability goals and knowledge of network infrastructure configuration, load, and topology. Fewer and fewer people will even see the init system, except perhaps behind a logo, or as it flashes past while booting their device in debug mode.

(Edit: stumbled on http://www.gossamer-threads.com/lists/gentoo/user/284741 which explains the scenario .. would hate to be on BSD)


I think systemd actually clears up a lot of stuff, as the article describes.

The main thing that scares me is the binary logging format. I can think of some benefits, but mostly it just seems scary. I guess I will wait and see whether the benefits outweigh the rest.
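
For what it's worth, the journal isn't meant to be opened directly; you read it through journalctl, which also has a built-in consistency check addressing part of the corruption worry. A few invocations (the unit name is just an example):

    journalctl -u nginx.service --since today   # logs for one unit
    journalctl -o json                          # dump entries as structured JSON
    journalctl --verify                         # check journal files for corruption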


I was actually pretty happy with the way upstart handles logging... it's about as transparent, and easier to deal with.


I'm new to the systemd debate, so I'm still reading and reviewing the situation. But the more I read, the further I drift from systemd.

In principle, everybody agrees on the need for a new and modern init system, but I'm not even sold on that. sysvinit is still holding its own with extra tools and doing its job cleanly. By introducing a fully reimplemented and still controversial system, with many dependencies and a need to rework much of our existing software, we are not helping the issue but muddying the waters.

And what's the fascination with boot times?

Nowadays on desktops nobody reboots: you boot once and hibernate/suspend forever. And for servers, if you are rebooting, you are doing something wrong. So the effort spent building controversial init systems would be better directed at optimizing hibernate/suspend in the kernel.


Boot times are the least important reason to use systemd.


There's a lot of negativity going on here...

As far as my experience goes, I've found it actually works really well on all the servers I've moved to CentOS 7, and on the Fedora desktop I play around with (my main dev machines are Macs) it has significantly improved boot time...

I'm sure there are some valid concerns about design and such, but as far as my usage in production goes, I can't say I've had a single problem with it... It also makes things a lot easier when I need to write service files, compared to the messy init scripts of before.


(Out of a confused mind.)

There is no fundamental problem that it "solves" which other UNIXes presumably still have. The problem does not exist. AIX, Solaris, *BSD, and many old-school Linux guys will tell you that.

Also, any old-school guy will tell you that a kitchen-sink, put-it-all-in design is a wrong way.

btw, supervision of user processes is a task for the OS kernel, which handles it via a bunch of specialized syscalls, not for some "man-in-the-middle" user-level daemons.

There is actually nothing to talk about, except some ambitions and bad designs.


> btw, supervision of user processes is a task for the OS kernel, which handles it via a bunch of specialized syscalls, not for some "man-in-the-middle" user-level daemons.

I'm pretty sure /sbin/init runs in userspace even on *BSDs and Solaris, and does process supervision.


You would be surprised by how few processes it supervises. Gettys (remember those?), what else?

Init scripts have nothing to do with /sbin/init. Surprise?


Solaris guys certainly wouldn't tell you that, unless they haven't used Solaris in the last decade. Since Solaris 10 (2005), it has a unified process supervision and init system, with declarative config files, dependency boot, and all that other good stuff that's just now coming to Linux. It's part of the Service Management Facility (SMF), which replaced the old init-script-based system. The Illumos distributions all use SMF as well, not BSD or SysV-style init/rc scripts. A few design decisions are now seen as warts (today, XML probably wouldn't have been chosen for the config files), but I think SMF is pretty widely seen as an improvement vs. managing services with shell scripts.
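
For anyone who hasn't touched Solaris in a while, day-to-day SMF interaction looks something like this (the service name is illustrative):

    svcs -a | grep http          # list services and their current state
    svcadm enable network/http   # start the service now and at boot
    svcadm restart network/http
    svcs -x                      # explain why anything is in maintenance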


Please tell us about these improvements over shell init scripts.

Let's say I have a program to run - nginx - a very popular program. Tell us, please: what improvements does the good stuff give me?

BTW, in my opinion, SMF was written "because we can" (use XML for configs and Java for its processing), not because shell init scripts were broken. There are many strange reasons why some programs were written or have been adopted.


"Also, any old-school guy will tell you that a kitchen-sink, put-it-all-in design is a wrong way."

That's pretty close to a no true Scotsman fallacy, isn't it?



