The fact that the vast majority of these bugfixes and CVEs are minor is a testament to the continuing stability and quality of Debian and its ecosystem. I've run Debian on several machines for years and have upgraded confidently without a hitch. A big thank you to the maintainers and developers for making this possible.
If only Debian had the documentation that Arch does. I know there's man and help, but the Debian wiki seems sparse and out of date, whereas the Arch wiki is incredibly thorough. Sometimes I find myself at the Arch wiki to do things in Debian, but that doesn't always work. So I generally stick to Fedora, which at least stays close to upstream.
This used to be a thing! I remember the first time I installed Debian (back in the 2.0 days, I think) and it came with the "HOWTO" packages, which were essentially precursors to the semi-free-format Arch wiki. That's how I learned how to configure kernel modules, what the difference was between installing Linux on UMSDOS vs. a real partition, and even how ReiserFS worked.
I really feel like easy internet access has taken good documentation away from us. It's too easy to Google for something and find some crappy blog with barely enough instructions to fix the problem. We've become reactive, rather than proactively educating ourselves.
Most packages have pretty good documentation about anything Debian-specific installed in /usr/share/doc/<packagename>/README.Debian[.gz] and /usr/share/doc/<packagename>/NEWS.Debian[.gz].
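For example (the package name is purely illustrative; zless copes with both the plain and gzipped variants):

    # see what documentation a package ships
    ls /usr/share/doc/nginx/
    # Debian-specific notes, and behaviour changes between Debian revisions
    zless /usr/share/doc/nginx/README.Debian
    zless /usr/share/doc/nginx/NEWS.Debian.gz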
Arch is very stable when using mainline packages, which are the default packages...
This is "Arch is unstable" thing is very much a myth pushed forward by people who only dabbled with Arch once or twice then had it break, then completely dismissed it as unstable. Without first learning how to use it even at an intermediate level - not even expert.
I too had Arch break when I was a newbie, but I quickly learned how to avoid that and it's been incredibly stable ever since. No worse than Debian and better than Fedora in my experience.
In retrospect, the actions that originally broke it had little to do with Arch Linux itself and more to do with general inexperience with Linux and over-eagerness in using beta software (which you have to opt in to; it's not the default with Arch).
The end result is that I've come out of it a far better Linux user, with skills highly applicable to Debian and other distros. It's a far easier place to learn Linux/Unix properly and become familiar with the OS, directory structure, and configuration than any other distro.
My experience as an experienced user of e.g. NixOS is that if you leave an Arch system alone for a couple of months and then try to update it, something will break horribly, because something will have changed in a non-backwards-compatible way and so I have to go and try to figure out what the new configuration should be, without good error messages. At least with NixOS, it’ll probably fail during the rebuild stage, rather than once you’ve restarted some random service or decided to reboot. I don’t really want to be doing that - I don’t have a love for system administration, I want a system which gets out of my way for me to do my work.
I don’t have to be careful about opting in to beta software or anything else with NixOS. If I want to try something, I can do so, and roll right back if it doesn’t work out. The destructive nature of Arch package installs and updates is its biggest flaw.
It is very much a matter of perspective. Arch is more than stable enough for personal use, but is highly unstable in the context of enterprise servers, especially compared with the likes of Debian and RHEL.
Meh. Debian user of at least 15 years here. I found Arch quite refreshing, in that the setup felt like a modern Slackware (eg Wicd back when NetworkMangler was super-opaque). But the updates still made me queasy.
> This is "Arch is unstable" thing is very much a myth pushed forward by people who only dabbled with Arch once or twice then had it break, then completely dismissed it as unstable.
This accurately describes my experience. I stopped using Arch around the time git broke when I updated. I'm not clear what mistake I made. What level of intermediate or expert Arch knowledge is required to keep git working?
Not really. Arch has a strict upstream rule: they avoid patching or maintaining dead shit. So during the hellish period when KDE 4.0 came out, they released it immediately. Anyone upgrading from a 3.5 system was severely disadvantaged.
I used Arch for years and years, and it would just break sometimes on regular package updates. Nothing that couldn't be fixed with a live CD, but still annoying.
Not enough data points to really count, but I have had a lot of bad experiences with Debian updates failing or suddenly deciding that vim conflicts with something so it has to remove half my packages... Arch, however, has always been dependable in the five-odd years I've run it full time. Including machines that don't get upgraded for many months at a time. I don't know if I'd put it on a "server" that I ran for other people, but personally I have yet to see problems arising from the rolling release model.
I trust its stability so much that on the Debian servers I maintain I turn on unattended upgrades with automatic reboot.
The maintainers never seem to push anything not security related, most of the changes being backports of fixes from upstream.
Obviously the price of stability is that the system runs slightly old software versions, but together with containers this is not a problem anymore. A stable OS running cutting-edge containers is a combo that works well for me.
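For anyone curious, a minimal sketch of that kind of setup looks roughly like this; the file names are the stock ones used by the unattended-upgrades package, and the reboot time is just an example:

    # /etc/apt/apt.conf.d/20auto-upgrades
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";

    # /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
    Unattended-Upgrade::Automatic-Reboot "true";
    Unattended-Upgrade::Automatic-Reboot-Time "04:00";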
Yeah, well, rightfully so. I used to love Debian and I've grown more and more disillusioned by it. Their main problem is twofold: 1) They mess with the packages they distribute. Like, a lot, rather than trust upstream. And 2) they don't update things that seriously need to get updated.
Example 1: Using System Python on Debian is a notoriously terrible experience for example, because they split up the venv module from Python itself and distribute it separately. I've had so many issues because of that and there's no good reason to do it. It's part of the stdlib, it doesn't need a separate package.
Example 2: Debian Jessie shipped with the ancient pip 1.5.6 (https://packages.debian.org/jessie/python-pip - upstream is at 9.0.1 since 2016) which never got updated, just stayed stuck there, wtf? Such old versions of pip are lacking support for a bunch of things that python packages today use on setup, so it would install things in a weird way and then packages would be broken, users wouldn't understand why. One of the many cases of not giving users access to a more recent version of a package, causing the user's experience to be severely impacted (and of course, they won't know to blame debian or the pip version in this case).
Edit: Incidentally, that last bit is why I always, always do `pip install --upgrade pip wheel setuptools` whenever I create a new virtual environment to make sure I don't get issues installing packages.
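A rough sketch of that workflow on Debian, assuming Python 3 (the extra python3-venv install is exactly the Debian-specific wrinkle described above; the path is illustrative):

    # Debian splits the venv module into its own package
    sudo apt-get install python3-venv
    # create and activate a fresh virtual environment
    python3 -m venv ~/venvs/myproject
    . ~/venvs/myproject/bin/activate
    # upgrade the packaging tools before installing anything else
    pip install --upgrade pip wheel setuptools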
> 1) They mess with the packages they distribute. Like, a lot, rather than trust upstream
Having had quite a lot of experience of both what I call maintained packages and wild west packages (e.g. pypi), I would trust the former over the latter any day of the week. Developers generally do not make tremendously good maintainers. They tend to only really care about the latest versions of anything, don't take particular care to make sure their package works well with other packages, rarely backport security fixes, and frequently do everything in a unilateral way. (I've seen it all: subtly different tarballs uploaded under the same version number, old releases deleted, breaking changes released under minor version bumps, addition of user-hostile "features", downloading & installing unexpected random binaries from the internet in an attempt to make everything "just work", and of course they're forever bundling odd versions of other packages which they then don't maintain...)
Maintainers working on a "distribution" have a responsibility to make the whole distribution of software work well & stably together, and they generally do so quite well with the odd slip up (I'm looking in the direction of the debian-openssl screwup).
These days I tend to use a debian base system with Nix on top of it for when I need specific, repeatable or more recent versions of something.
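A small sketch of that combination, assuming a single-user Nix install with the default nixpkgs channel and using ripgrep purely as an example package:

    # pull a specific tool from nixpkgs without touching Debian-managed packages
    nix-env -iA nixpkgs.ripgrep
    # if the new version misbehaves, roll the profile back
    nix-env --rollback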
Having package maintainers touch any code, even to backport security fixes, is far from ideal.
In 2006, a big security flaw was introduced into Debian's OpenSSL by the maintainer, because he changed code that he did not fully understand. The flaw wasn't discovered until 2008:
But we've got to remember that attitudes to security and understanding of subtleties around crypto code were very different a decade ago, the output from automated checking tools was trusted significantly more, and we're talking about OpenSSL, a project whose source is generally acknowledged to be full of Weird And Inadvisable Shit anyway. The example you're leaning on was basically a perfect storm that resulted in the worst possible scenario.
On the other hand, package maintainers fix security issues on versions of packages the developers have stopped caring about every day of the week, and of course these never hit the headlines.
And of course I'd never suggest that any package maintainer could ever know better than you. Though unfortunately such superior and unilateral attitudes are one of the things that maintainers are needed to protect against.
> Maintainers working on a "distribution" have a responsibility to make the whole distribution of software work well & stably together, and they generally do so quite well with the odd slip up (I'm looking in the direction of the debian-openssl screwup).
Is there a policy that requires Debian package maintainers to seek mentorship from upstream when patching core packages?
I understand your point about maintained packages. But it's only a significant advantage if the package maintainer's decisions are filtered through the lens of the upstream software maintainers. Otherwise one must trade the narrow expertise of the software maintainer for the wider expertise of the package maintainer.
That doesn't seem like a wise tradeoff IMO, and I don't see how a future openssl debacle could be prevented otherwise.
I honestly think it is a good idea, no matter which distribution or OS you are on, to never ever depend on the system python/pip (or perl for that matter); just install pyenv / pipenv / pipsi (or perlbrew) so you have complete control.
From a distribution perspective I sometimes wonder if it wouldn't be a better idea to call the system python/perl something like system-python/system-perl to make it more obvious they are not meant for use, and have a /usr/bin/python that tells you to install pyenv...
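For reference, the pyenv route mentioned above looks roughly like this (the version number is only an example):

    # build a private interpreter, leaving /usr/bin/python untouched
    pyenv install 3.6.4
    # make it the default for your user
    pyenv global 3.6.4
    python --version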
I don't know the story behind the separate venv package (and can't imagine why this is a separate package).
pip relies on bundled libraries and tries hard to fight any unbundling (by using slightly patched versions of its dependencies, if I remember correctly). This makes it difficult to package. You can see this as additional proof of Debian messing with packages, while on the Debian side this is just the application of a distro-wide policy. If every piece of software should get an exception to the rule, we don't have a distribution anymore.
> If every piece of software should get an exception to the rule, we don't have a distribution anymore.
And from what I can tell, quite a bit of upstream would see that as a good thing. They just want to stuff their thing into a container and call it a day, distros be damned.
To me that is like going back to the DOS days of software, or even close to the present day of Windows software, where everything is dangling off in their own sub-dir with all their snowflake variants of dependencies.
Damn it, go back a few years and Microsoft got burned by this when they discovered that there was a vulnerability in the VC++ redistributable DLLs. All they could do was issue updates for their own software, a tool for scanning the drive for vulnerable copies of the DLLs, and a wish that customers would badger their software providers for updated versions of affected software.
I don't mind that Debian will patch upstream packages to make them fit better with the overall Debian system. If I wanted a no frills system, with minimal integration, that strictly followed upstream for everything, then I'd move back to Slackware.
Their policy on language specific package management is also pretty reasonable. There's no point duplicating effort packaging things in both Pip and the Debian repos when those package managers work fine under Debian. It lets Python devs use the same tools as every other Python developer on other systems, and avoids Debian specific instructions for package installation. It also completely side steps the problem of following upstream for all of those packages.
The advice I've always followed is to always use the language specific package management unless I'm specifically building a package for Debian.
The author of xscreensaver became so upset by similar circumstances that he threw down a gauntlet; ship the ancient package with an updated splash screen warning users of how old it is, or stop shipping it.
The real problem there was that Debian left the splash screen in place, complete with the email of the author. Thus people would contact him directly rather than Debian when an issue was noticed.
Please note: Starting with Debian 7, the minor number is not part of the Debian release number, and numbers with a minor component like 8.7 or 9.4 now indicate a point release: basically only security updates and major bug fixes, with new, updated installation media images. This release, 9.4, is not a new major release of Debian.
I started installing Debian on my desktops. It is far more stable than Ubuntu. And if you install the non-free Debian image, there are usually no issues with wifi or graphics drivers (you do need the non-free version, though).
There was a time when Ubuntu on the desktop made sense, but now Debian is just better and more stable.
That's my experience, too. I have nothing against Ubuntu, but when I've had issues with it, they're always... some Ubuntu addon to Debian, and it turns out that I don't care about whatever feature that is. So last time I had reason to reinstall the OS for the main desktop machine, I just went back to Debian. Haven't had any regrets.
(All of my experiments with Docker/K8s/Terraform/etc. are running on VMs under FreeBSD/BHyve, another OS I'm solidly back to loving after being distracted by shiny objects, but that's a different story.)
Unstable (aka Sid) rarely lives up to its name; I've run it as my daily driver for a few years. Comparatively, the months I spent on Arch were marred by packaging issues that resulted in me switching back to Debian.
I want the maintainer to hold back an update and write patches when there are noteworthy regressions that hurt my workflow. The Debian package maintainers consistently act in my best interest, not that of the developer's or outside political interests. Turns out I find that really useful.
The "unstable" there means that everything changes all the time. It does not mean that things break (although they do break way more often than stable - when I tried it, I did have to reinstall the OS every few years).
I was under the impression that `unstable` is... well, not stable (that it breaks). Is it usable for someone who doesn't want to spend much time fixing the system?
In Debian 'unstable' means that the package versions installed aren't stable. That is, there is a high level of package churn just like any rolling release. It isn't any more buggy/crash prone than any other rolling release. There is just a risk at running bleeding edge vs. vetted/known working software.
From my perspective the mindset with Debian is that distribution packages are never going to be the super latest, but are going to usually work well.
If I need something bleeding edge, it's straightforward to just download the source from wherever and install it myself with stow in /usr/local, which makes it easy to remove or upgrade if needed. I do pretty much everything in VMs anyway, so package availability for my "base" system (which I mostly use only to run VMs) is not that big of a deal. If I need bleeding edge I can always spin up an Arch Linux VM...
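A minimal sketch of the stow approach, with the package name and version purely as placeholders:

    # build into a per-package directory under /usr/local/stow
    ./configure --prefix=/usr/local/stow/tmux-2.6
    make && sudo make install
    # symlink it into /usr/local, and remove it just as cleanly later
    cd /usr/local/stow
    sudo stow tmux-2.6       # create the symlinks
    sudo stow -D tmux-2.6    # ...or delete them again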
Debian testing hardly breaks anything unless one does a blind dist-upgrade instead of an upgrade. The normal "apt-get upgrade" installs the latest software versions only if the process doesn't require removing other packages (aside from the older versions, of course), so that (newer-version bugs aside) it won't break the system. "apt-get dist-upgrade" will attempt to install the latest versions even if that forces the removal of other packages, which can completely bork the system in many ways (missing libraries, etc.). When I use it, I do a normal upgrade first, then a dist-upgrade, and check all the packages it would uninstall before letting it proceed, if I let it proceed at all. I burned myself multiple times before learning this :)
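In practice that cautious sequence looks something like this; the -s flag makes apt-get only simulate, so you can review any removals before committing:

    apt-get update
    apt-get upgrade
    # dry-run the dist-upgrade and inspect what it would remove...
    apt-get -s dist-upgrade
    # ...and only then let it actually run
    apt-get dist-upgrade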
Ironically, the "dist-upgrade" must be explicitly called on the command line, but is the default setting in Synaptic (called "smart upgrade" there). Synaptic is a GUI package manager most newbies would probably use in place of the command line, so I would expect its default settings to be more on the safe side.
Debian stable hasn't been terribly out of date for several generations now.
Does what you do with your computer matter? Then you probably want to consider doing it on a stable platform.
Do you want to experiment? Sure thing, have fun — but be aware that you may lose data as a result.
Debian stable has been an excellent platform for me for the better part of a decade. But I'm unafraid to install newer versions of some software where appropriate (e.g. emacs, or graphics drivers).
Same here: see my profile for how I set up Debian from scratch for the desktop and for Xen + Kubernetes. I personally like to have complete control over what gets installed on my system in terms of packages; I've been running Debian for years and it's been serving me very well.
I do think it's good to have Ubuntu to compete in the "turn key" / "new linux user" / "enterprise support" spaces, but it's good to be able to run plain debian without issues these days.
When I tried switching to Ubuntu on the desktop (many years ago), the packages it added in the default installation kept breaking everything.
I am pretty sure that if you have a very standard machine and stay mostly within the GUI, Ubuntu gives you a pretty good experience, but I've given up on it already.
"better and more stable." How so? I never had any issue on the desktop using Ubuntu and it has better driver support. Debian has only old packages and the only way to get new one is to use testing / unstable.
For the same reasons I don't run Ubuntu on a server, I won't run it on my desktop, either. Too many times my Ubuntu systems have gotten themselves into some weird state where I can't keep using it and have to reinstall from scratch. For situations where I need more modern versions of stuff, there are either third-party apt repositories, there are static binaries I can download, or I can compile them myself and install them into /opt.
I've yet to find a server-focused package that is both not sufficiently up to date in the main or backports repos and lacking a vendor repo.
Sure, adding a 1 line text file in `/etc/apt/sources.list.d` and a gpg keyring to `/etc/apt/trusted.gpg.d` is probably slightly more work than running `apt-add-repository`, but if that's what defines how you pick a server distro, I'd like to very much never work with you please.
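For what it's worth, the manual version amounts to something like this (the repository URL, file names and key are placeholders):

    # drop the repo definition in place
    echo 'deb https://repo.example.com/debian stretch main' | \
        sudo tee /etc/apt/sources.list.d/example.list
    # install the vendor's signing key
    sudo cp example-archive-keyring.gpg /etc/apt/trusted.gpg.d/
    sudo apt-get update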
PPAs make me very nervous. They're often on Launchpad (which appears to offer credibility) but AFAIK there are no checks on them; they're private repos to which the owner has given access. Sure there are some, like inkscape-dev, that I can trust without too much issue.
AIUI installed packages get to run scripts with superuser privileges. This to me says "Danger Will Robinson!".
Some are definitely operated by smart people e.g. Ondřej Surý offers his PHP packages for Ubuntu via a PPA (and via a regular apt repo for Debian).
As for the packages: they're literally just regular .deb packages, which means they're installed as root, which means yes they could do anything.
Of course you can also examine/extract them just like with a regular repo, but it's definitely an issue of trust for most people.
I trust that e.g the Varnish project, or Percona or the aforementioned Ondřej Surý are not putting shifty shit in their packages. I can't trust JimmyB on Launchpad the same way.
Since the headline here is Debian 9, it is worth noting that changes in Debian 9 address a small part of this overall problem. One can nowadays at least avoid giving one repository owner the power to sign impostors of every repository one has configured, by associating a public key directly with an individual repository rather than placing it into a global keyring.
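Concretely, the per-repository form looks something like this in a sources.list entry on stretch (URL and key path are placeholders); the key then only authenticates that one repository instead of everything in the global keyring:

    deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://repo.example.com/debian stretch main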
The rest of the problem still exists, though. Only one package manager that I know of even tries to address it, by having pre/post-install/upgrade/remove actions use a specialist declarative language for things like making directories and dedicated user accounts. But even that has a loophole that allows arbitrary scripts to be executed from within that language.
The loophole exists because the provided language simply isn't sufficient for everyday needs. It's quite hard to come up with one that is, while not being as worrisome as general-purpose shell script.
I concur, it's very hard to beat the diversity of package retention policies of all the different PPAs. There's no other solution among the package managers that allows you to get your system to the state that you can't replicate on some other server because the versions went missing.
Can someone explain this to me? I've had an issue with Postgres on Debian stable because Postgres had to address a CVE which changed its dump format in a backwards-incompatible way. This meant that my local Debian on my laptop couldn't load a dump I created on a patched Ubuntu server. Postgres 9.6.8 is the one that fixes this CVE in the 9.6 series.
The 9.6.8 version indeed isn't on stable yet. So is the latter link just buggy? Or is Postgres getting filtered out?
Edit: Never mind, after messing with the filter (guess there's no Debian Security Advisory for this CVE), I was able to get Postgres in the second link:
So this has taught me something I didn't know about Debian: not all CVEs have the same priority or have to be fixed in point releases. Many CVEs don't even get acknowledged by a Debian Security Advisory at all. I guess if I want security to be the utmost priority, I should be using OpenBSD.
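One low-tech way to see which CVE fixes actually reached an installed package is the Debian changelog it ships (the package name here is just the one from this example):

    zgrep -i cve /usr/share/doc/postgresql-9.6/changelog.Debian.gz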
Kind of the point for me of using Debian is to not have to go shopping around all of the web for packages and because I want to have a uniform packaging standard: Debian Policy. That's why I don't trust anyone but Debian to produce Debian packages. It's a lot of hard work to make sure all packages adhere to a common standard, a standard that upstream distributors will not follow.
In this case, however, the Postgres packager is also the Debian packager, so I decided to trust his direct upstream Debian packages and installed them. This is a rare exception, and I don't want to be doing this all the time.
How is Debian nowadays for setting up things like containers?
I tried LXC a few years ago and got it working, but it was not a pleasant experience; templates were broken.
I would love them to gain back more server marketshare.
For comparison, I just found this blog post [1], claiming the following:
> According to the docker images command, the debian:jessie-slim container clocks in at 88MB, compared to the full-fat debian:jessie container at 123MB. For reference, the ubuntu:xenial and centos:7 base containers are 130MB and 193MB respectively, where as the alpine base container is only 4MB, as mentioned in my post about the alpine base container.
This is always a dilemma for me... Alpine allows for really small packages, but I never got used to its packaging tools (and I don't feel confident writing Dockerfiles with it). A matter of practice I guess. Besides, Docker caches layers so if you're using some common base layer (like Ubuntu or Debian) it's probably already on the machine. So I usually write Dockerfiles as `FROM debian` and then later adapt them to Alpine if they're meant to be distributed.
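The adaptation is often just the base image and the package-manager lines; a minimal sketch (curl stands in for whatever the image actually needs):

    # Debian variant
    FROM debian:stretch-slim
    RUN apt-get update \
        && apt-get install -y --no-install-recommends curl \
        && rm -rf /var/lib/apt/lists/*

    # Alpine variant of the same thing
    # FROM alpine:3.7
    # RUN apk add --no-cache curl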
Yes, I used to do the same. But once I got used to Alpine, apk (its package manager), and ash (its shell), I felt right at home. Furthermore, on most occasions I find newer, more up-to-date software on Alpine vs. Debian.
Also, besides disk space, I really have no idea how Docker manages memory, but my (naive?) assumption is that smaller images will consume less RAM. I didn't benchmark this though.
Sorry for late reply. Depends on what is in the package. Docker is just a thin separation of processes, so the situation is very similar to size of packages on host Linux OS. What matters is the size of binaries and loaded data, which I guess is not that different between distributions.
Docker-ce from upstream docker works well on Debian.
I've not looked at "naked" lxc for quite some time - but I've used Lxd on Ubuntu a bit, and that's a great experience.
If what you want is "containers" (single-purpose, ephemeral) - I'd say for now the docker tooling is best. But there's not much isolation - there lxc is better.
I've yet to try alternative "run times" like rkt - but I believe that's the better way forward compared to docker. For now we use docker mostly for packing up an application and managing dependencies - not for resource/security isolation.
Depending on your use case, Debian Stable can be a great thing. If it works great on a given computer (i.e. everything works), and you're just writing or using programs that are themselves fairly "stable," then it will serve you well.
If you're a developer and don't mind older packages, or if you install and manage your dev tools from outside the distribution, it can also work for you. This is a lot of people.
But if you rely on your distribution to provide reasonably up-to-date packages (and for better or worse, that's me right now), then Debian Stable might not be the right choice. When I do development in Linux, I tend to be more comfortable in Fedora, which is much more aggressive when it comes to updating packages during the life of a release. I'm not crazy about major upgrades every 6 months, though you can do it once a year if you wish.
For better or worse, I think that Ubuntu + PPAs is the path of least resistance -- if you trust the maintainers of your PPAs, that is.
Frankly, I do not understand the fawning over Debian. It became the same kind of "try to be everything and do nothing well" distribution Slackware was.
Up until 8 it was still good for servers, but 8 brought systemd, which maybe has a place on a desktop but is definitely not anything useful for servers. OK, so let's pretend it is a desktop OS. Oh wait, it's a desktop that doesn't ship the latest video drivers? A desktop that doesn't offer seamless switching between the free and proprietary nvidia drivers?
Systemd is an init system that tries to do everything: system init, hardware monitoring to load drivers on demand, configure networking, rerun failed jobs, service management, and it comes with a binary-format logger.
The claimed advantages of systemd are mostly relevant, IMHO, for mobile devices that change configuration often. It brings no particular advantage to servers or desktops, unless you tend to swap out lots of peripherals on your desktop.
The disadvantages are partially political -- systemd tries to take over every job it can; the originator has made it clear that his vision is for every system function to be handled by systemd -- and partially philosophical: it used to be the case that most subsystems were independent of each other, so that they could be replaced more or less easily if the sysadmin had different needs.
The current stable Debian still makes it possible to run a different init system with just a line of package installation and a reboot.
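On stretch that amounts to roughly the following, with sysvinit as the obvious alternative (others exist; read the release notes before trying this on a machine you care about):

    # replace systemd as PID 1 with sysvinit, then reboot into it
    sudo apt-get install sysvinit-core
    sudo reboot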
> The claimed advantages of systemd are mostly relevant, IMHO, for mobile devices that change configuration often. It brings no particular advantage to servers or desktops, unless you tend to swap out lots of peripherals on your desktop.
Systemd provides huge advantages for administering desktops and servers as well. I'd actually go so far as to say those are the main reasons that systemd won (it's the default init system for all of the top ten Linux distributions by marketshare).
Creating unit files is trivial and you're sure that you're getting high quality service supervision, unlike with home grown scripts.
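For concreteness, a minimal unit of the kind being described; the service name, user and binary path are made up:

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=My example application
    After=network.target

    [Service]
    User=myapp
    ExecStart=/usr/local/bin/myapp --config /etc/myapp.conf
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target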
Seeing the logs of every service is also trivial since it's standardized. No more spelunking through /var/log or some random location.
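E.g., assuming the example unit above:

    # all logs for one service, only today's, or follow them live
    journalctl -u myapp.service
    journalctl -u myapp.service --since today
    journalctl -u myapp.service -f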
There's also other parts I'm probably forgetting right now, but in general having a standard interface for service interaction is really, really useful.
Sometimes I get the impression that Linux folks underestimate the benefits of standardization. For example there's still no good, standard way to check the exact name and version of a distribution. If they can't agree about something as banal as that... (I think there is a convergence towards /etc/os-release, but when was the first distro launched? 1992? :) )
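For reference, the /etc/os-release convergence means something like this now works on most current distros:

    # os-release is a plain, shell-sourceable key=value file
    . /etc/os-release
    echo "$NAME $VERSION_ID"    # e.g. "Debian GNU/Linux 9"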
Unless you expect anything but the most trivial use cases, e.g. some user-owned directory under /var/run (/run). Then you need to wild-guess at what combination of options needs to be set. Good thing that those options are at least described in the docs.
> Seeing the logs of every service is also trivial since it's standardized. No more spelunking through /var/log or some random location.
syslog already was standard. We gained nothing really from journald.
Many services don't log to syslog. For example, on Debian derivatives, /etc/init.d/networking was logging to the console only. Good luck to get the output once the system has booted. With systemd/journald, you get the info.
So? How do you tell between commercial and OSS projects in journald?
And about that "quick" part, I'd rather have a way to do that that's either
easy to recall or easy to re-develop. journald's magical incantations that
don't compose to any consistent language (unlike Vim or Emacs keybindings, for
instance) nor are the same as in virtually any other software are not
convenient by any stretch. I have my memory cluttered enough by half a dozen
of different languages I use, their runtimes, runtimes of languages I don't
write in but need for some software I run, twice that of DSLs and specialized
languages, options and functions of dozens of services, wire formats of
network protocols, management procedures of various Linux subsystems, and so
on. I don't need an option to a command to tell me what you ask, I can work
my way through grep/sed/awk faster than you read man page, thank you very
much.
Those were just random examples to prove a point (daemons/services can have very varied origins).
The real question is:
> I have two services from totally different origins. What do I type in my terminal to see their logs, with syslog or other widely used init systems currently in use, except for systemd?
`less /var/log/<service>.log` or `ag <service> /var/log` or something similar usually works pretty well. I'm not sure why I need a command line utility to do something that the filesystem plus standard Unix utilities do just fine. The whole point of Unix is to compose general-purpose utilities to get the job done rather than to create a special tool for each task.
Cause they don’t. Every service puts stuff wherever it wants, especially poorly written services. At least with systemd services do the right thing, by default.
At some point Unix will learn that in the wild opt-out is better than opt-in. When you make doing the right thing easy, everyone will do it. When you don’t, everyone does what they want, which is generally bad.
The fact that we have to ag or grep -R is a kludge. There’s NO technical reason the folder structure shouldn’t be flexible but standardized and also NO reason why there shouldn’t be some system scaffolding to help with that. Call it systemd or whateverd, it should be there.
But syslog already solved this problem (as others have said): you pipe all your output to syslog (even via TCP/UDP) and then syslog determines where the output should go via the entries in syslog.conf. And you can even write a simple utility that wraps an arbitrary executable and redirects its stdout/stderr to syslog as well, solving the problem for 90% of the applications that don't have builtin syslog support.
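A sketch of that classic pattern (the daemon name, facility and log path are illustrative):

    # wrap a daemon that only writes to stdout/stderr
    mydaemon 2>&1 | logger -t mydaemon -p local0.info

    # /etc/rsyslog.d/mydaemon.conf -- route that facility to its own file
    local0.*    /var/log/mydaemon.log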
I feel like I'm having the same discussion, here, on HackerNews, when I talk to people about Emacs. Yes, technically we can do anything. In the end it's all code, if my time was free I could just do anything :)
But sensible defaults are important. Software design is important. Unix basically skirts the issue by throwing all these decisions in the user's lap. I can take these decisions, but I don't always want to have to do it. And there's many situations where you can't necessarily do it, either due to technical limitations, political limitations, budget constraints, etc.
In short, I'm glad systemd is winning. Sure, the average user loses a bit on the purity front but he gains a lot on the practical front.
Declarative system service configuration (i.e. init scripts or, in systemd language, unit files) is definitely an advantage.
I was initially skeptical about their benefit - after all, an init script is just shell; it can be debugged literally with `sh -x`.
But that also means a) people write/ship fucking terrible init scripts, mostly because b) it's not necessarily simple to write an init script in pure shell.
The problem with systemd is the politics and the project's approach to the community (including their self-defined role/goals).
Claiming it is not a necessary part of the fallacy. Basing one's argument on only those two is.
The Uselessd Guy has some rather sharp comments, such as "you’ve been living under a rock for years", for that sort of thing. Even the Debian Hoo-Hah involved analysis of four systems.
"The politics" that you critique include failures to look around and learn from (and indeed about) what already exists and has been done. Don't make the same mistake.
Expressing inter-daemon/inter-service dependencies (restart this; want X, Y, and Z restarted subsequently) has repeatedly bitten our team under SysV init (in, say, RHEL <= 6), but becomes a totally solvable issue under systemd.
(It should be noted that while our company runs an awful lot of 1st party stuff, what our team works with is exclusively 3rd party stuff over which we have precisely no control)
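In unit-file terms that kind of dependency is just a few declarative lines; the unit names here are placeholders:

    # /etc/systemd/system/worker.service (excerpt)
    [Unit]
    Requires=broker.service
    After=broker.service
    # a stop or restart of broker.service is propagated to this unit too
    PartOf=broker.service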
One big advantage is user services/daemons. I've recently converted most everything I used to start in .xsession to systemd user services and it worked very well. I did it mostly as a way to learn about systemd, but I'm happy with the results and it makes it much easier to diagnose issues and maintain a consistent environment.
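For anyone curious, the user-level variant is the same unit format under ~/.config/systemd/user, managed with systemctl --user; the unit name and command here are just examples:

    # ~/.config/systemd/user/redshift.service
    [Unit]
    Description=Redshift colour temperature daemon

    [Service]
    ExecStart=/usr/bin/redshift

    [Install]
    WantedBy=default.target

    # then:
    #   systemctl --user enable --now redshift.service
    #   systemctl --user status redshift.service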
This is a meta comment about the sister thread. I know flame wars are discouraged (and maybe that wasn't parent's goal), but I have to say I always enjoy these discussions here on HN (and lobsters too).
Even though it is an old discussion (and will probably remain this way for a long time), the arguments are almost always purely technical and create a good picture of the pros and cons of systemd. As someone who migrated away from Linux before the systemd inception, I'm always learning a lot on "flame wars" like these. So thanks, I guess, and please keep the technical arguments coming :)