Avoiding Complexity with Systemd (mgdm.net)
291 points by irl_ on June 27, 2021 | 347 comments



For me, systemd is the best thing since sliced bread.

As a programmer, I now don't need to care about dropping privileges, managing logging, daemonization (the moment I need to do the double-fork dance again, chairs will be flying, I swear), dropping into a chroot, and making half-arsed guesses of "is it up and running yet?" from a convoluted mess of shell code that looks like a bunch of hair stuck down a drain for a month.

I just write an easy-to-debug program which I can launch from command line and see it run, and when I'm satisfied, a systemd unit from a cookie-cutter template is going to make it run. Service dependencies are now a breeze, too.

If I need to limit resources, I can just declare the limits in the unit. If I want a custom networking/mount namespace, it's taken care of.
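
For anyone curious, a minimal sketch of what that looks like (the unit name and binary path are made up; the directives themselves are stock systemd):

    # myapp.service
    [Service]
    ExecStart=/usr/local/bin/myapp
    User=myapp
    # resource limits, enforced via cgroups
    MemoryMax=512M
    CPUQuota=50%
    # sandboxing / private namespaces
    PrivateTmp=yes
    ProtectSystem=strict
    ProtectHome=yes
    NoNewPrivileges=yes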


I agree. It puts the system administrator in control of a lot of these things. Sometimes that can be annoying (the developer knows what system calls they need) but often it is a huge benefit. I think socket-passing especially is a huge win; it shifts a lot of complexity out of each application, and the administrator can configure the sockets however they want without needing each application to support each feature independently. Furthermore it removes one of the most common reasons why applications need to be started as root.

I wrote about this previously here: https://kevincox.ca/2021/04/15/my-ideal-service/#socket-pass...
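
Roughly, the split looks like this (names hypothetical): the administrator owns the .socket unit, and the service just inherits the already-bound socket as fd 3 (or via sd_listen_fds()), so it never needs root to bind a low port.

    # myapp.socket
    [Socket]
    ListenStream=80

    [Install]
    WantedBy=sockets.target

    # myapp.service - started on demand, handed the listening fd
    [Service]
    ExecStart=/usr/local/bin/myapp
    User=myapp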


It's great... until something unexpected happens. Like your NIC doesn't have link on the ethernet cable and systemd waits for minutes without allowing you to abort waiting on the network because other units depend on it.

Or how about adding new buggy DNS code that doesn't work in common scenarios? Oh, sorry, here's another CVE because we didn't create enough test cases for the corner cases that are actually important.

Or oops, "nobody uses ntsysv", right? Or "We don't need to implement chkconfig even though we broke it".

systemd is a monolithic beast that is absorbing everything else in the system without considering that some of its decisions should be able to be disabled, and a lot of the design decisions are half baked. I don't believe in the philosophy of design its author has embraced. Progress is good, but please, stop breaking shit that has worked for decades. Anyone can write new code that partially implements a feature, but it takes real effort to responsibly migrate users from tools that worked to your new shiny half assed kitchen sink.


Wait until it automagically fails.

It may be great for less-skilled people, but for anyone running anything where it's too critical to outsource support it's then necessary to have a systemd expert inhouse (and such a person has proven extremely hard to find).


I have never encountered the need for this and I honestly doubt you have either. Systemd fails mysteriously far less often than the poorly written init files I have seen, and almost every failure I've encountered has been easy enough to debug. And I have worked on some very large-scale systems with systemd. A systemd unit I helped write has run many millions of times over without issue. I honestly find this comment impossible to believe.


I’m a big fan of systemd, but we’ve definitely run into actual bugs in it. Especially the early RHEL7 days (7.0-7.3). Some were fixed upstream, some we’ve worked around, etc.


In the beginning, the road was quite bumpy, and my gripe was that people's use cases were being dismissed in a very cavalier fashion. The attitude was: if I didn't think of a workflow, or a use case, or a piece of software that doesn't play nice which you cannot change, it's not valid, so let it burn. Oh, and whoever points these things out, or ridicules me for this attitude, is a troll or a hater.

In 10 years, though, reading recent bug reports, I can see that the project's leads have grown out of it, largely.

Case in point, they had recently put a bug into a 24x release that made a lot of machines, including mine, unbootable, but the fix was just as quick.


There is still no way to filter application output at collection time, which can functionally make journald useless if you have an application that is too chatty, and there likely never will be, given that (AFAICT) there is hefty ideological opposition from the systemd developers¹.

¹https://github.com/systemd/systemd/issues/6432


Isn't this something that, to a degree, might be mitigated by the rate limiting journald has?

IMO Lennart is right in that collection should be optimized to a point where it is (almost) never the bottleneck. I, personally, would put a filtering program between the chatty application and journald.


The scenario I personally have is:

- Legacy application, with no access to source code.

- Company that wrote the application hasn't existed for a decade.

- It logs several useless lines every second. This is non-configurable, and includes a timestamp which makes each line distinct.

- It also logs some very important lines that _must_ be responded to.

I don't think this is a unique situation to find yourself in, and currently this application is permitted - explicitly - to spam the journal to the point that it is effectively useless. Yes, you can use a filtering application to sit between the application and journald, but that's a workaround and not an actual solution (as stated in the issue linked above):

- it could not be "plugged in" via overlay (i.e. modification of Exec... is needed - this gets ugly especially if you have ExecStartPre/ExecStopPost)

- SyslogIdentifier has to be set to produce proper name in journald

- The filter itself has to be quite reliable (to not hang) and a bit sophisticated to properly handle signals, process shutdown, low disk space etc.


What you need is a pipe through grep. It’s as reliable as it gets. I don’t understand, though, why your log filter should all of a sudden grow disk space monitoring and signal handling. This is what systemd and journald should be doing, and in their recent versions, they are quite capable.
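
Something along these lines, for example (binary path and pattern made up), keeps that suggestion to a couple of lines in the unit:

    [Service]
    SyslogIdentifier=legacy-appd
    # drop the useless per-second lines before they ever reach journald
    ExecStart=/bin/sh -c '/opt/legacy/appd 2>&1 | grep --line-buffered -v "heartbeat"'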


How about opening this exceedingly verbose legacy program in IDA and filling all the places where it logs something useless with NOPs? I mean, that is an activity which really deserves the name "hacking", as opposed to playing with init scripts.


Or maybe discovering configurability of log/debug levels via environment variables.
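
If the binary does honour such a variable (an assumption; the name here is invented), that's a two-line drop-in:

    # systemctl edit legacy-appd.service
    [Service]
    Environment=LOG_LEVEL=warn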


I generally agree, but will say that's not always easy if it's something system-level that's being induced to be too chatty: you're stuck replacing the upstream unit's Exec, which frankly kind of sucks. I'm also trusting the filter to be as reliable, which may or may not be true.

Rate limiting works, but it’s a pretty blunt instrument and can lead to losing your actually valuable entries vs capturing garbage.
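
For reference, the blunt instrument in question looks roughly like this (numbers arbitrary) - journal-wide in journald.conf, or per unit on newer systemd:

    # /etc/systemd/journald.conf
    [Journal]
    RateLimitIntervalSec=30s
    RateLimitBurst=1000

    # or in the chatty unit's [Service] section (systemd >= 240)
    LogRateLimitIntervalSec=30s
    LogRateLimitBurst=200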


To be clear it definitely used to be more buggy, but I haven't encountered a genuine bug in a long time


Might depend on your environment too, I generally find the newer you can get for systemd the better off you are (which is really quite high praise!). Unfortunately the life cycle for some of our environments is quite long for things like RHEL7 -> RHEL8.


Ever worked on systems where you're on the hook for the guarantee that you can keep it running?

We had 3 major breakages due to systemd, which is why we went back to something pretty much anyone who knows a bit of bash scripting can fix.


And you didn't have separate test and production sites?


A support contract from your distro of choice e.g. RedHat, Canonical (Ubuntu), SUSE, etc would potentially fill this void.


OP > "where it's too critical to outsource support"


Waiting for the last 8 years, and?


As a systems guy with a focus more on ops, I agree. It's not all roses - journald/journalctl and binary logging can go die in a pit of fire, for example - however setting LimitNOFILE= in a unit is just really, really nice (as are CPU limits and all sorts of other cgroup/namespace needs). But let me just mention again that journald/journalctl can go die in a pit of fire - if it wasn't for everyone adding rsyslog to create regular text files we would be in a world of hurt. But in return we get almost complete, painless cgroup-level handling right in the unit file with a simple key=value structure (so really, you don't have to know anything at all about cgroups or namespaces to be very effective). Most options have a doc for them - it's usually easier for me to find an obscure systemd setting with a nice blurb about what it does (in non-programmer speak) than it is to dig up an obscure sysctl setting, e.g.

I can/could do without timesyncd and resolved (it's easy - just use chrony, e.g.) but I like udevd now being part of systemd. It would be nice to not write /etc/udev/rules.d/ and instead have a foo.udev unit type; perhaps that is in our future (we do have .device units, but it's not the same - yet). In this ballpark I think it's more about each distro picking and choosing - Ubuntu for example drank the kool-aid much deeper than RHEL - RHEL for example uses chrony out of the box, not timesyncd. However udevd and logind seem to be common across all distros now; as another user commented, the KillUserProcesses=yes setting in logind is just horrible to have as a default. The whole "homed" thing makes me sad that it's even being coded, I hope nobody adopts that (I dislike it for the same reason I dislike automount); someone out there wants it though.

The ability to dynamically edit a unit (systemctl edit) and to dynamically alter running service constraints (systemctl set-property) is great, and all PID-file needs are handled in /run (getting rid of the nasty SysV stale-PID-after-unexpected-crash problem which many scripts failed to handle properly). Users having the ability to run their own private init items (systemctl --user) is great too - timers, socket activation, custom login units, all very well extended down into the user's control to leverage. I'm sort of 50/50 on cron vs. timers; that's more of a case-by-case decision (example: tossing a https://healthchecks.io "&& curl ..." onto a job is just a lot quicker and easier in cron, but running a dyndns script on my laptop with a timer is nicer).
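
For the timer case, a sketch of that dyndns example (names and interval made up):

    # dyndns.timer
    [Timer]
    OnCalendar=*:0/15
    Persistent=true

    [Install]
    WantedBy=timers.target

    # dyndns.service
    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/update-dyndns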

Touching on systemctl edit, it's really easy now to show folks (think a DBA team who only has fundamental ops skills) how to quickly chain their After= and Before= needs for start/stop of their (whatever) without having to go down a rabbit hole - it's simple to use and the words and design are accessible and familiar, even if the method by which it works is a little obtuse (it's rooted in understanding the "dot-d" sub-include design pattern). On RHEL at least it uses nano as the default editor, annoying to me but good for casual non-vim users and easy enough to override using $EDITOR.
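
Concretely, "systemctl edit myapp.service" just drops an override file into the ".d" directory containing only the keys you add (unit names hypothetical):

    # /etc/systemd/system/myapp.service.d/override.conf
    [Unit]
    After=postgresql.service
    Requires=postgresql.service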

I used SysVinit for all the same years as everyone else (Solaris to Debian to Red Hat, ops touches it all) and wrote my fair share of complex init scripts to start DB2, Oracle, java appservers (anyone remember ATG Dynamo?); systemd handles natively what 75% of that work was/is (managing PID files, watching/restarting failures, implementing namespaces/cgroups, handling dependency chains, etc.); for those complex scenarios (looking at you, Tomcat) you can still just have a unit launch a very complex shell script "like in the old days". I haven't looked in a while, but last time I checked, in RHEL7 Red Hat did exactly that with Tomcat - just had the systemd unit launch a script.

It is, however, a real bear to debug sometimes - it's far easier to "bash -x /etc/init.d/..." and figure out what in the world is going wrong than it is to debug systemd unit failures. But the same holds true for trying to debug DBus (if you've never tried / had to, it's not fun at all without deep dbus knowledge). I would like to see the future add more ops-oriented debugging methodology - if you've ever used "pcs" (the command-line tooling for Pacemaker offered by RHEL), we could really use a "systemctl debug-start" type of interface on the command line to offer the same experience as the "bash -x" days of old. There are debug settings, they're just not ergonomically dialed in for the ops user, IMHO - systemctl debug-start would save people a lot of headaches.


I generally love it but had some weird bug a month or two ago where something in the distro (AWS Linux 2) added a log trimmer config for squid that made systemd restart the squids, not reload but restart, every thirty minutes, on all the hosts, so all the clients got connection resets every thirty minutes. The signals to restart came from PID 1, but the fix was commenting out the log trimmer config for squid. Hard to debug: even with a high level of debugging turned on, PID 1 logged that it was restarting squid but not why.


> added a log trimmer config for squid

I feel your pain - I'm curious, was it a drop-in /etc/logrotate.d/ config which was sending a HUP? I don't run squid but Google'd up that it will take a USR1 to rotate logs which should not close the HTTP connections (allegedly HUP closes them). Perhaps the AL2 folks chose the wrong signal? https://wiki.squid-cache.org/SquidFaq/InstallingSquid#squid_...
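
If it does turn out to be a logrotate drop-in, the fix might be as small as the signal in the postrotate block; a hypothetical reconstruction (paths and process match not taken from AL2):

    # /etc/logrotate.d/squid
    /var/log/squid/*.log {
        daily
        rotate 5
        compress
        postrotate
            # USR1 (or "squid -k rotate") reopens the logs;
            # HUP or a full restart resets client connections
            /usr/bin/pkill -USR1 -x squid || true
        endscript
    }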


Root causing it is on my todo. For now I just commented the config in my user-data.sh


But systemd itself was setting the squid service into stop then start. I don’t think by that time systemd knew the reason was to do with logs.


I'm wondering if this new config made squid exceed any limits, causing it to hard stop. I'd expect the log trimmer to have/cause a bug in squid itself.


It was sharply at :00 and :30 so I think it was time based and not reality based.


> journald/journalctl and binary logging can go die in a pit of fire for example

This is, of course, not a problem, because as systemd folk are wont to point out, systemd is not, in fact, monolithic, meaning they use well-defined interfaces and can be swapped out for an alternative.


journald is required, I believe, but you can turn it into a dumb pipe straight to syslog.
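
Roughly, with stock journald.conf options:

    # /etc/systemd/journald.conf
    [Journal]
    # keep nothing in the journal itself (Storage=volatile keeps a small copy in /run)
    Storage=none
    # hand every message straight to rsyslog/syslog-ng
    ForwardToSyslog=yes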


That's true, but you can get all that without taking over the entire system. Upstart was a more lightweight contender to systemd which would give you all that but none of the "Enterprise Linux Userspace Daemon" crap.


1. This can, and should, all be done without systemd, and not with idiosyncratic shell scripts and guesses.

2. Some of the systemd logging is binary, so good luck with that if there's a problem.

3. Have you tried non-systemd init systems other than sysvinit?

4. Yes, it is convenient when everything below your development is centralized by a single entity. It can easily provide a consistently useful underpinning. But there's a price - overly strong coupling of the init system, the kernel and part of the user-space, centralized control of, well, almost all of how things work on the system, and stagnation of the ecosystem due to there being only one game in town.


> 1. This can, and should, all be done without systemd, and not with idiosyncratic shell scripts and guesses.

What other system is there, right now, that can do this so well?

I'm definitely onboard with the issues around tight coupling, I'm really not a fan of binary logs etc. But the unit files are pretty awesome IMHO.

So serious question - what else does those as well or better?


In addition to s6 itself, the author of s6 also wrote a survey <https://skarnet.org/software/s6/why.html> that provides a good large-scale view of the design space.

(My gripe with how systemd does a traditional init’s job is mostly the unit and dependency types, all of which AFAICT are specified in terms of actions on state transitions, not consistency conditions on states, so for all that it has a positive boatload of them I can’t actually figure out how to specify which configurations are permissible for my system.)


I stopped reading when they started attacking bind. Sheer ignorance. Also cathedral is good for him and his elite team but not in general?

Unix has always been worse is better. You may disagree but it is the secret sauce. YAGNI for that potentially ideal system. Sadly. I like mathematically solid systems but the people with good enough systems explore the solution space much quicker.


I’m not sure what you are referring to. Do you mean some other page on that website? There’s a fair bit of design philosophy there, and I vaguely remember reading it and agreeing with some parts while disagreeing with others, but that was years ago. I’ve no interest in defending an arbitrarily chosen subset of the author’s views in this thread.

Indeed I don’t even necessarily agree with everything said on the page I linked to (which I did reread before posting the comment), but regardless of what I think about the particular point in design space (s6) it advocates, I still consider it a good overview of the space itself and prior art in general, and that’s the only thing I claimed to offer. I do have some thoughts about init systems, but I don’t feel they’re ready to put them up for discussion here, so I haven’t.


> Software that does more instead of less is, simply put, badly designed software. Trying to come up with an all-encompassing solution is always a sign of developer hubris and inexperience, and never a sign of good engineering. Ever. Remember sendmail, BIND, INN,

Yeah, from the skarnet page above. BIND had many troubles over the years, but mostly because we were all learning about secure coding practices in C etc. BIND was just a name server - never heard of it reading mail. There was a lot to learn about network exploits, for sure. But attacking BIND as doing too much seems disingenuous. And the whole tone reminds me of https://suckless.org/ - who cares if some browser takes a gig of RAM or my window manager is ginormous? My laptop is hard pressed to use all the RAM it has, and if systemd and the kernel are running sixty daemons but the unit files are easy to write as a here-doc in cloud-init, then win win win.

(Running postfix and bind on my personal cloud VMs; have run Apache, haproxy, nginx and lighttpd, as well as built in Python web servers).



Looks quite neat, though perhaps just a little more complex than systemd service files; I guess they essentially boil down to similar things.

Where systemd has one file with a bunch of settings in, this is split into a directory of single-purpose files in s6. I'd hesitate to call it "better", but from a surface reading it seems roughly equivalent from a usability perspective.


According to the author, Laurent Bercot, the code for s6 is much cleaner than the systemd code. For anyone curious I guess the best way to find out is to compare and make a judgment of one’s own.


While I appreciate clean code when I have to work on it, it's somewhat orthogonal to my requirements when we're talking about running systems rather than building them.

I guess it could be preferred as a second-order factor.


In some ways yes, in other ways no. I've looked at both, and there is always a certain amount of "ugliness" that comes with writing low level C software for Unix.


I would rather have complexity in the engine code than in the unit files. A clean implementation is less important than a clean interface.


I'm not familiar with s6 myself. But - the choice between a single file or multiple files in a directory is a relatively minor issue in the overall scheme of things. i.e. if s6 is "better" or "worse", it's not because of this fact.


I really don't understand the scenario where binary logging is a problem. journalctl is a command just like, I dunno, gzip, and people are fine with gzipped logs. If something goes horribly wrong with your system, you're not looking at logs with an oscilloscope, you're looking at logs by mounting the disk on some other working OS - whether it's the initramfs, or a live CD, or whatever. You can run gunzip < /mnt/brokensystem/var/log/messages.1.gz from that other working OS to read text logs; you can also run journalctl --root /mnt/brokensystem to read journald logs.

(Not to mention UNIX has log files that have been in a binary format since time immemorial, like utmp and wtmp.)


Binary logging would be fine if it Just Worked, but my experience is that journald/journalctl does not just work, and when it breaks I don't have logs.

For example, if the system clock is not monotonic, my text logs are still written sequentially in order, and it's easy to figure out what happens. It is the stated view of the journald maintainers that, if your clock is not monotonic, you get to keep both pieces. https://github.com/systemd/systemd/issues/662 (The specific issue described there sounds complicated, but the fundamental problem behind it, which I have hit multiple times, is "if your system can't maintain a monotonic clock at all times, including early boot, then your logs will be mangled.")

Similarly, a text log line is complete the moment it's written, even if the process that wrote it goes away immediately afterwards. Journald, by contrast, performs asynchronous metadata lookups for _each_ log line at some later time, which it apparently does not cache. This means that, when a process dies suddenly, the final messages will not appear in "journalctl -u", because they failed to get tagged with metadata. (I can't find the bug for this, so I haven't verified that it's still open and unfixed at this time. But it has certainly existed for years at this point.)

So no, in principle I have no objection to _competent_ binary logs. Journald does not meet that bar.


> (I can't find the bug for this, so I haven't verified that it's still open and unfixed at this time. But it has certainly existed for years at this point.)

https://github.com/systemd/systemd/issues/2913

I am so happy you mentioned this, as I am in a position of developing a daemon used by people who insist on using systemd, and I keep asking them for logs when things fail and, in fact, the most critical logs -- arguably the only ones that ever truly matter: the ones that come immediately before the daemon terminates for some reason -- are often missing. Now I know why :/.

https://unix.stackexchange.com/questions/291483/how-to-fix-u...

> So no, in principle I have no objection to _competent_ binary logs. Journald does not meet that bar.

Yeah... I hadn't even gotten to the end of your message before immediately going "omg I have to look into this", and the most apt description that was coming to my mind for a logging system that fails to actually store logs -- and particularly the most critical death throes written to the log before termination -- is 100% "incompetent" :/.


> You can run gunzip < /mnt/brokensystem/var/log/messages.1.gz from that other working OS to read text logs; you can also run journalctl --root /mnt/brokensystem to read journald logs.

It's subtle, and you've accidentally missed it like many do - you made an assumption that everything logs through journald. On my personal system where I do not have rsyslog (aka trying to live the journald life) I have non-journald text logging for: httpd, sa (sar/sysstat), lightdm, audit, atop, Xorg, cups, fdsync and samba. If we just stick to httpd, sa and samba (most folks know how those work) it shows how logging is way more complex than what is captured in the journal - these apps by design maintain their own logs.

So now you launch your rescue ISO (let's assume modern sysrescuecd which has journalctl) and get your filesystems mounted, you have to employ two different techniques - journalctl for that single-use format and then your traditional find/grep skills for everything else. You don't know why it crashed, how it crashed and are on a fishing expedition. Was it an RPM upgrade? (logged to yum.log or rpm.log, not journald) Was it update-initramfs running out of disk space truncating your initrd? (sometimes logged to journald depending on distro, sometimes not). Don't know until you start following breadcrumbs, find/grep is the superior toolset (much like the trebuchet is the superior siege weapon).

It's not that you cannot do it, it's that journald-only (no rsyslog) forces using a specific method with specific tools to access what should be extremely easy to access data. Keeping logging all in the same format (text) is what I want, it's not that I don't know how to use journalctl; I don't want to use journalctl, it's a pig in lipstick.

> (Not to mention UNIX has log files that have been in a binary format since time immemorial, like utmp and wtmp.)

I agree with this, they belong over in /var/lib/ somewhere and those files bug me, always have. "Just because these other guys did it" is however a logical fallacy, they are (IMO) just as wrong because they're more like database files than they are logs in my opinion. (the secure/auth.log is more a "log")


If you want to use journald for those programs you could just configure them to pipe the logs there, or you could just disable journald logging and have it pipe its logs to the syslog. If your distro didn't configure all those programs to log to the same place, that's more of a distro configuration problem than a problem with any specific syslogger. I personally dislike having a bunch of services that try to implement their own log rotation, I would rather have that handled by the system.
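
For the ones that only know how to write to stdout or a file, systemd-cat is one way to get them into the journal (daemon path and tag are made up):

    # run a legacy daemon so its stdout/stderr land in the journal under a sensible tag
    /opt/legacy/httpd 2>&1 | systemd-cat -t legacy-httpd -p info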


System Ops (at scale, large company, lots of teams) in a nutshell: you did not build the system or choose what kind, it was most likely installed using vendor defaults by the systems owner and you were called in because it's misbehaving. Most likely 50 other people have touched various parts of it, some with skill some without.

"Give me an IP, username and password - what OS is it?" are about all you start with and go from there. It's probably a critical system to someone, and everyone swears on a stack of bibles that nobody did anything, touched anything or made a change. You have very specific domain knowledge (kernel, grub, SAN/storage, systemd, dbus, etc.) and typically ask a lot of questions to the systems owner as your fingers are flying ruling out reasons (low hanging fruit common issues).


Ok so complain to the ops department that they need to unify their logs. That's their problem, not yours. If the company is big I would expect them to be doing that anyway, either they coalesce around a journald-type thing that aggregates the logs locally, or they'll use another centralized service like datadog, splunk, etc. Edit: If you are ops then this is your entire wheelhouse, you should be able to solve it at scale without messing everything up.


It has been my career experience that 5+ digit employee companies more resemble Chiba City than they do the USS Enterprise.


Sure, but that's entirely the problem those centralized logging services were made to solve. You make it really easy for everyone in the company to put their logs in the right place.


1. journalctl is painfully slow. Most frequent commands I use: journalctl -u myunit --since=today and journalctl -u myunit -n 1000 take minutes on loaded servers with large amount of logs. 1st command is instant for daily rotated plain text logs and 2nd is almost instant for any plain text logs.

In theory it is possible to make a binary log database which will work fast for many queries including these, but journalctl is not fast at all.

2. People are fine with gzip because log compression is usually done in a way which makes log corruption/loss highly unlikely:

gzip log.ystd, fsync log.ystd.gz, rm log.ystd

At any time you can reset a server and you will either have uncompressed or compressed logs, not some partially written binary blob.


Say the computer writing the log was destroyed, you were able to recover the storage, but the only computer available did not have journalctl.

Or if you're writing to network storage and would like to analyze the logs from your haiku box.

Text is not perfect, but it's the one thing that is always available.

Also I'm not sure about the journalctl format, but in general binary formats don't handle partial corruption well. Which is something kind of important for logs.


The journalctl binary format seems to handle corruption pretty well. That was a design criterion.

Everyone forgets or tries to ignore that text files ARE A BINARY FORMAT. It is encoded in 7-bit ASCII with records delimited by 0x0a bytes.

Corruption tends to be missing data, and so the reader has to jump ahead to find the next synchronization byte, aka 0x0a. This also leads to log parsers producing complete trash as they try to parse a line that has a new timestamp right in the middle of it.

Or there's a 4K block containing some text and then padded to the end with 0x00 bytes. And then the log continues adding more after reboot. Again, that's fixed by ignoring data until the next non-zero byte and/or 0x0a byte. This problem makes it really obvious that text logs are binary files.

See the format definition at https://www.freedesktop.org/wiki/Software/systemd/journal-fi...

And here, this isn't perfect but if you had to hack out the text with no journalctl available you could try this:

    grep -a -z 'SYSLOG_TIMESTAMP=\|MESSAGE=' /var/log/journal/69d27b356a94476da859461d3a3bc6fd/system@4fd7dfdde574402786d1a1ab2575f8fb-0000000001fc01f1-0005c59a802abcff.journal | sed -e 's/SYSLOG_TIMESTAMP=\|MESSAGE=/\n&/g'


> The journalctl binary format seems to handle corruption pretty well. That was a design criterion.

Thanks for pointing that out. I guess that's why they came up with their own format instead of just using sqlite, or something else that is already a standard.

> Everyone forgets or tries to ignore that text files ARE A BINARY FORMAT

That's a bit pedantic, even for HN standards :-)

But yes, I know all about fragile log parsers and race conditions of multiple processes writing to the same file. I was just thinking about a scenario where you end up having to read raw logs when things go haywire.


> the only computer available did not have journalctl

This is exactly what I don't understand. This is a world where no other computer exists? I have bigger things to worry about (even if you scope the problem down to "no other computer with journalctl installed exists on my network").

> Or if you're writing to network storage and would like to analyze the logs from your haiku box.

I don't have any Haiku boxes, but I do have Windows boxes. At my day job, where I'm a Linux sysadmin for a large finance company, my workstation is Windows, and I don't have admin on it. So even with conventional UNIX logs, a much more common case is - as I mentioned - that I'd want to read them from a computer that doesn't have gzip installed.

But we're fine with gzip logs, because the way I'd actually do this is to get a Linux computer running.

(Also, keep in mind that the filesystem itself is a binary format. If you're really worried about reading logs from Haiku, you wouldn't put /var/log on NFS because that sounds like a terrible idea, you'd log to a FAT filesystem. But nobody actually does that. Everyone's logs are on ext4 or XFS or btrfs or whatever, and nobody says those formats are a bad idea.)


Note that not every journald/journalctl is created equal. It's easy to end up with logs written on one computer that journalctl on another computer can't read, even if the latter is a newer version, depending on which settings each one was compiled with (which is mostly a distro question.)


All right, then that seems like a reasonable objection to the format, not simply "It's a binary format." gzip files created anywhere can be read by any version of gzip.

A standalone (and cross-platform) journalctl file reader seems like a useful thing to have around and not a terribly difficult thing for someone to build.


> This is exactly what I don't understand. This is a world where no other computer exists?

What about a Windows user that has a smart device of some sort that is acting up? They're able to dump the files, but now they have to read them. It's a more realistic scenario than a haiku box, but it's the same idea.

And for the record, I am against gzip logs too. I think if you need to save that many logs you should be exporting them to another system instead of archiving them locally.


"I really don't understand the scenario where binary logging is a problem."

It's not a problem - it's just not UNIX.


I think the argument is really about transferability of skills. I already know how to manipulate compressed files because I have to do that in other places. And once you realize logs are just text files, I can immediately transfer all my skills of dealing with text files to dealing with logs.

But now I have to learn another set of tools (or at least another command to convert it to text files). It's not really a huge issue (I'm not really a systemd hater), but there can be a bit of dread when all the tools move off of standard formats, like simple text files, to custom formats, requiring you to learn idiosyncrasies of lots of different packages.

(And the reverse is true. Knowing how to use journalctl only helps me with systemd, and nowhere else. It's a piece of knowledge helpful in only one area that I cannot transfer anywhere else)


I think this is exactly it. Journalctl doesn't resemble any tools I already know, so I have to look up how to use it each time. Binary logs aren't the problem; we have had those for ages (e.g lastlog). I just wish journalctl were a little more familiar.


> Journalctl doesn't resemble any tools I already know

What about "tail"?


I think you can pump all the journal log files to a text file and then open them in emacs and start writing Python to analyze ;)


It's also just a really good tool for reading logs. Sure, it's always possible to cobble something together that merges a bunch of text logs and orders entries by date, but with journalctl that's just what it does and it's as simple as `journalctl -u spam -u eggs -u spam`.


journalctl makes it _possible_ to read the log, TFTFY.


> I know that it can be achieved without binary logs, but does any popular distribution implement compression out of the box?

Are you saying that since distros don't optimally configure a program out of the box, we should scrap and replace the whole program instead of just fixing its default configuration?


> we should scrap and replace the whole program instead of just fixing its default configuration?

Loads of distributions made sure that upstream works out of the box with systemd. That way the work is shared across distributions. Not sure why you make such a strange suggestion. Loads of work has been saved thanks to systemd. So much more is shared across distributions now that it's kind of crazy to look back.


I was talking about rsyslog there, not systemd.


> I was talking about rsyslog there, not systemd.

It's not at all clear what you're talking about, honestly. First you talk about distributions, then a vague "we", then some weird "scrap and replace" nonsense. No idea who you mean with a sentence such as "we scrap and replace rsyslog". It really makes no sense, if there's a developer that works on rsyslog the work will continue. Distributions might change a default, but it didn't seem to be about distributions.

To make things a bit clearer, I talked about all the benefits of having systemd across distributions, plus how the work is shared. This is because you seemed to not understand the benefits that systemd has for distributions, in response to the "since distros don't optimally configure a program out of the box". Again, systemd often allowed things to be done upstream. Configuring things per distribution is a waste of time; previously distributions often wrote their own init scripts.

One person specifically mentions that it's nice when stuff is good by default. Complaining that it's some fault of the distribution completely misses the point: systemd allows stuff to be shared across distributions! No need to complain that either the default wasn't ok, the init script had a bug, etc. It's shared by default. Similarly, once a mistake is found, the bugfix can be made in one place.

That's the nice bit. Your weird comment isn't clear at all, plus completely misses what's nice about systemd for distributions.


Stagnation of the ecosystem should not be an argument here as systemd is trying to replace the sysvinit that has its roots in Unix System V released almost 40 years ago.


The choice isn't systemd or roll-your-own-init-system. There are alternatives like runit, openrc etc.

You can do the same painless setup (arguably even easier) with runit as the base template requirement is literally just

    #!/bin/sh
    exec the_executable
Granted, logs in runit are optional, but there are problems with default logging too, e.g. Docker will keep filling logs until the disk is full unless you explicitly tell it not to in either its configuration or your own custom log rotation rules, neither of which is the default.
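
For the Docker case specifically, the non-default knob being referred to is roughly this, in /etc/docker/daemon.json (values arbitrary):

    {
      "log-driver": "json-file",
      "log-opts": { "max-size": "10m", "max-file": "3" }
    }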


I'm glad to see more people sway to systemd. Systemd is 10 years in the making and it was met with skepticism right from the first day. Some of that is now slowly changing with systemd being accepted in more and more distributions. Service and runlevel management wasn't any better in the sysv era, nor were any of the multitude of custom start and boot scripts.

I remember when it took multiple days of testing the configuration on different distributions, editions and versions just to get a single daemon process to start without failure. Then do the whole thing over again because the Debian-based distros did not use the RedHat startup tools, had different (or no) runlevel configurations, different service binders, a different NTP daemon, a different terminal daemon, etc. And of course the French customers wanted the service to work with Mandriva, and the German customers wanted SUSE support with extended security options like dropping permissions after boot.

Just like the article mentions you can define a portable service model with security and failure handling built in. There wasn't even anything that came close back in the day. Systemd may not have been built with the Unix philosophy in mind, but at some point that becomes a secondary concern.

Systemd unifies all system resources in units which work anywhere; it's expandable and extensible, user friendly, allows for remote monitoring, etc.


For people who had well-working, low-maintenance environments, systemd came in and changed everything - breaking things, requiring changes to get things working again.

It's not just breaking init.d scripts; it's ntp, dns, syslog. Systems throughout the OS moved to things that were no longer short commands with muscle memory; there were now ridiculously convoluted commands like systemd-resolve --status instead of 30 years of typing cat /etc/resolv.conf

Even when you remember and type that in, you don’t get a simple list of nameserver and host, you get 100 lines of text you have to spend effort parsing to work out what’s going on.

When it’s less mental effort to run tcpdump port 53 to see where your DNS is going, there’s a problem.

For decades it was /etc/init.d/myservice restart

Now is it systemctl restart myservice or systemctl myservice restart? I have no idea as I’m not at a computer.

Or when the restart fails it doesn't tell you why; it gives you two locations to look in for log files about why it might have broken. Init.d scripts didn't do that. Even if there was something really wrong that the log files don't reveal, running an init.d script with bash -x allowed easy debugging.

Systemd came in and changed working processes and systems while giving, from an operator's point of view, very little benefit to people who already had working processes and systems.


I doubt answering rants is useful, but I'll try to give factual counter-arguments.

> there were now ridiculous convoluted commands like systemd-resolve --status instead of 30 years of typing cat /etc/resolv.conf

systemd-resolved is not enabled by default in Debian and many distributions, and it is not needed in any way by systemd. If you don't like it, don't use it!

Your rant does not sound very serious. Did you really have "ntp" or "syslog" in your muscle memory? That's strange, because most syslog daemons did not have a `syslog` command.

Anyway, systemd-resolved was created because it has uses. And for systems that used a dns cache (dnsmasq, etc), rejoice, because the config is now simpler than it was.

> For decades it was /etc/init.d/myserice restart

> Now is it systemctl restart myservice or systemctl myservice restart? I have no idea as I’m not at a computer.

Before systemd, at least on Debian, for a few years the recommended way was NOT calling `/etc/init.d/something`, but instead `service apache restart`. Since sysv was unsuitable for many uses, several alternatives emerged, like "runit", or "upstart" for Ubuntu. So, before systemd, the recommended way changed with the distribution.

Thanks to systemd, most linux installs now use `systemctl restart service1 service2`. Note that you can now act on multiple services at the same time. You can use this feature as a mnemonic.

> Or the restart fails it doesn’t tell you why, ... Init.d scripts didn’t do that.

In many cases, init.d scripts told you nothing when they failed. Each service has its own procedure. Nowadays you can always see what happened with the command systemctl prints on failure.

And `systemctl cat s1` displays starting instructions that are rarely longer than a dozen lines. I remember init.d scripts that were hundreds of lines long, and awfully hard to understand.


I tend to agree with most of the points you're making but I do want to point out that "If you don't like it, don't use it" isn't helpful advice for system administrators who weren't given a choice in the matter. It's fine if you're designing a system from the ground up, but most of the time, most of us have to work with what we're given.

I think most of the resentment against systemd is that it felt like it was forced down our throats. Systemd may be an improvement over the mess of shell scripts we had before but it isn't perfect. It's one thing to choose to give up decades of experience so one can voluntarily switch to a new system. It feels very different when the choice isn't one's own.


If you can't opt out of systemd-resolved due to job policies, that's really not systemd's fault and there's nothing they can do to solve that situation. Why complain about it in that context?


What does this have to do with job policies? Systemd was forced down our throats by Red Hat adding hard dependencies on it to other software under their control, e.g., GNOME. Other distros then adopted systemd under duress, since many of the packages that Red Hat made depend on it are important to the Linux ecosystem, and the other distros didn't have the resources to fork them all.


> Systemd was forced down our throats by Red Hat […]

That's a very emotional take on the whole thing. As I saw it, systemd happened and the Ubuntu developers eventually concluded "well, that's better than upstart, let's use that." Plenty of other distros made the same rational decisions. Meanwhile, the GNOME developers thought "great, people are converging around a modern init system, we can actually integrate with it now," and so they did.

Also, GNOME is not under Red Hat's control. They contribute a lot, but the leadership rarely has a majority of Red Hat employees. While a large number of contributors work there (of course they would - Red Hat is big!), the majority are - again - from elsewhere. I can think of plenty of recent features that people assume are Red Hat driven and I can assure you they definitely are not.

What your take is doing is discounting a very large number of peoples' wisdom, time, and effort, by claiming their contributions are made as helpless victims of some conspiracy or as evil supporters of it. Both of these ideas are harmful.


> Meanwhile, the GNOME developers thought "great, people are converging around a modern init system, we can actually integrate with it now," and so they did.

> plenty of recent features that people assume are Red Hat driven and I can assure you they definitely are not.

My claim is specifically that the people who added the hard dependency on systemd to GNOME were Red Hat employees. I'm not talking at all on who wrote or merged any other code in it.


Your claims went further than that and it feels very disingenuous for you to try and walk them back now.


When was I ever talking about changes to GNOME other than the systemd dependency? What am I walking back?


> Systemd was forced down our throats by Red Hat adding hard dependencies on it to other software under their control, e.g., GNOME. Other distros then adopted systemd under duress

What utter drivel. I've participated in the discussions around systemd in various distributions. There was a huge amount of discussion, then one by one distributions switched. Some quickly, some took various years. Again, some distributions took various years to switch.

That you can only say things such as "forced down our throats" and "duress" says enough. Not capable to actually hold a discussion, let's be emotional and without any actual facts.


> That you can only say things such as "forced down our throats" and "duress" says enough. Not capable to actually hold a discussion, let's be emotional and without any actual facts.

Does this sound any better? Red Hat used their influence over GNOME (and other programs) to add a hard dependency on systemd to it. This forced other distros to either switch to systemd or drop support for GNOME. I suspect that had Red Hat employees not added hard dependencies on systemd to any other software, that no distributions other than Fedora and RHEL and its clones would require it.


Nope, it's still entirely incorrect. Again, the release team misjudged things. I was part of the GNOME release team at that time. We were actually warned about it, then misjudged it ("it'll be fine").

Further, it wasn't even a hard dependency. You're really not understanding components and APIs.

> This forced other distros to either switch to systemd or drop support for GNOME.

No, again entirely incorrect. GNOME runs without systemd. A few distributions worked on ensuring GNOME runs without it. It took a while to make that happen, so for a bit some distributions needed to keep some components back. But still: you're talking about systemd while it was an interaction of a few components. Systemd consists of loads of bits.

GNOME _runs_ on distributions without systemd! It took work to make that happen, we coordinated to ensure the problems would be solved.

> I suspect that had Red Hat employees not added hard dependencies on systemd to any other software, that no distributions other than Fedora and RHEL and its clones would require it.

Again, you're so incorrect it's not funny. Arch was really quick to switch to systemd. I help out with Mageia, they really wanted to switch as well, but it took (volunteer) time to make it happen. Opensuse took a while, but still, they would've switched.

The only unique ones were Ubuntu (political crap) and Debian (partly due to political influence by Ubuntu).

Systemd was selected on merit by loads of distributions, not this conspiracy thing you're pretending it to be.


Not GP.

The fact that it took _work_ to get GNOME to run without systemd is a bad thing in my book.

Also, to claim that systemd was selected on merit without anything backing up the claim of "merit" is disingenuous; plenty of worse solutions end up winning all the time.


It took work because systemd (more precisely logind) solved real problems and so the work had to be done again for the sake of non-systemd distros.


Can you tell me what problems logind solved? Because on my own machine, I don't have it, and I don't need it. In fact, it's a struggle to make sure it's not pulled in as a dependency of anything.


The point is that the replacements being talked about, systemd-resolved for dns, systemd-timesyncd for ntp, and systemd-networkd for whatever else you would prefer to use for network config, are not a mandatory part of systemd. You can use systemd without using these other components and it will work perfectly well with whatever other services you want to use for dns, ntp, and ip networking. These other services are not dependencies of GNOME, either.

People are conflating systemd itself with all the optional services it comes bundled with. I don't even believe resolved and networkd are enabled by default, at least not by the vendor. Whether a distro enables it depends on the distro.
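
A quick way to see (and change) which of those optional pieces are actually in play on a given box:

    systemctl status systemd-resolved systemd-timesyncd systemd-networkd
    # e.g. keep using dnsmasq/chrony/ifupdown instead:
    sudo systemctl disable --now systemd-resolved systemd-timesyncd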


You have to be an expert on systemd to figure out how not to use systemd.


This will be a comment with two parts. The first will be counterarguments, the second a generalized response. Please bear with me.

Part One:

---------

> systemd-resolved is not enabled by default in Debian and many distributions, and it is not needed in any way by systemd. If you don't like it, don't use it!

It's not possible to ask systemd which of its parts are enabled and to what extent. This always adds a discovery phase before starting to make changes on a system. If you don't do this discovery, you're probably in for a wrestling match with systemd. If you do this discovery, it costs you time. Systemd SHALL provide a way to see how much of it is enabled and to what extent.

Network Manager got this right. If there's a distribution-native configuration file for an interface, Network Manager ducks out and leaves the system to its own devices. If systemd finds an equivalent service both in its own ecosystem and from another package, it either overrides it or collides head-on with it, so either the systemd equivalent works, or nothing works as expected. Neat(!).

> Before systemd, at least on Debian, for a few years the recommended way was NOT calling `/etc/init.d/something`, but instead `service apache restart`.

That'd be the RedHat family of distributions. Debian doesn't have the service command out of the box. Having used both for over a decade: RH uses service, Debian uses /etc/init.d.

> Thanks to systemd, most linux installs now use `systemctl restart service1 service2`. Note that you can now act on multiple services at the same time. You can use this feature as a mnemonic

There were a thousand ways to do that before systemd; systemd added yet another one. It's not bad, but it was not novel in any way.

> In many cases, init.d scripts told you nothing when they failed. Each service has its own procedure. Nowdays you can always see what happened with the command systemctl prints on failure.

systemd reports the failure in the lines of "service failed to start. go look to the logs. I also probably intercepted them, so journalctl may work".

Any good init script reports the error as "I cannot find my run file and/or process, please go get the logs. Something is probably borked". It's more/less the same thing. Better reporting in systemd requires systemd targeted service files, which are not mandatory, or most practical all the time.

Part 2:

-------

Whenever something comes up about systemd, this summarized conversation comes to life:

    U1: We were happy before systemd. It broke too many things, it also tried to hijack everything. Now everything is different.
    U2: *defend systemd in various and countless ways*
    U3/U1: You're wrong.
    *A flame war ensues for some time*
This is neither productive nor beneficial to anyone. I've been using this thing called Linux for more than 15 years. It's nearing 20. I've used init.d, mudur, upstart, systemd, etc. They all have advantages and disadvantages.

However, none of these systems has the fierce army of defenders that systemd has. To be brutally honest, systemd has a lot of advantages, makes many things more practical, and is faster in many ways.

OTOH, systemd is not radically fast. It's faster, but not blindingly so. It's practical, but not always. Its hijacking of services makes some stuff very backwards. Its replacement of NTP and the DNS resolver makes things hard to manage. Default binary logs make some admins and external systems go blind.

I'm not against progress or systemd in general, but please be a little more accommodating of neutral and negative comments about systemd. Not all the commenters are boneheaded cavemen who love their flint stones and reject lighters with all their might!


> It's not possible to ask systemd which of its parts are enabled and to what extent. This always adds a discovery phase before starting to make changes on a system. If you don't do this discovery, you're probably in for a wrestling match with systemd. If you do this discovery, it costs you time. Systemd SHALL provide a way to see how much of it is enabled and to what extent.

It absolutely is possible to do this. It's the same as finding out if any service is enabled or not. Every view of a service includes its state and whether or not it is enabled. So if you want to check an individual service, look at systemctl status $service, and if you want to look at all services: systemctl list-unit-files

And if you want to look at only the enabled ones, you do the Unix thing and "| grep enabled"

I don't see why systemd needs to hardcode a command for this.


Just want to add, when you systemctl list-unit-files | grep enabled, you do need to know that the first field is the actual state and the second field is the vendor default state.

But this comment is totally right. If you're a Linux sysadmin, how hard is it seriously to type into a search engine "systemd list enabled services." Exactly this command very helpfully comes up in DuckDuckGo's knowledge graph bubble so you don't even need to follow the link to askubuntu. I'm sure Google search does the same.


>There was a thousand ways to do that before systemd, systemd added yet another way. It's not bad, but it was not novel in any way.

The difference here is that systemd actually went to the individual distro maintainers and listened to their concerns, made the necessary changes, and convinced them all to adopt it. That's damn hard to do in the Linux world, I commend anyone who can do it successfully.

Regarding your part 2: For whatever reason, there is an absurd amount of misinformation posted whenever systemd comes up. If you posted something that was wrong about some other service manager, I would correct that too. You deserve to know the right answer to things, for your sake, not for the sake of systemd (or any other program). Please don't dismiss attempts to correct misinformation as being unproductive, it's the flame war which is the unproductive part.


"Systems throughout the OS fail to things that were no longer short commands with muscle memeory, there were now ridiculous convoluted commands like systemd-resolve --status instead of 30 years of typing cat /etc/resolv.conf"

As a sysadmin, for me things like that were very, very minor issues.

The main problem was that systemd had awful documentation, written by people who'd clearly never had to use systemd in anger and just assumed that everything would work swimmingly (and please don't say read the man pages.. those are barely adequate).

When things broke there were no simple and obvious ways to fix it, you had to dive in to its labyrinthine spaghetti architecture and hope and prayed you somehow got the Rube Goldberg machine to work.

Hopefully that's improved by now, and there's some canonical documentation that really shows you how it all fits together and how to fix it when it falls apart.


The documentation (man pages) is still pretty bad. It's always a guessing game for me where to look up the config options: systemd.exec, or systemd.service, or systemd.limits, etc.


I think you're missing a few things here:

* Services started with sysvinit would put logs where they want, which is fine if you know where they are but per-service you might be guessing. Having everything always in the same place is handy.

* sysvinit wasn't giving you any of these security benefits.

* If your system really was working fine before, why did you need to upgrade it to a newer distribution with systemd?


> * If your system really was working fine before, why did you need to upgrade it to a newer distribution with systemd?

Because the only two alternatives are 1) running mainstream distros from before 2014 or 2) running obscure distros that still don't use systemd.


yes, people forget that we are a village.

Contrary to “popular misconception”, Linux is not a settler's freehold where every holdout makes their own rules.

Admins have to work with a diversity of systems. They can choose how to setup new ones, but they have to work with a range of them. What everyone does has an effect on everyone else. If some distro introduces a new way of doing things, it has some chance of ending up affecting a lot of people - they might have to adapt to also support that way, or handle it in some way.

In that way we are a village, things are connected, and that’s why there is some degree of "social control" - looking across the neighbor’s fence and meddling with their way of solving the problem - it can in some sense become ours, if we are unlucky.

Fortunately, we can relentlessly copy good solutions from others in the village too.


So the admins should really not rant about systemd but complain about the distributions who switched to systemd or about their employers who force them to use such distributions.


Or just buckle down and view a new chance to learn something as an enjoyable opportunity. Dammit, people, we aren't paid for being experts but for the ability to become expert-like in new areas.


More than one thing can be bad.


That's the marketplace of ideas in action. Volunteer-run distributions like Debian and Arch have switched over. If the whole world were RHEL, you might have a point, but it's not.

You can contribute to your own distro, and the "veteran UNIX admins" made Devuan. If it still counts as obscure and you don't want it to be, you - yes, you - can do something about it.

The free/open-source community is a do-ocracy. The things that get worked on are the things the people doing the work want to work on. If you have a well-paid sysadmin job where you provide your employer value by using work the community gives you for free - in particular their ongoing work, which they keep providing for free because a mainstream distro from 2014 doesn't suit your needs - then you can either be grateful for what you get for free or you can contribute back.

(Which doesn't necessarily have to be contributing your own work. I'm sure if you get your employer to donate one FTE's salary to Devuan, you can change its obscurity pretty quickly!)


Or you could've actually improved init.d, added all the features to it, and made it easy to use and create scripts for.


I suspect many sysadmins, certainly the ones who complain about syslog, were quite happy with init.d

Traditionally developers would write code and packagers would package them into a distro specific rpm/deb -- including the init scripts (which may be pushed back upstream). Developers wanted to bypass this slow process, and there were far more developers than people willing to package the software.

Personally I've never had a problem writing an init.d script.


All I ever could do was add a line to rc.local that started a supervisor process to run my stuff. But now I have many service units as part of user-data.sh or cloud-init, and they work, restart on failure, and are visible to other admins I haven't talked to, etc. It was surprisingly easy.


> For decades it was /etc/init.d/myservice restart

You should always use "service myservice start" instead of "/etc/init.d/myservice start". Running "/etc/init.d/myservice start" directly means the service ends up accidentally inheriting parts of your shell's state (environment variables, limits, current directory), which is a different environment from when the service is started at boot. The "service" command carefully cleans up most of the environment before running the script, making it much more similar to what will happen when it starts automatically on next boot.

And if you were used to "service myservice start", it now automatically forwards to "systemctl start myservice" when the /etc/init.d/myservice script does not exist, so it keeps working nearly the same after the transition to systemd.
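To make the state-leak problem concrete (myservice is a made-up name here):

    # stuff set in your interactive shell...
    export http_proxy=http://proxy.example:3128
    ulimit -n 100000
    cd /tmp

    # ...silently leaks into the daemon if you run the script directly:
    /etc/init.d/myservice start

    # "service" scrubs most of the environment first, so the daemon starts
    # in something much closer to its boot-time environment:
    service myservice start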


> Or the restart fails it doesn’t tell you why, it gives you two locations to look for log files about why it might have broken. Init.d scripts didn’t do that.

Some things are a matter of preference, but this bit is just wrong. Init.d was hilariously worse. Some services had their own configuration locations, some had those exposed via /etc/defaults, some used syslog, some redirected stdout/stderr, some redirected one and discarded the other.

You're right that there are two places now - logs are either in the journal or in app-specific log location. And stdout/err go to the journal. Those 2 places mean fewer places to look through than we had before.


> For decades it was /etc/init.d/myservice restart

Thing is, it was never this command, it was always https://linux.die.net/man/8/service , but your command also worked 99% of the time, until it didn't and the restarted service misbehaved. Systemd streamlined the whole experience.


I only used Linux mostly during university, while at CERN and only after most UNIXes gave up fighting against it.

From what I remember, everyone else in the UNIX world was already moving to systemd-like systems before it got adopted in the Linux world, and on Red Hat/SuSE distributions daemon scripts existed for quite a long time as well, so no, it wasn't /etc/init.d/myservice restart for decades.


> Now is it systemctl restart myservice or systemctl myservice restart

Actually long before that it was service restart myservice, which still works.


Unfortunately, it was `service myservice restart`, which is the wrong order if you want to be able to do multiple things at once (or even just align with any other `cmd subcmd args` program).


  service myserver restart

  /etc/init.d/myserver restart
Seems the traditional order to me

  $ service myserver status
  $ service myserver reload
  $ service myserver restart
makes perfect sense when tackling a single service


The traditional order is more like

    # /etc/init.d/myserver reload
    unknown command reload
    # /etc/init.d/myserver restart
    unknown command restart
    # /etc/init.d/myserver stop
    myserver: warning: frobulator did not fully unfurl
    # /etc/init.d/myserver start
    myserver: still running on pid 1234
    # kill -TERM 1234
    kill: (1234) - No such process
    # grep "running on pid" /etc/init.d/myserver
            echo "$0: still running on pid `cat /var/run/myserver.pid`"
    # rm -f /var/run/myserver.pid
    # /etc/init.d/myserver start
What you're calling "traditional" is the minor, purely UI-facade-level consistency we got after 10 years trying to clean that mess up. But actually we eventually gave up, and went on to invent something better.


I sometimes feel a bit weird when I see complaints like this. Because I have never experienced anything broken by systemd.

For logging, can't you write a systemd service in bash with the -x flag?


Recently a systemd update that came with a flatcar linux update added/enabled a systemd-resolved stub that hogged port 53, preventing the actual DNS server on the machine from starting up. Does that count as broken by systemd?
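For anyone hitting the same thing, the usual workaround (assuming you still want resolved around at all) is to switch off just the stub listener, roughly:

    # /etc/systemd/resolved.conf
    [Resolve]
    DNSStubListener=no

    # apply it, freeing port 53 for your own DNS server:
    systemctl restart systemd-resolved

    # or drop resolved entirely:
    systemctl disable --now systemd-resolved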


No. Sounds like broken distribution.


Oh, you expect the error message when you restart? Oh, sweet summer child. You won't get that error message until someone connects to the unix socket and the daemon lazily gets started, and crashes because you had a typo in your config file. You didn't know that, because you were under the understanding that the "restart" succeeded, since it said it did and returned EXIT_SUCCESS.


I used Linux about a decade before systemd came along, and I welcome it. I think it made almost everything better, and not by a small margin.


If you are using UNIX-like environments, you are forced to remember arcane commands - after almost 25 years of using Linux, I still have to open the man page for something as ubiquitous as grep - because there are so many options and you are always finding new use cases.

The best solution is to maintain infinite bash history and stop trying to remember arcane stuff.

Don't even get me started on tools like jq - so convoluted that I use python instead


It took like six attempts but I finally like jq. Very handy for one-liners or quick bash scripts that read data from AWS cli output. But yeah, in general I prefer to remember the search terms for working examples. I Google "Awk add total" at least once a month.


> Now is it systemctl restart myservice or systemctl myservice restart? I have no idea as I’m not at a computer.

I felt so alone in the world until this moment


Well, by the time the systemd gets as old as sysvinit currently is, everybody will have the correct one in their muscle memory.


I agree with the GP in that I can never remember the order of args for systemctl either. But I also accept that systemd is here to stay (and I like it, especially for writing services, and across distros!), and I'll get used to it eventually. For example, I managed to get over `ifconfig -> ip addr` fairly quickly, even if it was really annoying at the start!


You do know that resolving something involves other things than a simple udp query to port 53?


Here's a decent overview if anyone is interested in details.

https://zwischenzugs.com/2018/06/08/anatomy-of-a-linux-dns-l...


The key Unix skill isn’t typing cat /etc/resolv.conf but strace -f -ff -o /tmp/1 -p 33333 to figure out how your DNS resolution is occurring. Maybe it is the traditional way, maybe some code is configured to use a network placed resolver. You can find out.


Tab tab...


I think it's also because those people that didn't want systemd have just moved on. I moved my servers to alpine and my desktop to FreeBSD. It's just not a thing in my thoughts anymore. I wouldn't write about it. So it seems the Linux community is more aligned now.

However alpine is working on a similar thing based on s6, but with modularity and light weight as design goals. This sounds great to me. I'm not against the idea of a service manager, but I think systemd is overreaching.


Indeed, they've either moved on or grudgingly accept it, even if they don't like it. Same thing happened with solaris, aix, hpux people who had to learn to accept linux as being the way of the world.

I'm sure many people - especially developers who use OSX all the time - love systemd. That's fine, but plenty of people won't ever love systemd; like the solaris people with linux, they'll just learn to accept it.

I'm also sure not everyone loves it. Some have moved on, some haven't but now tolerate it; they've spent the time to cope with it, maybe it's costing them more time every day than pre-systemd, but it's not a big enough problem to move on. That's just life. I'm sure some people didn't like it when program manager was replaced by the start button in NT 4 either.

It seems that systemd fanboys just can't acknowledge some people don't like their new world order, which is rather sad in itself.


If anyone's interested here are the details: https://skarnet.com/projects/service-manager.html


There's a lot of poorly-understood incidental complexity in the systemd codebase, and this can bite users even when doing basic service and runlevel management. The systemd approach is to try and make it 100% declarative based on simple .ini files, but the semantics of this seemingly "declarative" configuration was never properly specified. Even many systemd fans seem to be quite aware of this, and there seems to be a common understanding that some ground-up reimplementation of these ideas based on a clearer underlying "philosophy" will be needed at some point. Systemd has been a successful experiment in many ways, but relying on throwaway experimental code for one's basic production needs is not a good idea.


> [...] but the semantics of this seemingly "declarative" configuration was never properly specified.

What's missing from docs like [0] that makes you say that?

[0] https://www.freedesktop.org/software/systemd/man/systemd.uni...


I know so very little of systemd as it doesn't touch anything I do.

I can pattern match that https://blog.darknedgy.net/technology/2015/10/11/0/ - "Structural and semantic deficiencies in the systemd architecture for real-world service management, a technical treatise" from six years ago (Discussion at https://news.ycombinator.com/item?id=10370348 ) might be relevant.

Are those points reasonable? Dunno. If so, have they been addressed? No clue.


https://blog.darknedgy.net/technology/2020/05/02/0/ is a 2020 post from the same blog, pointing out that these issues basically remain unaddressed. But again, this is not just some "anti-systemd" talking point; the pro-systemd side also acknowledges this! They just think sysv-init was even worse.


I honestly would like to see a better service manager than systemd, but just my opinion from following development of these things: a huge reason why it can't happen comes from underlying deficiencies in the kernel, and with Unix. The real core issues can't be addressed without a large amount of changes there, which are outside the control of a service manager.


I'd love to see a later development of a proper system manager that learns from systemd's faults and allows for compatible interfaces for transition.

Much like Pipewire exposes PulseAudio interfaces but is implemented differently.


To me it's the other way around -- pipewire is the sound server equivalent of systemd. For better or for worse.


What would you prefer then? Jack?


I just use systemd and pipewire, it's not a problem for me personally. They're the worst options... except for all the others :)


> semantics of this seemingly "declarative" configuration was never properly specified

Yeah, and it is not only underspecified, but too weak to be useful, which just pushes all the init.d logic somewhere else. What you've accomplished is moving it somewhere nonstandard, great job.

Also, the command line ergonomics suck. Systemd is deeply unfriendly to humans.

It was a power play in support of a long-term RH strategy, supported by a lot of bad faith arguments. Fascinating to watch as sociology, less nice as a forced-user.


It's interesting to see these allegations ("power play in support of a long-term RH strategy") without any proof or for that matter any explanation of what the strategy would be.


I believe that Red Hat employees were some of the main contributors to Gnome, which quickly made systemd a hard requirement, forcing distros which used it as the default desktop to make systemd the default init system.


> I believe that Red Hat employees were some of the main contributors to Gnome, which quickly made systemd a hard requirement, forcing distros which used it as the default desktop to make systemd the default init system.

Yet another person writing fantasy, not facts. Systemd wasn't "quickly" made a hard dependency. It was a soft dependency for ages, then eventually the release team made a mistake around systemd-logind. I was part of the GNOME release team at that time. Still, GNOME runs without systemd. Meanwhile we had loads of discussions with loads of distributions.

Yet the things you write: quickly done, apparently lots of people were secretly paid by Red Hat, forcing distributions? All devoid of any facts, just emotions that dismiss the amount of work me and loads of volunteers have done.


Perhaps my framing of the situation was a little simplistic. To provide more facts, let me point out that the question of which init system should be default in Debian was first asked[0] to the tech ctte in October of 2013, however it was already clear the previous month that the GNOME packagers were trying to make systemd the required init system[1] in Debian (which had GNOME as its default graphical environment already):

"Debian GNOME packagers are planning the same AFAIK; they rather just rely on systemd (as init system, not just some dependencies). In the end, the number of distributions not having systemd decreases."

That was written by Olav Vitters, of the GNOME Release Team, who later admitted[2] "Personally I’m totally biased and think the only realistic choice is systemd."

You could argue that any blame for this dependency therefore lies with the Debian packagers, rather than Red Hat employees, but actually, if you look into the history, there was already a push for making GNOME dependent on systemd three years earlier by none other than Lennart Poettering.[3]

Even at that time, Josselin Mouette, founder of the Debian GNOME team, obsequiously replied "I don’t have anything against requiring systemd, since it is definitely the best init system out there currently" and later acknowledged the influence Red Hat had over the direction of GNOME, saying "Red Hat being the company spending the most on GNOME, it is obvious that their employees work on making things work for their distribution" and "on the whole we don’t intend to diverge from the upstream design, on which a lot of good work has been done."[4]

So there was definitely pressure on Debian from GNOME to make systemd the default init, and pressure from Red Hat to make GNOME depend on systemd. Whether or not these decisions were all coordinated in advance in a smoke-filled room is beside the point, given that things worked out exactly the way such a conspiracy would have wanted.

[0] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=727708#5

[1] https://blogs.gnome.org/ovitters/2013/09/25/gnome-and-logind...

[2] https://blogs.gnome.org/ovitters/2014/02/03/my-thoughts-on-t...

[3] https://mail.gnome.org/archives/desktop-devel-list/2011-May/...

[4] https://raphaelhertzog.com/2012/01/27/people-behind-debian-j...


How about the two democratic votes in favor in Debian, made in a voting system that is more democratic than real-world voting? Or that Arch independently made the switch?


You're missing the initial Debian vote, which was made in the Technical Committee, and which came down to a tie.

If you want to claim that process was democratic, you have to believe that the members of the committee perfectly represent the opinions of all Debian developers (to say nothing of Debian users), and therefore have to excuse the fact that the vote had to be settled by giving one of those people, Bdale Garbee (HP's CTO of Linux), effectively two votes.

It was only three days after the tech committee's decision that Mark Shuttleworth announced that Ubuntu would fall in line by abandoning Upstart[0], and not until many months later that the first General Resolution was put forward to try overturning the committee.[1] Of course by that time everyone was tired of the arguments and it would have soured relations with Canonical to force Ubuntu back away from systemd, so the GR was doomed from the start.

[0] https://www.admin-magazine.com/News/Ubuntu-Abandons-Upstart

[1] https://www.debian.org/vote/2014/vote_003


It's not a hard requirement, GNOME still runs on the BSDs.


Thankfully, sanity eventually did prevail in GNOME (just related to this matter, not in general), but there was a period of time where systemd absolutely was a hard dependency.


I'm not sure what period you're alluding to, but if that did happen, it seems it happened because things stalled on the BSD side, not because of any changes in GNOME. Can you please elaborate what you mean? I'm interested to get a BSD developer's view of the history, but I've only seen a few vague blog posts on this matter.


> Thankfully, sanity eventually did prevail in GNOME (just related to this matter, not in general)

You've rewritten history to pretend you're right. Then you follow up with more drivel? Sorry, aside from trolling, what is your point?


> You've rewritten history

This seems pretty clear that GNOME used to depend on systemd. How am I rewriting anything? https://wiki.gentoo.org/wiki/Hard_dependencies_on_systemd#Pa...


GNOME relies on some API that has been implemented by elogind. Nothing in this bit changed. Gentoo (I think with the help of some others) ensured it was implemented (forked) so the API could still be used without systemd.

I thought you were the same as the other person. You're continuing the same argument, so it really doesn't matter if you're the same or not: "quickly made systemd a hard requirement" is bullshit. Further, it could easily be worked around.

If you notice e.g. the history of Ubuntu it happens regularly that you hold back a component if there's a problem integrating it. This happens across multiple distributions. It isn't something unique, nor special.

Neither GNOME nor Red Hat "quickly made systemd a hard requirement". Gentoo was great to ensure that the API that was depended upon was implemented separately. Aside from that, a distribution could also hold back the logind change that caused this change. Skipping over all of these details is great to make this into some big conspiracy story. However, it is rewriting history. It wasn't something unique. Yeah, the GNOME release team misjudged one thing. But it actually took a few years before it became an issue. Not this drivel with "OMG they added a hard dependency". It wasn't like that.


That's not a full picture, apparently that was due to some specific problem in Gentoo: https://blogs.gnome.org/ovitters/2013/09/25/gnome-and-logind...

AFAIK the GNOME Wayland session still depends on logind, but that's more because there has been no interest in getting it to work on BSD yet.


I hope you're right and we get a better, simpler declarative format in the future. systemd isn't bad, but I've found when writing service files that some stuff is not as obvious as it should be.


> semantics of this seemingly "declarative" configuration

What do you mean? Why the scare quotes? I'm not aware of anything in .service and related files which isn't declarative.


The question is, was it better before systemd?


No way in hell


I don't think it's that people are warming to systemd. It's more that there are two kinds of people now:

1. People too young to remember stable software.

2. People who have given up, and just accepted that Linux too "just needs a reboot every now and then to kinda fix whatever got broken".

systemd has normalized the instability of shitty system software. And just like how you don't see front page news every day about 1.3M traffic deaths per year because it's not news, you don't see people up in arms about shitty Linux system software.

It's normal now. It didn't use to be.

Yes, ALSA is better than OSS, and then PulseAudio and now pipewire. It can do more. But when did it become acceptable to get shit, just because the shit could do more things?

Pipewire is not bug free (I have a bug that's preventing me from an important use case), but it's sure more reliable than PulseAudio, while still being more capable.

So maybe Pipewire is showing a trend towards coders actually giving a shit?


3. People who realized that the FOSS community they were originally so happy to have found - of people who actually care about writing software that works, rather than user-hostile malware designed around profit margins - effectively doesn't exist anymore, and who refuse to update their software at all because they know that recent versions are unusably broken, to the point of requiring "a reboot every now and then to kinda fix whatever got broken".


I’ve since moved away from systemd for all my Linux boxes, work and home.

We still cannot block systemd from making a network socket connection, so the security model is shot right there, by virtue of systemd running as a root process.

In the old days of systemd, no network sockets were made.

Systemd has become a veritable octopus.

Now, I use openrc and am evaluating S6.


Selinux can block systemd from making network sockets.

What do you mean by "security model" in this case? What model is that?


By "cannot block systemd from making a network socket connection", I think GP meant that your system will break if you block systemd from making network socket connections, not that it's physically impossible to do so.


Yes. If you borked your systemd config file, system-wide failure. Non-bootable, non-useable.

Also, if an intransigent network connection occurs (fuzzing or not), untested errors occur … notably in PID 1, which isn't easy to debug.


But the solution is for it to not try to make the socket, rather than needing another system to correct the bad behaviour of the first system?


There ye go. Make the system process small and efficient, not to mention easily auditable.


Systemd is already in root mode. Just need to do malicious buffer overruns … somewhere, somehow … at root level … someday.

And its codebase is ginormously huge.


Service and runlevel management wasn't any better in the sysv era, nor were any of the multitude of custom start and boot scripts.

They might not have been better or more robust, but they were easier to understand and reason about. You could explain the entire thing to the most junior of sysadmins in a few minutes, tell them to read the boot scripts, and they would basically understand how everything worked.


Really? Have you ever written init scripts for a handful of services for your typical saas application? It's always been a mess, and I'm very happy systemd was copied from / inspired by Apple's launchd.


I'm not saying it was necessarily easier to use, just easier to understand what was going on. Explaining step by step how a Unix systems started up used to be trivial and make sense. I don't hate systemd, I just don't understand it. But that could also be symptom of me being old.


Naa. I'm young and generally positive about systemd, but I'd happily admit that the complexity gap is huge. It's practically impossible to explain systemd to anyone not intimately familiar with linux and systems programming without a bunch of handwaving.


This. I can easily believe that systemd is an improvement in many ways for people who have time to understand it (especially in NixOS, as noted by another commenter). But I'm not happy that so many parts of Linux now have such a steep learning curve.


But system init is a hard, complex problem. You can’t create a simple solution for that, since there is an inherent complexity. I prefer systemd over having a bunch of bash scripts trying to do service restart, logging, dependency management and failing at it. You would still get the same complexity but at a different (worse) level.


> But system init is a hard, complex problem. [...] a bunch of bash scripts trying to do service restart, logging, dependency management and failing at it.

Playing devil's advocate: system init by itself is easy, just have a single script starting each daemon in sequence, like it was done in the distant past (IIRC, "init" started both the getty for each terminal, and ran a single startup script). It's the "service restart, logging, dependency management" part that's complicated. And unfortunately, since nowadays devices are often hot-pluggable, you can't really escape from the "dependency management" part.
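A caricature of that distant-past approach, just to show how little there was to it (the daemon names are only examples):

    #!/bin/sh
    # /etc/rc - the whole "init system": start each daemon in sequence
    /usr/sbin/syslogd
    /usr/sbin/cron
    /usr/sbin/sshd
    # no restarts, no log handling, no dependency tracking - which is the point above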


Dependency management is not only due to hot-pluggability, but also due to inherent dependencies between different services. This is the same problem as with package managers, and I would not necessarily say it is easy.


> But system init is a hard, complex problem.

It's not. Read the shell scripts that openbsd uses to init. Simple, straightforward, easy to understand.


The complexity is buried in the huge work the OpenBSD devs make to keep the kernel and the base system small, elegant and consistent.

I read your comment more as a tribute to the excellent work of the OpenBSD team than a denial of the thesis of the complexity of the init process.


> The complexity is buried in the huge work the OpenBSD devs make to keep the kernel and the base system small, elegant and consistent.

>> You can’t create a simple solution for that, since there is an inherent complexity.

They didn't bury the complexity, they removed it. And I agree, that's hard to do. It'd be nice if the systemd folks put in the same effort to remove the complexity from their system.


As a DevOps engineer, knowing a bunch of Linux and systems programming is a job requirement. In the old days Unix system admins were very familiar with this also.


This mess is easy to solve with a library of functions to call, e.g. /etc/rc.d/init.d/functions, which can be imported and used. Unfortunately, there is no standard API for this.
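On Red Hat-style systems that library does exist - /etc/rc.d/init.d/functions ships helpers like daemon, killproc and status - so a script can be roughly this (mydaemon is a placeholder):

    #!/bin/sh
    . /etc/rc.d/init.d/functions

    case "$1" in
      start)   daemon /usr/sbin/mydaemon ;;
      stop)    killproc mydaemon ;;
      status)  status mydaemon ;;
      restart) $0 stop; $0 start ;;
      *)       echo "Usage: $0 {start|stop|status|restart}"; exit 1 ;;
    esac

But as noted, every distro family had its own flavour of this library, so there was never one API to target.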


Until service restarts, error recovery, and log storage sneak in. In the past I just called pm2/forever (or something like that, I write mostly nodejs) to do the rest. Because rolling those yourself... is really a pain.

And a few months ago I retried that with systemd. It's really just about 10 lines of config. And you are done.

Besides that, it also has a built-in scheduler, with a command that tells you when the task ran, whether it succeeded, and what the output was. Although you could say it is just a cron replacement with a better UI. But why not? I don't really care about the unix philosophy, I just care about what solves the problem for me.
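Roughly the kind of ~10-line unit being described - a sketch, with made-up names and paths:

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=My node app
    After=network.target

    [Service]
    ExecStart=/usr/bin/node /srv/myapp/index.js
    Restart=on-failure
    RestartSec=5
    User=myapp

    [Install]
    WantedBy=multi-user.target

And the "cron replacement" part is just a .timer unit (OnCalendar=daily or similar) paired with a oneshot service; systemctl list-timers shows when it last ran and when it runs next, and journalctl -u <unit> shows the output.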


> Service and runlevel management wasn't any better in the sysv era, nor were any of the multitude of custom start and boot scripts.

Things would have been fine for a lot of people if they had stopped at replacing SysV scripts and general start-up.

At this point, with all the additional functionality continuously being added, I'm waiting for systemd to fulfil Zawinski's Law:

> Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.

* http://www.catb.org/jargon/html/Z/Zawinskis-Law.html

* https://en.wikipedia.org/wiki/Jamie_Zawinski


> Systemd is 10 years in making

Systemd is 10 years in making, and still manages to brick production servers.

The problem is not with SystemD or its coding as such, but the ideology it came with, and bad developers who push it.

The last attempts to make it saner basically reverted it back to sysvinit. So, not much difference now.


> The last attempts to make it saner basically reverted it back to sysvinit

Uhm, how so?


How likely is it that the prolific systemd team and tech decision makers in Linux distributions don’t have a design instinct and came up with this ball of mud full of accidental and unneeded complexity? Have you considered that there may be teams and requirements outside your current sphere of experience?


It certainly does sometimes happen that difficult software projects end up being implemented by people who (at least at the start) don't understand the problems involved, for reasons related to the "winner's curse" [1].

That is, people who underestimate the difficulty of a project are more likely to attempt it, and people who don't understand the area well are more likely to underestimate its difficulty.

[1] https://en.wikipedia.org/wiki/Winner%27s_curse


It happens but it’s not clear what you intend to say with that, so maybe just say it? I don’t think the systemd team could have imagined the success and scope of the project from day 1. Another explanation for their success is that the team was onto something, and by using proper engineering practices (work incrementally on pieces that are individually useful) became successful. Think T S Kuhn’s progressive research program.


I'm saying that I don't think reasoning by considering questions like

« How likely is it that the prolific systemd team and tech decision makers in Linux distributions don’t have a design instinct and came up with this ball of mud full of accidental and unneeded complexity? »

is likely to be fruitful.

If people want to discuss whether systemd is well designed, it would be better to look at the design directly.


Agreed that this line of questioning is not likely to be fruitful. The alternative of discussing the design would have my preference normally, but i am not sure that it works any better for this hyper polarized topic.


Maybe there are, but that doesn't help me.

The systemd way seems to be "one tool of complexity 40 doing 50 things" rather than "50 tools of complexity 1, each doing one thing".

When you only need 10 things, you only need 10 simple tools of complexity 1 each, for a total complexity of 10, rather than one tool of complexity 40.


That’s not how complexity works. Inherent complexity can’t really be outsourced meaningfully.


For SystemD, I see none.

Though I see an unfulfilled urge to give Linux a "serious enterprise grade" twist, and bog everything down in "serious enterprise frameworks of doom".


If you think that systemd is an ‘enterprise framework of doom’ then you must not have worked in Java enterprise software development.


Having suffered under both, I'd totally spend the rest of my life developing enterprise Java if it meant systemd went away forever.


My frustration with systemd is that it creeps out over time into other functionality, and that it's difficult to find the right documentation.

Like when logind was changed to kill background processes when you log out, by making KillUserProcesses=yes the default. Some Linux distros left that as is, others overrode it in /etc/systemd/logind.conf. So, figuring out what was happening, and how to fix it, was confusing. I had no idea it would have been logind doing that.

Similar for changes introduced with systemd-resolved.
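For reference, the override that restores the old behaviour is tiny (whether your distro ships it by default is another matter):

    # /etc/systemd/logind.conf
    [Login]
    KillUserProcesses=no

    # then restart systemd-logind (or reboot); restarting logind can affect
    # active sessions on some versions, so pick your moment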


In particular, "background processes" includes screen/tmux sessions that you started. This change completely broke the largest reason that people use screen or tmux.


That was a time when I was sorely tempted to dump a huge load of manure all over Lennart Poettering.


Ouch, what on earth does "log out" even mean? What a terrible way to run any computer other than maybe a single user laptop.


Not sure if sarcastic, but the systemd default makes sense. Why should random processes linger in the background, with an ill-defined "not answering a signal" hack? There is no distinction between a frozen process and tmux. They can easily register a user service and continue running there.


It wasn't respecting things that have been considered normal for a while. Even calling setsid(), which tmux does, didn't save it from being killed. Same for nohup.

When the systemd people say session, they mean you would have to specifically do some kind of dbus call I'm not familiar with.


> They can easily register a user service and continue running there.

Typically this was an issue on A) workstations, and B) servers, where you might log in as a regular user without superuser access.

And this[1] (might?) now work as advertised in the documentation - but the discussion around the bug doesn't induce great confidence...

https://github.com/systemd/systemd/issues/3388

[1] example no 5 here: https://www.man7.org/linux/man-pages/man1/systemd-run.1.html...


Are you seriously claiming that processes ignoring or handling signals rather than dying is an ill defined hack? That's a well defined, long established, normal way of doing things on UNIX and Linux systems. Also, that really isn't a good way of detecting frozen processes, since in your world, the only answers you'd ever get are "it's frozen" or "it wasn't frozen and now it's dead".

> They can easily register a user service and continue running there.

So systemd came in and broke the 50-year-old way of doing things, and now the programs that got broken should all have to add systemd-specific code to work again?


> Are you seriously claiming that processes ignoring or handling signals rather than dying is an ill defined hack?

Yes. Just because it is 50 years old doesn't suddenly make it a valid approach. There are 4 states here - a process running with the usual semantics for closing, a process running that would like to linger in the background, and the frozen versions of these two. How should a service manager/resource handler - something that is very much in scope for systemd - differentiate between a frozen process that should very much be cleaned up in the no-linger case, and a process that wants to linger? And there are well-established ways to determine whether a process is frozen - pinging it and expecting a specific answer.

Systemd differentiates between no-linger/linger by introducing user services - one only needs to write a trivial alias/wrapper script for tmux and can continue using it to their liking. Also, it has a compile-time flag, as well as a runtime config one - it is up to the distro or the user to revert back to the old way.
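Concretely, the "trivial wrapper" amounts to something like this (a sketch; adjust to taste):

    # let this user's processes survive logout entirely
    loginctl enable-linger "$USER"

    # or launch tmux in its own user scope so logind leaves it alone
    systemd-run --user --scope tmux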


What ways are there to tell if a process is frozen that only work if it registers a systemd user service?


Is setsid() also a hack?


Wow, are people seriously still fussing about this? Systemd made the call to use a more secure default, they should be applauded for it. People who want the insecure way to be the default should take it up with their distro.


Killing processes that explicitly called setsid() seems wrong to me, yes. It works for every other Unix like system.

And it leaves enough info (the process is its own separate session group) for logind to know it should leave it alone.


How is killing processes when the user "logs out" at all good for security? If the processes were malicious or vulnerable, they could have done their damage or been exploited while the user was still logged in.


You don't think malicious code could wait for certain specific times to trigger, rather than running immediately during an active login session where it could be traced back to?


My problem with systemd is that it's just so poorly written.

The ideas are not inherently bad. But they're not thought through, and the implementation is pure garbage.

Like taking the most stable software in the world[1], and going "nah, I'll just replace it with proof-of-concept code, leaving a TODO for error handling. It'll be fine.".

And then the "awesomeness" of placing configuration files whereever the fuck you want. Like /lib. Yes, "lib" sounds like "this is where you place settings file in Unix. At least there's no other central place to put settings".

[1] Yes, slight hyperbole. But I've not had Linux init crash since the mid-90s, pre-systemd


My understanding is that default unit files provided by the systemd packages are in /usr/lib where they can be read-only, whereas users can add/override them by dropping their own unit files into /etc (which is more likely to be read-write).

This provides a clean separation between the default configuration and the user configuration.

Can you explain why this is a bad thing?

A counter-example that comes to mind is when a package upgrade requires manual intervention due to file conflicts in /etc. That's what happens when the packager's default configuration interferes with the user's custom configuration.


> My understanding is that default unit files provided by the systemd packages are in /usr/lib where they can be read-only, whereas users can add/override them by dropping their own unit files into /etc (which is more likely to be read-write).

> This provides a clean separation between the default configuration and the user configuration.

> Can you explain why this is a bad thing?

The normal way to do this is to put the default configuration in /usr/share/.


> Can you explain why this is a bad thing?

It's not. That's my point.

systemd does ask some good questions. E.g. I think the logfile situation needed a major shake-up in unix. Too many log file formats, in text, often completely unparsable (if you're lucky then a regex will work for 99.99% of log lines, but not all), and all unique. And the same mistakes being made over and over again. E.g. "oh, we don't log timezone", or even "meh, it's up to the user to parse the time with timezone correctly, even though things like 'MST' are not even unique".

But did systemd fix that? No. It's just that now I have logs in journalctl AND nginx, AND a bunch of other files. Thanks, standard number 15 that was supposed to unify it all. If you build it, they won't just come. Especially when the implementation is bad.

Believe it or not, the above is actually the pro-systemd argument.

Now, for what you describe: Yes. Exactly. I'm saying systemd DOESN'T do this. I'm saying this is a large successful part of Unix, that systemd ignored.

> A counter-example that comes to mind is when a package upgrade requires manual intervention due to file conflicts in /etc. That's what happens when the packager's default configuration interferes with the user's custom configuration.

Is that actually a problem? Maybe I've been spoiled by Debian, but with the combination of the upgrader showing the diff, and `foo.d` config directories, I've never had this problem in about 25 years of running Debian.

But I believe you if you say that others have this problem, and believe you when you say it's real. But how exactly is systemd fixing it? It doesn't resolve the conflict, if it works like you say where it's a per-file override. That sounds like breaking the system (usually in subtle ways) instead of highlighting the newly arrived inconsistency. That's just sweeping problems under the rug.

It's not enough to say vaguely that "this is different, so probably solves some problem, somehow. And it didn't cause me personally a problem, so fuck everyone else".


Not going to really argue here, but I have to use mlocate every single time I want to find a unit file, because there's no telling where it would be.

Perhaps the blame lies with the distro packager, but still, to a user it ends up being strange.


For what it's worth:

    systemctl | grep ssh

    systemctl cat sshd

I find it useful to find the unit name, and 'cat' displays the location.


You can also do 'systemctl edit <service>' which will open up an editor in which you can see the existing configuration and edit the overrides in /etc/
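In case it helps, what 'systemctl edit myservice' actually produces is a drop-in like this (the unit name and setting are just examples):

    # /etc/systemd/system/myservice.service.d/override.conf
    [Service]
    # anything here overrides the packaged unit under /usr/lib/systemd/system/
    Environment=FOO=bar

    # 'systemctl cat myservice' then shows the packaged unit plus all drop-ins;
    # if you create the file by hand instead, run 'systemctl daemon-reload'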


thanks, I had no idea that existed.


Care to elaborate? Off the bat, your comment comes across as a cynical rant due to its high use of strong words (garbage, fuck) and lack of examples. And even if you have anecdotes, to be convincing, they would have to compare something like bug density to that of the software projects it collectively replaces. As written, your statement is unlikely to convince anyone who isn't already convinced.


Not OP.

In my experience, systemd config is simple because it handles all the complexity. Inside its guts, it is much more complicated than a sysv system - naturally so, because it can do so much more. Those folks love using the latest and greatest kernel functions in all their glory.

All works well - until it doesn't. When something is broken, suddenly you have to understand all the interdependent components to debug it.

Back in the day, these breakages were not so uncommon, because of bugs or simply unimplemented features…


Agree with that. It breaks catastrophically.

But it's also overengineered. Like starting a daemon on first connect is a "neat trick", but it should never have gone beyond that.

Like: oh, I want to restart this daemon, because I want it to re-run its init code (possibly with new settings), but you CAN'T, because some idiot decided that it'll only actually start when someone connects to its unix socket, so running "restart" is a no-op.


> but you CAN'T, because some idiot decided that it'll only actually start when someone connects to its unix socket, so running "restart" is a no-op.

FTA:

> It can then boot your service on the first request, or you can do systemctl start lunchd yourself if you think that would take a while.


You can, but it doesn't. If I know I want to start it I can just connect with the client.

Sounds like you're arguing something like "yes, it's broken. But you can just reimplement everything and it won't be broken anymore".

Yes, I can also turn off "kill user processes on logout", but that doesn't make that not-broken.


No, I'm arguing there's a way to force-start even socket-activated services.

But this is really a moot point. Systemd's socket activation is really meant for system services which would otherwise be in the critical path of system boot. 'Regular' client-facing services that people normally run–webapps, etc.–are not really the target use case. It's fine to start them up in the normal way, with WantedBy=multi-user.target in the [Install] section. And I have never seen people use socket activation for them anyway. So you are basically arguing a strawman here.


> So you are basically arguing a strawman here.

gpsd is an example that immediately comes to mind that was set to start on-demand. Which is ridiculous.

So it's a straw man that actually exists, making it not a straw man.


Starting a daemon on first connect is essential for fast boot times of a system with multiple dependent network services. This is mostly a desktop use case though. Not sure if it can be disabled for servers.


But I would also like to see data showing how often desktop users reboot (on purpose, that is, not because systemd or something says "you should reboot now" because it's shitty software that doesn't just work across updates).

Like, who even boots their computer anymore? Isn't the typical user on a laptop, and just suspends it?

My workplace even had to install corp software that forces a reboot every N days (with warnings ahead of time) because people just Do. Not. Reboot.

And even for the people that do, at what cost, here? You have a bunch of services completely broken, but "they started just fine" (except they didn't start), and they only break once you actually need them.

So to me this really looks like it applies neither to servers nor desktop. I'm really not seeing any use case except fetishizing boot times.

And for me this always SPENDS human wait time, rather than saving it. I try to use a service, and nope, it needs to "boot up" first. Could you not have done that already, WTF? (And maybe it fails to boot, which I only find out about now that I'm already in the zone to use it.)

Are we really optimizing for kernel developers, here? Can't they just disable the services they don't need, to speed it up?

And we have eleventy billion cores now. Really? You can't start a 645kB gpsd? It takes what, 3ms?


> So to me this really looks like it applies neither to servers nor desktop.

It applies to both. We need desktops to boot up fast, because you said it yourself, sometimes they just need to. And no one likes waiting around for their machines to boot. Can you imagine the volume of complaints about long boot times that would come in to large-scale distros from annoyed users? That alone makes it a high priority.

And on top of that, we need servers to boot up fast, because nowadays they're virtualized and started/stopped constantly when services are scaled up and down. Can you imagine trying to scale up a fleet of servers and waiting a couple of minutes for each one to boot?


> We need desktops to boot up fast, because you said it yourself, sometimes they just need to

I didn't say that. Because they don't.

> And no one likes waiting around for their machines to boot.

Nobody cares, if it's once for every month or two. Which it is.

> we need servers to boot up fast

But it's not actually booted until the service is up, so it's moot.


> I didn't say that. Because they don't.

Yes, you did. Here's a refresher:

> My workplace even had to install corp software that forces a reboot every N days (with warnings ahead of time) because people just Do. Not. Reboot.

I.e. sometimes computers just need to reboot, and there's nothing you can do about it.

> Nobody cares

If nobody cares, then why do people hate rebooting so much?


> I.e. sometimes computers just need to reboot, and there's nothing you can do about it.

And this is the attitude that brings us shitty software, and "I dunno, just reboot to fix the problem?", which is what we have now.

Short of kernel upgrades they really really don't.

But if you've bought in to "oh computers just need to reboot sometimes", then I guess you fall into the category of people who have just given up on reliable software, or you don't know that there is an alternative and no this was not normal.

>>> We need desktops to boot up fast, because you said it yourself, sometimes they just need to

>> I didn't say that. Because they don't.

> Yes you did[…] > people just Do. Not. Reboot.

So is that what I said? I believe you did not read what you quoted.

The main reason people reboot is because of shitty software that requires reboots. So if you want to go self-fulfilling prophecy, then systemd is optimizing for boot times because it's low quality software that requires periodic reboots?

But maybe you count forced reboots once a month (or every two months) for kernel upgrades (but also the above arguments since they also run systemd and therefore need reboots). Fine.

So in order to save ten seconds per month (from a boot time of a minute or so, including "bios" and grub wait times, etc.., so not even a large percentage) this fd-passing silently breaks heaps of services, wasting hours here and there? And that's a good idea?

And all for what? Because you chose to have installed services you don't need, and don't use? And if you do use them, then the time was not saved anyway, but just created a second wait-period where you wait for the service to start up?

And ALL of these services could in any case be fully started while you were typing your username and password.

So what use case exactly is being optimized? The computer was idle for maybe half the time between power-on and loaded desktop environment anyway.

> If nobody cares, then why do people hate rebooting so much?

Because all their state is lost. All their open windows, half-finished emails, notepad, window layout, tmux sessions, the running terminal stuff they don't have in tmux sessions, etc… etc…


> And ALL of these services could in any case be fully started while you were typing your username and password.

This is the key point you are refusing to hear. No, all of the services on a modern Linux machine can't be started while you're typing in your credentials. So they're started lazily, on-demand, one of the classic techniques for performance optimization and a hallmark of good engineering.


Of course they can. How many services do you think there are installed, and how long do you think it takes to start them?

How long do you think it takes to start gpsd, or pcsd? Even my laptop has 12 CPU threads, all idle during this time. And including human reaction time (noticing that the login screen has appeared) this is, what, 10 seconds? 120 CPU-seconds is not enough? All desktops run on SSD now too, right?

In fact, how many services do you even think are installed by default?

And Linux, being a multitasking OS, doesn't even have just that window.

But you know, maybe it's a tight race. You could try it. How long does it take to trigger all those?

> a hallmark of good engineering.

In the abstract, as a "neat idea", yes. In actual implementation when actually looking at the requirements and second order effects, absolutely not.

You know you could go even further. You could simply not spin up the VM when the user asks to spin up a VM. Just allocate the IP address. And then when the first IP packet arrives destined for the VM, that's when you spin it up.

That's also a neat idea, and in fact it's the exact SAME idea, but it's absolutely clearly a very bad idea[1] here too.

So do you do this, with your VMs? It's cleary "started lazily, on-demand, one of the classic techniques for performance optimization and a hallmark of good engineering".

[1] Yes, very specific environments could use something like this, but as a default it's completely bananas.


> But it's not actually booted until the service is up, so it's moot.

With per second billing, fast boot times save money and enable lower fixed capacity, further lowering cost.


But no it doesn't. Until your service is started, your service is NOT actually booted. That's what I said.

You are not paying per-second for the VM. The VM itself adds zero value to you. It's the service that's running (or in this case, not) that you're paying for.

Who cares how long it takes before systemd calls listen()? Nobody derives value from that. You're not paying for that. You're paying for the SERVICE to be ready. And if you're not, then why are you even spinning up a VM, if it's not going to run a service?

It's just clever accounting.


Starting services in parallel will reduce overall service start-up time as well, even if services depend on each other, because services often do work before they connect to a service they depend on. Without socket activation that is a race condition.


It's a pile of proof-of-concept broken pieces duct taped together into a big mess.

Here's an example: Someone read that fd-passing is a thing, so now systemd listens to just about everything and spawns stuff on-demand.

Now, that may seem like a good idea, if you think it up in a vacuum and don't have experience with the real world. It's a great idea, if you're in high school. But to have it actually accepted? WTF is even happening?

Oh, let's do this for time sync from GPS. Great. All that time that could have been spent verifying the signal and all, completely wasted, because some jerk thought that it's better to waste 15 minutes of the human waiting, just to save 100kB of RAM.

It's a monumentally bad idea.

And more specifics: Like I said, when you replace init you need to have it not crash.

And then restarting daemons with systemctl almost to a rule fails, and fails silently. Often I have to just kill the daemon manually, and systemctl start it again.

But people aren't complaining about systemd anymore because now there's two kinds of people:

1. People too young to remember stable software.

2. People who have given up, and just accepted that Linux too "just needs a reboot every now and then to kinda fix whatever got broken".

But maybe the trend is changing? Pipewire looks like it's not actually shit (unlike PulseAudio which has plagued us forever), and while it has some important bugs in edge cases, it's actually more reliable than what it's replacing(!)

> As written, your statement is unlikely to convince anyone that isn’t yet already.

It's hard to convince people who don't care. Or indeed those who don't know that no, actually, short of a kernel upgrade "reboot to fix that problem whenever it happens" is not normal, and is a serious bug.

Pre-systemd Linux had as a selling point that it's actually stable, compared to Windows at least. But Windows has gotten much better in the past decade in reliability, and Linux much worse.

systemd is on the level of a re-think by a pretty bright high school student. And that's not a good thing. It's a very bad thing.

> to be convincing, it would have compare something like bug density to the software projects that collectively replaces

You're asking me to be data-driven, while being fully aware that systemd isn't, right? Your argument is essentially a fallacy, implying that the status quo is data-driven.

It's hard to take your suggestion at face value. Especially with many of the same people pushing systemd at the time making up shit like "We know that Unity is objectively the best user experience in the world"[1] (that's why it lost, because nobody liked it, right?[2]).

At the same time I also fall into group (2), above. I don't have time to wrestle in the mud with people who don't care.

[1] a quote like that, I may not have gotten the words just right. but the word "objectively" (without data) was there. [2] and I don't even particularly care about window managers. Before Unity I hadn't bothered switching from "whatever the default is on this system" in most cases.


You can just disable the fd passing for services that don't need it. I'm not sure what your actual issue is. I haven't had to reboot any more with systemd than I did with sysvinit or openrc, or the slackware rc init, or anything else really. If you have an actual crash that is causing you problems, you should consider reporting it or submitting a patch to fix it, just like you would with any other open source that you depend on.

>systemd is on the level of a re-think by a pretty bright high school student.

That seems like an odd statement, I believe systemd was inspired by other established unix service managers like macOS launchd and solaris SMF. The design is definitely not perfect but I wouldn't say the history was ignored when making it.


> You can just disable the fd passing for services that don't need it. I'm not sure what your actual issue is.

First of all this just sounds like "I don't know what your problem with systemd is, you can just choose to not use it". Second, my point is that it's an absolutely terrible idea, and should never have been done.

> I believe systemd was inspired by other established unix service managers like macOS launchd and solaris SMF. The design is definitely not perfect but I wouldn't say the history was ignored when making it.

A high school student can read up on things and then make something that, without real-world experience, they think seems like a good idea. And with no experience of what it takes to make software reliable.


If you think the socket activation takes too long you can just turn it off and start that service unconditionally. What's the problem? Other service managers support this too, it's not a systemd only thing. Maybe it's a bad idea for some services but others would seemingly disagree that it's always a terrible idea, and not just those who are systemd developers. You may just be using it for the wrong services -- it's most useful with services where the startup time is less important than reducing the overall memory usage on the system.
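
Concretely, for a hypothetical `foo` that ships both a socket and a service unit (and assuming the daemon can also bind its port itself), turning activation off is roughly:

    # stop systemd from holding the socket
    sudo systemctl disable --now foo.socket
    # run the daemon unconditionally instead
    sudo systemctl enable --now foo.service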

If you have experience making software reliable, please consider submitting bug reports and patches to help the project, like you would do with any other open source that you depend on. I'm sure contributions to improve the testing and CI would be appreciated. A high school student can also trash talk loudly about things they didn't take the time to fully understand (and I admit I did a lot of that when I was a teenager in high school), but it takes real expertise to illustrate what the actual problem is and to contribute a fix for it in a positive way.


> it's most useful with services where the startup time is less important than

So now it's not about startup time at all?

> please consider submitting bug reports and patches to help the project,

Who says I don't?

But systemd needs a few full time adults, not just a patch here and there to fix the launch-while-brainstorming culture that gave us the current situation.


It can also be about the initial startup time of the system, not the startup time of any individual service. Please follow the site guidelines, cf. the part about "don't cross-examine" -- this is just a technical tool, I'm sure we both can come up with some ways that it could be useful, even if we wouldn't use them ourselves.

>But systemd needs a few full time adults, not just a patch here and there to fix the launch-while-brainstorming culture that gave us the current situation.

Then start working on it full time, and get some of your friends hired too? Surely you can find someone to pay for that, if it's useful? What else is it that would satisfy you here? I'd be happy if another group was committing full time to systemd (or another similar open source project) just to fix bugs. I fully support you if you decide to do that.


> Then start working it on it full time

I have told you multiple times why not.


Sorry I think I missed that. In that case you will have to find someone else who can do it and figure out how to get them paid. If you want help doing that, don't hesitate to ask.


> Here's an example: Someone read that fd-passing is a thing, so now systemd listens to just about everything and spawns stuff on-demand.

This feature benefits some people. Even if your service requires socket activation, you can still set the service to start at boot and you can still control it with the normal command-line tools.

> It's a great idea, if you're in high school.

You make this accusation twice, but you give no-one any reason to believe it (unless you think "using an optional feature I don't want to use and which isn't significantly easier to use than to not use" is a reason, but most people would say that it is a good example of flexibility).

> And then restarting daemons with systemctl almost to a rule fails, and fails silently.

I have encountered an issue with using `service blah restart` on Ubuntu, where if it isn't connected to an interactive session it doesn't work properly. I was able to fix this by switching to `systemctl restart blah`. Perhaps you're experiencing something similar? I imagine Ubuntu's service wrapper is probably taken from Debian so it could be quite widespread.

The fact that `systemctl (re)start` doesn't always give useful feedback is irritating. I am usually too bothered by something not working to have noticed when and why it gives no output about a failed service. A command should always output something on error and it should be sensitive enough to notice whether it has succeeded or failed.
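
One blunt workaround is to make the failure visible yourself right after the restart (nginx here is just an example unit):

    sudo systemctl restart nginx \
      && systemctl is-active --quiet nginx \
      || { echo "nginx failed to (re)start"; journalctl -u nginx -n 20 --no-pager; }

`systemctl is-active --quiet` exits non-zero if the unit isn't running, and `journalctl -u` shows the last few log lines, so at least nothing fails silently.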

> Linux too "just needs a reboot every now and then to kinda fix whatever got broken".

This is another accusation you make twice. I have been using Linux since the 90s, and I cannot agree that we have to restart Linux distros more often now than before. Can you give any examples of circumstances when you choose to reboot?

> You're asking me to be data-driven, while being fully aware that systemd isn't, right? Your argument is essentially fallacy by implying that status quo is data-driven.

Systemd might have been adopted on theoretical grounds. But that doesn't mean that an empirical objection is useless or irrelevant. If you can show that the theory doesn't match the data, or the data is worse for systemd than some alternatives due to unforeseen consequences or the difficulty of dealing with the larger spec, then this might lead to improvements to systemd or adoption of some alternative.

> At the same time I also fall into group (2), above. I don't have time to wrestle in the mud with people who don't care.

You don't fall into group (2) above. Group (2) is a subset of people who do not complain about systemd, but you are complaining about systemd. Moreover, your comment slings a good lot of mud, so it's hard to take that as a valid objection. You at least should work to clean up the mud you threw unnecessarily.


> This feature benefits some people. Even if your service requires socket activation, you can still set the service to startup and you can still control it with normal command lines.

But it's broken by design. It's a bad idea. It's "neat", but "neat" doesn't add value.

> You make this accusation twice, but you give no-one any reason to believe it (unless you think "using an optional feature I don't want to use and which isn't significantly easier to use than to not use" is a reason, but most people would say that it is a good example of flexibility).

Not sure what you mean. Using this feature pushes orders of magnitude of complexity onto users, and greatly reduces the ability to error handle or even know the status of services.

Having a daemon listening to a socket is clearly orders of magnitude easier for end users. It means everything is in agreement. You check if the service is turned on (systemctl or whatever), you check if the process is running (ps, etc), and you check listening ports (netstat, nmap, etc...), they all agree that the service either is or isn't running. And if it's running, it's successfully run its initialization and should be usable.

We already have this experience with inetd-based services. fd-passing is reinventing the past, poorly. fd-passing actually would have made more sense back in the 90s, because spawning a new process was more expensive. In terms of both code and CPU power, that overhead doesn't actually matter anymore for the vast majority of cases.

Again, I don't know what your question is. Is it "why would I possibly want to know if a service is ready to do its duties or not?"? Because if it is, I don't know what to tell you.

> Perhaps you're experiencing something similar?

I mean things like restarting nginx, and either it just plain didn't (and thus fresh TLS certificates weren't picked up), or it failed to start up again and now nobody's listening to port 80/443 at all.

> Can you give any examples of circumstances when you choose to reboot?

I've filed bugs, but don't want to doxx myself. Something more systematic is that sometimes after running apt-get upgrade it's recommended that I reboot (for non-kernel reasons). The fact that someone would even write that message is a sign that the author doesn't care.

> Systemd might have been adopted on theoretical grounds. But that doesn't mean that an empirical objection is useless or irrelevant.

I agree. But this is a very common tactic for people who just don't want to have a discussion, too. I'm sure in this case you're saying it in good faith, but you should be aware of the asymmetry of asking one side to provide data when the other side has none. And the cost of collecting and interpreting that data (depending on which parts of this, what, a human-year?), and the risk of systemd people dismissing that data anyway, because "yeah, I guess the data supports your point of view, but I don't like it so you can fork the repo to implement it if you want. Bye.".

So I'm not saying this appeal to data is in bad faith, but it is a bit naive.

> Group (2) is a subset of people who do not complain about systemd, but you are complaining about systemd. Moreover, your comment slings a good lot of mud, so it's hard to take that as a valid objection. You at least should work to clean up the mud you threw unnecessarily.

Best comparison is that I can complain about the corruption of politicians without inviting an argument that I myself should become one, to drain the swamp.

IOW: I don't have the time, and if nobody else cares, then even if I did then I don't see that I would succeed in a sea of people who don't care about software reliability.


>But it's broken by design. It's a bad idea.

You keep saying this, without giving evidence to back it up. People are still running Linux in memory constrained environments, those didn't go away now that the 90s are over.

>Using this feature pushes orders of magnitude of complexity onto users, and greatly reduces the ability to error handle or even know the status of services.

To be clear, it sounds like what you're suggesting is that these services implement their own fd holding logic, which is going to be even more complex, and is exactly what systemd is trying to prevent from happening.

>You check if the service is turned on (systemctl or whatever), you check if the process is running (ps, etc), and you check listening ports (netstat, nmap, etc...), they all agree that the service either is or isn't running. And if it's running, it's successfully run its initialization and should be usable.

This isn't really correct; netstat or nmap won't show process status at all. You really don't know what the real status of that port is unless you've run lsof or something else that scans the open fds of the processes, and such a tool would make it obvious when systemd (or some other fd holder) has the fd open. Also, systemctl will display these as separate socket/service units, so you can just check whether the socket unit is running but not the service.
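
For instance, with a hypothetical `foo` service that has a matching socket unit (the port is made up):

    # the socket unit can be listening while the service itself is inactive
    systemctl status foo.socket foo.service
    # lsof shows which process actually holds the listening fd
    sudo lsof -i :8080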


> You keep saying this, without giving evidence to back it up

I have, repeatedly. But like in this thread you just reply with the same question asked one more time:

https://news.ycombinator.com/item?id=27653716

> People are still running Linux in memory constrained environments

So why do they have all these memory-hungry services they don't need on standby?

Does that mean that I can DoS these machines simply by connecting to all the open ports, thus starting up the heavy daemons in the constrained environment?

Why is that a good thing?

>> You check if the service is turned on (systemctl or whatever), you check if the process is running (ps, etc), and you check listening ports (netstat, nmap, etc...), they all agree that the service either is or isn't running. And if it's running, it's successfully run its initialization and should be usable.

> This isn't really correct, netstat or nmap won't show process status at all.

This is HN, not reddit, so I'm going to assume you're not just trolling.

    netstat -na | grep 'tcp.*443'
Yes, actually, netstat will show you if you have an HTTPS server running. It will show you if you have an SSH server running.

Same argument with nmap.

Compare this with the fd-passing model, where you can have every port on your system bound, and it tells you nothing (while troubleshooting) which services are actually up.

Do you not see how "all the ports are bound" then becomes completely useless in troubleshooting and checking status?

Will it tell you if you're actually running SSH on port 443? No, of course not. That's not how troubleshooting works, like at all.


I ask the same question because I haven't yet seen a better alternative that was given. If you have one, please show it, it would be very interesting to me. Otherwise, it sounds like you may not have that much experience with these tools, which is understandable. I can help find solutions, if you're interested.

>Does that mean that I can DoS these machines simply by connecting to all the open ports, thus starting up the heavy daemons in the constrained environment?

I'm not sure I'm understanding this question? A lot of machines are not open to the public internet, so this probably doesn't apply there. You can also use some cgroup managing tool (like systemd) to restrict memory usage to the process and configure the OOM killer behavior, so that would also prevent DoS attacks.
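
For reference, this is the kind of unit configuration being alluded to; the directives exist in current systemd (MemoryHigh/MemoryMax in systemd.resource-control, OOMPolicy in systemd.service), but the values here are only illustrative:

    [Service]
    # start reclaiming/throttling before the hard cap
    MemoryHigh=48M
    # hard cgroup memory cap; the kernel OOM killer acts within the cgroup
    MemoryMax=64M
    # stop the unit (rather than ignore it) when one of its processes is OOM-killed
    OOMPolicy=stop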

> Yes, actually, netstat will show you if you have an HTTPS server running. It will show you if you have an SSH server running.

Actually no, this is wrong, at least for me when I tried the version of netstat that ships with Debian. It only shows if something has the port open -- that thing could be an fd-holding service (like inetd or systemd or something else), or it could be a load balancer, or it could be another service that is incorrectly configured to use the wrong port, etc. So you're right that this complicates the system, but this isn't really systemd's fault, and there is nothing that a service manager can really do about this. The only way to know for sure is to use a different tool that prints information about the owning process -- that way you know for sure if it's sshd or something else. Maybe you have a version of netstat that shows this information? If so, then it's not a problem at all, just simply check that column before you continue with your troubleshooting.

>Will it tell you if you're actually running SSH on port 443? No, of course not.

Well now you got me confused, this seems to be directly conflicting with when you said this: "It will show you if you have an SSH server running"


> A lot of machines are not open to the public internet, so this probably doesn't apply there.

An internal audit is enough to trigger it. "Port scan crashes machine" is not exactly "reliable software".

> You can also use some cgroup managing tool (like systemd) to restrict memory usage to the process and configure the OOM killer behavior, so that would also prevent DoS attacks.

But that means that the default is bad, and unsuitable for resource constrained machines. Which circles back to "neat, but no actual use case".

> Actually no, this is wrong, at least for me when I tried the version of netstat that ships with debian. It only shows if something has the port open -- that thing could be an fd holding service

So you agree that it's a bad idea?

> So you're right that this complicates the system but this isn't really systemd's fault

It is, because it's needless complication. At least inetd was a model to make things simpler. It's the cgi-bin of network services.

But you'll notice that people don't write inetd-based services anymore. In fact my Ubuntu default install doesn't even have inetd installed.

> The only way to know for sure is to use a different tool that prints information about the owning process

netstat has supported this for (maybe) decades on Linux. It's the -p option.
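
E.g. something like this (port 22 just as an example):

    # -p shows the owning PID/program, so an fd held by systemd is obvious
    sudo netstat -tlnp | grep ':22 '
    # or with the newer ss
    sudo ss -ltnp '( sport = :22 )'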

But aside from systemd's poor choices, if you see port 22 open then you can actually be very sure that there is an sshd running, and that it started successfully (so the config isn't too broken).

You could still be wrong. Someone could have started netcat there, or just a honeypot, or whatever, but you can't tell me it's not useful information.

> Well now you got me confused, this seems to be directly conflicting with when you said this: "It will show you if you have an SSH server running"

… unless systemd broke this functionality. I'm making the point why it's a bad idea to break this.

Clients connecting will also not get useful error messages. Port is closed means service not running. Timed out waiting for SSH banner means something else.

Pre-systemd it was essentially never anything other than inetd that held ports for others. And for about the last 20 years even it would only do things like the echo, chargen, and time services that people would run. And having those open by default is from a more naive time, when people thought "sure, why not run tftpd and the time service, what could possibly go wrong?".

Nowadays they're off by default, because we're more experienced that any attack surface is still an attack surface, no matter how small.

Probably it helped that OpenBSD kept bragging about how few remote holes there were in the default install. It's not actually because OpenBSD had better code, it was just that a default OpenBSD install only had OpenSSH open to the world.


>"Port scan crashes machine" is not exactly "reliable software". [...] So you agree that it's a bad idea? [...] But that means that the default is bad, and unsuitable for resource constrained machines.

I'm not sure I understand where you're coming from here? I explained how it could be made suitable, it could be done in a way that was crash resistant. I don't know if it's a bad idea or not, it depends on what you're trying to accomplish. The default here is configured by the distro, so you could expect to see a different default on an embedded distro.

>In fact my Ubuntu default install doesn't even have inetd installed.

I believe this is mostly because systemd has replaced its functionality.

>netstat has supported this for (maybe) decades on Linux. It's the -p option.

Good call, I forgot about that, I always use lsof. But that's exactly what I mean, it will show you which pid has the port open, so it will make it obvious if it's systemd or sshd. You won't be sure if there is actually an sshd running unless you check that. This really seems like a non-issue, you have all the tools you need to troubleshoot it.

>systemd broke this functionality. [...] Port is closed means service not running. [...] Pre systemd it was essentially never anything other than inetd that held ports for others.

I don't really want to discuss this anymore if I have to repeat myself, but this is not correct. There are multiple other reasons why you would have another service holding the fd open, such as load balancers, filtering proxies, userspace firewalls, etc, etc. The ability to pass an fd to a child process is an intentional feature of every Unix-like operating system that I've used. Systemd is only using the feature as the OS intended it, which is also supported on OpenBSD.


[flagged]


Please adjust your comments to be according to the community guidelines:

https://news.ycombinator.com/newsguidelines.html


Recently I have been wondering if systemd solves problems that are becoming less and less relevant to developers. New services are often deployed as containers.

While systemd has a bunch of container-related functionality, it does not integrate well into the Kubernetes or even Docker workflow. It's used very little in those environments.

If you are building CoreOS or NixOS system images, or traditional Linux system services, then systemd matters. But I think way more services are being built for the container world where these problems are solved differently.

For example, the TLS configuration can be handled with common container patterns. The author's startup example would also translate more easily to a full-blown Kubernetes environment, once the VC funding hits their bank account, if they had used containers from the start instead of first writing the service for systemd.

It's a shame because systemd is very powerful and I've enjoyed using it.


As a developer I prefer using systemd instead of containers to deploy Golang applications.

Without (Docker) containers it is:

- build Go binary and install it in production server

- write and enable the systemd unit file

With (Docker) containers it is:

- write Dockerfile

- install Docker in production server

- build Docker image and deploy container in production server

I get the appeal of containers when one production server is used for multiple applications (e.g., you have a Golang app and a Redis cache), but for the example above I think containers are a bit of an overkill.
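
For completeness, the unit file in that second step can be as short as something like this (the file name, paths, and binary name are made up for illustration):

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=My Go service
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/myapp
    Restart=on-failure
    # run under a transient unprivileged user instead of root
    DynamicUser=yes

    [Install]
    WantedBy=multi-user.target

After copying the binary over, `systemctl enable --now myapp` is pretty much the whole deployment.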


i feel the same way, systemd also has some comprehensive sandboxing capabilities built-in.... i have my gripe with systemd too though. mostly with journald because it is slow, likely due to its on disk format, really could have used sqlite for this...


Same. I deployed a fleet of transcoding servers with the worker logic being a simple Go program. It was super simple with systemd.


Without docker also:

* have a production outage because your libc was updated and now your go apps (which are dynlinked against it by default) won’t start

* mess around with low level cgroup settings if you need to oversubscribe safely

* cry in a corner the second you also need some python libs installed to do some machine learning or opencv or whatever on the side


And if you really want to make a container image for any reason, you can still have systemd use that directly as a Portable Service instead of through Docker.
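
Roughly (the image name is hypothetical; `portablectl` attaches the unit files shipped inside the image to the host):

    portablectl attach /var/lib/portables/myapp_1.0.raw
    systemctl enable --now myapp.service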


For those of us using Java, such problems were already kind of irrelevant in 2005.

Where you deploy your EAR/WAR file doesn't matter, so the application container can be running on Windows, any UNIX flavour or even bare metal, what matters is there is a JVM available in some way.

Also, in the big-boys UNIX club (AIX, HP-UX, Solaris, ...), systemd-like alternatives had been adopted before there was such an outcry in the GNU/Linux world.

On cloud platforms if you are using a managed language, this now goes beyond what Java allowed.

You couple your application with a bunch of deployment configuration scripts, and it is done, regardless of how it gets executed in the end.

The cloud is my OS.


Containers might be popular in startups' "pay five figures a month to $CLOUD_PROVIDER" scene when VCs rain infinite free money, but there are still plenty of occurrences where you have to deal with old-school physical machines where it's often easier to just run the software on the bare-metal rather than using Docker and yet another layer of abstraction.


I run a bunch of services written in python on 50+ bare metals and I can tell you it made my life easier to dockerize everything.


Nomad has a systemd-nspawn driver in the community section, though. https://www.nomadproject.io/docs/drivers/external/nspawn


You can just use podman to run Docker containers. That workflow is honestly what I wanted years ago when I first used docker, where containerization is put in the core system, and you can progressively add containerization to your core services while also running a full container on top of the same runtime.


> New services are often deployed as containers.

That's another problem to be solved.


One reason why application containers are successful is because they eliminate the complexities of a single system where multiple services are running and potentially interfering with each other.

There is no need for PrivateTmp= or some of the other configuration shown in this article because the application container already runs in a separate environment.

I think this is worth considering with respect to this article, even though containers definitely bring their own problems.


Only complaint I have about systemd is their docs aren't versioned, so it's difficult to see if the functionality you're looking for is available in the version you're running. Most of the time I have to Ctrl F through the NEWS doc in their repo.


The title is an oxymoron.

Putting a huge complex piece of software between yourself and "complexity" doesn't make the system less complex.


Systemd seems to do a good job of moving the complexity of managing the privileged/unprivileged divide into a standardized service.

I sympathize with the "transition sucks" sentiments elsewhere on this post. Having a bunch of working scripts turned into instant technical debt cannot be pleasant.

But, as with python3, systemd seems to be the way things are headed.


Why are we talking like this? We’ve been using systemd for over 5 years. It’s weird. Time loops.


It took 10 years before the “python2 will never die” crowd finally accepted they were not going to win and that python3 was here to stay.

People spend a couple of years getting used to a stack early in their career and then spend decades arguing that it should never change.


The problem with your analogy is that while a lot of people did move from python to python 3, a lot also stopped using python altogether, because the python2/3 split demonstrated that the python devs were willing to throw everyone's work out the window for fun. There was no technical reason why backwards compatibility wasn't possible. Multiple good solutions were put forward. They were all rejected for silly reasons.

A lot of that "python2 will never die" crowd left python all together, and they are better off for it because they won't have to deal with the next time python decides to throw everyone's work out the window.


But python programs are much more complex than the init scripts systemd replaces. It is more that people hate change and are too lazy to learn a new thing.


Management has 99 problems. Holding to the status quo is about minimizing the backlog more than hating change as such.

Even when we can show an improvement in security and usability, and lower training cost because of consistency, it's still another mouth to feed.


If it provides sufficient benefits (which systemd does), it is a no brainer.


As a technical matter, sure.

As a business/ management matter, we frequently cannot do the technically obvious thing due to other constraints.


I would agree, but systemd has been around quite some time now. At this point it is technical debt not upgrading to it, unless you use something better.


> the "[python] will never die"-crowd finally accepted they where not going to win

Why are you talking in the past tense? We have done no such thing. Death to python 3; long live python.


Obviously I don’t mean you don’t exist any more, but the surveys show enough converted that your views on python’s future are no longer relevant in the bigger picture.


I don't have a horse in the systemd vs init.d, I'm a dumb groovy programmer after all.

But you are right, adoption is not enthusiastic, which to me is a massive indictment of the design and usability. We'll basically spin wheels until someone gets annoyed with it and does systemd better, or at least more modularized.

My complaint? sudo systemctl <verb> <service> means the verb cannot be autocompleted or introspected like sudo service <service> <verb> could be. May be minor, but it's generally my only interaction with systemd versus init.d, and to me they completely blew the only thing I use. Not a good impression.

I understand that init.d was a cobbled set of scripts, loose conventions, and even some hacks. But the resistance to systemd is so pervasive it cannot just be stubborn unix neckbeards.


The people who matter, who write init scripts, in other words the distro maintainers, were happy to switch.

Why else do you think so many distributions switched?

Yes, the resistance is noisy and stubborn Unix neckbeards. Not even Unix, since every other Unix had something similar to systemd already. LINUX neckbeards.


Agreed.

One point is that processes other than root cannot start services on ports < 1024. That was a sensible precaution when computers were big and multiuser, like in a university setting.

However, with single-serving services (e.g. in vm/container/vps/cloud), there is no need for it.

BSD lets you configure it with a sysctl option. But Linux defends that option like it is still 1990.

On NixOS, I patch it like this:

   boot.kernelPatches = [ { name = "no-reserved-ports";  patch = path/to/no-reserved-ports.patch; } ];
With the patch just as big:

  --- a/include/net/sock.h
  +++ b/include/net/sock.h
  @@ -1331,7 +1331,7 @@
  #define SOCK_DESTROY_TIME (10*HZ)

  /* Sockets 0-1023 can't be bound to unless you are superuser */
  -#define PROT_SOCK      1024
  +#define PROT_SOCK      24

  #define SHUTDOWN_MASK  3
  #define RCV_SHUTDOWN   1


Does it really change anything running something on port 90 rather than port 1090?


If you get unprivileged access to a system, and somehow manage to crash sshd, or win a race to bind port 22 when sshd restarts, you can intercept other logins.

If you can bind port 80, you can get SSL certs via Let's Encrypt (which could let you intercept not just web, but also smtp/imap etc).

So yes, it can make a difference. Of course - it's better if the user doesn't have access to begin with.

This might be more interesting for classical multi-user servers than "single use" servers that don't allow "regular" users to login via ssh.


Use NixOS and you'll love systemd.

You'll be defining your own systemd units with ease.

systemd to you will be journalctl and systemctl. So pretty good.


> Use NixOS and you'll love systemd.

I use NixOS but certainly don't love systemd. Instead, I've created a way to replace it with s6.[1]

1: https://sr.ht/~guido/nixos-init-freedom/

Cheers, Guido.


That's nice, and it showcases how Nix can create declarative process management atop a script-based imperative manager. How has your experience with it been? Also note that there's https://github.com/svanderburg/nix-processmgmt, a manager-agnostic process management framework supporting s6 among others, but your way seems a bit more straightforward.


Thanks,

My use case is to get my VPS to run a web and mail service on NixOS, but without the bloated binary logging of journald. The indexes in these log files change heavily between snapshots, so they take up way more disk space than append-only text logs, which snapshot very well on ZFS.

My experiences while building:

- It's easy to use the config.system.services tree of the user-services (ssh, bind, caddy, etc) to create s6-services;

- Sometimes it needs a change to make it work [1] on s6;

- systemd seems oblivious to that same change. I guess it just starts the process and never bothers to monitor liveness. So much for a process management system ;-)

- NixOS packagers seem overworked, as my pull requests seem to get stuck ;-(

- NixOS's use of systemd leaves a lot of decisions to resolve at boot time, decisions that I want to make at build time with s6;

- However, I cannot create a s6 dependency tree specification at build time, that's still at run time;

- Because S6 uses the service-directory to store state-files in the same directory (no /etc-/var split).

1: https://github.com/NixOS/nixpkgs/pull/122844


I use mostly Ubuntu and Debian and defining systemd units just means you have to spit the right text into a .service file placed at the right spot.

How does NixOS make that easier?


In NixOS it's trivial to use and define NixOS modules, which handle restarting/starting/stopping systemd units. See the wiki example[0] on how to define and use a service that greets the user with GNU Hello. Also, since you have access to Nixpkgs, you can make the ExecStart as complicated as you want with whatever dependencies you desire, and trivially share it with others.

[0] https://nixos.wiki/wiki/Module#Example
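
As a flavour of it, a minimal sketch in configuration.nix (the unit name and package are just examples):

    systemd.services.hello = {
      description = "Greet the user";
      wantedBy = [ "multi-user.target" ];
      serviceConfig = {
        ExecStart = "${pkgs.hello}/bin/hello --greeting 'Hi'";
        # run under a transient unprivileged user
        DynamicUser = true;
      };
    };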


It looks like NixWay of doing NixThings to solve NixProblems.

Why I need NixModules in the first place? ELI5, please.


It's basically just a nice way of composing the different parts of your system.

E.g. you could write a NixOS module to manage your web app. You declaratively configure that you want to run Nginx, Postgres, etc. open some ports, connect to your VPN and basically everything else you might want. There are a lot of existing modules, so in most cases you have to write very little code yourself. If you want to scale your system to run on multiple systems or containers, you can do that with relative ease.

You also have a singular source of truth for all of your information, down to your application binaries. Your configuration says you are using Postgres version X.Y.Z with compilation options A, B and C, so that's exactly what is running on your systems.


It's an academically more impressive version of Ansible that's deeply intertwined with the distro's package manager.


You can override NixOS's predefined systemd settings from _outside_ using NixOS module options. This allows you to change default settings that are not optimal for your use case, without having to patch NixOS itself, or write your own unit config.

For example, systemd by default permanently gives up restarting services after a small number of tries (e.g. 5), even if you have set `Restart=always`. This is suboptimal for web servers that should recover by themselves after arbitrarily long failures (e.g. network downtimes outside of your control).

On NixOS, you can, from your machine config, set:

    systemd.services.nginx.unitConfig.StartLimitIntervalSec = 0;
This sets/overrides just that specific systemd option for the existing nginx module. On other distros, you often have to resort to global mutation in `/etc` that does not compose well.

We use NixOS for our infra (having used Ansible before), and this ability to override anything cleanly and keeping defaults otherwise made for much easier to maintain infra code and less ugly/surprising compromises.


>On other distros, you often have to resort to global mutation in `/etc` that does not compose well.

Why does this "not compose well" ?

You don't have to override the whole unit as /etc/systemd/system/nginx.service , which would have problems if two things wanted to override different parts of the original unit. Just drop an override file in /etc/systemd/system/nginx.service.d/90-restart-always.conf with that one specific config you want to override.
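
For example, the drop-in mentioned above could contain just this (illustrative path and value, matching the NixOS snippet earlier in the thread):

    # /etc/systemd/system/nginx.service.d/90-restart-always.conf
    [Unit]
    StartLimitIntervalSec=0

`systemctl edit nginx` creates such a drop-in for you, and `systemctl daemon-reload` makes systemd pick it up.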


> Why does this "not compose well" ?

Because you cannot easily write libraries/components that do this.

In NixOS, other modules can override the options of other modules. For example, a web app can set the nginx options that it needs, instead of requiring you (the admin) to "drop a file" in /etc.

This is one of the reasons why on Ansible Galaxy (community repository of Ansible roles) there are 527 nginx roles [1], and in NixOS there is 1 nginx module that everybody code-reuses.

[1]: https://galaxy.ansible.com/search?deprecated=false&tags=web&...


What does

    systemd.services.nginx.unitConfig.StartLimitIntervalSec = 0;
do that it doesn't require root?

If the point is that it's not manipulating a system-level nginx service but a user-level one, then writing the systemd override file in the way I described doesn't require root either.


Applying the NixOS config requires root -- that's not what it's about.

What I mean is that services can set other services' options, without you (the admin) having to write such overrides manually.


Nothing prevents $service's package from creating /etc/systemd/system/nginx.service.d/90-restart-always.conf in a regular distro.



It does not.


If systemd was just a replacement for starting scripts that would be one thing.

when you have to run "systemctl disable systemd-timesyncd systemd-resolved systemd-networkd" to start to get back to sanity, that's not init


I just learned about S6 recently. It simplified daemon supervision for me a lot.


s6 is very cool. Just out of curiosity, have you found any specific use-cases for it yet?


This should be part of the manual. I've tried multiple times to understand systemd deeper than a service restart here and looking at logs for a unit there, to no avail.


I've found systemd to be quite well documented. Here's a few of the resources I frequent:

Systemd docs: https://www.freedesktop.org/software/systemd/man/index.html

List of directives: https://www.freedesktop.org/software/systemd/man/systemd.dir...

Unit-specific configuration: https://www.freedesktop.org/software/systemd/man/systemd.uni...

Service-specific configuration: https://www.freedesktop.org/software/systemd/man/systemd.ser...

Timer-specific configuration: https://www.freedesktop.org/software/systemd/man/systemd.tim...


As someone who wants to learn systemd, even just to understand what I’m stuck with and why it was made, this is the best introduction I’ve found. Does anyone have other resources to help beginners learn how to use systemd efficiently and its relationship to Docker?


As a complete systemd newbie, and being pretty new to Linux, I found the documentation well-written, organised and pretty easy to understand. See https://news.ycombinator.com/item?id=27653283


You have probably already seen this, but I found this an interesting explanation to the why question: https://www.youtube.com/watch?v=o_AIw9bGogo


Really enjoyed reading this article.

The LoadCredentials thing reminds me of ConfigMaps in K8S. Is there a more general thing in systemd, e.g. LoadConfig?


Disclaimer, no idea what LoadConfig does.

A more generic approach than LoadCredentials, I think, is the EnvironmentFile= directive, if you want to pass along multiple env variables to your process without individual Environment= directives.
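
Something like this, as a sketch (paths and variable names are made up); note that, unlike the credentials mechanism, everything in the file ends up in the process environment, so it's less suited to real secrets:

    # in the unit file
    [Service]
    EnvironmentFile=/etc/myapp/env
    ExecStart=/usr/local/bin/myapp

    # /etc/myapp/env -- plain KEY=VALUE lines
    DB_HOST=localhost
    DB_PORT=5432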


In the TLS example, how does systemd handle certificate rotation? Are the certificate files symlinks, hard links, or copies? If hard links or copies, presumably my service will need to be signaled or restarted to get the new certificate; does systemd do this?


Whenever these discussions of systemd come up, I am reminded of this talk[0].

It will be interesting to see if one day a replacement for systemd comes along and people who once championed systemd will begin to use the arguments the people who do not prefer systemd use to defend their choices for not wanting to use the next init system manager.

[0]: https://youtu.be/o_AIw9bGogo


> see if one day a replacement for systemd comes along

Part of the critique of systemd is the basic architectural choice of having this monolithic layer between regular user apps and the kernel. So, in a sense, the idea is _not_ to replace systemd with a better-written systemd, but to do things differently.


Yes but in doing things differently with the new one people will defend systemd the same way people defend sysvinit was my point. They will claim the new way is too complicated or tries to replace too much.

I might not be getting the point of the talk but I really appreciate the argument that Benno Rice presents.


If you are looking to run your containers in a very lightweight and easily understood way, you can use systemd for this and use tips from this article. You will have to do the orchestration yourself, so I think it would be more suitable for very simple deployments with small teams and/or part-time projects.


I've used systemd containers to run badly behaved GUI apps, e.g. Steam and MS Teams on my personal computers. Surprisingly good experience, the systemd manual pages are extremely comprehensive and, in my opinion, much easier to understand than people make them out to be.


I don’t see it as a systemd-ism but that the UNIX way is maturing.

Passing an arbitrary fd or socket from one process to another solves many problems and we are in the habit of doing it now.


I read that as meaning "avoid systemd's complexity" as opposed to "avoid complexity by doing so and so using systemd". Oh well.


TL;DR: good lord it's hard to avoid complexity when you're not using Docker to run your app.


Trying to avoid complexity by being dependent on something horribly complex is not going to work.


> Trying to avoid complexity by being dependent on something horribly complex is not going to work.

It is probably unavoidable, looking at how complex modern compilers, processors and kernels are. They sure do make a lot of things simpler, though.


Yeah if it is simple to use and has all the hooks I might want eventually, I don’t mind a complex implementation. Better one complex but popular implementation behind simple config than a bunch of people DIYing it. As long as turning on some high detail debug/tracing is possible so I can debug the complexity if needed


You incur complexity with systemd. You avoid complexity with runit.


runit isn't comparable with systemd. It isn't even trying to be a powerful init system.

You will incur lots of complexity trying to deal with init systems that aren't much better than traditional init.


IMO runit is abandonware at this point. No release since 2014.

Have you looked at s6? It’s a compelling alternative.


Runit is finished at this point. It does what it says, and I haven't run into bugs.

Churn isn't a virtue.


Exactly this. Runit was designed to be simple and in simple software, at some point, there is just not much to improve.


Void Linux, "the BSD of Linux", uses runit and it has been fine since forever.

If there were any issues they would be worked on.


I’m glad to hear runit works fine for them.

I’m not so sure there isn’t any room for improvement though. In the related s6 project there is a lot of discussion about adding new features that are beneficial for supporting a modern Linux distro.


As a datapoint of one, I've been using Void Linux for near a decade.

I haven't once been in a situation where I thought systemd would help. Granted, I don't run many custom services, but for what it does, runit does a great job. Something a bit more user friendly like s6 would be nice, but otherwise it stays out of my way and I don't think about it.

NixOS/Guix would be worthwhile switching to, but Void is comfortably simple.


But... it's not modern!


Both runit and s6 are copies of daemontools, which hasn't been updated since 2001. Try doing diffs of runit and s6 against daemontools. Are the differences significant? What "improvements" were made?

Things can be built to last, including software. That so many programmers today are not building such things (possibly they are incapable) does not change the fact that some did so in the past (whether intentionally or not), and some still can.


I’m not convinced that all development that has been made on runit/s6 over the years has been superfluous.


I want to mention connman.

Does everything you expect of a single programlet to manage your NTP, resolv.conf, DNS caching, mDNS, network devices, etc.

Importantly, it weighs only 1/100 of SystemD.


Seems like that is mostly for the network-related stuff and not services, mounts, isolation/namespacing/containers, init, logging and so on. Aren't you comparing a basket of apples to an apple-slice?


Yes, I do, but that's what I believe people look for when they want to run a server without 10 daemons to handle network configuration.


I don't think systemd requires 10 daemons for network config; isn't it only systemd-resolved for DNS, systemd-networkd for the actual network, and systemd-timesyncd for NTP?

If you only mean to say that you replace 3 bits of systemd then it would be good to say that, and to say which bits of systemd it replaces.

A lot of distros do not even use those systemd services, for example ubuntu does not use systemd-networkd by default.

I read "it weights only 1/100 of SystemD" as if it compared systemd to connman directly.


How do I kick out SystemD first, to try connman?


systemctl disable systemd-timesyncd systemd-resolved systemd-networkd

For the services listed, that's all you need.


The title is quite triggering. As I see it, systemd itself is very complex. So, one might say "Guaranteeing complexity with systemd", though possibly being able to ignore the complexity, usually.


Apologies, I was also triggered by the title. :)

> systemd provides ways to restrict the parts of the filesystem the service can see.

So like chroot and namespaces? Why do I have to depend on systemd when these are native features provided by Linux?

So systemd provides a friendlier abstraction of these concepts. Great, but so do Docker and Podman and many other tools that can actually be installed without taking over the rest of the system.

Having your application actually use systemd libraries further increases this dependency and makes it no longer usable but on a subset of Linux machines. This would be fine for some controlled production deployment, but is awful for usability and adoption.


> So like chroot and namespaces? Why do I have to depend on systemd when these are native features provided by Linux?

Not like namespaces - using namespaces. And for the same reason we use other high-level abstractions and high-level languages rather than handcrafted assembly. You don't have to depend on it either - you can still use chroot instead if you want, but it's more work that way.

> Great, but so do Docker and Podman and many other tools that can actually be installed without taking over the rest of the system.

Docker installs a service which takes over lifecycle management, restarts, and traffic proxying for apps. It injects and manages multiple firewall chains. It pretty much takes over network management. And it's still stuck on the old cgroups format so it forces that on your system. It really doesn't win this comparison.

> Having your application actually use systemd libraries

You don't need them. Everything from the post is defined in simple environment variables. For example socket activation is maybe 3 extra lines when done from scratch.
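
For illustration, a minimal sketch of that "from scratch" path in Go, using only the documented LISTEN_PID/LISTEN_FDS convention (passed fds start at 3); the port and handler are made up, and a real app would handle more than one fd:

    // Accept a systemd-passed socket if present, otherwise listen ourselves.
    package main

    import (
        "net"
        "net/http"
        "os"
        "strconv"
    )

    func listener() (net.Listener, error) {
        if os.Getenv("LISTEN_PID") == strconv.Itoa(os.Getpid()) &&
            os.Getenv("LISTEN_FDS") == "1" {
            // systemd passed us exactly one socket, starting at fd 3
            return net.FileListener(os.NewFile(3, "systemd"))
        }
        // not socket-activated: bind the port ourselves
        return net.Listen("tcp", ":8080")
    }

    func main() {
        l, err := listener()
        if err != nil {
            panic(err)
        }
        http.Serve(l, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello\n"))
        }))
    }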


> You don't have to depend on it either - you can still use chroot instead of you want, but it's more work that way.

How so? If I need filesystem isolation, I'll use the simplest tool that provides it. In this case probably a container runtime. Note that none of your criticism about Docker applies to Podman.

Why would I ever want to use containers with a tool that forces (OK, strongly suggests...) me to use it as an init system, logging system, network manager, DNS resolver, and whatever other aspect of my Linux system its authors think it should manage?

I apologize for retreading the same discussion on this topic, but like others mentioned, adopting an incredibly complex tool doesn't mean you're simplifying. You're just working at a higher level of abstraction, which can be comforting, but simplifying would be to use the underlying systems directly or using a tool that only focuses on a single aspect of what you need (i.e. containerization).

> You don't need them. Everything from the post is defined in simple environment variables. For example socket activation is maybe 3 extra lines when done from scratch.

Great, then the article shouldn't import systemd bindings... My point is that the program is now tied to systemd systems. Containers don't impose such restrictions.


> Note that none of your criticism about Docker applies to Podman.

It's not a criticism of docker. It's "docker does the same thing", because there are good reasons to abstract that functionality. I'm not familiar with podman internals.

> Why would I ever want to use containers with a tool that ...

Because you have to deal with those things always. You can do them yourself, you can do them with other services, or you can do them in the currently-most-common framework of systemd. You can still pick and choose from those elements as you want.

> You're just working at a higher level of abstraction, which can be comforting, but simplifying would be to use the underlying systems directly or using a tool that only focuses on a single aspect

I don't really see the namespaces and service management as different things. They're pretty much one idea these days.

It's also simplifying things: I could write my own script with the necessary unshare / mount / ip stuff, but the common patterns repeat so much that I'd rather use a single option for it.

> Great, then the article shouldn't import systemd bindings

They're not bindings, just abstraction. Why should that be treated differently from "shouldn't import fmt, just concatenate strings", "shouldn't use rand, just open /dev/urandom"?

> My point is that the program is now tied to systemd systems.

Only in these examples. Actual apps normally autodetect the relevant variables and not rely on activation if it's not available. You're never tied to systemd activation / keep alive notification unless you choose to do it.


It's not really a binding. It doesn't link to anything in systemd, it's just aware of the convention used by systemd to pass the file descriptors into the process. The actual code being executed from that repository is around 60 lines, I guess:

https://github.com/coreos/go-systemd/blob/main/activation/li...

I just used that rather than writing it myself because it felt like it didn't add much to the story. Also the code was actually written on macOS, it still runs fine without systemd.

Edit: I'd be quite interested to see if passing file descriptors in this way would be a goal in, for example, the proposed Alpine service manager. A quick scan of the proposal didn't show me anything obvious, but I'll go and read it in more detail.


I'd suggest you steer away from using systemd and a server to launch your startup.

While this is a good writeup, and you end up with a service, you still need to manage a machine with all risks involved - server reboots, updates, networking etc.

AWS Fargate, or the new App Runner, will manage a container almost hassle-free.



