My problem with systemd is that it's just so poorly written.
The ideas are not inherently bad. But they're not thought through, and the implementation is pure garbage.
Like taking the most stable software in the world[1], and going "nah, I'll just replace it with proof-of-concept code, leaving a TODO for error handling. It'll be fine.".
And then the "awesomeness" of placing configuration files wherever the fuck you want. Like /lib. Yes, "lib" sounds like "this is where you place settings files in Unix. At least there's no other central place to put settings".
[1] Yes, slight hyperbole. But I've not had Linux init crash since the mid-90s, pre-systemd
My understanding is that default unit files provided by the systemd packages are in /usr/lib where they can be read-only, whereas users can add/override them by dropping their own unit files into /etc (which is more likely to be read-write).
This provides a clean separation between the default configuration and the user configuration.
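The layout being described can be sketched like this (paths are the common defaults; exact locations vary by distro):

```ini
# Vendor default, shipped read-only by the package:
#   /usr/lib/systemd/system/foo.service
#
# User override, takes precedence, survives package upgrades:
#   /etc/systemd/system/foo.service.d/override.conf

[Service]
# Only the keys set here are overridden; everything else
# still comes from the vendor unit.
Environment=FOO_DEBUG=1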
Can you explain why this is a bad thing?
A counter-example that comes to mind is when a package upgrade requires manual intervention due to file conflicts in /etc. That's what happens when the packager's default configuration interferes with the user's custom configuration.
> My understanding is that default unit files provided by the systemd packages are in /usr/lib where they can be read-only, whereas users can add/override them by dropping their own unit files into /etc (which is more likely to be read-write).
> This provides a clean separation between the default configuration and the user configuration.
> Can you explain why this is a bad thing?
The normal way to do this is to put the default configuration in /usr/share/.
systemd does ask some good questions. E.g. I think the logfile situation needed a major shake-up in unix. Too many log file formats, in text, often completely unparsable (if you're lucky then a regex will work for 99.99% of log lines, but not all), and all unique. And the same mistakes being made over and over again. E.g. "oh, we don't log timezone", or even "meh, it's up to the user to parse the time with timezone correctly, even though things like 'MST' are not even unique".
But did systemd fix that? No. It's just that now I have logs in journalctl AND nginx, AND a bunch of other files. Thanks, standard number 15 that was supposed to unify it all. If you build it, they won't just come. Especially when the implementation is bad.
Believe it or not, the above is actually the pro-systemd argument.
Now, for what you describe: Yes. Exactly. I'm saying systemd DOESN'T do this. I'm saying this is a large successful part of Unix, that systemd ignored.
> A counter-example that comes to mind is when a package upgrade requires manual intervention due to file conflicts in /etc. That's what happens when the packager's default configuration interferes with the user's custom configuration.
Is that actually a problem? Maybe I've been spoiled by Debian, but with the combination of the upgrader showing the diff, and `foo.d` config directories, I've never had this problem in about 25 years of running Debian.
But I believe you when you say that others have this problem, and that it's real. But how exactly is systemd fixing it? It doesn't resolve the conflict, if it works like you say, where it's a per-file override. That sounds like breaking the system (usually in subtle ways) instead of highlighting the newly arrived inconsistency. That's just sweeping problems under the rug.
It's not enough to say vaguely that "this is different, so probably solves some problem, somehow. And it didn't cause me personally a problem, so fuck everyone else".
You can also do 'systemctl edit <service>' which will open up an editor in which you can see the existing configuration and edit the overrides in /etc/
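For example (service name is just a placeholder; the drop-in path is the usual default):

```console
$ sudo systemctl edit nginx.service
# opens an editor; what you type is saved to
#   /etc/systemd/system/nginx.service.d/override.conf
# and merged over the vendor unit in /usr/lib.
```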
Care to elaborate? Off the bat, your comment comes across as a cynical rant due to its high use of strong words (garbage, fuck) and lack of examples. And even if you have anecdotes, to be convincing, it would have to compare something like bug density to that of the software projects systemd collectively replaces. As written, your statement is unlikely to convince anyone who isn't already convinced.
In my experience, systemd config is simple because it handles all the complexity.
Inside its guts, it is much more complicated than a sysv system — naturally so, because it can do so much more. Those folks love using the latest and greatest kernel functions in all their glory.
All works well - until it doesn't. When something is broken, suddenly you have to understand all the interdependent components to debug.
Back in the day, these were not so uncommon, because of bugs or simply unimplemented features…
But it's also overengineered. Like starting a daemon on first connect is "neat trick", but should never have gone beyond that.
Like: oh, I want to restart this daemon, because I want it to re-run its init code (possibly with new settings), but you CAN'T, because some idiot decided that it'll only actually start when someone connects to its unix socket, so running "restart" is a no-op.
No, I'm arguing there's a way to force-start even socket-activated services.
But this is really a moot point. Systemd's socket activation is really meant for system services which would otherwise be in the critical path of system boot. 'Regular' client-facing services that people normally run–webapps, etc.–are not really the target use case. It's fine to start them up in the normal way, with WantedBy=multi-user.target in the [Install] section. And I have never seen people use socket activation for them anyway. So you are basically arguing a strawman here.
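A sketch of such a 'normal' eagerly-started unit as described above (`myapp` is a made-up name):

```ini
# /etc/systemd/system/myapp.service -- started at boot in the
# normal way, no socket activation involved.
[Unit]
Description=Example client-facing webapp

[Service]
ExecStart=/usr/local/bin/myapp

[Install]
WantedBy=multi-user.target
```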
Starting a daemon on first connect is essential for fast boot times of a system with multiple dependent network services. This is mostly a desktop use case though. Not sure if it can be disabled for servers.
But I would also like to see data showing how often desktop users reboot (on purpose, that is, not because systemd or something says "you should reboot now" because it's shitty software that doesn't just work across updates).
Like, who even boots their computer anymore? Isn't the typical user on a laptop, and just suspends it?
My workplace even had to install corp software that forces a reboot every N days (with warnings ahead of time) because people just Do. Not. Reboot.
And even for the people that do, at what cost, here? You have a bunch of services completely broken, but "they started just fine" (except they didn't start), and only break once you actually need them.
So to me this really looks like it applies neither to servers nor desktop. I'm really not seeing any use case except fetishizing boot times.
And for me this always SPENDS human wait time, not save it. I try to use a service, and nope, it needs to "boot up" first. Could you not have done that already, WTF? (and maybe it fails to boot, which I only find out about now that I'm already in the zone to use it)
Are we really optimizing for kernel developers, here? Can't they just disable the services they don't need, to speed it up?
And we have eleventy billion cores now. Really? You can't start a 645kB gpsd? It takes what, 3ms?
> So to me this really looks like it applies neither to servers nor desktop.
It applies to both. We need desktops to boot up fast, because you said it yourself, sometimes they just need to. And no one likes waiting around for their machines to boot. Can you imagine the volume of complaints about long boot times that would come in to large-scale distros from annoyed users? That alone makes it a high priority.
And on top of that, we need servers to boot up fast, because nowadays they're virtualized and started/stopped constantly when services are scaled up and down. Can you imagine trying to scale up a fleet of servers and waiting a couple of minutes for each one to boot?
> I.e. sometimes computers just need to reboot, and there's nothing you can do about it.
And this is the attitude that brings us shitty software, and "I dunno, just reboot to fix the problem?", which is what we have now.
Short of kernel upgrades they really really don't.
But if you've bought in to "oh computers just need to reboot sometimes", then I guess you fall into the category of people who have just given up on reliable software, or you don't know that there is an alternative and no this was not normal.
>>> We need desktops to boot up fast, because you said it yourself, sometimes they just need to
>> I didn't say that. Because they don't.
> Yes you did[…]
> people just Do. Not. Reboot.
So is that what I said? I believe you did not read what you quoted.
The main reason people reboot is because of shitty software that requires reboots. So if you want to go self-fulfilling prophecy, then systemd is optimizing for boot times because it's low quality software that requires periodic reboots?
But maybe you count forced reboots once a month (or every two months) for kernel upgrades (but also the above arguments since they also run systemd and therefore need reboots). Fine.
So in order to save ten seconds per month (from a boot time of a minute or so, including "bios" and grub wait times, etc., so not even a large percentage) this fd-passing silently breaks heaps of services, wasting hours here and there? And that's a good idea?
And all for what? Because you chose to have installed services you don't need, and don't use? And if you do use them, then the time was not saved anyway, but just created a second wait-period where you wait for the service to start up?
And ALL of these services could in any case be fully started while you were typing your username and password.
So what use case exactly is being optimized? The computer was idle for maybe half the time between power-on and loaded desktop environment anyway.
> If nobody cares, then why do people hate rebooting so much?
Because all their state is lost. All their open windows, half-finished emails, notepad, window layout, tmux sessions, the running terminal stuff they don't have in tmux sessions, etc… etc…
> And ALL of these services could in any case be fully started while you were typing your username and password.
This is the key point you are refusing to hear. No, all of the services on a modern Linux machine can't be started while you're typing in your credentials. So they're started lazily, on-demand, one of the classic techniques for performance optimization and a hallmark of good engineering.
Of course they can. How many services do you think there are installed, and how long do you think it takes to start them?
How long do you think it takes to start gpsd, or pcsd? Even my laptop has 12 CPU threads, all idle during this time. And including human reaction time (noticing that the login screen has appeared) this is, what, 10 seconds? 120 CPU-seconds is not enough? All desktops run on SSD now too, right?
In fact, how many services do you even think are installed by default?
And Linux, being a multitasking OS, doesn't even have just that window.
But you know, maybe it's a tight race. You could try it. How long does it take to trigger all those?
> a hallmark of good engineering.
In the abstract, as a "neat idea", yes. In actual implementation when actually looking at the requirements and second order effects, absolutely not.
You know you could go even further. You could simply not spin up the VM when the user asks to spin up a VM. Just allocate the IP address. And then when the first IP packet arrives destined for the VM, that's when you spin it up.
That's also a neat idea, and in fact it's the exact SAME idea, but it's absolutely clearly a very bad idea[1] here too.
So do you do this, with your VMs? It's clearly "started lazily, on-demand, one of the classic techniques for performance optimization and a hallmark of good engineering".
[1] Yes, very specific environments could use something like this, but as a default it's completely bananas.
But no it doesn't. Until your service is started, your service is NOT actually booted. That's what I said.
You are not paying per-second for the VM. The VM itself adds zero value to you. It's the service that's running (or in this case, not) that you're paying for.
Who cares how long it takes before systemd calls listen()? Nobody derives value from that. You're not paying for that. You're paying for the SERVICE to be ready. And if you're not, then why are you even spinning up a VM, if it's not going to run a service?
Starting services in parallel will reduce overall service start up time as well, even if services are dependent on each other, because services often do work before they connect to a dependent service. Without socket activation that is a race condition.
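The race being described can be sketched in Python. This is a toy stand-in for fd passing: the "manager" owns the listening socket before the "service" even exists, so a client connecting early just queues in the kernel backlog instead of getting connection refused. (Real systemd passes the fd to a separate process; a thread keeps this self-contained.)

```python
# Toy sketch of why socket activation removes a startup race: the
# manager listens before the service is started, so early clients
# queue in the backlog instead of getting "connection refused".
import socket
import threading

# 1. The "manager" binds and listens immediately, at "boot".
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen()
port = listener.getsockname()[1]

# 2. A client connects *before* the service has started; the
#    connection succeeds and waits in the backlog.
client = socket.create_connection(("127.0.0.1", port))

def service(lsock):
    # 3. The "service" starts late, inherits the already-listening
    #    socket, and serves the connection that was queued for it.
    conn, _ = lsock.accept()
    conn.sendall(b"ready\n")
    conn.close()

t = threading.Thread(target=service, args=(listener,))
t.start()
data = client.recv(16)
t.join()
client.close()
listener.close()
print(data.decode().strip())  # -> ready
```

Without the manager holding the socket, step 2 would fail until the service itself got around to calling listen() — that's the race.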
It's a pile of proof-of-concept broken pieces duct taped together into a big mess.
Here's an example: Someone read that fd-passing is a thing, so now systemd listens to just about everything and spawns stuff on-demand.
Now, that may seem like a good idea, if you think it up in a vacuum and don't have experience with the real world. It's a great idea, if you're in high school. But to have it actually accepted? WTF is even happening?
Oh, let's do this for time sync from GPS. Great. All that time that could have been spent verifying the signal and all, completely wasted, because some jerk thought that it's better to waste 15 minutes of the human waiting, just to save 100kB of RAM.
It's a monumentally bad idea.
And more specifics: Like I said, when you replace init you need to have it not crash.
And then restarting daemons with systemctl almost to a rule fails, and fails silently. Often I have to just kill the daemon manually, and systemctl start it again.
But people aren't complaining about systemd anymore because now there's two kinds of people:
1. People too young to remember stable software.
2. People who have given up, and just accepted that Linux too "just needs a reboot every now and then to kinda fix whatever got broken".
But maybe the trend is changing? Pipewire looks like it's not actually shit (unlike PulseAudio which has plagued us forever), and while it has some important bugs in edge cases, it's actually more reliable than what it's replacing(!)
> As written, your statement is unlikely to convince anyone that isn’t yet already.
It's hard to convince people who don't care. Or indeed those who don't know that no, actually, short of a kernel upgrade "reboot to fix that problem whenever it happens" is not normal, and is a serious bug.
Pre-systemd Linux had as a selling point that it's actually stable, compared to Windows at least. But Windows has gotten much better in the past decade in reliability, and Linux much worse.
systemd is on the level of a re-think by a pretty bright high school student. And that's not a good thing. It's a very bad thing.
> to be convincing, it would have compare something like bug density to the software projects that collectively replaces
You're asking me to be data-driven, while being fully aware that systemd isn't, right? Your argument is essentially a fallacy, implying that the status quo is data-driven.
It's hard to take your suggestion at face value. Especially with many of the same people pushing systemd at the time making up shit like "We know that Unity is objectively the best user experience in the world"[1] (that's why it lost, because nobody liked it, right?[2]).
At the same time I also fall into group (2), above. I don't have time to wrestle in the mud with people who don't care.
[1] A quote like that; I may not have gotten the words just right, but the word "objectively" (without data) was there.
[2] and I don't even particularly care about window managers. Before Unity I hadn't bothered switching from "whatever the default is on this system" in most cases.
You can just disable the fd passing for services that don't need it. I'm not sure what your actual issue is. I haven't had to reboot any more with systemd than I did with sysvinit or openrc, or the slackware rc init, or anything else really. If you have an actual crash that is causing you problems, you should consider reporting it or submitting a patch to fix it, just like you would with any other open source that you depend on.
>systemd is on the level of a re-think by a pretty bright high school student.
That seems like an odd statement, I believe systemd was inspired by other established unix service managers like macOS launchd and solaris SMF. The design is definitely not perfect but I wouldn't say the history was ignored when making it.
> You can just disable the fd passing for services that don't need it. I'm not sure what your actual issue is.
First of all this just sounds like "I don't know what your problem with systemd is, you can just choose to not use it". Second, my point is that it's an absolutely terrible idea, and should never have been done.
> I believe systemd was inspired by other established unix service managers like macOS launchd and solaris SMF. The design is definitely not perfect but I wouldn't say the history was ignored when making it.
A high school student can read up on things and then make something that they, without real-world experience, think seems like a good idea. And with no experience of what it takes to make software reliable.
If you think the socket activation takes too long you can just turn it off and start that service unconditionally. What's the problem? Other service managers support this too, it's not a systemd only thing. Maybe it's a bad idea for some services but others would seemingly disagree that it's always a terrible idea, and not just those who are systemd developers. You may just be using it for the wrong services -- it's most useful with services where the startup time is less important than reducing the overall memory usage on the system.
If you have experience making software reliable, please consider submitting bug reports and patches to help the project, like you would do with any other open source that you depend on. I'm sure contributions to improve the testing and CI would be appreciated. A high school student can also trash talk loudly about things they didn't take the time to fully understand (and I admit I did a lot of that when I was a teenager in high school), but it takes real expertise to illustrate what the actual problem is and to contribute a fix for it in a positive way.
> it's most useful with services where the startup time is less important than
So now it's not about startup time at all?
> please consider submitting bug reports and patches to help the project,
Who says I don't?
But systemd needs a few full time adults, not just a patch here and there to fix the launch-while-brainstorming culture that gave us the current situation.
It can also be about initial startup time of the system, not startup time of any individual service. Please follow the site guidelines, cf. the part about "don't cross-examine" -- this is just a technical tool, I'm sure we both can come up with some ways that it could be useful, even if we wouldn't use them ourselves.
>But systemd needs a few full time adults, not just a patch here and there to fix the launch-while-brainstorming culture that gave us the current situation.
Then start working on it full time, and get some of your friends hired too? Surely you can find someone to pay for that, if it's useful? What else is it that would satisfy you here? I'd be happy if another group was committing full time to systemd (or another similar open source project) just to fix bugs. I fully support you if you decide to do that.
Sorry I think I missed that. In that case you will have to find someone else who can do it and figure out how to get them paid. If you want help doing that, don't hesitate to ask.
> Here's an example: Someone read that fd-passing is a thing, so now systemd listens to just about everything and spawns stuff on-demand.
This feature benefits some people. Even if your service supports socket activation, you can still set the service to start up at boot, and you can still control it with normal command lines.
> It's a great idea, if you're in high school.
You make this accusation twice, but you give no-one any reason to believe it (unless you think "using an optional feature I don't want to use and which isn't significantly easier to use than to not use" is a reason, but most people would say that it is a good example of flexibility).
> And then restarting daemons with systemctl almost to a rule fails, and fails silently.
I have encountered an issue with using `service blah restart` on Ubuntu, where it doesn't work properly if it isn't connected to an interactive session. I was able to fix this by switching to `systemctl restart blah`. Perhaps you're experiencing something similar? I imagine Ubuntu's service wrapper is probably taken from Debian, so it could be quite widespread.
The fact that `systemctl (re)start` doesn't always give useful feedback is irritating. I am usually too bothered by something not working to have noticed when and why it gives no output about a failed service. A command should always output something on error and it should be sensitive enough to notice whether it has succeeded or failed.
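A workaround I'd sketch for the silent-failure problem (nginx just as an example): check the result explicitly rather than trusting the restart to complain.

```console
$ sudo systemctl restart nginx
$ systemctl is-active nginx || journalctl -u nginx -n 20
```

`is-active` exits non-zero if the unit isn't running, so the journal tail only prints on failure.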
> Linux too "just needs a reboot every now and then to kinda fix whatever got broken".
This is another accusation you make twice. I have been using Linux since the 90s, and I cannot agree that we have to restart Linux distros more often now than before. Can you give any examples of circumstances when you choose to reboot?
> You're asking me to be data-driven, while being fully aware that systemd isn't, right? Your argument is essentially fallacy by implying that status quo is data-driven.
Systemd might have been adopted on theoretical grounds. But that doesn't mean that an empirical objection is useless or irrelevant. If you can show that the theory doesn't match the data, or the data is worse for systemd than some alternatives due to unforeseen consequences or the difficulty of dealing with the larger spec, then this might lead to improvements to systemd or adoption of some alternative.
> At the same time I also fall into group (2), above. I don't have time to wrestle in the mud with people who don't care.
You don't fall into group (2) above. Group (2) is a subset of people who do not complain about systemd, but you are complaining about systemd. Moreover, your comment slings a good lot of mud, so it's hard to take that as a valid objection. You at least should work to clean up the mud you threw unnecessarily.
> This feature benefits some people. Even if your service requires socket activation, you can still set the service to startup and you can still control it with normal command lines.
But it's broken by design. It's a bad idea. It's "neat", but "neat" doesn't add value.
> You make this accusation twice, but you give no-one any reason to believe it (unless you think "using an optional feature I don't want to use and which isn't significantly easier to use than to not use" is a reason, but most people would say that it is a good example of flexibility).
Not sure what you mean. Using this feature pushes orders of magnitude of complexity onto users, and greatly reduces the ability to handle errors or even know the status of services.
Having a daemon listening to a socket is clearly orders of magnitude easier for end users. It means everything is in agreement. You check if the service is turned on (systemctl or whatever), you check if the process is running (ps, etc), and you check listening ports (netstat, nmap, etc...), they all agree that the service either is or isn't running. And if it's running, it's successfully run its initialization and should be usable.
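The cross-check described above, roughly as I'd do it (sshd as the example; `ss` stands in for netstat on newer systems):

```console
$ systemctl is-active sshd   # what the service manager thinks
$ pgrep -x sshd              # is the process actually running?
$ ss -tln | grep ':22'       # is anything listening on the port?
```

When the daemon itself holds the socket, all three answers agree.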
We already have this experience with inetd-based services. fd-passing is reinventing the past, poorly. fd-passing actually would have made more sense back in the 90s, because spawning a new process was more expensive. Both in terms of code and CPU power that overhead doesn't actually matter anymore for the vast majority of cases.
Again, I don't know what your question is. Is it "why would I possibly want to know if a service is ready to do its duties or not?"? Because if it is, I don't know what to tell you.
> Perhaps you're experiencing something similar?
I mean things like restarting nginx, and either it just plain didn't (and thus fresh TLS certificates weren't picked up), or it failed to start up again and now nobody's listening to port 80/443 at all.
> Can you give any examples of circumstances when you choose to reboot?
I've filed bugs, but don't want to doxx myself. Something more systematic is that sometimes after running apt-get upgrade it's recommended that I reboot (for non-kernel reasons). The fact that someone would even write that message is a sign that the author doesn't care.
> Systemd might have been adopted on theoretical grounds. But that doesn't mean that an empirical objection is useless or irrelevant.
I agree. But this is a very common tactic for people who just don't want to have a discussion, too. I'm sure in this case you're saying it in good faith, but you should be aware of the asymmetry of asking one side to provide data when the other side has none. And the cost of collecting and interpreting that data (depending on which parts of this, what, a human-year?), and the risk of systemd people dismissing that data anyway, because "yeah, I guess the data supports your point of view, but I don't like it so you can fork the repo to implement it if you want. Bye.".
So I'm not saying this appeal to data is in bad faith, but it is a bit naive.
> Group (2) is a subset of people who do not complain about systemd, but you are complaining about systemd. Moreover, your comment slings a good lot of mud, so it's hard to take that as a valid objection. You at least should work to clean up the mud you threw unnecessarily.
Best comparison is that I can complain about the corruption of politicians without inviting an argument that I myself should become one, to drain the swamp.
IOW: I don't have the time, and if nobody else cares, then even if I did then I don't see that I would succeed in a sea of people who don't care about software reliability.
You keep saying this, without giving evidence to back it up. People are still running Linux in memory constrained environments, those didn't go away now that the 90s are over.
>Using this feature pushes orders of magnitude of complexity onto users, and greatly reduces the ability to error handle or even know the status of services.
To be clear, it sounds like what you're suggesting is that these services implement their own fd holding logic, which is going to be even more complex, and is exactly what systemd is trying to prevent from happening.
>You check if the service is turned on (systemctl or whatever), you check if the process is running (ps, etc), and you check listening ports (netstat, nmap, etc...), they all agree that the service either is or isn't running. And if it's running, it's successfully run its initialization and should be usable.
This isn't really correct, netstat or nmap won't show process status at all. You really don't know what the real status of that port is unless you've run lsof or something else that scans the open fds of the processes, and such a tool would make it obvious when systemd (or some other fd holder) has the fd open. Also, systemctl will display this separate socket/service units, so you can just check if the socket unit is running but not the service.
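For example (needs root to see other users' processes; output format varies):

```console
$ sudo ss -tlnp | grep ':22'
# the users:(("sshd",pid=...,fd=...)) column tells you whether it's
# sshd itself holding the socket, or systemd waiting to spawn it
```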
> People are still running Linux in memory constrained environments
So why do they have all these memory-hungry services they don't need on standby?
Does that mean that I can DoS these machines simply by connecting to all the open ports, thus starting up the heavy daemons in the constrained environment?
Why is that a good thing?
>> You check if the service is turned on (systemctl or whatever), you check if the process is running (ps, etc), and you check listening ports (netstat, nmap, etc...), they all agree that the service either is or isn't running. And if it's running, it's successfully run its initialization and should be usable.
> This isn't really correct, netstat or nmap won't show process status at all.
This is HN, not reddit, so I'm going to assume you're not just trolling.
netstat -na | grep 'tcp.*443'
Yes, actually, netstat will show you if you have an HTTPS server running. It will show you if you have an SSH server running.
Same argument with nmap.
Compare this with the fd-passing model, where you can have every port on your system bound, and it tells you nothing (while troubleshooting) which services are actually up.
Do you not see how "all the ports are bound" then becomes completely useless in troubleshooting and checking status?
Will it tell you if you're actually running SSH on port 443? No, of course not. That's not how troubleshooting works, like at all.
I ask the same question because I haven't yet seen a better alternative given. If you have one, please show it; it would be very interesting to me. Otherwise, it sounds like you may not have that much experience with these tools, which is understandable. I can help find solutions, if you're interested.
>Does that mean that I can DoS these machines simply by connecting to all the open ports, thus starting up the heavy daemons in the constrained environment?
I'm not sure I'm understanding this question? A lot of machines are not open to the public internet, so this probably doesn't apply there. You can also use some cgroup managing tool (like systemd) to restrict memory usage to the process and configure the OOM killer behavior, so that would also prevent DoS attacks.
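The kind of restriction I mean can be sketched as a drop-in (the path and service name are hypothetical; `MemoryMax` and `OOMPolicy` are the relevant directives):

```ini
# Hypothetical drop-in: /etc/systemd/system/heavy.service.d/limits.conf
[Service]
# Hard cap on the unit's memory; past this the cgroup OOM kicks in.
MemoryMax=256M
# Kill the whole cgroup rather than leaving a half-dead service.
OOMPolicy=kill
```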
> Yes, actually, netstat will show you if you have an HTTPS server running. It will show you if you have an SSH server running.
Actually no, this is wrong, at least for me when I tried the version of netstat that ships with debian. It only shows if something has the port open -- that thing could be an fd holding service (like inetd or systemd or something else), or it could be a load balancer, or it could be another service that is incorrectly configured to use the wrong port, etc. So you're right that this complicates the system but this isn't really systemd's fault, and there is nothing that a service manager can really do about this. The only way to know for sure is to use a different tool that prints information about the owning process -- that way you know for sure if it's sshd or something else. Maybe you have a version of netstat that shows this information? If so, then it's not a problem at all, just simply check that column before you continue with your trouble shooting.
>Will it tell you if you're actually running SSH on port 443? No, of course not.
Well now you got me confused, this seems to be directly conflicting with when you said this: "It will show you if you have an SSH server running"
> A lot of machines are not open to the public internet, so this probably doesn't apply there.
An internal audit is enough to trigger it. "Port scan crashes machine" is not exactly "reliable software".
> You can also use some cgroup managing tool (like systemd) to restrict memory usage to the process and configure the OOM killer behavior, so that would also prevent DoS attacks.
But that means that the default is bad, and unsuitable for resource constrained machines. Which circles back to "neat, but no actual use case".
> Actually no, this is wrong, at least for me when I tried the version of netstat that ships with debian. It only shows if something has the port open -- that thing could be an fd holding service
So you agree that it's a bad idea?
> So you're right that this complicates the system but this isn't really systemd's fault
It is, because it's needless complication. At least inetd was a model to make things simpler. It's the cgi-bin of network services.
But you'll notice that people don't write inetd-based services anymore. In fact my Ubuntu default install doesn't even have inetd installed.
> The only way to know for sure is to use a different tool that prints information about the owning process
netstat has supported this for (maybe) decades on Linux. It's the -p option.
But systemd's poor choices aside, if you see port 22 open then you can be very sure that an sshd is running and that it started successfully (i.e. the config wasn't too broken).
You could still be wrong. Someone could have started netcat there, or just a honeypot, or whatever, but you can't tell me it's not useful information.
> Well now you got me confused, this seems to be directly conflicting with when you said this: "It will show you if you have an SSH server running"
… unless systemd broke this functionality. My point is exactly why it's a bad idea to break it.
Connecting clients also won't get useful error messages. "Port closed" means the service isn't running; "timed out waiting for the SSH banner" means something else entirely.
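That distinction is easy to check programmatically. A sketch (`probe_ssh` is my hypothetical helper, not a real tool):

```python
import socket

# Distinguish "port closed" (nothing listening) from "port open but
# no SSH banner" (something holds the fd, but no working sshd behind
# it). A real sshd sends "SSH-2.0-..." immediately after accept.

def probe_ssh(host, port, timeout=2.0):
    try:
        s = socket.create_connection((host, port), timeout=timeout)
    except ConnectionRefusedError:
        return "closed: service not running"
    try:
        banner = s.recv(256)
        return f"banner: {banner!r}"
    except socket.timeout:
        return "open but silent: something else holds the port"
    finally:
        s.close()
```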
Pre-systemd it was essentially never anything other than inetd that held ports for others. And for about the last 20 years even inetd would only hold things like the echo, chargen, and time services. Having those open by default is a relic of a more naive time, when people thought "sure, why not run tftpd and the time service, what could possibly go wrong?".
Nowadays they're off by default, because we've learned that any attack surface is still an attack surface, no matter how small.
It probably helped that OpenBSD kept bragging about how few remote holes it had in the default install. That wasn't actually because OpenBSD had better code; it was just that a default OpenBSD install only had OpenSSH open to the world.
>"Port scan crashes machine" is not exactly "reliable software". [...] So you agree that it's a bad idea? [...] But that means that the default is bad, and unsuitable for resource constrained machines.
I'm not sure I understand where you're coming from here. I explained how it could be made suitable: it could be done in a way that is crash resistant. I don't know whether it's a bad idea or not; that depends on what you're trying to accomplish. The default here is configured by the distro, so you could expect to see a different default on an embedded distro.
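For the record, the sort of per-unit limits I'm talking about look like this. A hypothetical drop-in (path and values are just an illustration; the directives are documented in systemd.resource-control(5) and systemd.service(5)):

```ini
# /etc/systemd/system/sshd.service.d/limits.conf -- hypothetical drop-in
[Service]
MemoryMax=128M        # hard cgroup memory cap for this service
TasksMax=64           # bound the number of processes/threads
OOMPolicy=kill        # kill the unit's processes on OOM
```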
>In fact my Ubuntu default install doesn't even have inetd installed.
I believe this is mostly because systemd has replaced its functionality.
>netstat has supported this for (maybe) decades on Linux. It's the -p option.
Good call, I forgot about that; I always use lsof. But that's exactly what I mean: it will show you which pid has the port open, so it will be obvious whether it's systemd or sshd. You won't be sure there is actually an sshd running unless you check that. This really seems like a non-issue; you have all the tools you need to troubleshoot it.
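If you're curious what those tools do under the hood, here's a rough sketch of the inode matching that netstat -p, ss -p, and lsof perform (`pids_listening_on` is my hypothetical helper; Linux-only, IPv4-only, and it can only see processes you have permission to inspect):

```python
import os
import re
import socket

# Find which PIDs hold a listening TCP socket on a given local port,
# by matching the socket inode from /proc/net/tcp against each
# process's /proc/<pid>/fd entries -- roughly what `netstat -p`,
# `ss -p`, and `lsof -i` do on Linux.

def pids_listening_on(port: int) -> set:
    inodes = set()
    with open("/proc/net/tcp") as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            local_port = int(fields[1].split(":")[1], 16)
            state = fields[3]
            if local_port == port and state == "0A":  # 0A == LISTEN
                inodes.add(fields[9])
    pids = set()
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = f"/proc/{pid}/fd"
        try:
            for fd in os.listdir(fd_dir):
                target = os.readlink(os.path.join(fd_dir, fd))
                m = re.match(r"socket:\[(\d+)\]", target)
                if m and m.group(1) in inodes:
                    pids.add(int(pid))
        except OSError:
            continue  # process vanished, or not ours to inspect
    return pids

# Demo: open a listening socket ourselves and find our own PID.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
found = os.getpid() in pids_listening_on(srv.getsockname()[1])
srv.close()
print(found)
```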
>systemd broke this functionality. [...] Port is closed means service not running. [...] Pre systemd it was essentially never anything other than inetd that held ports for others.
I don't really want to discuss this anymore if I have to repeat myself, but this is not correct. There are multiple other reasons why you would have another service holding the fd open, such as load balancers, filtering proxies, userspace firewalls, etc, etc. The ability to pass an fd to a child process is an intentional feature of every Unix-like operating system that I've used. Systemd is only using the feature as the OS intended it, which is also supported on OpenBSD.
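A minimal sketch of the mechanism (plain fork-based fd inheritance; real systemd passes fds starting at 3 and advertises them via the LISTEN_FDS/LISTEN_PID environment variables, which this demo doesn't implement):

```python
import os
import socket

# The parent ("manager") opens the listening socket; the child
# ("service") inherits the fd across fork() and serves on it.

def serve_on_inherited_fd(fd: int) -> None:
    # Re-wrap the inherited file descriptor as a listening socket.
    srv = socket.socket(fileno=fd)
    conn, _ = srv.accept()
    conn.sendall(b"hello from the activated service\n")
    conn.close()
    srv.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 0))  # ephemeral port, as a manager might
listener.listen(1)
port = listener.getsockname()[1]

pid = os.fork()
if pid == 0:
    # Child: plays the role of the activated service.
    serve_on_inherited_fd(listener.fileno())
    os._exit(0)

# Parent: plays the role of a connecting client.
listener.close()  # child still holds the listening fd
client = socket.create_connection(("127.0.0.1", port))
msg = client.recv(1024).decode().strip()
client.close()
os.waitpid(pid, 0)
print(msg)
```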