Essentially all binaries shipped by mainstream Linux distributions are compiled with PIE nowadays. You can run `checksec` on any binary on Ubuntu, and it will have those properties.
(You can install checksec with `pip install pwntools`).
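For anyone following along, this is roughly the check being described; a quick sketch (pwntools ships a `checksec` front-end, and /usr/bin/ls is just an example target):

pip install pwntools
pwn checksec /usr/bin/ls    # reports RELRO, stack canary, NX, PIE, etc.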
On the other hand, GLIBC has, to my knowledge, the most hardened heap implementation out there. And there are more mitigations for double-free and other heap exploits on GLIBC.
So in that regard, Alpine is less secure by using musl. Having a small, understandable system is a real advantage when it comes to security.
It was pretty clear to me that the comment was a description of the respective characteristics of glibc and musl in terms of security, while avoiding any conclusion: glibc has heap hardening, which is good for security, but a complex codebase, which is bad for security. Meanwhile, musl is small and understandable, which is good for security, but with a naive codebase that lacks hardening, which is bad for security. Which is better is intentionally left to the reader to avoid flamewars.
That's a charitable reading but it doesn't track with what they actually said. The first paragraph says that all modern Linux binaries are compiled with PIE, so Alpine has no advantage there. The second paragraph says that glibc is more secure than musl heap-wise. The third paragraph is the conclusion, which is that Alpine is less secure because it uses musl.
A sentence thrown onto the end of the conclusion should normally be read as re-emphasizing the reasons for the conclusion, unless it starts with a word like "though" or "however".
If you're smart enough to construct this analysis and critique, then you're smart enough to have reached the same conclusion the parent and I did.
I'm not being charitable; it's just what made sense, like mentally fixing a typo instead of acting like you don't know, and can't figure out from context, what someone meant just because they flubbed a letter or a word or something.
When a letter gets flubbed, it's nearly always possible to correct it from context alone. When a word is missing, it's sometimes possible to retrieve the original meaning but other times the missing word creates an ambiguity and you have to just pick a meaning. Faced with the ambiguity, your brain jumped in one direction, mine in another. You landed on the correct answer, I didn't, but there's no need to imply that my reconstruction was done in bad faith.
> I'm not being charitable; it's just what made sense, like mentally fixing a typo instead of acting like you don't know, and can't figure out from context, what someone meant just because they flubbed a letter or a word or something.
I mean, that's just what's called in philosophy the principle of charity [0]. When evaluating a claim you should read it in its best light, which includes glossing over minor inaccuracies and going straight to the main point.
So in that regard, Alpine is less secure by using musl. However, having a small and understandable system is a real advantage when it comes to security.
A line break in between the two sentences of the last paragraph may have made the commenter’s point clearer.
It seems to me they were only comparing the relative benefits/drawbacks of glibc and musl, but with the way it is written, the pro-musl comment feels out of place.
I run checksec on everything all the time, and on all my Alpine nodes all the processes come back like this (not pasting the full output, for brevity). I have never seen anything built by Alpine missing these flags.
COMMAND  PID     RELRO       STACK CANARY   NX/PaX      PIE
init     1       Full RELRO  Canary found   NX enabled  PIE enabled
[snip...]
crond    422838  Full RELRO  Canary found   NX enabled  PIE enabled
> On the other hand, GLIBC has, to my knowledge, the most hardened heap implementation out there. And there are more mitigations for double-free and other heap exploits on GLIBC.
Re: Linux security, if someone can run any code at all on your system, you're screwed. Linux is swiss cheese. The only reason it isn't just as overrun with malware as Windows is that nobody uses Linux on the desktop, so malware authors don't really try. (Honestly, I'd say modern Windows and MacOS both have a superior security architecture.)
Linux distributions just have a different security model, based on trust. Maintainers, together with developers, form a chain of trust from the repo to your machine.
Windows and MacOS on the other hand have an untrusted security model, everything is assumed to be potentially dangerous.
You're really not adding good content here. This is crap that gets owned all the time... Why? What's the actual comparison? What model/approach makes the difference?
Please add something meaningful. Otherwise it's just ranting/fanboying over your preferences - we can be better than that on this board.
We really can't. Even if I reply with reasons, they will just be argued, ignored or downvoted by people who don't know what they're talking about. People only believe what they want to believe, or whatever a famous person says. (not to mention none of the people replying to me are providing any contrary evidence, just more spurious claims, but those with unpopular opinions get the downvotes)
Have you actually looked at the 0days? Almost always the exploit is confined within the sandbox. You have to chain it with a sandbox breakout exploit to make it useful.
> ChromeOS, one of the more secure operating systems (behind QubesOS, on par with Android and iOS)
I don't know what features ChromeOS has over Linux, but I wouldn't consider Android or iOS particularly secure, and Qubes isn't either, directly; it's just a tool that can help in some cases.
I am also a BSD person, and coincidentally ran my first Alpine as a VM on bhyve this week.
It's BusyBox. If you don't explicitly need discrete binaries for the /bin and /sbin utils, it is a tiny userspace and fast to boot. tmux, zsh, and I was done for most Unix purposes.
To get to my endpoint I wound up doing a lot of apk installs (node, more node dependencies, stuff), but overall I found it the best Linux experience I've had in a long time. I wish ZFS was baked in, plus more overt virtio bindings for a ZFS-backed bhyve setup.
I've used/deployed FreeBSD for over 20 years and I have to admit I dread connecting to a GNU/Linux box. It's just such a mess of a system, with so many variations and inconsistencies. Even connecting to a Windows server makes more 'sense' to me.
Anyway, I'm glad to hear that there might be a sane Linux distro out there. I will give it a go if I ever need a Linux box, which is pretty rare tbh.
Firewalls. Is it iptables or nftables? What are iptables-nft and iptables-legacy doing in this mix? Or was I supposed to manage them with firewallctl or ufw?
Network settings. If-up scripts? NetworkManager? Are we already supposed to use that systemd-networkd thingy, or is it still not ready? I just need to add an IP address in addition to the one given by DHCP...
Who is managing /etc/resolv.conf today?
(my regular frustrations when dealing with both Ubuntu and Rocky Linux hosts..)
As a greybeard GNU/Linux sysadmin: go raw nftables, ripping out iptables (the newer GUI/TUI firewall interfaces support nftables); rip out NetworkManager; and use systemd-resolved to manage DNS. (On non-systemd systems like Devuan this changes.) Use systemd units for powerful program and service control, including systemd-nspawn for containerization.
iptables has been with us for more than 20 years and is only now being replaced (pretty slowly, I might add). The old rules are still supported through iptables-nft; you can just import them and forget nft exists.
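For what it's worth, the import can be sketched like this (assuming your old ruleset was saved with iptables-save to /etc/iptables/rules.v4):

iptables-restore-translate -f /etc/iptables/rules.v4 > /etc/nftables.conf   # convert legacy rules to nft syntax
nft -f /etc/nftables.conf    # load them
nft list ruleset             # verify what is actually active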
Distributions I prefer have never used NetworkManager and haven't changed network configuration in a long time. RHEL and its rebuilds have used NM for what feels like an eternity. Ubuntu is the odd one out here with its constant churn, afaik.
Same with firewall wrappers like ufw and firewalld. Either your distribution uses one and you just use whatever has been chosen for you, or it doesn't and you go with nftables (or iptables-nft if you prefer).
This is all only really a problem if your organization uses a bunch of distributions instead of standardizing on one, but then you probably have a lot of other, more serious problems than learning how to configure your firewall...
As a counterpoint, I evaluated FreeBSD for a project about a year ago and was really put off by its primitive service management (compared to systemd, which I know pretty well and whose features I use extensively; they really do help in your daily work), and by the three firewalls, which all seem to be approximately equally supported, so you never really know where to put your time. (Unfortunately, I had to pass on the OS for other reasons that have no relation to its technical merit.)
Yes, but each has a clear set of tools, and it's clear which one you are using. There are no shims to use IPFW tooling with PF and vice versa, while on Linux they are all mixed.
Sorry for the inconvenience; we will stop writing the software we want so that we won't risk overfilling BSDers' brains.
I really don't get these criticisms. You have choice, and having choices doesn't make a system bad; it just means you have to make your own choices, which can also be to go with systems where stuff is standard.
Choice should only be offered after you have a stable foundation/base. Suppose you have a store that sells frozen food only, an incredible amount of choices, but no base ingredients like flour, grains and meat.
Software is utilitarian in nature; the goal is the task. But how do you accomplish a task with an infinite number of tools? And not only that: how can you be sure that the tool is secure and stable?
I've had nothing but issues with systemd-resolved.
NetworkManager seems to be what things are standardizing on these days. While I've always avoided NetworkManager for some reason and used various combinations of alternatives, I'm all for having one common standard networking utility for Linux.
Same here. However, from what I've seen, touching any systemd component causes cascading issues.
I usually settle on networkmanager, since there's not a great alternative for dealing with wifi. However, it often delegates to a giant broken pile of fail.
Things can be much simpler on machines that connect via ethernet (including VMs).
NetworkManager and systemd-resolved are not really interchangeable. The latter is a local caching multiprotocol name resolver and NetworkManager supports its use for name resolution.
FreeBSD has 3 different firewalls, not 3 different interfaces to the same firewall. Each firewall has its own purpose. IPF is lightweight, pf has a nice UI/UX, ipfw is very integrated into the system.
More importantly, doing a simple kldstat would tell you which firewall is running. On Ubuntu (as an example) I have no idea if I should be using nftables, iptables or if ufw is working or not.
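For reference, that check is a one-liner (module names from memory, so treat this as a sketch):

kldstat | grep -E 'pf|ipfw|ipl'    # shows which firewall module(s) are actually loaded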
That's the main problem with Linux these days: Experience with distro A rarely transfers to distro B.
Also, at least with Ubuntu, switching to a new LTS means that most administration tools have been replaced with different (usually buggier) ones, so knowledge of the old release doesn't necessarily transfer either.
It wasn't this way in the early days, but the community focus stopped aligning with end-user interest about a decade ago. At that point fragmentation + complexity exploded.
I say this as a big-time BSD friend: the same can be said about the BSDs. OpenBSD and FreeBSD are very different, and I've never used NetBSD, but I can only imagine it's not the same as the other two.
Yah, it's a bit wrong that people compare an operating system like FreeBSD (or Solaris, or AIX, etc.) to "Linux", which is just a kernel. The distribution IS the operating system, and of course there will be differences.
SystemD is changing things up a bit, packaging up all the "boilerplate" and making things more consistent across distros, which is convenient, sure. I joke that the old adage "GNU/Linux" should be updated to "SystemD/Linux".
I agree with you: FreeBSD should be judged in its own right against every other operating system out there, including the hundreds of GNU/systemd/Linux distributions and every obscure operating system out there. How deep you dig depends on you.
My preference has fallen on a combination of FreeBSD, OpenBSD, and Manjaro Linux, with FreeBSD as my main operating system.
The main drawbacks are:
1) poorer wifi support *
2) non-existent Bluetooth support
But the main advantages of FreeBSD are:
1) FreeBSD is a source distribution first, always has been, always will be.
2) The most permissive software licenses are preferred, which I think is really cool
3) By far the best package managers: both ports and pkg are simpler to use than anything I have tried from any other distribution. I know some people swear by Slackware, Gentoo, and Arch, but in general their package management does not appeal to me. Plus it always seems like Linux distributions are either source-based or binary-based; sure, you can usually do both (except on the source-first distributions), but it's usually inferior to ports/pkg.
4) first class ZFS support
5) I get to run the same system on my desktop as I do on my production systems which I consider a big advantage.
I have resolved the WiFi support by running wifibox, a tiny virtualized Linux VM running on bhyve. It gives me a 20-fold increase in speed! Coincidentally, it's based on Alpine Linux, which the blog post is all about!
When I want to play games, I reboot to Windows or Manjaro, which takes about 60 seconds... Both are fairly stable and easy-to-maintain operating systems. I like MacOS as well, but I don't have any Apple computers anymore.
It's been a while since the first-class ZFS support had any advantage for the user beyond an initial install. Maintenance on it was so limited that they ended up rebasing on ZFS on Linux anyway, making it literally less first-class than on Linux.
Today you can get ZFS packages from contrib in Debian and run it for several years with no problems. I know because I did that from Debian 9 (2017) through Debian 12 (2023) and still going. Ironically, Debian 9 took over that ZFS pool from a FreeBSD server, and there is not one part of that migration that I regretted.
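For anyone wanting to reproduce that setup, it is roughly this (a sketch; it assumes contrib is already enabled in sources.list and the pool is named tank):

apt install linux-headers-amd64 zfs-dkms zfsutils-linux   # zfs lives in contrib
modprobe zfs
zpool import tank    # e.g. a pool that previously lived on a FreeBSD box
zpool status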
The first 3 points are pretty much covered by NixOS too. 1. It's source-compilation based, but you download the cached result if it exists. 2. The unfree option has to be explicitly set if you want that for specific packages. 3. Depends on taste, but it's pretty easy.
AFAIR, for switching from 20.04 to 22.04 I had to ensure network configs were under netplan, and that's all.
What's important as well: there is no rush to switch to the newer LTS; it's no problem to plan and test the migration over a year if needed, as the old LTS is still supported.
The question was why people may find Linux inconsistent. The distributions are very much part of that, and even within a distribution, an LTS just 2 years later might have wild differences because of the kernel promoting new mechanisms.
It may not be that much of a problem in practice. I deal with multiple distributions because for new servers we pick them based on expected future support, and they're only a bootstrap to docker/podman, which is the great 'equalizer'. So the inconsistencies are only a problem until our Ansible scripts have learned the differences, and when we need to debug an issue. Not that often, fortunately; once the configurations are in place, things are generally stable.
I only interact with Debian, sometimes Ubuntu, and that description of the Linux situation is fair and accurate. I love Linux, but it’s also a chaotic mess, just as described.
Linux user since 1994; did some tiny kernel development eons ago. My whole home depends on Linux and Home Assistant, Vaultwarden, and a few others.
I love Linux but the total mess with networking and sound is disheartening. It is a pile of crap.
I do not care whether it is this or another solution, but for fuck's sake, let's have one system and not five that step on each other. This is infuriating.
As well as what others have said: with FreeBSD things change over time, this is a given. But I can always use the release notes and the FreeBSD Handbook to resolve any issues.
With GNU/Linux, things change and the lack of authoritative documentation is tiresome. I end up on Stack Overflow, triaging legacy posts from sources I just cannot trust.
Is FreeBSD perfect? No of course not. Is Linux a complete waste of time? No of course not! BUT my time is better spent in FreeBSD (and yes even windows) than Linux.
The big problem I have, as a BSD person who occasionally needs Linux for something, is that the Google search results are terrible. Even when specifying the OS, I often get incorrect results that show how one would have done what I want to do 8 years ago, when the tools were different.
I have a job with Linux again, so I set up a dev VM, and I've got prod hosts on GCP that run our application in a container...
On prod, I have to use ifconfig, because ip isn't installed. On dev, I have to use ip, because ifconfig is 'obsolete'. Same deal with netstat vs ss. Those are the big ones for me.
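For anyone keeping score, the mapping is roughly this (eth0 is a placeholder interface name):

ifconfig eth0     # net-tools; iproute2 equivalent: ip addr show dev eth0
route -n          # net-tools; iproute2 equivalent: ip route
netstat -tlnp     # net-tools; iproute2 equivalent: ss -tlnp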
I don't particularly care about the progression of firewalls on Linux, it seems like one day it will be back to one, but FreeBSD has always had three and I use two of them simultaneously, so how can I complain?
It seems like the real problem is that you've got a ton of drift between your Prod and Dev environments. How old is your Prod compared to your Dev?
Since you're running something in a container, you should be able to upgrade the host OS (more accurately, recreate the hosts with current images) very easily. I wouldn't expect there to be more than a few weeks of drift between Prod and Dev.
Our GCP prod is running Google's Container-Optimized OS [1], something relatively current? But the container we run our app in is Debian something, and my dev environment matches that... But when I go to prod, I don't go into the container, because a) I don't really understand how, and b) I don't need to anyway; I can do almost all the stuff that needs doing without getting a shell into the container.
The real problem is that ifconfig can do the job, but nobody wanted to modify it, so they built a new tool; both tools still work, though, and nobody is going to cleanse the world of ifconfig. Same deal for netstat vs ss: ss does the same job as netstat, but supposedly faster or better? They could have made netstat better and didn't. In the meantime, this causes churn and leaves a wake of broken documentation and frustrated users.
I didn't pick the environment, it was there, and I don't have enough authority to change the environment that much, I'm just working part time, and I want to get in, do my work, and get out. It sucks having to use two different tools for the same thing, all over the place. If I had an open offer to come work part time with a former boss at a place running FreeBSD and Erlang, I'd have taken it, but I got Linux and Rust instead, so I'm dealing with it :P
Of course, FreeBSD isn't perfect either. I've just updated a machine to FreeBSD 14, to find that I can no longer use the command
ifconfig pfsync vlandev main vlan 4 192.168.4.11/24
because I get an error
ifconfig: both vlan and vlandev must be specified
Instead, I've got to put the vlan number first. So I've got that to chase down, and freebsd-update was also very slow, so I've got that to chase down too.
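If the error message is taken at face value, the fix is presumably just reordering the arguments (untested):

ifconfig pfsync vlan 4 vlandev main 192.168.4.11/24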
We run a lot of containerized stuff too, but in AWS EKS. For us, access to the container host isn't really a thing, and I'm not even sure if it's possible to be honest. The container hosts are an implementation detail, any direct access to the containers is through `kubectl exec ...`, so the tools available there come from the container image and match from dev through to prod.
I do agree with you though. It really feels like the change from netstat to ss, ifconfig to ip, etc, is churn for the sake of churn. FreeBSD is nice because it's a comprehensive operating system. Linux is a kernel used by a whole bunch of different, but fairly similar, operating systems.
My Debian server has a mix of systemd and /etc/init.d startup scripts. That’s the sort of thing where a BSD would be likely to say, ok, as of version N due in 3 months, we’re migrating everything and going from 100% old way to 100% new way.
> My Debian server has a mix of systemd and /etc/init.d startup scripts. [...] ok, as of version N due in 3 months, we’re migrating everything and going from 100% old way to 100% new way.
> Support for System V service scripts is now deprecated and will be removed in a future release. Please make sure to update your software now to include a native systemd unit file instead of a legacy System V script to retain compatibility with future systemd releases.
Eh, I don’t really care either way. The ones I really care about are in my shell history. I meant that more as an example of where the BSD way would be different. Since they manage all of the software together, it’s much easier for them to do wholesale migrations like that.
Long ago, it was that a linux system was "linux kernel, big pile of random GNU utilities for userland" while a BSD was "every utility is written by the same team."
This ends up with the documentation "working" and very consistent, vs. a Linux (jeez, no man pages, what the heck is this "info" crap?).
Things are better now, especially on the documentation front, but it remains that the "gnu" ecosystem is still a hodgepodge of different utilities written by different teams at different times, and there are still inconsistencies.
I've spent decades in the linux ecosystem and away from "unix" so the memories are hazy and the brain damage (from linux) permanent at this point; the fish no longer notices the water.
Yes, this has been my experience as well, even at the kernel level. I actually enjoy looking at and hacking on BSD kernel code. Linux kernel code is ... another story.
But I tend to think the difficulty with Linux/GNU is more a result of the enormously larger community and the huge diversity of use cases. For example, if you stick with a complete vendor (Red Hat being the best example), there is a fair amount of consistency and documentation (man pages) among the system tools. As you accumulate more applications and extra tools, that's where the community fragmentation really hits. This is most intensely felt when I try to set up a workstation (laptop or desktop) with a BSD. Even discounting hardware support, I run into so many things where BSD is consistent in part because the thing simply doesn't exist there.
I still have a dream, though, that one day BSD will start becoming the go-to in various places. But seeing how Apple took it, made it common, and also locked it down, I have my suspicions that the permissive licensing (which as a developer I really love) does seem to end up being taken by big tech to make huge profits and used without giving (much) back.
As others have alluded to, greater Linux is awash with too many choices for every component, like a Walmart supermarket, not to mention CADT-driven development.
How baked in? I think Alpine is one of the few distros to ship a binary zfs kernel module. So it's just a quick apk command away from getting it. There's some pretty decent documentation to install zfs on root on the alpine wiki too https://wiki.alpinelinux.org/wiki/Root_on_ZFS_with_native_en...
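The quick start is roughly this (a sketch; it assumes the stock linux-lts kernel, for which zfs-lts should be the matching module package):

apk add zfs zfs-lts    # userland plus the module built for linux-lts
modprobe zfs
zpool status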
ZFS on root (and setting up automatic snapshots) is great for rollbacks if a system update turned out bad, and also allows for being certain there's no corruption after a power failure or when a disk starts getting flaky. It also makes sending off snapshots for backups a breeze.
So it’s not really for day to day issues, it’s more for turning a rare “day of pain” into something much less.
AFAIK, for events like unexpected power failure, ZFS is always consistent to the point of the last complete TxG. No snapshot required (and it wouldn't provide more-recent data, either) to deal with power failure-induced data corruption, because there isn't any.
And also AFAIK, ZFS snapshots don't protect against data corruption from flaky disks: An online snapshot doesn't make a copy of anything. (That's more like what RAIDZ or copies=2 are for.)
Snapshots do make great rollback methods, even for fixing dumb day-to-day mistakes. (I mean, I'm sure none of us have ever done something stupid with a computer, but it probably happens to someone sometimes.) And they are indeed easy to send elsewhere to be used as complete backups of a dataset's state at a single point in time.
And there's other good stuff about daily use of ZFS root, too: Datasets. Like other filesystems, ZFS does occupy an entire disk (or a partition on that disk) -- in this case a "pool".
But within that pool can be many, many datasets that are independent of one-another, just sharing space in the pool. One linux distro's root might occupy one dataset, and another more-experimental (or major breaking upgrade) distro's root might occupy another dataset. /home can be on a third dataset that both distros use. This is a very wonderful thing for those with limited hardware budgets -- gone are the days of resizing/paring down disk partitions just to try something different. (Datasets can also be copied or moved to different pools and those pools can be different sizes.)
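A sketch of that layout, with illustrative pool and dataset names:

zfs create tank/debian                       # one distro's root
zfs create tank/experiment                   # another, sharing the same pool space
zfs create -o mountpoint=/home tank/home     # /home shared by both installs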
Or: Caching. Now that persistent l2arc is a thing, a snappy-feeling system can be built with big (redundant!) spinning disks, and a relatively small (fast!) SSD as a read cache.
It's not really just for "day of pain" issues -- ZFS is pretty slick in all kinds of useful ways, as a root filesystem and elsewhere.
For starters you have the snapshots which you can use as 'restore points' like in Windows to go back when an update breaks something. Except this time they actually do work (I've never had that fix anything for me on windows).
You can also use them to try out huge packages like different DEs on your daily driver and revert if you don't like it without having to remove the entire forest of dependencies.
You can also quiesce the filesystem for backups, similar to Volume Shadow Copy in Windows. ZFS send/receive is also a fast way to actually make backups, although it has some serious drawbacks so I don't use it (mainly the inability to restore individual files).
Another thing is that you can base filesystems upon other ones, so you can create one jail (like a docker container) that you keep updated and base your other ones off it. A bit like docker layers do. Yes, this one is very FreeBSD-specific.
And finally, doing a scrub to check for bitrot can really save your bacon.
So in my examples most useful features stem from its snapshot functionality.
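For the curious, most of the features above boil down to a handful of commands (names are illustrative, and backuphost is hypothetical):

zfs snapshot tank/root@pre-upgrade     # restore point before an update
zfs rollback tank/root@pre-upgrade     # revert the update wholesale
zfs send tank/root@pre-upgrade | ssh backuphost zfs receive backup/root   # off-box backup
zpool scrub tank                       # walk every block and verify checksums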
BSD users might also like Void Linux, which was developed by xtraeme (a NetBSD developer). It has glibc and musl versions and uses runit as an init system. You can also build packages from source using xbps-src.
This is what I settled on after using Arch and looking for a non-systemd alternative that feels a lot like Arch. The thing that threw me in the Void direction instead of Alpine was glibc support, so I can use NVidia drivers (yes I know, Booo NVidia ;-)
I am loving Void; my main system is Arch because of the large package selection and systemd ergonomics, but I have installed Void on three boxes for relatives and myself, and it's been wonderful. Warning though: I have only used the Xfce installation with minimal tweaks. Also, a complex multi-user setup with Void might be a bit more tedious to set up because runit is less batteries-included than systemd.
Chimera is developed by a former Void maintainer. It's still in its infancy compared to the two main "KISS" distributions (Alpine and Void) but it looks promising, especially thanks to dinit, which like S6 is what systemd should have been.
Then, if a Linux kernel isn't a strict requirement, there's obviously NetBSD, from which Void takes inspiration. At least it's "the real thing" and not some adaptation, and a very underappreciated and overlooked Unix.
Chimera definitely goes for its own type of experience; the source of the core tools does not make that much of a difference in that (they were mostly chosen for other reasons anyway).
That said, I was a FreeBSD user for over a decade, so it must have left some mark (even though generally the systems have little to do with each other in their general design).
People may not know that you are the creator of Chimera Linux.
There does seem to be a lot of BSD influence over Chimera with the BSD userland, Clang / LLVM toolchain, lack of glibc, and cports.
Regardless of the inspiration, the design choices in Chimera are great. Being able to standardize on things like Pipewire and Wayland without legacy. Working to bring the functionality of systemd without the monolith is great too. I hope Turnstile catches on.
Yeah, it's an unfortunate name collision. From Chimera Linux FAQ[1]:
The system also has no relation to ChimeraOS, besides the unfortunate name similarity. ChimeraOS used to be called GamerOS and renamed itself to ChimeraOS later; however, at this point Chimera Linux was already in public development with its name in place.
I was expecting some comment on the fact that man pages, a point of pride for the BSDs, are not included by default in Alpine. That's one of the reasons that kept me from using it on my travel laptop (now running OpenBSD).
Alpine users, am I missing some configuration option to make sure all documentation is always installed when getting a package? Or is manually installing the -doc package every time the only way?
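If I remember right, there is a meta-package for roughly this; treat the following as a sketch (the docs package is supposed to pull in each installed package's -doc subpackage automatically):

apk add docs               # auto-installs -doc subpackages from here on
apk add mandoc man-pages   # the man(1) implementation and base manual pages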
If you use a `-slim` debian docker image variant, it won't have the manpages (or most locales, and apt will be stripped down, etc, etc).
I think you can achieve the same thing on bare metal by being careful to tell it not to install a bunch of base package sets.
Alpine is more container-focused than debian, so I imagine they default to something closer to `-slim`, but I haven't tried installing it on bare metal or in a VM.
The "manpages" and any "locale" packages are neither required nor important so are not included in a Debian minbase install. If you want them, you can install them easily with apt.
I'm not sure what you mean by apt being stripped down. You get the normal apt still.
On Debian, some packages come with the manual but ship the info pages as a *-doc (like coreutils); some packages come with a basic manual and have extra ones in the -doc package (like linux-image); some packages have no documentation at all in the main package (AFAIK nothing in the minimal installation); and some packages bundle everything.
I honestly have no idea why people find OpenRC and such things appealing. Literally any supervision-based option is better than leaking your PIDs, storing them in PID files, and hoping that, three weeks on, the value in the PID file still refers to the daemon you ran. To top it off, in some cases some service-management tasks are handled by pgrepping for a specific process name. I can somewhat sympathise with the idea that not everything should be auto-restarted by default, but that's literally the only thing these kinds of systems have going for them.
Also, inevitably these things heavily rely on syslog, which is the most '80s technology ever invented. I agree that multilog/svlogd/whatever could be improved to add an easy centralised view (when you need to know the sequence of events from multiple tools), but grouping logs by some vaguely defined category which you always get wrong, and letting anyone log anywhere under any name, are such strange features.
Sure, but the main alternative is systemd which is architected in a way that just isn't secure, and opens it up to a whole bunch of new and exciting CVEs.
There's just way too much going on in PID 1, written in a memory-unsafe language. I don't see a technical reason why it couldn't have a minimal PID 1 and a few setuid programs, aside from the fact that a minimal PID 1 would make it possible to run systemd inside a docker container, which I presume Red Hat/IBM is strongly against, preferring you to use their in-house containerization tools like systemd-nspawn.
It's just never going to be viable from a security point of view with how it's architected.
There are literally at least 3 well designed and featureful alternatives to OpenRC which are not systemd: daemontools, runit, and s6-rc. There are also other lesser known options.
For a real-world, in-situ use of runit, see Void Linux. It could be handled better, but at this moment it is at least no more clunky than using OpenRC.
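For flavor, a runit service is just a directory with a run script; a minimal sketch (mydaemon is hypothetical):

#!/bin/sh
# /etc/sv/mydaemon/run
exec 2>&1                    # fold stderr into the supervised log
exec mydaemon --foreground   # exec, so runsv supervises the real PID

ln -s /etc/sv/mydaemon /var/service/    # enable it on Void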
> Sure, but the main alternative is systemd which is architected in a way that just isn't secure, and opens it up to a whole bunch of new and exciting CVEs.
This is a generic "back in the day everything was better" answer. The fact is that over the years systemd has had fewer than 50 CVEs published, it reinvented the whole initialization process and Linux administration in general for the better, and together with SELinux it is a great foundation for any modern Linux distribution. Sure RC was super simple, but systemd is just the evolution that Linux needed to become what it is today.
When there's a CVE in a program written in a memory-unsafe language that has a position of privilege in your security model, that's a much much bigger deal than if there's a poorly written bash script running as a user.
Separate your service manager from your PID 1; PID 1 needs to just be responsible for reaping orphan processes. If you're going to have a monolithic daemon in that privileged position, at least write it in a memory-safe language, as that's where most of the nasty RCE vulns come from.
I have a bunch of servers on CentOS 7, which is quite old, and have met just a couple of minor issues related to systemd; never had downtime because of it. I'd say the NOC doing some networking maintenance brings me more problems than systemd does.
> but systemd is just the evolution that Linux needed to become what it is today.
Not at all. The linux of today doesn't owe anything to systemd, is not radically different from when systemd didn't exist, and arguably we would have a better alternative if systemd had never been adopted.
> arguably we would have a better alternative if systemd had never been adopted.
Not true. We have many alternatives, adopted in some distros. But AFAIK no enterprise distro, for servers or desktop... why?
Because systemd starts as many services as possible in parallel, it speeds up the overall startup and gets the host system to a login screen, or reduces server downtime, dramatically more than System V did. That is for sure a well-wanted characteristic today...
> Sure RC was super simple, but systemd is just the evolution that Linux needed to become what it is today.
At this, I just vomited a little in my mouth.
Linux owes nothing to systemd. In every measurable way, systemd adds more complexity, reduces security by expanding the vulnerability footprint, creates a monolithic ecosystem, and handles everything far worse than, for example, Debian's use of sysvinit.
I spend more time dealing with systemd edge cases, and bugs, and security issues every few months, than I did in 30 years of other init systems.
> I spend more time dealing with systemd edge cases, and bugs, and security issues every few months, than I did in 30 years of other init systems.
It's been the same situation for me, too.
Every time I get stuck dealing with a new systemd-related problem and search online for solutions, the huge number of bug reports, mailing list posts, forum posts, IRC logs, and other communications I incidentally see describing my problem and/or other troubles involving systemd remind me that I'm not alone. Many other people are consistently having a wide variety of problems with it, too, and this has now been going on for years and years.
Systemd has driven me to move systemd-using Linux systems I end up responsible for over to FreeBSD or OpenBSD whenever possible. Their init systems aren't perfect, but they almost never cause me problems. In the very rare cases when they aren't working for some reason, at least those systems are simple enough that I can usually debug the issue on my own, without having to search for help online.
Can you describe one of your problems? I've had very smooth sailing with systemd and I like not having to play games with pid files and pgrep like I had to in the 90s.
I can't speak for VancouverMan, but my experience has been similar. A few examples of the problems I have with systemd:
System shutdown/reboot is now unreliable. Sometimes it will be just as quick as it was before systemd arrived, but other times, systemd will decide that something isn't to its liking, and block shutdown for somewhere between 30 seconds and 10 minutes, waiting for something that will never happen. The thing in question might be different from one session to the next, and from one systemd version to the next; I can spend hours or days tracking down the process/mount/service in question and finding a workaround, only to have systemd hang on something else the next day. It offers no manual skip option, so unless I happen to be working on a host with systemd's timeouts reconfigured to reduce this problem, I'm stuck with either forcing a power-off or having my time wasted.
Something about systemd's meddling with cgroups broke the lxc control commands a few years back. To work around the problem, I have to replace every such command I use with something like `systemd-run --quiet --user --scope -p "Delegate=yes" <command>`. That's a PITA that I'm unlikely to ever remember (or want to type) so I effectively cannot manage containers interactively without helper scripts any more. It's also a new systemd dependency, so those helper scripts now also need checks for cgroup version and systemd presence, and a different code path depending on the result. Making matters worse, that systemd-run command occasionally fails even when I do everything "right". What was once simple and easy is now complex and unreliable.
At some point, Lennart unilaterally decided that all machines accessed over a network must have a domain name. Subsequently, every machine running a distro that had migrated to systemd-resolved was suddenly unable to resolve its hostname-only peers on the LAN, despite the DNS server handling them just fine. Finding the problem, figuring out the cause, and reconfiguring around it wasn't the end of the world, but it did waste more of my time. Repeating that experience once or twice more when systemd behavior changed again and again eventually drove me to a policy of ripping out systemd-resolved entirely on any new installation. (Which, of course, takes more time.) I think this behavior may have been rolled back by now, but sadly, I'll never get my time back.
There are more examples, but I'm tired of re-living them and don't really want to write a book.
> Systemd has driven me to move systemd-using Linux systems I end up responsible for over to FreeBSD or OpenBSD whenever possible.
Nice that you can do it privately. In an enterprise environment, however, it is different, and systemd played an important role in Linux reaching that level.
It totally is. I see the appeal: it's, on the surface, easy. But this comes at a cost.
Turning Linux into Windows by replicating svchost.exe shouldn't be applauded by the Linux community.
I'm glad the BSDs are still out there and I'm glad there are still non-systemd Linux distros out there and I'm even more glad some systemd distros haven't completely shut the door on moving back away from systemd.
Do I write a systemd service once in a while? Yup, I do. Is it easy? Kinda, at first. But we shouldn't be too excited about superficial simplicity. Something has been lost in exchange.
The monster systemd squid spreads its infinite tentacles on everything it touches while being PID 1, making sure that a countless number of current and future exploits (or backdoors) are possible.
We've got Linux's PID 1 (for most distros) controlled by a MS employee, who replicated Windows' svchost.exe. And people are all excited?
I personally cannot wait for another, better, init system to come out and replace systemd.
Meanwhile I'm glad there's choice: OpenBSD, Alpine Linux, Devuan, etc.
> Turning Linux into Windows by replicating svchost.exe shouldn't be applauded by the Linux community. ... We've got Linux's PID 1 (for most distros) controlled by a MS employee, who replicated Windows' svchost.exe. And people are all excited?
systemd was pretty consciously patterned after launchd, not svchost. The goal was, and for good reasons, to make Linux behave like a more integrated Unix-like that already existed: MacOS.
Benno Rice has an excellent presentation on systemd that's worth watching through to the end; unlike most of the table-pounding (and "it's just svchost.exe!!" is exactly that), he provides what I think is a pretty fair--and, interestingly to me, a BSD-grounded--view as to where systemd is strong and is weak. https://www.youtube.com/watch?v=o_AIw9bGogo
The thing is, I own a mac, and I've never had to touch launchd.
I've hit severe systemd bugs on 100% of the linux desktop installs I've set up in the last 5-10 years. (examples: "x/wayland session management is broken", "uncached DNS takes 10 seconds to resolve", "this service is masked, and none of the force-unmask paths work", "omg lol no more logs for you", and so on).
The fact that pid 1 can even be involved in those sorts of userspace bugs shows how broken the architecture is.
> (examples: "x/wayland session management is broken", "uncached DNS takes 10 seconds to resolve", "this service is masked, and none of the force-unmask paths work", "omg lol no more logs for you", and so on).
I used to be release manager for a Linux distro. Mostly, such issues were integration problems and not systemd problems. In some cases that I worked on, the integration wasn't well thought out, or it was done in some amateurish way that actually needed some extra hours of professional software development to make it "production ready". Unfortunately, that's part of the process of working with open source.
This is one of the downsides of systemd from a community perspective--it's not that it doesn't work; it largely has, and has consistently, for most people and most distros who've adopted it pretty much since the jump! But the bonkers level of partisan poo-flinging by folks who will not simply go off to Devuan or whatever has inculcated an automatic assumption that a system built by some of the most talented folks working in the Linux space simply has to be broken whenever they have a problem.
By being ambitious, systemd brought it on itself, but it's frustrating because the conversations don't go anywhere and don't matter.
Take a look at S6 and dinit. They both embody what systemd was intended to be while keeping the portability, technical simplicity and loose coupling.
You might also want to consider Void and Chimera. Void has a unique combination of technical simplicity, functionality, rolling updates and low maintenance along with some beefy repos. It's close to being the perfect desktop Linux to me.
Chimera uses dinit, which is closer to systemd's features, whereas Void uses runit, which is more of a minimal viable init + rc.
They are very interesting for sure, but I'm waiting for the S6 successor that's in development before I switch from systemd. There are a number of things systemd offers that are either easier, better, or unavailable in other tools that keep me happy for now. If the successor ends up being good but still missing those features, I'll try my hand at implementing them for the greater good.
Are you referring to svchost.exe, the performance hack that allows multiple Microsoft-supplied services to share a single process, or the Service Control Manager[1], the Windows component responsible for starting and stopping Windows services?
If the former, I agree that trading off service process isolation for reduced start time and lower resource usage is an optimization that has probably outlived its usefulness and should not be emulated on systems that aren't severely resource-constrained.
While systemd arguably bundles too much functionality into its own process, AFAIK it doesn't include any mechanisms to support svchost.exe-like behavior in services it controls.
If the latter, I'd argue that the SCM is actually quite minimalistic, especially in comparison with systemd: it's responsible for starting services in the correct order per a supplied list of dependencies, restarting failed services, notifying services of important lifecycle events — service configuration changes, shutdown requests, network interface status changes, etc. — and that's about it.
Alpine has one cool perk: when a Nix user is flexing about declarative package management, live-edit your /etc/apk/world and run `apk fix` to show them how to do it without Nix.
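The flow being described looks roughly like this (package names are just examples):

cat /etc/apk/world    # the declared package set, one constraint per line
vi /etc/apk/world     # hand-edit: add or remove entries
apk fix               # converge the installed state to match world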
I generally prefer Gentoo's method: /var/lib/portage/world for manually installed packages, /var/lib/portage/world_sets for selected sets, and sets can be defined in /etc/portage/sets/. This allows you to split your packages into categories and only install certain bits on systems where you need them, as well as add any comments to the files as necessary. The equivalent to `apk fix` is `emerge -uDU @world && emerge -c` which is slightly more unwieldy but oh well.
Alpine allows something akin to sets by using `apk add -t setname pkg1 pkg2 pkg3`, which creates a dummy package that depends on your package selection. I generally create shell scripts in /etc/apk/sets/ to mimic the Gentoo experience, but it's not always the same.
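The same trick is what many Alpine Dockerfiles use for build dependencies; a sketch (the set name is arbitrary, the leading dot just a convention):

apk add -t .build-deps gcc make musl-dev   # one virtual package owning the set
apk del .build-deps                        # later: remove the whole set at once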
At least a significant chunk of the specific complaints have been fixed. As your first link admits at the very bottom, Alpine-compatible wheels are a thing now, and the DNS thing was fixed a while back. But yes, it would be interesting to benchmark and get actual numbers on the perf side.
I imagine if you've got some performance sensitive workload that relies on glibc for that performance you would "just" run that application in a container.
For many situations, performance is a secondary concern relative to simplicity or security.
Slackware is the happy medium I've found between BSD and Linux. It's unashamedly unix-like and uncomplicated, and has its own rich ports tree through Slackbuilds.
Slackware lost its way when it tried to compete with desktop distros but didn't really commit to it.
I used to like it as a minimal distro that just stayed out of my way and considered it oriented towards slightly more technical folk.
Then the community would be hostile if you were digging into an issue and hadn't done a full install, because that's the recommended way. And not doing so would also result in stupid issues due to stupid dependencies, like mplayer not working because samba was not installed.
Alpine is an improvement over Slackware in every way.
It was my first Linux, but I haven't used it in ages. I never found installing from source tgz to be that bad, but dep trees have gotten much deeper over time. I don't think WindowMaker had clipboard support when I left it. There is something nice about knowing what every file in your system is for, and being able to read most of them with ed. Alpine scratches that itch for me. I try to sub in a Rust equivalent for anything I can. Building Rust may have deep trees, but with musl the only runtime dep is usually the kernel itself.
It's fine to just call it Alpine Linux, since (AFAIK) BusyBox isn't associated with the kind of self-aggrandizing assholery that sometimes leaks out of RMS.
RMS had a very strong point that GNU should have been getting some credit and was arguably more responsible for the 'Linux' experience than Linux itself is.
It's a weak argument that resembles bullying more than it does anything else.
If that kind of crediting was necessary for something like the GNU project to be known and to thrive, then it should have been solved eons ago by including it as a requirement in the terms of the GPL, instead of by being indignantly recalcitrant about it later on.
IMHO it would have looked nicer if the '[ok]' stayed where it was - at least until the 'login:' prompt - but I know it's just implemented as 'write [ok] a space from the right side of the screen', and as resolutions increased, what happened, happened.
They could just move the '[ ok ]' or '[fail]' to the left - to the beginning of the message - which would also 'solve' the problem.
Technically, they could keep track of where each ok/fail was printed and erase and repaint them after the resolution change – if your terminal supports ANSI color it is essentially certain to support cursor movement commands too (inb4 someone counters that they have a color hardcopy terminal!) But I doubt anyone cares enough to implement that.
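A rough sketch of the simpler fix instead, printing the status at a fixed column with an ANSI cursor-position escape (untested):

msg="Starting networking:"
printf '%s\033[%dG[ ok ]\n' "$msg" $(( $(tput cols) - 5 ))   # \033[<n>G = jump to column n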
I used to think these screenfuls of scrolling diagnostics were super cool and felt like a real hacker when I was 16 years old and first started playing with Linux, a feeling likely shared by many. These days I prefer a minimalistic boot, as long as the diagnostics are easily accessible when there's a need to debug something. And in that case it doesn't matter much whether everything lines up neatly or not.
Alpine is a solid distro for servers from my experience.
The only bad experiences I've had with it come from the lack of parity between the x86_64 and aarch64 virt images. So our x86_64 setup doesn't work without building our own image with kernel params and addons. I don't think even ZFS is built into the virt-aarch64 image.
All in all, I would recommend more devs/sysadmins try Alpine outside the container world, and run it in test VMs, host servers, etc.
Yes, when using Alpine in a container context you're getting the "MINI ROOT FILESYSTEM" [1] experience; the parity differences are in the kernel (they're easily ""fixable"" and the team is open to enabling things that people actually use, I've opened such issues on their GitLab and they're very active and friendly)
A couple days ago an HN commenter suggested that use of Alpine is by definition a supply-chain security problem:
> "Alpine is not full-source-bootstrapped, often imports and trusts external binaries blindly, has no signed commits, no signed reviews, no signed packages, and is not reproducible. It is one phished git account away from a major supply chain attack any day now.
Alpine chooses low security for low contribution friction. It is the Wikipedia of Linux distros, which granted it a huge package repository fantastic for experimental use and reference, but it is not something sane to blindly trust the latest packages of in production."
> Alpine is not full-source-bootstrapped, often imports and trusts external binaries blindly, has no signed commits, no signed reviews, no signed packages, and is not reproducible.
The above is not entirely correct; Alpine does have, and has always had, signed packages.
In the other aspects, most distros are generally not much better, or not better at all, since all that stuff is hard and takes extra infrastructure.
In Chimera we try to make source bootstrap possible and in general not rely on third-party executables, but it's not always possible (e.g. some language toolchains were bootstrapped from official binaries originally). We also try to respect best practices for reproducibility (pretty sure Alpine does too), but actually verifying it would need dedicated infra/resources, so we don't do it.
I use Alpine as my personal laptop OS. There are a lot of unfortunate things, like the fact that a package upgrade or install will often uninstall other packages which your system needs, and you spend half a day fixing it. Package/config backup and restore is a PITA and I still don't quite understand it. Getting a full GUI desktop set up is also a PITA and requires a lot of research and trial and error. Things like Bluetooth audio seem unnecessarily buggy/difficult. And it takes a while to figure out what isn't going to work with musl, and the workarounds thereof. It's also pretty hard to try to contribute to the community.
But overall, I prefer it to the more bloated modern distros. There's less complexity, it's snappier (I use an old laptop), and nothing gets shoved down your throat. Flatpak makes using 3rd party or bigger glibc apps a breeze. To get 1Password GUI to work using Docker took quite a bit of experimenting but it worked out in the end.
Usage-wise, I converted a bunch of containers to it from debian-slim in our fairly large CI/CD setup, and it processed workloads noticeably slower, despite booting slightly quicker since it's a smaller pull. With network speeds nowadays, though, that was a negligible difference and not worth it. Rolled it back to debian-slim.
Would these performance concerns be an issue if you were using alpine "on the metal" and debian containers?
When running complex applications I find it's simplest to "compile" the application into a container thus rendering some tedious complex runtime to a static binary that's trivial to run without worrying about tedious dependency management. It burns a bit of storage but that's not a big deal these days.
If someone suggested to me "hey I want to run a big PHP / nginx / mysql workload for my startup; should I use alpine?" I'd suggest they find a doctor.
We're providing a CI/CD system that supports several different departments and teams with varying technology in their stack and a plethora of different pipelines. Some big java projects, python and loads of other batch jobs like spark etc. If only it was as simple as just running it on bare metal.
The issue is with musl libc, not the hardware.
I am pretty comfortable with suggesting one OS as the "hardware" OS and another OS for the userland...
Alpine's design makes it really well suited to "hardware"; I'd even suggest it's probably a good way to run kubernetes or lxd because it's simple and trivial to provision/customize and not full of vendor nonsense.
You can use Alpine as a "base container" layer, but you'll quickly end up in a world where glibc vs musl or "I need a weird package" makes a tiny CentOS/Debian container more appealing. If you've got Java or Python or Ruby or some other complex runtime, just run it in the most commonly used base container and don't go looking for trouble...
I've been playing in my homelab for the past year with Illumos (OpenSolaris). After decades of using Linux, everything is much simpler; the constant Linux churn is nonexistent. Everything that worked in Solaris 10 (15-20 years ago) still applies with small modifications today.
It has support for zones, which in my opinion are 10 steps above docker containers, and it has built-in ZFS support for root/zones/etc., and so on.
Services are managed using SMF, which has the only downside that services are configured using XML; but if you use only the built-in services, it is usually not a problem. SmartOS also has a script to automatically configure the XML file.
I'm running OmniOS on a couple servers having a few zones each and I also run a SmartOS server for VMs. Launching a VM is a lot easier than on Linux. I can switch between Bhyve and KVM and use a single JSON file to configure all the VM properties and then launch the VM using a single command: vmadm create -f file.json.
All the networking is done using simple commands such as dladm and ipadm, which follow the standard UNIX way.
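For a taste of what that looks like (link and address values are made up):

dladm create-vnic -l igb0 vnic0                           # virtual NIC over physical igb0
ipadm create-if vnic0
ipadm create-addr -T static -a 192.168.1.50/24 vnic0/v4   # address object 'vnic0/v4'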
Alpine is a joy. I use it as a very small and lightweight distro for various containers on my Proxmox server.
I do wish there were a glibc-based Linux that had an emphasis on being small. Not as small as Alpine, of course, but something without systemd and snap and 100MB of files for locales I'll never use.
You can always build your own with a distro build system like OpenEmbedded, if you want to have ultimate control.
But a minbase Debian install is reasonably small by today's standards. Doing this with the Debian installer is not as easy as doing it with a tool like debos if you're trying to build a disk image or filesystem tarball to make a container (then you don't need the kernel). A Debian bookworm minbase container (no kernel or locales, with ca-certificates; does have systemd) works out to about 71MB gzip-compressed or 212MB uncompressed (about 78MB compressed and 235MB uncompressed if you add one locale). That's definitely not small, but it is *reasonably* small in my book today.
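A sketch of the debootstrap route, for reference (mirror and suite are examples):

debootstrap --variant=minbase bookworm ./rootfs http://deb.debian.org/debian
tar -C ./rootfs -c . | docker import - debian-minbase    # optionally wrap it as an image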
Oh, interesting. The Debian you get using the Proxmox LXC template installs to 540MB (compare 10MB Alpine, 178MB OpenSUSE). But I don't think they put any effort into making it a small container. Getting down to 235MB is pretty good.
Totally in love with Alpine Linux as well. I wrote a few guides on how to use it, including making a minimal file server [1] and how to PXE-boot Alpine Linux with NFS storage and even desktop environments [2].
Also, in my homelab I have a Docker Swarm system on multiple Lenovo Tiny nodes, which all run Alpine Linux because it has much less overhead compared to Ubuntu or even Debian.
Huge Alpine fanboy myself. I love that a clean install has fewer than ten processes and I know what they all do. The community is also very good at building packages that "just work" - the article points out ZFS, but docker, podman, and libvirt are also trivially installable.
> Perhaps the package I was most surprised about was zfs. ... What that would look like after an upgrade I’d have to see, but thus far I’m impressed.
I can confirm it works smoothly. I've observed that the ZFS package is updated whenever the kernel is updated.
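For reference, getting it installed is just the usual apk invocation - package names as on recent Alpine releases, where zfs-lts pairs the kernel module with the linux-lts kernel:

    apk add zfs zfs-lts   # userland tools + kernel module built for linux-lts
    modprobe zfs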
> At the risk of embellishing my feelings about this, it is such a relief that there are Linux distros like Devuan, Gentoo, and Alpine using this. It’s a breath of alpine air, and has legitimately made Linux fun again.
What kind of fun am I missing by using systemd as my init system? A couple of examples from those who have used it would be interesting to see.
For sure. It's decades of legacy, many different versions, footguns and caveats everywhere. I personally like to tinker with it, but I also have to look up basic things like "how to write a for loop" or "how to compare numbers" constantly.
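Case in point, the two things I mentioned, in POSIX sh - note that numeric comparison is -gt, since > would be an output redirection:

    # loop over a list and compare numbers
    for i in 1 2 3; do
        if [ "$i" -gt 1 ]; then
            echo "$i is greater than 1"
        fi
    done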
" known or understood by very few; mysterious; secret; obscure; esoteric "
And as an example of something arcane, the Sanskrit language is brought up. I think this is a perfect example to compare with shell scripting. Sanskrit is from an era where it was more common, but nowadays its usage is more of a specialty. The rules are understood and public, easily accessible technically, but it's hard work to get good at it. Since it has a long history, it has many variants, both over time and depending on locality. There is a kind of common form of it that people can use for everyday matters, which is much easier than knowing all of the rules and cases.
I think it's a good argument that shell scripting is arcane, even just by investigating the definition. Not to mention that in the original post it was just postulated that shell scripting is a bit weird, and maybe not the best tool for the job - a kind of lighthearted jab at the language and at the practices people sometimes fall into.
Shell scripting is not "known or understood by very few", it's widely understood by a great many people. It's one of the most common programming languages. It is by definition not arcane. It simply can't be with how widespread and popular it is.
The comparison with Sanskrit doesn't make sense, given shell scripting is still in wide use currently.
It's fine to think shell scripting is a bit weird, but it's just absolutely and unambiguously wrong to say it is arcane, especially by that definition.
"Arcane" means "requiring secret or mysterious knowledge". I think there are quite a few features of bash that qualify as arcane. I've been writing shell scripts for decades and I still have to look up how to do certain things often enough.
And if you want to write portable shell, remembering all the rules and the things you can and cannot do feels somewhat arcane to me.
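A tiny example of the kind of rule I mean: [[ is a bashism, so portable scripts need a different construct entirely.

    # works in bash, is a syntax error in a strict POSIX sh like dash:
    [[ "$answer" == y* ]] && echo yes
    # portable equivalent:
    case "$answer" in y*) echo yes ;; esac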
Sure, but they aren't really writing what I would consider a script/program under typical usage. Obviously that's a fuzzy definition if you start piping things together, but I'm talking more about control flow / parsing arguments / etc. Doing that correctly in bash does require arcane knowledge and skills. It is immensely difficult not to shoot yourself in the foot.
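The classic example being quoting and word splitting (filename made up):

    f="my file.txt"
    touch "$f"
    ls $f     # unquoted: splits into "my" and "file.txt", neither of which exists
    ls "$f"   # quoted: refers to the single file you meant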
I think it’s definitely a contributing factor. Folks conflate simplicity with familiarity. It also just feels good to apply knowledge you worked hard to gain.
As a user, I mean. It was very easy to tell what it was doing, understand why, and was pleasant to interact with the tools for managing that. I rarely write init scripts.
Alpine Linux is my daily driver on my workstation and laptop. All my machines look the same thanks to chezmoi. I think about switching to FreeBSD though, because of bhyve (vm-bhyve). Want to give it a try. Curious if it can replace my LXD environment.
I heard that bhyve has GPU passthrough support now. It would be fun to have a rock-solid FreeBSD base system and still be able to use my Alpine Linux environment within a VM (with X) if needed.
It's all nice and small, etc., but it's an apples-to-oranges comparison.
On FreeBSD you have ZFS with ZFS Boot Environments, which gives GREAT flexibility and protection against all kinds of updates, changes, and even accidental file deletion in the system. Moving to 'oldschool' LVM and EXT4 is just a huge downgrade, especially without any advanced features such as compression or snapshots or clones or ... all the other wonders from ZFS.
As for busybox ... you can also use it on FreeBSD - it's under the 'sysutils/busybox' port ... and FreeBSD 'kinda' has its own busybox-like solution - it's called the FreeBSD Rescue System and it's located at /rescue. It contains one executable file 17 MB in size (ZFS lz4 makes that 11 MB) with hardlinks as commands - including all needed ZFS commands - which BusyBox obviously does not have.
FreeBSD % du -sm /rescue
11 /rescue
FreeBSD % du -smA /rescue
17 /rescue
FreeBSD % ls /rescue
[ dhclient gbde kldstat mt rmdir umount
bectl dhclient-script geom kldunload mv route unlink
bsdlabel disklabel getfacl ldconfig nc routed unlzma
bunzip2 dmesg glabel less newfs rrestore unxz
bzcat dump gpart link newfs_msdos rtquery unzstd
bzip2 dumpfs groups ln nextboot rtsol vi
camcontrol dumpon gunzip ls nos-tun savecore whoami
cat echo gzcat lzcat pgrep sed xz
ccdconfig ed gzip lzma ping setfacl xzcat
chflags ex halt md5 ping6 sh zcat
chgrp expr head mdconfig pkill shutdown zdb
chio fastboot hostname mdmfs poweroff sleep zfs
chmod fasthalt id mkdir ps stty zpool
chown fdisk ifconfig mknod pwd swapon zstd
chroot fetch init more rcorder sync zstdcat
clri fsck ipf mount rdump sysctl zstdmt
cp fsck_4.2bsd iscsictl mount_cd9660 realpath tail
csh fsck_ffs iscsid mount_msdosfs reboot tar
date fsck_msdosfs kenv mount_nfs red tcsh
dd fsck_ufs kill mount_nullfs rescue tee
devfs fsdb kldconfig mount_udf restore test
df fsirand kldload mount_unionfs rm tunefs
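You can see the hardlink trick directly - every entry shares one inode (the inode number below is made up for illustration):

    FreeBSD % ls -i /rescue/sh /rescue/tar
    128 /rescue/sh
    128 /rescue/tar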
musl also limits some things - but that is a topic for another discussion.
... and do not get me wrong - I like Alpine (same as Void or Gentoo or Devuan) - it's just not the same thing.
Worth mentioning that aside from Alpine (and other small Linuxes) supporting ZFS just fine, we also have ZFSBootMenu, which is frankly a hell of a lot better than the boot environment experience on FreeBSD.
From within the bootloader's interactive (fzf-based) menu, you can:
* Select boot environment (and change the default boot environment)
* Select specific kernels within each boot environment (and change the default kernel)
* Edit kernel command line temporarily
* Roll back boot environments to a previous snapshot
* Rewind to a pool checkpoint
* Create, destroy, promote and orphan boot environments
* Diff boot environments to some previous snapshot to see all file changes
* View pool health / status
* Jump into a chroot of a boot environment
* Get a recovery shell with a full suite of tools available including zfs and zpool, in addition to many helper scripts for managing your pool/datasets and getting things back into a working state before either relaunching the boot menu, or just directly booting into the selected dataset/kernel/initrd pair.
It also supports 100% of ZFS features that the host system supports, since it uses the ZFS kmod. That includes native encryption.
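And since a boot environment is just a ZFS dataset, creating one by hand is a couple of commands - the zroot/ROOT/... layout is an assumption about how your pool is laid out, and a real setup also needs the right mountpoint/canmount properties:

    # snapshot the current BE and clone it into a new one
    zfs snapshot zroot/ROOT/default@pre-upgrade
    zfs clone zroot/ROOT/default@pre-upgrade zroot/ROOT/testing
    # ZFSBootMenu picks its default BE from the pool's bootfs property
    zpool set bootfs=zroot/ROOT/testing zroot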
They're afraid of the hypothetical legal threat from Oracle, which largely seems to come from a lot of license misinterpretation and urban myth.
Seems like the only ones that have ventured to ship a ZFS binary are Canonical, and their implementation seems to have been done by people who didn't understand ZFS and have no interest in understanding it.
It's really a shame. OpenZFS on Linux really has excellent support and integration, arguably as good as or better than FreeBSD and Illumos, and has this excellent bootloader.
Still, ZFS has good out-of-tree support in distros like Void and Alpine, where the users can take it upon themselves to do a good root-on-ZFS setup and reap the benefits.
"was booting it on a VM at work, before realising I misread Dom0 as DomU...former refers to a Xen hypervisor, not a guest...it booted and installed the same as a standard ISO."
For so many of us that would be an automatic fireable offense. I hope OP has the autonomy to do that and remain unscathed.
It is not always frowned upon. It is a big NO on prod servers.
But most companies have less critical or testing environments available where people have a bit more room to play.
And some teams, like support, QA, dev, etc., usually have their own VM servers available to play with.
Where I work, support analysts even had their own personal servers where they could do whatever they wanted, within reason of course. They still have access to VMs on AWS that they can start and destroy at will, but now there are just a few base images to choose from.
If booting an ISO in a controlled test environment (implied by the author specifying they were booting it on a VM) is a "fireable offence" where you work, then I pity you for having to work in such a draconian environment.