Ask HN: What Next After Ubuntu?
239 points by voakbasda on Jan 29, 2023 | 340 comments
I have been running Ubuntu on servers and desktops for around twenty years, but ongoing changes to their platform have shaken my faith in its future. The first serious breach of trust was forcing users onto their untrustworthy snaps (e.g. Firefox on 22.04), but the last straw for me has been breaking apt upgrades on 20.04 LTS in order to push their Pro agenda.

I am looking to replace Ubuntu with something that will be stable and supported for the next twenty years, without being ruined by corporate interests. What are my best options?




I live in a region with a lot of government contracting businesses, so Red Hat Enterprise Linux is something I have to maintain a working familiarity with.

However, I use Debian for all of my personal projects and infrastructure.

The reason? There's no for-profit corporate interest directly controlling the project. The project's organizational structure resembles a constitutional democracy:

https://www.debian.org/intro/organization

There is an incorporated entity in the United States to handle a number of intellectual property and financial concerns:

https://www.spi-inc.org/projects/debian/

However, it exists as a non-profit with a very narrowly defined, specific set of purposes:

https://www.spi-inc.org/corporate/certificate-of-incorporati...

Because of this, I feel like the Debian project has a good combination of people and resources, making it easy to rely on long-term, but without the for-profit corporate interests that may conflict with my own in the future.


I've used exclusively Debian on all my servers and laptops for the last 20ish years.

Can confirm: it will keep working, though there may be things that piss you off in much the same way Ubuntu's snaps do. I hate systemd, for example, but grudgingly accept it.


Fortunately, we can still use Debian without systemd. I'm forced to use systemd in many places, but, for example, the laptop where I'm writing this runs Debian without systemd.

For me, there's more to worry about: the influence that paid Ubuntu developers gain, year after year, inside Debian, or the presence of Ubuntu/Canonical changes inside Debian packages.

Regarding Ubuntu... I'm forced to use it at work, and we tend to "debianize" Ubuntu servers, that is: remove snaps and snapd, remove netplan (in favor of ifupdown), remove a lot of dependencies that come with the minimal install (used only for their paid services), remove many services/packages (cloud-init, multipathd, polkit, motd-news, lxd, apport, etc.)...

And the recommends of the recommends of the recommends of the recommends of all that.

In the last LTS, we were forced to build our own installer in the end: a minimal live system + debootstrap + a couple of scripts (parted/mkfs/grub-install).

With the previous Debian-based installer, we were able to fix most of the Canonical decisions via preseed... with the last version it was a pain until I built our own installer.

Still, for many people, being driven by Ubuntu is perfectly fine. They tend to hate forced UI changes (e.g. my mum), but everything else is fine for them as long as "it works".

It's maybe just more technical people who are bothered by the underlying changes.


> Regarding Ubuntu... I'm forced to use it at work, and we tend to "debianize" Ubuntu servers, that is: remove snaps and snapd, remove netplan (in favor of ifupdown), remove a lot of dependencies that come with the minimal install (used only for their paid services), remove many services/packages (cloud-init, multipathd, polkit, motd-news, lxd, apport, etc.)...

> In the last LTS, we were forced to build our own installer in the end: a minimal live system + debootstrap + a couple of scripts (parted/mkfs/grub-install).

That is a lot of work. Why not use Debian itself? If you want that LTS support, you can use AlmaLinux in the RHEL-clone camp. I'm not aware of any upsells in that system from my usage of it as a kiosk. The desktop is rough, with some common packages missing from EL9, but as a server it would be excellent. Even CentOS Stream has longer free support than free Ubuntu LTS.


It sounds like they have a top-down directive to use Ubuntu.


Correct.


Do you mean your laptop runs Devuan? I get that it is nearly the same as Debian, but does it include the Debian social contract and community? Isn't it a separate project?

https://www.debian.org/social_contract


If you're going that way Devuan is probably a better choice, but Debian did support running without systemd and still has sysvinit on a best-effort basis - https://wiki.debian.org/Init


Sorry, it's not Devuan, but Debian 11.6; previously it was 10.x, and before that it was on testing until Bullseye was released.

Works for me without systemd, no problem.

I already use systemd in hundreds of systems at work and personal servers/vps.

But on my personal laptop, I have too much custom stuff after 20 years of tinkering, glued across all the layers (kernel, boot loader, init, background services, tty, graphical environment), and because of that I chose to keep my laptop on the traditional equivalents of all the things that systemd does.

Debian is really flexible in letting you choose components in the system, or have more than one equivalent component, and configure how they interact, while still being considered an official Debian system.

And it even allows people like me to make their own Debian if they want. That is the "official Debian" that I like: the one that allows me, that doesn't force me.


> I hate systemd for example

Everyone's entitled to their opinion, but every time I hear this, I have difficulty listening to the rest.

Systemd is 10x easier and more manageable than every alternative.


No.

(As a sysadmin of 15 years, I can clearly say that all of them have their advantages and disadvantages, but being easy and manageable is not an advantage of systemd; it's a property of all of them.)


Can you provide an example of what makes it easier than the alternatives?


Like...everything?

Create a service that starts on boot:

    cat > /etc/systemd/system/example.service << EOF
    [Unit]
    After=network.target

    [Service]
    User=example
    ExecStart=/usr/local/bin/example

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl enable --now example
And the harder things get, the bigger the advantage. Like if you want to mount a disk, create a socket, and then afterward start an unprivileged service... all very simple.
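For instance (a sketch: the name, port, and the assumption that the daemon can accept a socket passed in by systemd are all mine), lazily starting that same service on first connection is just one more tiny unit:

    cat > /etc/systemd/system/example.socket << EOF
    [Socket]
    ListenStream=8080

    [Install]
    WantedBy=sockets.target
    EOF

    # a socket unit named example.socket starts example.service on demand
    systemctl enable --now example.socket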


systemd is "easier" for people used to Cisco/Microsoft/etc.

The huge list of services with not-for-human names so your job can look complicated and essential.


It may be easier for them, but as someone without Cisco experience, systemd was a lot easier for configuring service dependencies once I understood it.

It's been 10 years now? I wouldn't go back, most of the suggested alternatives are so basic that troubleshooting becomes a chore. Almost all the hate is because they don't like Lennart.

We should fork it and get someone else to head it under a different name, just to get them on board.


I also accepted it. And made a point to forget the Cisco experience.

Yeah, the dependency handling is a fine feature, but before, it was even *easier*... just put a number on the rc.d symlink.

C'mon, if you think that is harder than editing a dozen files, remembering weirdly named camelCase attributes, and then pointing to randomly named services, then I don't know what to say. But again, I also accepted it, for better or worse.

Now, I don't care about Lennart, but I hated systemd for most of that time because they pushed incomplete crap over something that was working, just to get contributors. If Red Hat wanted their Cisco/Windows service-management clone, they could have worked on it. Doing what they did (I call it "pulling a GNOME") was just shitty behaviour, and they should always be remembered for it.


Eh? systemd unit files are tiny compared to the huge, massive scripts that came before it. Often those scripts were only understood by a few people as well.

I think this is why a lot of people really hate systemd: suddenly a bunch of arcane knowledge used to maintain specific scripts was made redundant.


I have a thousand gripes about systemd, but “it made my knowledge obsolete” doesn’t make that list.

Also, I have only seen two or so massive init scripts. Most of them were slight variations on standard boilerplate.


Debian now works fine without systemd. The alternatives are sysvinit, openrc and runit. I've removed systemd from many working installations. The problem is rather the many random dependencies.

(Block systemd in a preference file with priority -1. apt install openrc (for example). Read the warnings. Reboot. apt purge systemd. Freedom.)
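In practice the pin looks roughly like this (a sketch assuming stock apt paths; you may need to pin related packages such as systemd-sysv too):

    # /etc/apt/preferences.d/no-systemd
    # a negative priority prevents the package from ever being installed
    Package: systemd
    Pin: release *
    Pin-Priority: -1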


If you don't like systemd, why don't you run Devuan? It's just Debian without systemd. I'm running it in a VM now to try it out, and it's been great so far. I like having everything back to plain text that I can monitor and manipulate as I like, rather than logs being weird binaries that need their own special tools to access.


why the aversion to systemd?


I don't like the design at a fundamental level. It feels brittle, bloated, and not very unixy. I'd rather have my init system be a handful of microscopic executables, using text files or symlinks for configuration and text files for logging.

I don't like everything depending on systemd; it feels like too much complexity at the wrong part of the stack. It doesn't jibe with my sense of architecture; it's not well-designed software.


Systemd does use "text files or symlinks for configuration" — those being the unit files in /lib/systemd and /etc/systemd. `systemctl enable` just makes a symlink from /lib/systemd into /etc/systemd, even. What would you point to to claim that it does otherwise?
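You can watch it happen (exact paths vary by distro; unit name made up):

    $ systemctl enable example.service
    Created symlink /etc/systemd/system/multi-user.target.wants/example.service → /lib/systemd/system/example.service.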

> text files for logging

...just sucks, on both embedded systems and production servers. (I.e. anywhere where you aren't debugging the machine on the machine, but rather from another machine.)

Either the program just writes to a plain text file forever — and so fills up your disk the first time it goes haywire (so now you have two critical runtime problems!); or it implements its own log rotation and compression (as must every other daemon — not very unixy!); or it must be specifically wired to work with syslog APIs in order to use rsyslog (which, by the way, uses binary wire protocols as well; logging at scale hasn't been text-based in a long time.)

Journald, meanwhile, just sits on the other side of the pipe from any systemd service-unit's stdout + stderr; manages log rotation + compression in a centralized way (which also means you get cross-unit log compression for free); and offers CLI tooling to pipe the multiplexed log stream back into anything that wants to read from it, in whatever format those things want to read from it (i.e. tools that want JSON Lines, get JSON Lines; tools that want plaintext, get plaintext; tools that want a binary record stream, get a binary record stream.)
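Concretely (unit name made up):

    journalctl -u example.service -o short   # classic syslog-style plaintext
    journalctl -u example.service -o json    # JSON lines, one object per entry
    journalctl -u example.service -o export  # raw record stream for re-ingesting elsewhere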

Is this a Unixy approach? Well, it's pretty much the same one taken by the extremely venerable Unix/Linux line-printer (lp) subsystem — CLI commands, with textual config files, for interacting with a system daemon (lpd) that manages and manipulates binary state files, within daemon-owned directories. Would you complain that the contents of /var/spool/lpd aren't human-readable?


> (Text files for logging) ...just sucks, on both embedded systems and production servers.

Ehrm, no. I've been managing a sizeable fleet, with a central logging server, for a decade and a half, and we never had the problems you mentioned:

> and so fills up your disk the first time it goes haywire.

This is a bug in the program, a configuration mistake, or your monitoring not working as intended.

Funnily, we're seeing more disk pressure from systemd journals. Go figure.

Just remembered: syslog daemons have rate-suppression mechanisms to prevent big lines repeating too fast and filling your disk. So even if your program enters an infinite loop, a well-configured syslog daemon (rsyslog, syslog-ng, whatnot) should note "X similar errors have been suppressed.", where X can be anything from 2 to 1000 (or even more).

> or it implements its own log rotation and compression

Which you can disable 99% of the time, and just delegate the job to logrotate.
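E.g. a typical drop-in (a sketch; the path and numbers are made up):

    # /etc/logrotate.d/example
    /var/log/example/*.log {
        weekly
        rotate 4
        compress
        missingok
        notifempty
    }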

> it must be specifically wired to work with syslog APIs in order to use rsyslog

rsyslog is just a syslog daemon. syslog is kernel plumbing at this level. You can terminate this pipe with anything.

> Journald, meanwhile, just sits on the other side of the pipe from any systemd service-unit's stdout + stderr; manages log rotation...

And it provides nothing new compared to syslog plumbing. A binary log, some tooling around it, and that's it. It even makes per-daemon log monitoring harder by blinding syslog-aware monitoring and automation tools, hence we need to enable rsyslog on the system too. Now we have two journals. Neat.

> venerable Unix/Linux line-printer (lp) subsystem

Which handles only the line-printer subsystem, and yes, it's more UNIXy. It doesn't take text output and bash it into a binary data structure, and it doesn't try to replace anything and everything from boot to logs to time sync to user login to tap-water temperature.

It just stores its state in a binary file, which almost every UNIX daemon does, including, but not limited to, X11 & CUPS.


> Funnily, we're seeing more disk pressure from systemd journals.

So configure journald correctly? It has multiple options to control disk usage from logs -- `man journald.conf` and search for "MaxUse" for the relevant options.

> Just remembered: syslog daemons have rate-suppression mechanisms

So does journald. Relevant options are RateLimitIntervalSec and RateLimitBurst, and individual services can set their own limits as well.
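Something like this (illustrative values, not recommendations), followed by a journald restart:

    # /etc/systemd/journald.conf (excerpt)
    [Journal]
    SystemMaxUse=500M           # cap persistent journal size
    RateLimitIntervalSec=30s    # window for rate limiting
    RateLimitBurst=1000         # max messages per service per window

    systemctl restart systemd-journald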


> So configure journald correctly? It has multiple options to control disk usage from logs -- `man journald.conf` and search for "MaxUse" for the relevant options.

So configure logrotate correctly; you hardly need journald for that.


Systemd had no shortage of issues, but:

> text files or symlinks for configuration and text files for logging.

Systemd gets this right and arguably pushed the whole ecosystem in this direction. The old rc scripts could barely be considered a text file and symlink configuration system — they were a pile of text files containing a miserable combination of code and configuration mixed together, along with a very simple configuration (this service is enabled in these runlevels, more or less) that got translated, hopefully correctly, into symlinks. Of course, nothing really kept the symlink farm consistent with itself or anything else except a pile of additional scripts associated with packages that tried and usually succeeded.


I agree with (basically) all of this, the larger point. I'll note that I said "or"... ;-) (CYA)

I'm not sure what the "right" solution looks like; perhaps a directory of TOML or JSON files, perhaps the aforementioned plus executable shell scripts with predictable naming? Handwave handwave, as long as it's "UNIXy" (consists of easy-to-edit text, doesn't invent a new anything, and is composed of pieces which do one thing well).


Do you think we'll get lucky with a systemd replacement, similar to how PulseAudio has been deprecated by a much more reasonable implementation, PipeWire?


In my copious free time, I have a vague idea of designing and implementing a mechanism called kpid1.

Basically, a task running as pid 1 (including in a container) could call a new kpid1() syscall, which would cause the kernel to completely take it over. The kernel would take care of all the usual init work, and it would expose the minimal API (presumably using a new kind of fd) to allow a different task to give it instructions and manage zombies as needed. And that’s it.

It’s worth noting that the entire concept of pid 1 is very unixy, but not in a good way. Reasonable modern designs (e.g. all the Windows variants) don’t have any real equivalent.


What benefit are you seeing in putting it in the kernel?


Several:

Zombie reaping could have a reasonable API. (Signals are miserable.)

PID 1 is magic in problematic ways. In particular, if PID 1 crashes, the whole system goes down with it. And having PID 1 be a normal program running from a normal ELF file means that that ELF file is pinned for the life of the system or at least until it execs something else. So handoff from initramfs to a real fs either involves PID 1 calling execve() or involves leaving the init process around. Upgrading the package containing PID 1 requires execve(). Running PID 1 from a network filesystem or an unreliable device risks a kernel panic for no good reason.

With PID 1 moved to the kernel, the actual service management job is no longer coupled to PID 1’s legacy. A service manager could hand off to another one by saving its state to disk and exiting, by running the new one and moving its state after the new one starts, or by any other ordinary means. And if it crashes, you can ssh in, read logs, save work, and then restart it or the whole system as appropriate.

As a minor additional benefit, having PID 1 in the kernel could enable some optimizations. Right now, a process must enter the zombie state when it exits, and it must stay in that state until its parent wakes up and reaps it. So a service exiting fundamentally involves some complex bookkeeping and a context switch to a single, unrelated process. If the kernel knew that kpid1 was in use and that nothing in the system actually needs to be notified of exiting children of pid 1, then a child of pid 1 that exits could simply go away, as it would on a sensible system like Windows.

(Yes, it's okay to admit that, in some respects, Windows is substantially better than Linux/Unix.)


Not OP, but the whole business of PID 1 having to reap orphan PIDs seems like something the kernel should handle. Is there a good reason why, when a process exits that no other process is waiting on, a user-mode PID 1 process has to observe that exit?


My understanding is that reaping children is a normal thing for most processes to do, and it's only orphans that fall through to PID 1, at which point it's easier to deal with it there rather than need to do anything special in ring zero.


Reaping children is "normal" in a universe where processes have numeric ids that can't be reused for unrelated processes until some handshake occurs that frees the id for reuse.

If you take anything resembling a fresh look at this concept, it's absurd. Imagine if every open file had a systemwide unique id, and one specific process owned that id and would continue to own it until it released it.

Reasonable designs use weak references that don't have values that can be compared across processes. These are usually called "handles" or "file descriptors", and they don't have this problem at all. Nothing reaps sockets, for example, and nothing needs to.


I have to think ... you don't actually know what you're talking about

I fought against systemd for a while, too

I was wrong

The "if it ain't broke, why fix it" approach with 'classic' init scripts led to a far far messier place than systemd

My only complaint about systemd is that I haven't found a way to push the journal to text simply... and that's most likely my poor google-fu or not having enough time to fully dive into it, rather than "systemd's journal sucks".

Systemd does use "text files or symlinks for configuration"

It's very well designed - though you may happen to have a preference for a different architecture (...but the one you described is pretty much systemd, my friend :))


I hear your words.

But how is that different from: "I don't like relativity theory at a fundamental level. It feels abundant, bloated, and not very physical. I'd rather have my Newton's laws with a handful of augmentations."


Did you consider that booting and service management is an insanely hard problem?

Would you write an optimizing compiler as multiple small tools? Essential complexity can't be reduced, and IPC will blow up your accidental-complexity budget. Many times a monolith is indeed the best design choice.


> text files or symlinks for configuration

Like...Systemd?


In short: the developers' attitude towards critics, and the initial design decisions.

Now it's better, but not perfect, and a lead dev working for Microsoft doesn't inspire confidence about its long-term agenda.

I can post links to comments of mine if you're interested further.


Please post links, I’ve never truly understood the systemd/init conflict, unlike the emacs/vim conflict.

I’d like to learn more!


I want to like systemd, especially since it is the default and has all the mindshare in Linux. I'm also not particularly in love with the historic shell scripts approach, as some are. Some systemd elements, such as its journal, are convenient.

My issues with it are:

- If you set up an atypical configuration, particularly involving LUKS volumes, it is not hard to break systemd's and dracut's assumptions, and then you will have a hell of a time trying to boot and to survive systemd updates.

- When it breaks, figuring out what the hell is going on involves having to learn a lot of systemd, which has lots of its own unique vocabulary and logic. There are many pieces and moving parts. It feels like someone went "microservice-crazy" with the init system and like there has to be a simpler way. The surface area of systemd is enormous.

- The whole anti-split /usr crusade is excessive. You might want to have /usr as a separate volume so you can mount it with the nodev option, for example. Why should that be forbidden?

If you conform to systemd's expectations about system configuration, I'm sure it works fine regardless of its elegance/inelegance and excessive complexity. If you would like to do things differently in ways that Linux's building blocks otherwise permit, you could be in serious trouble.


I wonder if the surface area of bash is even quantifiable; I never managed to read its man page to the end.


it's only 3400 lines!

:P


Bugs and complexity are the usual reasons. It seems to be a competence thing: experienced admins find it harder to debug and fix systemd issues, while regular users care less about being able to pop the hood and fix things themselves.


Which issues? The last time I ran into a systemd issue was in 2016.


There's been gigabytes of text spilled in flames back and forth about systemd over the last... decade? The first Google completion suggestion for "is systemd" is "bad", and that'll lead you to plenty of criticism about it, lol.

For myself, eh. I find it a little annoying but basically tolerable, sort of like a reinvention of SMF from Solaris. Linux system config/init has gotten a hell of a lot more complex since I first touched it in the mid-90s; sometimes we get more functionality for that, and other times the grognard in me wants to bin it all and retreat to Slackware or something. What was the old joke? Microsoft admins have solitaire.exe and Linux admins have "fiddling with text files". :)


It breaks the philosophy of doing one thing well, has an awkward and esoteric config language and keeps growing in scope.


You might find this interesting: https://nosystemd.org/


Ubuntu exists because Debian stable releases were inconveniently outdated, and Debian unstable was occasionally broken. Ubuntu also explicitly compromised "software freedom" in favor of utility, particularly by making non-free video and wifi drivers easy to install; in contradiction to Debian's pro-copyleft design. This situation has not changed.

I suspect OP is considering a change from Ubuntu because Ubuntu itself has diverged so far from its original identity as a reasonably stable Debian-unstable fork.

It was several years ago that I abandoned Ubuntu in favor of Archlinux for that very reason. These days, I'm almost exclusively using NixOS, but can't recommend it to impatient or non-technical users. NixOS is incredibly stable, very fresh/up-to-date, and incredibly chaotic to use. Someday, I expect, there will exist something of a "distribution" of NixOS that - much like Ubuntu did circa 2008 - caters to the average user. I hope that day comes soon.


Slightly wrong. Ubuntu succeeded in part because of what you mention, plus lots of marketing money. Debian has caught up now, and most people seriously wanting up-to-date packages are over at Gentoo or Arch.

But back to the "Ubuntu exists" premise you started with: it exists because a rich guy wanted to take over Debian and sell an enterprise solution based on it.

Remember, at the time enterprise was all the rage. Google was pushing their enterprise suite, and there were tons of startups (Zoho, etc.); it was a crowded space then, and Red Hat and SUSE were completely fumbling with their Linuxes.


Marketing money? For about the first decade I think the only marketing was shipping CDs for free to anyone who wanted them. A billboard or two might have been rented over the decades in highly targeted circumstances. There is approximately zero marketing.


You think free CDs worldwide are cheap?!

They paid for "CD vending machines" at several locations where you could get free CDs... that is more expensive than a billboard, and practically buys you a spot in specialized magazines. They were running those marketing campaigns all over the place.


Oh yes. Shipping free CDs worldwide was incredibly cheap as a form of marketing, especially when you have the ability to source the cheapest bulk pressing and shipping deals worldwide. As are vending machines, compared with the ludicrous prices charged for a billboard in a relevant position. I can't even recall if the vending machines were Canonical or local Ubuntu user groups doing it for shits and giggles/course credit. A tiny drop of money compared to competitors' marketing budgets.


Yeah, I just set up a new personal server, and after experimenting with Linux Mint a bit, decided that Mint wanted to be a desktop with a GUI more than a server (which is totally fine!), so I ended up at Debian after years of defaulting to Ubuntu. Debian feels more or less the same once I gave myself the option of installing unstable packages (which I probably wouldn't do for a professional server, but seems fine for a personal one).


I don't know about claiming Debian is free from corporate interest. As a packager, sure. And for its own stuff like APT, yes. But that's just a fraction of the overall Debian codebase.

Debian ultimately follows the OSS community for most stuff, and the "community" is often corporation-backed. Just look at systemd: a lot of distros didn't want it, but were ultimately stuck between maintaining masses of stuff themselves or accepting Red Hat maintaining it for them. There are many other examples.


Comes full circle, as Ubuntu is derived from Debian.


As somebody said, "Ubuntu is Debian-based the same way milk is grass-based".


What a smarmy thing for that somebody to have said! They use the same package manager and many of the Ubuntu devs were originally Debian devs.


The same package manager, but different enough packages, PPAs, etc, and a different release process.

Many different technical decisions.

A very different project governance.

I'm not implying that these things are bad! But different enough they undeniably are.

The fact that I can install .deb packages on my box using a port of apt does not make it Debian. (Saying this as a dedicated Debian user from 1998 until the switch to systemd.)


I realize the above comment took more effort than parroting someone else's nine-word statement. Thanks for putting in the effort to expound on the differences between milk and grass.


Ubuntu started out as "Debian, but with more up-to-date packages and some light polish."

As a long-time Debian user in the 90s, I found Ubuntu an exciting new step for Debian-based distros.


The Gray Lady of distros...


- "something that will be stable and supported for the next twenty years"

Then the safest bet is to choose something that was popular and stable twenty years ago.

https://en.wikipedia.org/wiki/Lindy_effect ("...the longer a period something has survived to exist or be used in the present, the longer its remaining life expectancy")

You *probably* just want Debian. There's many reasonable options but that one's the Schelling point for this question (I think).

edit: Here's the list of Slashdot Linux threads from 2003. I think anything flamed in there that's still around is a Lindy candidate.

https://www.google.com/search?q=site:linux.slashdot.org/stor...


I want to highlight this answer.

Ubuntu is completely based upon Debian. Going back to Debian ensures familiarity with the tools (like apt). Also, the longer release cycle is perfectly suited to people using only Ubuntu LTS (which, in itself, is quite awkward, as Ubuntu was created to allow a short release cycle for desktop users).

There are also multiple companies offering Debian support if needed.

And it can safely be assumed that Debian is probably one of the 10 operating systems with the most probability of still being supported in 20 years.

On the downsides, you may end up using some experimental/external repositories to get some bleeding-edge applications, and those often make dist-upgrades problematic (not that it is better with Ubuntu). You may also lose some automatic configuration at install time. I have one laptop which, for example, does not have middle-click working out of the box (it is just one apt-get away, but you have to know what you want).

So, yeah, Debian should be one of the first to consider.


Honest question: how is Debian's hardware compatibility compared to Ubuntu?

My impression is that Ubuntu had people testing it on modern hardware, either on the Canonical side or perhaps on the Dell side (even though I don't own any Dell now).

About 10 years ago I installed Debian on a desktop, and I remember having graphics issues that I didn't have with Ubuntu.

I guess I needed the proprietary driver that Ubuntu offered? I don't remember exactly. Probably some nvidia crap

But either way, I just want to buy some hardware and have the graphics and sound work.

Is Ubuntu currently any better than Debian in that regard? Do they have more testing, or are they the same now?


Any solution that works with Ubuntu should theoretically work with Debian; you'll just need to cobble it together more manually, I imagine.

The good news is that Debian is a very popular distro, so you'll be able to find copious amounts of information online to guide you to the relevant hardware support [1].

It's not at all a "it just works" situation. But if you're comfortable with getting your hands a bit dirty, and you're not using any super exotic hardware, it's not at all bad.

[1]: https://wiki.debian.org/NvidiaGraphicsDrivers


Using the "unofficial" Debian non-free installer and packages will cover most of the gap. Ubuntu still allows more packages in than Debian will.

- https://cdimage.debian.org/images/unofficial/non-free/images...

and apt config:

    deb https://deb.debian.org/debian stable main contrib non-free
The non-free installer is separate from the "official" installer, though. There is a general push to "fix" the multiple-installer issue, but I think exactly how to fix it is still up in the air. The winning option in the 2022 vote to "Change <the debian social contract> for non-free firmware in installer, one installer" also notes it needs a 3:1 majority (but I don't know much about the machinations of Debian policy/voting).

- https://www.debian.org/vote/2022/vote_003


A few years ago I used to install Ubuntu because it just handled hardware out of the box that Debian didn't, or that needed a lot of effort to get working.

Now Debian seems to run on everything I throw at it. YMMV of course.


Or FreeBSD if stability is the thing.

// add: everyone says Debian. Good Linux. But OP didn't say Linux; they said stable over decades, no corpos.


Red Hat for enterprise use, Fedora for personal desktop use. One of the largest communities you'll find. If you want Debian-based, maybe just straight Debian.

The CentOS debacle was poorly handled, but I think what Red Hat was /trying/ to do made sense. The community just wasn't there to sustain it; Red Hat was basically paying to give away free RHEL, lol. CentOS Stream should have just been called RHEL Stream. It's basically RHEL minus minor bugs that would only affect small pools of people.

I’m glad Rocky and Alma sprung up and have budding communities though.


I just moved over from Ubuntu to Fedora, as I felt I was starting to lose my edge: apt is so easy you forget that the other half of the universe thinks in dnf, and I need to keep my skills sharp. It's been a good decision.

A bad decision was to use a Fedora mix with KDE. It works very well, but I think its rough edges would simply not be there if I'd chosen the GNOME path. You get the sense that the RH team defaults to GNOME and KDE is an afterthought. Which is fine by me, but it's a bit of a shame, cos I much prefer it.


KDE user here who has been installing Debian derived distros for about a decade, Fedora Core before that. I often reconsider Fedora. What KDE specific issues should I know about before giving serious consideration to moving back?


>A bad decision was to use a Fedora mix with KDE. It works very well, but I think its rough edges would simply not be there if I'd chosen the GNOME path.

I've been using Fedora as a desktop for many years and agree that KDE is annoying. That said, I understand your point of view, but I find Gnome to be even more so.

Which is why I use XFCE[0] instead. It generally works pretty well, although I don't push it all that hard. Perhaps it might be a good desktop for you too?

[0] https://techviewleo.com/install-xfce-desktop-environment-on-...


Is there any specific reason to choose them over Linux Mint? I'm genuinely asking, I've never tried Red Hat or Fedora.


Fedora is much more secure (a strong focus on SELinux, secure boot, etc.). Fedora has more cutting edge software (latest GNOME, gcc, etc.) while still being very stable.

A large chunk of the gcc/glibc/GNOME/Wayland/... developers are employed by Red Hat, and Fedora shows it: it's where the Linux workstation is going, and it has amazingly good integration of new tech. Most other mainstream distributions trail by several years.


OP wants to get away from Ubuntu, but Linux Mint is based on Ubuntu. Maybe that would be reason for OP to steer clear of Mint?


Mint has so far avoided forcing snaps; I will jump ship too if they do.


What about LMDE?


That's barely even a real distribution; it's an escape plan for Mint to leave Ubuntu if necessary. This is not a serious suggestion.


OP mentioned stability and long-term support as what he's looking for. The Linux Mint team has stated that LMDE is more for themselves, to gather info, and not a priority. So I would stay away if you're looking for stability and long-term support.

“We work on LMDE primarily for us, to get that information. It is not a priority, certainly not compared to Linux Mint itself…” https://blog.linuxmint.com/?p=4276


One of OP's issues is Ubuntu forcing snaps. I think Mint disables snap by default:

https://linuxmint-user-guide.readthedocs.io/en/latest/snap.h...

Another Ubuntu derivative that does not use snaps is System76's Pop!_OS


Isn't one of the purposes of snaps to allow finer-grained permissions? In non-snap systems, how do you prevent that weather app you just installed from accessing your camera and microphone?


The snap service tends to run when it wants, and it can slow down a system when it runs. Also, snap binaries are a lot larger than apt ones.

Honestly, I don't think a weather app would access a camera or microphone; that seems like a bad example. Personally, I put a piece of tape over my camera. It looks like other posters in this thread have solutions.

Using more hard drive space than needed and not being able to control the service are show-stoppers for a lot of people, and it looks like the permissions issue has another solution.


It runs as a user without the necessary permissions to access those devices?


So one should configure an alternative user for each group of permissions that any arbitrary app might need? Then what happens the day I decide that my weather app should access, e.g., my location? Now I have to move all its data and update my launch scripts to the new user?

I happen to dislike snaps as well; the hard-coded install directory is a pain point for me. But at least they are getting the permissions issue right.

I wish that desktop distros would adopt the Android permissions paradigm.


No; you can create groups for the permissions that you'd like to enable and disable, and add the user to the appropriate groups.
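A sketch of that approach (group names vary by distro, and "weather-app" is a made-up binary):

    ls -l /dev/video0                        # typically root:video -- group-gated access

    # run the app as a dedicated user that is NOT in the video/audio groups
    sudo useradd --system --no-create-home weatherd
    sudo -u weatherd weather-app

    # later, grant camera access by adding the user to the group
    sudo usermod -aG video weatherd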


Here's why not Linux Mint. It's a few years old, so I'm not sure if these issues are solved, but I've never heard of Debian or Fedora having them: https://lwn.net/Articles/676664/


Fedora is nice, but it's a power-user and developer experience: rolling only for the parts you'd care about, with a year of security support.

Fedora is especially useful for a new-hardware workstation, with that year of support. Any longer than that and, in my experience, you tend to run into issues. For example: DXVK, for Windows games with Steam/Proton, just moved to requiring Mesa 22.0 as a minimum, and Fedora 35 has Mesa 22.1-something. Similar issues with Ruby development in the past.

Updates are more limited, with a few select packages kept rolling, compared to a true rolling distro like Arch. When a new kernel series is released (6.0 => 6.1), Fedora holds off for a few weeks to test, until 6.0 goes EOL. GNOME is kept to its major release, with only minor versions to fix bugs. Mesa is updated until the next Fedora is released. Firefox is kept up to date, and I use the snap for VSCode and Chromium-based browsers. Most other packages are not updated to major new versions, except for Vim (and Emacs, if I remember).


I would recommend Linux Mint for desktops and Debian stable for servers.


My main gripe with RHEL-based distros is the lack of support for in-place upgrades. I know you shouldn't do it, etc., but for small servers that can benefit from newer kernel versions, it's annoying.

Also, dnf-automatic doesn't support auto restarts to update the kernel.


Red Hat Enterprise Linux supports in-place upgrades: https://access.redhat.com/documentation/en-us/red_hat_enterp... https://access.redhat.com/documentation/en-us/red_hat_enterp...

Here "supports" means "does not in itself result in a loss of support coverage". It's been possible to do it in an unsupported fashion before.


Not RHEL clones, though, which are what most small-scale people are going to use. Also still no auto kernel updates.


RHEL is free for up to 15 instances (including prod), and 9.1 has pleasantly current packages. It’s a solid choice for startups and small teams who need to minimize time spent on infrastructure.


I found the one-year renewal thing annoying, but other than that it looks to be really good! Just the auto kernel update part is left.


Kernel live patching helps to guard against major security and bug issues and is fully automated in RHEL via kpatch.


But it's paid; I don't necessarily need live patching, just auto restarts to update the kernel.


Not quite sure which scenario you mean, but both of these work in the official RHEL distro that is free for up to 15 instances:

1. Restart is OK: automatically apply all security updates as they come out ("dnf-automatic"), then restart if required ("needs-restarting -r").

2. Restart is not OK: automatically apply all kernel patches as they come out ("kpatch auto").
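For scenario 1 that's roughly (paths from a stock RHEL 9-ish install; values illustrative):

    # /etc/dnf/automatic.conf (excerpt)
    [commands]
    upgrade_type = security
    apply_updates = yes

    systemctl enable --now dnf-automatic.timer

    # then from a nightly timer/cron job:
    # needs-restarting -r exits non-zero when a reboot is required
    needs-restarting -r || systemctl reboot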


dnf-automatic should support restart now! https://github.com/rpm-software-management/dnf/pull/1879

However, it's only going to work on RHEL 10 or anything that has a very up-to-date DNF version.



Unfortunately Fedora has been making a lot of similar decisions to Ubuntu lately, although it's still better IMO. I'd say coming from Ubuntu, Mint is probably a lower learning curve anyway.


What decisions? Fedora is my daily driver and I follow Fedora dev pretty closely; so far they haven't done anything like Ubuntu. Maybe the custom Flathub repo? But that's a requirement for them, since they can't host proprietary software for legal reasons.


As far as I understand, the next Fedora release will stop masking most of Flathub by default.


Could you please give some examples of those similar decisions? I'm eager to find any evidence of the Fedora admins being bad actors.


My only complaint is that the switch to WirePlumber caused a lot of audio issues on my machine until I reverted to PulseAudio.

I could also see people not wanting Wayland by default, though that has been working fine for me and the software I use.

I've only been using Fedora for 2 years, but apart from those two decisions, it's been a lot more stable than my time with Ubuntu.


> I could also see people not wanting Wayland by default

It takes a single line change in /etc/gdm/custom.conf to disable it.
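Namely (then restart gdm or reboot):

    # /etc/gdm/custom.conf
    [daemon]
    WaylandEnable=false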


Even easier: at the login window you can select it, and then it will remember your choice.


I haven't seen a past decision that would be relevant, but there is one for future Fedora releases (https://pagure.io/fedora-workstation/issue/269) about installing certain apps as Flatpaks rather than RPMs by default. These would be Fedora-built Flatpaks rather than ones from Flathub.

It does not actually say how this interacts with existing RPMs, or whether said RPMs will continue to exist or be maintained in the future.

I would say it's too early to get up in arms about just yet.

Me personally, I agree with this approach, but a lot of other people clearly won't.


Fedora should no longer be recommended for personal desktop use, for the simple reason that they removed hardware-accelerated video decoding and encoding support. It must all run on the CPU now.


It's a bit more nuanced than that. Fedora disabled hardware video decoding support in their Mesa builds, which mostly affects AMD GPUs. Intel has hardware video decoding in the GPU driver (or in the firmware or wherever), and it's not dependent on Mesa having it enabled. NVidia of course does its own thing.

It's still a shame for AMD GPU users, of course, although understandable from the point of view of legal risk management.


You can enable it manually, like almost every other non-beginner-friendly Fedora quirk:

https://ask.fedoraproject.org/t/proprietary-video-codecs-are...


The nice thing about Windows and MacOS is that users aren't expected to jump through hoops to restore functionality.


I mean, it's not limited to macOS; you can just use Ubuntu or any other more user-friendly distro.

Fedora isn't really noob-friendly, and probably never will be, because of copyright limitations (compared to the UK, where Canonical is based).


I've been running Debian stable on servers for like 8 years now.

Up until recently I had a server running Debian Jessie with 1,802 days of uptime. It served a decent amount of traffic; services on there ran unattended for literally years, and it was rock solid. I ended up decommissioning the server because for the same price I could get better hardware specs, so I made a new server and put Debian Bullseye on it (the latest stable release as of mid-2021).

With Docker being a thing now, Debian's older but more stable package versions for app-level things (programming runtimes, databases, etc.) are less of an issue, because you can run those in Docker while still getting stable core system packages with Debian's impeccable track record. IMO I wouldn't run anything other than Debian on a server (including in base Docker images).

For a personal distro, I think it gets more complicated; it's personal preference based on what you value. Using Debian's unstable channel could be an option for having the latest releases of things while still being quite stable, even though it's labelled "unstable". Arch is another choice. There are many others.


For production servers, the trade offs Debian makes are better than the trade offs Ubuntu makes.

This has held consistently true for as long as Ubuntu has existed.

In recent years this is also true on the desktop.


You didn't patch in almost 5 years?


> You didn't patch in almost 5 years?

For that machine I configured https://wiki.debian.org/UnattendedUpgrades to auto-patch packages with auto-reboots disabled.
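The relevant bits look roughly like this (stock Debian paths; excerpts, not a complete config):

    # /etc/apt/apt.conf.d/20auto-upgrades
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";

    # /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
    Unattended-Upgrade::Automatic-Reboot "false";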

That specific server wasn't running Docker so there was less to worry about from an attack surface level.

Debian stable releases get 3 years of official support and then an extra 2 years of security maintenance. Running a specific release for 5 years isn't unheard of if the workload you're running is ok with not being updated for that long.

Ideally I aim to create new servers when a new stable release is available or at least before the official 3 year time span is over.


At least not the kernel.


What do you do for kernel updates?


I auto-updated everything regularly except for kernel updates. I replied to another comment that has more details.


What is the justification for updating everything except the kernel, besides impressive uptime?


I've never encountered a bug in any of the web apps I've developed where a kernel update solved it. I know there are other reasons to update, such as addressing security vulnerabilities, but I've also never encountered, or personally heard of, someone having their system compromised through a kernel vulnerability.

Of course that doesn't mean it can't happen, but for single server deploys I do value things like uptime. Rebooting is at least a 30-90 second downtime event by the time the box comes back up and your services start up again. There's also a risk that something might not play nice with the update and now you're stuck with potential downtime while under pressure until you revert the change. Hopefully that wouldn't happen with a security patch level update but it's a risk at the end of the day.

Basically, for the workloads I run, I'm confident enough in having user-land system packages updated automatically and rebuilding Docker images to have the most up-to-date security patches, which is where my apps are running anyway.

For bigger updates (distro versions, kernel updates, etc.) I'm more in favor of spinning up a new server, re-deploying everything there, switching DNS over to it and shutting down the old server. In my opinion it's safer, since your original server is never modified and your site is always up.


I'm liking NixOS thus far. I've got it installed on my personal laptop. It's got some rough edges, but the benefits for me outweigh the downsides. I really like affordances like "nix-shell -p foo" to just run a shell with the "foo" command in it for one-off usages, instead of slowly accumulating installed apt packages that I've forgotten why I installed.

Similarly, having any custom configuration inside of "configuration.nix" is way nicer to use than manually editing /etc/whatever.conf. I can have one place to store any custom hacks, with a nice comment as to why + git history.
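For example, a fragment might look like this (package choices are just examples), applied with "sudo nixos-rebuild switch":

    # /etc/nixos/configuration.nix (fragment)
    environment.systemPackages = with pkgs; [ git htop ];

    # declared here instead of hand-editing files under /etc/ssh
    services.openssh.enable = true;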


> It's got some rough edges

I've recently been looking really hard at NixOS as a possible next-step. (I use Manjaro and am particularly interested in ways to keep my Laptop and Desktop in sync)

I've heard that the documentation can be pretty lackluster at times. Are there any other rough edges I should know about?


I'd say it's really about the documentation, including "unofficial" documentation like bug reports and SO questions. Part of why Ubuntu is so popular is because there's enough of a community that whatever issue you hit, someone's probably already hit it before and asked about it on SO, where it has a highly upvoted answer that fixes the issue and explains it. That's also why NixOS seemed like a better choice than Guix to me.

One thing I ran into was setting up a Python project using poetry2nix. It mostly works great, but you sometimes get inscrutable error messages. I had to copy this into a shell.nix file for reasons that aren't entirely clear to me (and I had to hunt it down from https://github.com/NixOS/nixpkgs myself instead of finding docs or a bug report):

    astunparse = super.astunparse.overridePythonAttrs
      (old: { buildInputs = old.buildInputs ++ [ self.wheel ]; });
One non-documentation issue I've hit is that even when using the stable channel, you live much closer to the bleeding edge than on a distro like Ubuntu. I updated my system to the latest packages, and then my wifi wouldn't work after waking up from sleep. It turned out to be a kernel regression that was fixed a few days later in a patch update. Everything was fine again, but it's not something you'd run into with a more conservative distro. Similar issue with the latest GNOME breaking extensions for a while before they got updated.


> Are there any other rough edges I should know about?

I think the easiest to encounter is that programs downloaded from the internet might not 'just work' for various reasons. (e.g. because they're linked against libraries in more typical locations, or because the code hardcodes paths like /bin/bash or /bin/rm which NixOS doesn't provide by default).

So, when a program like minikube tries to provide a nice UX by downloading helper executables, it won't "just work".

Solutions vary from "someone already packaged this, it's not a problem" to requiring deep understanding of what you're trying to do.


It's not trivial to package/run stuff that isn't prepackaged (which is unusual, given that Nixpkgs is enormous).

But it's not too hard either; I authored several official packages within a few days of migrating to Nix. It's only difficult if the build process is a bit uncommon or ugly.

The hardest part about NixOS is figuring out what to learn, and in what order.


For development, not all languages have the same support and tooling. This is sometimes inherent in the given language's dependency management or popularity.


I've run NixOS on my last two machines. I like it more than the alternatives, but it isn't without flaws. At this time, you must be sold on the idea of declarative configuration and willing to learn at least the basics of how Nix the language works.

It's cool that you can Git pull and build an OS, but management of the project can be very slow. Using a pull-request model majorly slows down progress: if you need to revise changes for a new package or a package update, you will make the correction and get approval, even as a maintainer, but no one will come back around to merge it. With a patch-based model, maintainers could waste less time by just making those few modifications to the patch and getting updates upstreamed faster, without the back-and-forth.

That said, it's still something I'd recommend for someone with the experience and interest. There's never been a system I was as confident with, running patches and just updating the system myself when stuff wasn't working. But also, Guix is out there doing similar stuff, and you have to admire the free-software goals even if they can sometimes be impractical (I just do not like the Scheme-based syntax).


I think NixOS is what the future should be. Searching and resolving packages still seems slow, and things like flakes are still evolving to address rough edges, but having completely declarative definitions of the entire OS installation is amazing.

Spinning up native development environments as an extension of production environments is nice too.

I worry that tools like pyinfra and Ansible combined with Docker containers and traditional distributions might be good enough to prevent critical mass in terms of user adoption.

Even so, it’s worth daily driving for a while just to get a sense of what the world could be like. Kind of like Erlang OTP versus Kubernetes and containerized microservices.


For anyone interested in trying it, know there is one major downside: most prebuilt Linux binaries won't run on it, because NixOS doesn't keep dynamically linked libraries in the standard locations those binaries expect.


This. And it's completely reproducible. IMO this is the best bet for stability + long-term survivability.


Also, you can easily remove packages without leaving now-unused libraries behind.


I switched to Debian on servers and was impressed enough (previous experience with Ubuntu & RHEL) that I went ahead and installed it on the desktop too. That was a mistake for a few reasons, so I quickly switched away* on desktop, but it's still my choice on servers, with a few caveats...

1. Snaps are infecting all distros, not just Ubuntu. This affects desktop more than servers, though LetsEncrypt/Certbot is a major culprit on servers. I have managed to work around this on my Debian servers with a combination of podman and Certbot's Docker Hub images, but it's a sad state of affairs that such fuckery is needed.

2. Hardware support is the perennial issue on desktop, and I was still getting hit by this in 2022 on Debian, with AMD drivers and multi-monitor/KVM support. This obviously isn't a worry on servers.

---

* FWIW I switched away from Debian on desktop to Gentoo, which I used years ago before getting a little fed up with compile times. I've found compile times to be absolutely minuscule on modern hardware compared to what I remembered, and binary ebuilds much more plentiful than in the past too. I'd always found it the most sane, stable and up-to-date distro, with compile times really being the only downside, so I'd recommend it for anyone looking to try out something new. Unlike Arch, Gentoo does a lot for you: it's really just the initial install that's a pain.


I'd recommend you look at Linux Mint as a desktop replacement for Ubuntu.


I've used Mint on the desktop in the past. I'm Irish, so I took an interest fairly early on due to its provenance.

It's cool if you like its default DE setup, but I don't see any real advantage over Debian, and unless you're using LMDE, it's going to suffer from all the disadvantages of modern Ubuntu. It's been hit just as hard by snaps; Debian is being hit too, but not as hard as Mint or Ubuntu.


Mint has Irish provenance?


Mint has a Debian-based edition without any Ubuntu dependencies.


Yup, it's called LMDE


Look no further than Debian. After switching from Ubuntu I started to understand why people view Ubuntu as a bit dirty.


Ok, but which release channel? And can I convert an Ubuntu into a Debian by reinstalling it while keeping my home directory?


> And can I convert an Ubuntu into a Debian by reinstalling it while keeping my home directory?

You can do that with any distribution, unless you expect your configs to line up exactly.

If you don't keep your /home on a separate partition, back it up. Install Debian, making sure to separate /home and root into different partitions this time. Go through your ~/.configs, find the ones you've changed (most of this will probably be browser shit) and put copies aside. Then take all of the configs out of your home directory backup (including the originals of your changed configs) and put those aside in a different place, deleting them from the backup of your home directory. Back up the virgin ~/.configs from your new install (do not delete them from the new home directory).

Then copy your old home directory files (sans configs) over your new ones using rsync (see the sketch at the end of this comment). Compare your manually changed files to the virgin files from the install: has the format changed, will they still work? Are they located in the same directory in Debian as in your previous distro? If it looks fine, copy them in. See if they work. If they don't, look up why not. They probably will.

If you keep your home on a different partition, then install as if you don't, and let Debian create a home on the same partition as the new OS. Do the same config dance as above (annihilating your old configs other than the customized ones), and then switch your /home to be mounted from your old home partition.

Or at least this is what I do. On your desktop you probably want to install testing; on your servers, stable.
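The rsync step above, sketched (the username and backup path are made up):

    # copy the old home back over the new one;
    # the leading '/' anchors the pattern, so only top-level dotfiles are skipped
    rsync -av --exclude='/.*' /mnt/backup/home/alice/ /home/alice/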


I just pin to a specific codename (version) instead of a state (such as testing). E.g. right now I'm using bookworm, which happens to be testing, and I'll still be using it for a while once it's been declared stable. If I feel like I want to upgrade some packages that are available in the next testing, or I can't install some specific package from unstable because it requires an update to a big library (often libc6), I update to the next testing.

I think it's better to avoid pinning to testing since it gets a lot of updates right after a stable promotion (after the package unfreeze), which you probably don't want.


> Ok, but which release channel?

For home use, Debian testing is usually a good balance between things not breaking and things not being ancient.

For servers, Debian stable is probably a better choice.


I think anyone who suggests daily driving Debian testing should also mention the fact that packages can disappear from testing for weeks/months at a time (and reappear later). It's recommended to configure `unstable` in your sources as well but set up apt pinning so that those packages are only pulled in if they're missing in testing. See: https://wiki.debian.org/DebianTesting "Best practices for Testing users"

In practice this means adding something like this to /etc/apt/preferences (along with adding entries for `unstable` in /etc/apt/sources.list):

    # use `n=` when referencing codename (i.e. buster/bullseye/...)
    Package: *
    Pin: release n=bookworm
    Pin-Priority: 550

    # use `a=` when referencing archive (i.e. stable/testing/unstable)
    Package: *
    Pin: release a=unstable
    Pin-Priority: 520
That way apt will pull in any packages missing in testing from unstable, and once the package is reintroduced to testing, will prefer that version rather than continue to track unstable.
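
To sanity-check the result, apt can show you which source wins for a given package (package name is just a placeholder):

    # show the version/priority apt resolves for a package
    apt-cache policy somepackage
    # or explicitly pull a single package from unstable when needed
    sudo apt-get install -t unstable somepackage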

Maybe I've been lucky, but I've been running testing on my non-server desktops and laptops for 13 years now and have only rendered my system unbootable once (it required booting a live CD to reinstall an older working version after a bad libpcre update had been rolled out).


I’ve been bitten quite a few times by Testing so I run stable now.

Even the current stable is fairly “new” so I don’t even mind.


For a long while I ran testing and had zero issues.

Warning: if you're used to PPA life in Ubuntu, Debian doesn't offer an equivalent that I'm aware of. EDIT: sibling comments indicate Homebrew might solve this.

The problem with Debian is you can't usually pick "a thing" from another channel, you mostly have to fully commit. Testing is great until it isn't and anecdotally sid/unstable never fixed that for me - I just had to learn to build the occasional package from source

Ultimately I'm a lot happier having gone on that journey but it can feel very arduous the first few times apt doesn't have a recent enough version of something available


Right now I run testing as my gaming Linux, and I like that the packages are not old. But there's also a KDE bug that occurs daily, and it's a bit annoying.


> people view Ubuntu as a bit dirty

Look even further at Devuan, made by people who consider Debian a bit dirty. (I use both as needed, I'm just riffing on your comment)


Personally - I think the step away from systemd is a fairly huge mistake.

I understand that legacy users and orgs may have vested interests in sticking with older init systems, but personally I think systemd solves a challenging problem space in a very easy to use manner.

I much prefer writing a systemd unit file than having to wade into sysvinit or runit.

Politics aside - I just find systemd far easier to work with.


> I use both as needed

In what case do you need to opt for a different init system than systemd? Genuinely curious, as I've been using debian with systemd for almost a decade now and haven't run into any issues regarding it.


Asking why/why not to use systemd is covering very old territory here at HN and not the point of my earlier comment - I was riffing on someone using the term "a bit dirty" in regard to a Linux distro.


I understand the point of your comment, and that why/why not is mostly a philosophical/political/whatever question.

But really, the "I use both as needed" is the statement which piques my interest. When do you need something other than systemd? What is the actual work which doesn't run with that particular init system? I'm honestly unsure if you were just joking or if there's really something more to it.^^'


Another reading of that comment would be that they use Debian (with systemd) when needed, and personally prefer Devuan for philosophical/personal reasons.


It's still pretty easy to replace systemd in Debian with sysvinit (we do it at moderate scale). Devuan works very well though, basically there's little difference between Devuan and Debian without systemd (all the packages are identical, except where a package has decided to require systemd in which case the Devuan version will just have the dependency removed).

Long term, though, there's a question mark over support for anything other than systemd in Debian, hence the move towards Devuan.


I consider Devuan just under-maintained Debian without systemd (I personally consider that a drawback, but for some it is a big selling point).


Where Debian is a reputable distribution with a clear goal, the Devuan crew is solving a non-issue by means of almost religious levels of resistance toward systemd.

If you want technological merit, choose Debian. If you want to join the conservatives of Linux, you should go for Devuan.


Even during Debian's systemd adoption wars I'd never seen any argument calling into question Devuan as "disreputable". After all, it was a group of experienced Debian folks that made a systemd-less version of Debian that compares almost exactly with Debian's status and capabilities. I hope in your case you have not been forced to use Devuan.


Hit me up when you finally notice that enterprise and seniors embraced systemd years ago while you kept avoiding change.


You implied that Devuan is disreputable.


I don’t think that’s what they are saying.

Debian is reputable because it has been widely used for ages.

Devuan is not disreputable. It is simply a fringe variant of Debian that has much fewer users and no obvious reason to adopt for most people.

Debian has a good reputation.

Devuan does not have much of a reputation at all, because most people that might use it have no problem with using Debian and systemd.


> it was a group of experienced Debian folks

No, it wasn't. There was only one ex-Debian person in their team.


I'd heard many times over the years that Roger Leigh was given a great deal of support by other Debian people, not to mention people across the wider Linux community. Maybe I misremembered.


You're using language and innuendo instead of argument to imply that the maintainers and users of Devuan are irrational. The objections to systemd were almost completely about the technical merits, with a side order of being annoyed that Red Hat was again being allowed to dictate the standard Linux stack by inserting another impenetrable monolith.

I know it's weird for me to object to your comment without implying that you're insane. It's alright to feel like using systemd after a tangle of init scripts was a breath of fresh air. I understand that point of view. I don't understand the invective against people who put their money where their mouths were.


If you are comfortable with Ubuntu, then perhaps Debian [1], the OS that Ubuntu was forked from, would feel natural?

[1] - https://www.debian.org/


After switching from Ubuntu to Debian, it feels like Ubuntu is just a frankenDebian where packages of different versions are mixed haphazardly resulting in many minor but annoying issues.


For me Debian has really been "the girl next door" of Linux distros over the past 20 years. I've been momentarily pulled away by "the new hotness" like Gentoo, Ubuntu, Arch, or Flatcar. But I just keep going back to Debian.

With a recent price drop I decided to upgrade the storage on my Thinkpad T480s, so I had occasion to reinstall the OS. I went with Debian Bullseye. Choosing all the default settings with LightDM and XFCE, everything except wireless "just worked." Understandably, I had to enable non-free for the Intel wireless firmware, but a simple apt install had that working flawlessly.
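
For reference, that step was roughly the following (the sed pattern is a simplification; adjust for your sources.list):

    # add contrib/non-free to the existing entries, then install the firmware
    sudo sed -i 's/ main$/ main contrib non-free/' /etc/apt/sources.list
    sudo apt update
    sudo apt install firmware-iwlwifi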

Everything about my base system is perfect for me. Booting is quick. Power management is great, with the battery (now at 90% of its original capacity) easily lasting all day. Full volume encryption setup was effortless. I even got Secure Boot running after following some steps (more or less copy-and-paste) from the Debian wiki. When the time comes to upgrade the laptop wholesale to something with a more recent CPU, I half expect that simply moving the M.2 SSD from my current laptop to the new laptop will just work, or at least will require minimal tweaking.

I've tried using snap to get more recent versions of things, but that ended up being more trouble than it's worth. Now I just build my own containers, running them with UID/GID mapping, giving access to X11, and bind mounting a dedicated home directory for the app. Sort of like a poor man's snap or Flatcar Linux, but it's easy enough to figure out, I get more customization, and I get to keep my old familiar Debian environment.
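
If anyone wants to replicate this, the invocation looks something like the following with podman (image name and paths are hypothetical):

    # run a GUI app from a container: keep the host UID/GID, share the X11
    # socket, and bind-mount a dedicated home directory for the app
    podman run --rm -it \
        --userns=keep-id \
        -e DISPLAY="$DISPLAY" \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        -v "$HOME/containers/myapp-home:/home/user" \
        myapp-image
    # (you may also need to allow local X connections: xhost +local:)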


Wow, if there's one thing open source projects are known for, it's dated websites


> isn't broken don't fix it

Though in this case it's likely to keep it accessible to people with slow internet connections.


It's 1.4 MB for an incredibly simple page, and a modern design doesn't mean it needs to be some bloated webapp.


1.4 MB? WTH, Debian's current page is 14 KB, that is 0.014 MB. 1.4 MB, good grief, that is larger than Dostoyevsky's Crime and Punishment.


> 1.4 MB? WTH, Debian's current page is 14 KB, that is 0.014 MB.

It depends on whether you're looking at just the HTML or at the page as a whole. The HTML is that small, but there are a couple of JPEG photos on the home page, each weighing on the order of hundreds of kilobytes; the total including these photos is around 1.4 MB.


Debian's website isn't dated, it's constantly updated and a wonderful, comprehensive resource. Unless you're bikeshedding about rounded corners or something.


You can have a constantly updated, great resource that's still dated. Dated is in reference to the UI.


It's old school, not dated.


Could you be more specific? What exactly should Debian's website offer that it currently isn't?


More pleasing to the eyes and not from the 90s. A more attractive site wouldn't turn people off as soon as they visit it and would yield more users and support. Ubuntu's site, for example, looks nicer and is only 0.1 MB larger.


Are you volunteering to redo debian.org? Because that's how this works.


That's not how it works. I don't think the people on top of the Debian community would just let anyone redesign their home page, even if the new website were objectively better in some regard.


Oh, there would be a debate, for sure. It will probably go through a lot of bikeshedding. But if you demonstrate competence and commitment and stroke the right egos, it can certainly happen.


I don't think a nice website would translate to that much more benefits, like new Debian users. If Debian was an online clothes retailer, and they wanted to compete with other such websites like asos.com, they'd need a website like them. But even then, Craigslist is also very popular, and the website design is similarly dated.


I never thought I'd meet the guy who actually likes web bloat, and on HN of all places.

I thought you were just a legend, like Keyser Söze.


Now compare the two on Lynx


Half a gig of Javascript and a GDPR popup.


Yes, this is part of the open source culture. It's a matter of resource allocation, and a bit of signalling. If you'd like a trendy corporate website, take a look at a distro that's backed by a corporation too, like Ubuntu, or Red Hat.


I dislike Ubuntu for the same reasons and I can recommend two paths:

For environments where you expect and appreciate stability- Debian. Hands down. Your initial feel is going to be like a leaner Ubuntu with a few extra steps (non free drivers or software). Very low learning curve.

If you don't mind "staying on top of your OS and packages", then I'd recommend Arch Linux for desktop (not server). There is never an "end of support" for any release because it's a rolling release distro. Do your updates regularly and you'll be good to go. They have one of the best wikis in the Linux world, and it's a low-maintenance distro once you set it all up. A bit of a steep learning curve at first, but I'd argue that this learning is beneficial to a general understanding of Linux.


Why not server? Because of stability?

I've used Arch as my main server OS for 5 years or so and never had any issues. I just update when I think about it.


Stability is great, never had any issues either. Really it’s the frequent package updates. Instead of applying security updates as needed, you are constantly updating all packages as they are released.

Depending on what your server does, this may be less than ideal.


> I just update when I think about it.

What about security vulnerabilities?


I follow them independently (as a side effect of my job) - but it is good to see such comments.


I don't know if either Nix or Guix will be stable and supported for the next twenty years, but they probably won't be ruined by corporate interests. I've personally quite enjoyed learning about functional package management, where a package is seen as a pure function whose arguments are all its dependencies, the source, and the build tools. The output of the function is the binary artifacts.

It's a good idea, and if you like the idea of setting up your computers by programming in Scheme, Guix will be right up your alley.
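
A quick way to see the pure-function idea from the shell, using the classic nixpkgs example (the output path shape is indicative, not exact):

    # build GNU hello as a pure function of its inputs; the result is a
    # content-addressed store path whose hash covers every input/dependency
    nix-build '<nixpkgs>' -A hello
    # ./result -> /nix/store/<hash>-hello-<version>
    ./result/bin/hello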


Nix did some partnering with Microsoft GitHub recently. It will start with some extra resources for the project, but we will have to see if it stays that way.


Distrowatch carries on after all these years, and (at the least) gives a fairly good catalogue of Linux and BSD distro capabilities in textual and tabular format.

https://distrowatch.com/

Try some out as VMs before committing to bare metal installs.

EDIT: I see that MX Linux continues to reside as the "top" distro according to that site's methodology. MX has been on top there for a very long time.


Its methodology is page hits, so it can be manipulated pretty easily. I see MX Linux at the top, but I never see people talking about it on social media, so it makes me wonder how it got to the top of the list.


This is surprising to me, too. OTOH I'm using it right now. Maybe it's up there because it just works without much hassle? By that I mean not having to go into any configs and edit them by hand. They have graphical config tools which did anything I needed: click, click, clickety click. Chic! Furthermore, they support the use case of installing it to live media like USB sticks or external HDD/SSD via USB, or even booting into RAM and running from there. Even on older systems, rock stable. All in all very impressive. Like their cousin antiX, too. No wonder, they are using the same toolsets running atop Debian.

Very pragmatic mindset towards a daily driver on whatever (common) hardware under whatever circumstances.

Maybe that motivated users to rank it up there? (I didn't, btw.)


I think both AV Linux and KX Studio recently rebased to MX, so all the audio people now go there.


Distrowatch rankings are based on clicks. If someone wanted Hannah Montana Linux to be #1 in the ranking, it could be done.


Clicks, yes, but the rest seems a bit murky, with a hint of "then a miracle happens":

https://distrowatch.com/dwres.php?resource=popularity


Debian Stable is your best bet here. It's run by the community, evolves slowly and with consensus (somewhat), and you can be involved in the process if you have the desire and patience.

I've been using it since 3.0, both on my desktop and infrastructure, and I'm pretty happy.

I once lost a Debian server in our system room. When we remembered that we had such a machine, I just logged in, and it installed all the updates in between and kept working without any problems. Just rebooted to get the new kernel, that's all.


For end user workstations, my favorite Linux distros have converged on either Pop!_OS or Arch Linux (and Manjaro).

Pop!_OS is a remarkably stable and usable Linux distro. At least from a UX and aesthetics standpoint I find it competitive with macOS (I am also a longtime mac user, though macOS has lost its edge in recent years on the UX front). Overall I would say it's my favorite Linux distro these days. I am a big fan of the Debian family of distros in general. I used Arch Linux for almost ten years before switching fully to Pop!_OS and I never had an update go pear-shaped. Rolling release with pacman is amazingly robust indeed. I would say it's my number two.

I was an Ubuntu user from version 7 to version 16. Two things did it in for me. The first was when Canonical submarined Amazon search queries into the OS's search feature somewhere around version 12 or 13. The second was that major version upgrades reliably crashed my workstations. Every single major version upgrade meant a Busybox prompt after rebooting. After a few too many of those headaches (frustrations with apt upgrades causing trouble aside), it got to the point where I'd just do a nuke-and-pave to upgrade major versions instead of doing it in situ. After that happened for the last time sometime around Ubuntu 16.10, I said enough and dropped it, going fully over to Arch Linux until around 2020, when I switched to Pop!_OS for a change of pace. Pop!_OS major version transitions have never caused problems for me. Arch Linux obviously doesn't have a notion of versions to begin with, being rolling release.


I’ve been running Pop!_OS for about two years now and I still love it. Use it for both work and home. No plans to change.


I also had issues with Ubuntu's snap Firefox. And I am also considering leaving (after almost 20 years).

My biggest issue was snaps taking a huge amount of disk space. My 30GB system partition was running out of space. What I usually do is move the big stuff to another disk and symlink to it. Moving the snap directory this way caused mounting issues; Firefox was unable to download anything.
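
(For anyone else in the same boat: snapd reportedly copes with a bind mount better than a symlink, so something along these lines might work; this is an untested sketch and the paths are examples.)

    # relocate snap data to another disk using a bind mount, not a symlink
    sudo systemctl stop snapd.service snapd.socket
    sudo mv /var/lib/snapd /mnt/bigdisk/snapd
    sudo mkdir /var/lib/snapd
    echo '/mnt/bigdisk/snapd /var/lib/snapd none bind 0 0' | sudo tee -a /etc/fstab
    sudo mount /var/lib/snapd
    sudo systemctl start snapd.socket snapd.service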

I switched to normal Firefox from apt (non-snap) and it works. I am considering Debian.

I think Ubuntu (like Mozilla) needs to realize who their user base is - increasingly conservative, occasionally tinkering users. They need to be more careful with sweeping changes.


Ubuntu gave up on the desktop user experience when they dropped Unity. Linux Mint has the best desktop user experience now and no Snaps. For servers I would recommend Debian stable.


For what it's worth, I've been using SUSE, openSUSE, and currently Tumbleweed personally as a daily driver on-and-off for ~19 years. Occasionally I hop over to Fedora and come back. I also run a couple of public-facing servers on Linode using SUSE's Leap. I haven't had any major issues (some minor annoyances along the way, of course), and the software is very up-to-date. I see that you're concerned about corporate control; maybe I've just been getting lucky. I don't have much experience with Debian, but I have been experimenting with FreeBSD (has a non-profit registered in Colorado[0]) and NixOS (has a non-profit registered in the Netherlands[1]).

[0]: https://freebsdfoundation.org/legal/ [1]: https://nixos.org/community/


Honestly, if you want steady progress and consistency, if you want changes that are thought out, well reasoned, discussed and sensibly implemented, you should consider the BSDs.

Even ignoring the systemd stuff separating Linux from the rest of the world, there are just so many ways all these Linux distros are trying, often gratuitously, to differentiate themselves from each other. Often it's sloppy, and the end result is that it affects upgraders (long time users) much more than new people, because they're so focused on attracting new people and therefore test new installs so much.

A Linux admin from the '90s would be lost on a modern Linux distro. A BSD admin from the '90s would have plenty to learn, but most of it would still be quite usable.


I just had the interesting experience of upgrading a FreeBSD box from 9.x (2012) to 13.1 (2023). It had been turned off for a few years while I was off playing with industry.

The process was shockingly smooth. The conf files are mostly in the same places. It was... nice. At the same time I took my desktop from Ubuntu 14 (2014) to 22 and ended up giving up and reinstalling. That was more expected. I was shocked that the FreeBSD upgrade just worked.


Stable? Debian or Fedora.

For an end-user machine, I've found that rolling release is much better than point release (much newer, less buggy/crashy -- paradoxically -- versions of software and GPU drivers). (Although, surprisingly, Fedora seems to have very fast package updates despite being a point release.) For rolling, Manjaro is my top pick. Arch is a close second, assuming you've mastered the arcane art of its installation.

You might be thinking, "But a fringe, rolling release distribution can't be as stable as Ubuntu, the de facto default point release distro that ostensibly claims 99% of Linux marketshare." You would be insane to not be thinking that. But I guess companies' potential for incompetence is even more insane, since that's been the opposite of my experience. I've had fewer installation issues and crashes during daily use of Manjaro than on Ubuntu over the last few years, which really surprised me. It's like Ubuntu is decaying. Or maybe I just got bad RNG. Or maybe it's that program and DE devs are always working on the latest version of their software while older versions are an afterthought, so rolling distributions that keep you up to date with the latest versions seem stabler. :p


> assuming you've mastered the arcane art of its installation.

Archinstall [1] exists, so there isn't much reason to use Manjaro anymore (which has a lot of its own issues anyways [2])

[1] https://wiki.archlinux.org/title/Archinstall

[2] https://manjarno.snorlax.sh/


After a major Ubuntu version upgrade completely broke virtualization, I switched to OpenSUSE. It has a very professional feel and YaST is a great tool for (re)configuring the system without having to look up commandline specifics for every one-off action I wanted to perform. I use Leap for my work laptop and Tumbleweed for my home PC. Both are very stable.

I also use Manjaro on the side but for a rolling release it lags slightly behind OpenSUSE Tumbleweed. AUR provides a good supplement of software but more are providing alternatives via Flatpak nowadays. Fedora is a good choice too. I recommend trying all of these before making a decision.


OpenSUSE has its quirks, but I have been a happy user for 7 years. Tumbleweed saves me the headache of upgrading every n years.


I've literally never had a positive experience doing major version upgrades in Ubuntu during ~5 years of usage. I only broke some minor things once on Tumbleweed and it was fixed via a Snapper rollback within minutes.


Debian.

Anything that's not up to date enough, I install with Homebrew. I did not follow the drama but I know that there was a time when even Unstable was missing a recent version of Emacs. Homebrew has it on release day. With the combination of Debian stable and Homebrew, you get a stable core and the bleeding edge for the 3 pieces of software you actually care about. Perfect.
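
For anyone curious, getting Homebrew onto a Debian box is the official one-liner, after which the fresh packages live alongside apt's (emacs is just an example):

    # install Homebrew on Linux (official installer), then grab a current Emacs
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    brew install emacs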


> stable and supported for the next twenty years, without being ruined by corporate interests

That exists only in your imagination, so, imagine whatever you like.

Sounds like you want Debian, but you might want to explore the disgruntled CentOS forks/replacements if you want the appearance of an enterprise distribution (that's what "stable and supported for the next twenty years" means) without the pesky problem of having to be aware that there is an enterprise driving it.

May as well look at FreeBSD while you're here, though.

I switched mostly to Manjaro a long time ago and I've only had a couple of issues with it, for whatever that's worth.


For a desktop, I've had a wonderful experience with Fedora Workstation. They're up-to-date while remaining stable, have a policy of working with upstream, and don't do any real customization to KDE/GNOME.

I am mildly peeved at the whole CentOS debacle but wouldn't mind running Fedora on a personal server.


Just use Rocky Linux or Alma Linux; it's basically what CentOS was. Unless you're running anything super insanely mission-critical, CentOS Stream would still be fine for your uses, and if you need something THAT highly available, paid support might be worthwhile anyway in an enterprise environment.


Yeah if you're using it for a real server then Rocky/Alma/Stream/RHEL. For a personal server I find Fedora fine, since my local packages will match my server's.


>For a personal server I find Fedora fine, since my local packages will match my server's.

Absolutely. I also use Squid[0] as a proxy for updates[1] which, after the initial download by one of my dozens of Fedora boxes, caches the updates, reducing bandwidth usage and (more importantly) speeding up updates significantly.

[0] https://serverfault.com/questions/837291/squid-and-caching-o...

[1] https://linuxiac.com/how-to-use-yum-dnf-command-with-a-proxy...
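
The client side is a one-liner if you go this route (host and port are examples; adjust to taste):

    # point dnf at the local Squid cache
    echo 'proxy=http://squid.lan:3128' | sudo tee -a /etc/dnf/dnf.conf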


Personal machines (workstations or servers) are very different to those you'd use in industry. I'll only comment on personal machines.

For home workstations, as an ex Ubuntu user, I like Manjaro (and the whole Arch ecosystem) A LOT. Fedora is pretty nice as a dev workstation also.


Manjaro is maybe not the best if OP is looking specifically for stable + supported for 20 years. Given that it's a community distribution and stays fairly close to the newest versions of everything.

But it's still my favourite. All the power of Arch with none of the hassle.


Mainstream: Fedora. It has been rock solid for me, while at the same time adopting the cutting edge of the Linux ecosystem (latest gcc, latest security stuff, etc.). Also a lot of exciting things happening, like Silverblue, CoreOS, OSTree, etc. For stuff that needs to be really stable, you can use RHEL or a derivative.

If you are willing to do something more alien: NixOS. But be aware that it can be a deep rabbit hole and time sink.


Silverblue looks great; I'll be moving from Ubuntu to that for my next install.

My experience with OSTree in Flatpak has been pretty positive, and I really like the idea of an immutable base image.


I have anecdotal evidence that many people moved from Debian to Arch Linux over the last 5-10 years.

Arch Linux is the unassuming kind of distro that gives way to upstream.


Steam's platform survey would agree with you. Ubuntu share has been decreasing steadily over the years while Arch/Manjaro has gone up (not even counting Steam Deck)

I am one of them, having switched from Ubuntu to Arch two or so years ago due to frustrations with the platform. Arch is a bit more hands on, especially at first install, but it has solved all my complaints from Ubuntu. If OP is interested, I would recommend looking at EndeavourOS, which is probably the most friendly way to get into the Arch ecosystem, without it getting in the way.


One such user. Switched to Arch almost a decade ago (from Ubuntu->Mint->PopOS), and now on Asahi Linux (which is just ArchARM with a small overlay of 15-20 driver/kernel packages).


I have an M1 Mac. How frustrating is Asahi compared to macOS? I love me some Linux but hate the driver fiddling.


Here's a draft of my upcoming review: https://hackmd.io/@captn3m0/ByWpMgZho.

tl;dr: Asahi works great on Apple Silicon. There’s a small collection of software (like Widevine/Zig) that isn’t available on Asahi/ARM64, and some important features that are still in progress (Speakers/Webcam/Mic/External Monitors). GPU and Sleep support is still experimental, but the device is usable enough for me (No critical showstoppers so far).


> I have been running Ubuntu on servers and desktops for around twenty years

If you have been running Ubuntu for that long, and don't have much experience with other distributions, the stable distribution you will probably feel the most comfortable with will be Debian. From an administration point of view, they are very similar; not only do both use dpkg and apt for package management, but also small things like details of the structure of the configuration in /etc are very similar. This is because Ubuntu started as, and still is, a derivative of the much older Debian distribution. The only thing you might miss would be Ubuntu-specific stuff like snaps; and you might also get a bit annoyed by Debian stable having older versions of packages than you might be used to (it's a slow-moving distribution, more similar to enterprise distributions like RHEL in that aspect).


On my servers I run FreeBSD. It’s stable and it will most likely continue to exist for decades to come unaffected by corporate interests, just like it has for ages so far.


I think Debian/Fedora(or RedHat if you're willing to pay) are fine on the enterprise side.

For personal use... I fell in love with Arch. Between the Arch User Repository (AUR) and the quick upgrades, I really struggle to use anything else these days.

Arch takes a bit more work to get online than some of the other distros, but I've been amazingly happy running it across all my personal machines (from desktops to laptops to servers to media machines) - It's a fantastic engine to build a machine around, but you will have to do a bit of building.


Arch isn't too difficult, but I've been using it since ~2005 or so.

Something that is hard to quantify - arch uses vanilla software, almost entirely. Very very few patches/configurations are deployed. This means you're actually learning about the upstream software, not the distro.

What you learn on Arch, generally applies to other distros as you know something about the upstream software. I don't know a thing about debian, but I can generally figure things out.


I've used Fedora for years as my desktop linux distro on my laptops and CentOS or RHEL for servers.

Fedora is the desktop focussed project which is upstream from the commercial RedHat Enterprise Linux, CentOS is the community distribution of RedHat Enterprise Linux.

This is a great, stable, and feature rich ecosystem that contributes a lot of innovation to opensource projects and upstream back to Linux, it has a strong community and has a long track record stretching back more than 20 years.


I switched from CentOS/RHEL to Ubuntu on my servers a couple years ago

While it was a little jarring to have things named and located differently (not much different, but different nonetheless), I was basically "Ubuntufied" within a couple of weeks.

20 years ago I ran SuSE almost exclusively

Prior I had Slackware (which is still around, and distinctly non "corporate"), RedHat, Mandrake, and a couple others

You wanting to avoid "being ruined by corporate interests" is likely impossible - unless you go with a single BDFL distro like Slackware

Ubuntu is "corporate"

RedHat is "corporate"

SLES is "corporate"

Debian is "corporate" (not as in a commercial entity, but they definitely have their own very tight community (i.e. "corporate") rules as to the "right" and "wrong" ways of doing things)

I'm curious as to why you think Ubuntu (or Red Hat, or any of a number of other distros) which have been around "a long time" won't continue to be around for "a long time"?

Ubuntu's 18

Debian's 29

Red Hat is 30

Suse is 31

...what are you looking for, exactly, that you think isn't handled well-enough by the "big guys"?


I remember when I had fun using Ubuntu. My main concern (and joy) back then was enabling wobbly windows.

What about now? The commercialization leaks through more and more every year. The Ubuntu logo is becoming as misleading as the OpenAI name; instead of the Kumbaya I thought it represented, it's quickly becoming just another piece of spyware.

There is no way to create an empty file with a right click without googling and going through some bullshit first. I can't even drag and drop files and directories from the desktop straight to the file explorer. Who the hell thought that downgrading functionality was a good idea?

The "touchpadization" of the UI sucks. There is no way to configure how background images are displayed. There's so much more bullshit every release that I have postponed the next upgrade out of fear it'll become even worse. So yes, I'm ready to change to something better too.


Linux Mint has a good level of insulation from nonsense like Snaps while preserving compatibility.

Fedora seems to be gaining ground, and also all the innovation in Linux is pushed by the Red Hat people, and Fedora gets it way sooner. Although Ubuntu derivatives still seem to be the standard for home users who don't do Arch.


It's unclear to me what exactly is wrong with Ubuntu. Can you please explain a bit more what you mean by "ruined by corporate interests" or "snap"?

Why do people say it’s not pure? How does it affect me? So far everything seems nicely enough documented and works mostly conveniently out of the box.


That's easy to find: search for "What's wrong with Ubuntu|Canonical?"


I've been on FreeBSD on my laptop for more or less a year; from this short experience, I would recommend it to anyone looking for a stable and reliable system. I haven't used it on servers yet, but it has nothing to prove there; it's already used by the biggest.


NixOS stable or good ol' Debian stable. Both are smooth, but NixOS is just such a joy to configure, with generic hardware and software configurations, yet you don't get the feeling that everything has to be manual like on Gentoo or Arch.


I've been running FreeBSD for all my production stuff for 20+ years. Works great, stable, consistent, reliable. Give it a try.


The BSDs are criminally underrated operating systems. Another great thing about FreeBSD is that there is robust support for compiling from source (the ports system) or using binary packages. It also fully supports ZFS out of the box.

I also highly recommend FreeBSD's cousin OpenBSD if you have supported hardware. Its default out-of-the-box configuration is extremely secure and bloat-free.


Void Linux. For personal use, I switched to it a couple of years ago after I had some weird issues on Ubuntu. Very stable, everything works as expected: the OS is just there in the background, almost unnoticed, as an OS should be.


I do not know what a "Pro agenda" means, but you are looking for a Linux? To me the kernel itself is highly "corporatized".

So, I would try OpenBSD or maybe FreeBSD. Note, for OpenBSD, Nvidia Graphics are a no-go.


My experience with Linux enterprise deployments is not as thorough as my experience using it for desktop & debug environments (every Linux is pretty much the same for a debug environment if you can handle the package update nightmare). I use Arch btw :-)

For production, take a look at SUSE (3 production servers with no hiccups, good support and no particularly weird design decisions), CentOS Stream after the RHEL debacle (I have 1 deployment on client request, no complaints or production issues so far), and of course good old Debian if you want things familiar from Ubuntu.


For what it's worth, I was expecting this to be mostly about desktops. Presently I'll throw some love to MX Linux, which appears to be a good example for "Just works."

Broadly, this is feeling more and more like a cycle that one can figure out how to "ride"? As before, for me it has been e.g. Lubuntu, Xubuntu, LXLE, etc. Basically something like "find something Debian-ish" that there's a little bit of, but not TOO much, new energy behind, from people who just want something simple.


it's a very subjective question, depending on stuff like how much you care to update the system, but I'd say:

you might like https://mxlinux.org/ which has seen growing popularity, it's based on debian stable, but with more up to date software, and ability to use newer kernels in case you need better hardware support for very new hardware

you might also just run normal Debian, if you don't care for the latest versions of everything... Debian is what Ubuntu is based on, and Ubuntu is a very close copy of Debian, except Debian's release cycles are very long, so you'll be running old software all the time if you choose Debian's stable branch... Ubuntu picks packages from Debian's testing branch, verifies they're good enough, and pushes them to Ubuntu users, while Debian stable is trying to be "super ultra mega stable", so it takes a loong time until everything's sufficiently tested... but using Debian's testing branch is pretty fine (unstable is often fine too, but might break sometimes)

and another contender: I feel like OpenSUSE is criminally underrated... as they get their money from the enterprise side of things (paid support for Linux servers, etc., much like Red Hat and others), they don't have any reason to mess with what works, and they've been around for, like, over a decade, I forget how long, but a looong time... it would be a bit different, though, using RPMs instead of DEBs and such, but one very useful feature is their built-in YaST control panel which allows you to, via either a GUI or TUI interface, do a lot of common tasks, without having...


You seem to want the real thing, that is, Debian.

However, if you need something really small and fast, mostly for VMs or servers, although it can also be turned into a full-fledged, still very resource-efficient desktop, also take a look at Alpine Linux. Musl makes all the difference there.

Then, should someone with no prior Linux experience ask you to install a desktop for them, I would go with Manjaro, which to me is the most straightforward, while still not bloated, for new users.


NixOS. Declarative config, so it's easy to reproduce, and very easy to create a custom ISO; nothing like the preseed/kickstart nightmare.
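
For example, baking a custom install ISO from a declarative config is a single build (the iso.nix path is hypothetical):

    # build a custom NixOS ISO from a configuration file
    nix-build '<nixpkgs/nixos>' -A config.system.build.isoImage \
        -I nixos-config=./iso.nix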

The Nix language is as (in)digestible as Haskell, so Guix System might be a wiser choice, but unfortunately its development is steered too much toward HPC/academic usage and misses some things needed, IMO, to be ready for servers and desktops in a generic computing environment.

In 2023 NO CLASSIC distro or OS is wise IMVHO.


I've been using Arch Linux for several years now. It has a rich ecosystem of packages, a wiki covering many aspects of the system (you might be using the wiki even if you came from another distribution), and the AUR fills the hole for whatever packages aren't included in the official repos. All packages are kept up to date, so it's great for personal usage where the latest packages are preferred over stable ones.


I just recently decided I’ll be switching all my desktop computing to fedora.

The XFCE spin is okay, everything I need is in the repo, and for everything else there's Flatpak.

F@@k snap.


I will never understand why people prefer Debian's old packages. When I was using it, I was constantly looking for features in newer versions, because they actually are useful and make things possible which weren't easy before. Also, installing anything out of tree is just a huge mess. And when I was on testing, things would fall apart almost every week.


I have been liking Pop[1] on desktop, it's pretty functional out of the box, doesn't stuff snaps down your throat (but makes them, and Flatpak, possible if that's your thing). On servers Debian is familiar-feeling but more straightforward for me.

1: https://pop.system76.com/


I've just got this one

________________________________________

    root@ansible:~# apt full-upgrade
    Reading package lists... Done
    Building dependency tree... Done
    Reading state information... Done
    Calculating upgrade... Done
    The following security updates require Ubuntu Pro with 'esm-apps' enabled:
      imagemagick ansible libopenexr25 libmagickcore-6.q16-6-extra
      libmagickwand-6.q16-6 imagemagick-6.q16 libmagickcore-6.q16-6
      imagemagick-6-common
    Learn more about Ubuntu Pro at https://ubuntu.com/pro
    The following packages have been kept back:
      python3-software-properties software-properties-common update-notifier-common
    0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
________________________________________

and to have those security upgrades I see I have to pay $500 per year!!!!

wtf!

I'm going to remove all my comments from Ubuntu help.


I'm going to slightly subvert the spirit of your question, because I think this is exactly what you referred to when you said "ruined by corporate interests", but Fedora Desktop would be my go-to Ubuntu replacement. Red Hat are very hands-off (I really don't think the analogy of it being a testbed for RHEL is accurate) and the stability is excellent. Additionally, SELinux is preconfigured, and flatpaks allow a very basic sandbox implementation via GUI (though it is far from perfect).

OTOH, if you don't give a shit about security (SELinux and AppArmor in particular), then try NixOS. Extremely powerful, idempotent system config. You can do GNU Guix if you prefer, but I have yet to be convinced by the Guix crowd that it is superior, and since it has fewer apps I'm generally Nix-oriented.


I would suggest Debian (for ease of migration) or Red Hat Enterprise Linux (for industries with heavy compliance requirements).

However, with Red Hat you will most likely run into the "corporate interests" factor again.

Maybe you can think of a medium-term and a long-term solution. Begin with Debian, and then look into the feasibility of using FreeBSD. That powers some of the largest services, such as Netflix and WhatsApp.

However, FreeBSD does not use Bash but tcsh, so you will want to ease the migration and administrative pains by looking into transitioning any ops scripts and the like, but FreeBSD does provide a package management solution that holds its ground against apt/aptitude.

This is by no means a comprehensive comparison, just a few insights that might serve as starting points.


Slackware 14.0 was released in 2012.

Last update was on January 19, 2023, a security update on sudo. Check for yourself:

http://slackware.uk/slackware/slackware-14.0/ChangeLog.txt


I think Pop!_OS is also a good choice if you want it only on the desktop and not servers.

They are doing great work with their shell on top of GNOME, and their own DE is on the way. Even though they are a company, they take the community seriously.

I can't wait to have a DE with screen independent workspaces.


Valve's reliance on Arch means I think it will continue to see interest and active development.


You could give Linux Mint a try for desktop. With the regular edition you should get pretty much what Ubuntu offers, with fewer "anti-features".

There is also a Debian-based Linux Mint (LMDE) which might be preferred to regular Debian for desktop use.


It seems most people agree that snap is a good idea, but not well implemented. I for one use snap packages when they are available, for better security: if there is a vulnerability, say in Firefox, the malware will be jailed in its container. That's one more layer. Sandboxing is desperately needed in desktop operating systems.

Another advantage is portability: export a Nextcloud snap and import it somewhere else!

Sure, snaps are slower and take some space, but with today's CPUs and storage, that's not a dealbreaker. BTW, if you don't like snap, you can just remove it. What else would be the problem?

Check out also Pop!_OS, based upon Ubuntu. Ubuntu itself has excellent hardware support.


Flatpak is just the better snap. The only reason Ubuntu doesn't use it is that it isn't controlled by them.


Flatpak is probably better overall. It loads faster and isn't controlled by a company, although it takes more space and is less supported on servers.


I have about the same feeling. Leaning towards Debian. My infra allows automatic build/deploy on dedicated servers, and I made sure that it works on Ubuntu, Debian, Arch and Mint. Not considering Arch for production, obviously.


I have been using Linux since the late 90s, and honestly, there's no correct answer. I use AlmaLinux for serious stuff. I use Intel's Clear Linux on my workstation. I use Slackware for hobbyist stuff.

In your case, I would suggest taking a serious look at Debian since you're already using it via Ubuntu. While I do not care for rule by committee, Debian has a rather good track record. You may also want to look at Pop!_OS

https://pop.system76.com/

Beyond Debian and Pop!, Arch is quite common. I never cared for it personally, but many people (and many whom I respect) love it.


NixOS is by far the best choice if you want an os to learn for the next 20 years.


I love NixOS and was a nixpkgs committer. I think a lot of ideas from Nix will get adopted by the wider Linux ecosystem. But I think it is far more likely that immutable systems based on OSTree, like Silverblue and Fedora CoreOS will become the OS for the next 20 years. It is much more familiar to most people and provides many of the same benefits. Especially now OSTree is adopting Docker/OCI images as a transport [1], the line between building systems and building containers is blurring. A lot of organizations have plenty of institutional knowledge in building Docker containers and OSTree with containers makes system building pretty much the same.

[1] https://coreos.github.io/rpm-ostree/container/
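
To make that concrete, rebasing a machine onto a container image looks roughly like this (image reference is an example; see the rpm-ostree container docs above for the exact syntax):

    # switch an OSTree-based system to track an OCI/Docker image
    sudo rpm-ostree rebase ostree-unverified-registry:quay.io/fedora/fedora-silverblue:39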


I had a similar problem back around 2015 when I could not dist-upgrade the Ubuntu GNOME install I was using. Switched to Arch, which was painful to install (remember LFS?), but I never regretted switching to it.

For customers I still recommend going with Red Hat or Debian though, because the LTS support is great and the distro keeps working with really low maintenance overhead.

Whereas on Arch you can always fix it, but when it breaks you can't do that on a server without a terminal/monitor/keyboard, so I wouldn't recommend running it on server infrastructure if it's the host system.


Using Nobara (AKA a copy of GloriousEggroll's hard disk - jk!), which is Fedora-based with tons of kernel patches and packages for the new gaming and video editing stuff. Really good stuff. Things like Steam being able to use rumble via xone, which never worked for me, basically work out of the box.

On my servers I use Manjaro - not a personal choice, but because those are Pine-based hardware and manjaro-arm seems to be very well maintained. The actual software runs in LXC containers, which are mostly Alpine and Ubuntu.


As a personal/work OS, I ran Fedora for eons and then moved to Pop, which was great. However, while waiting for the next iteration (22.10 omitted), I briefly distro-hopped again. Fedora was fine, but I had various smaller issues that I knew I wouldn't have on Debian/Pop. Tried Alma, but same thing. In the end I defaulted to Ubuntu 22.04 LTS + Lambda Stack. So far so good, albeit a bit boring, which is fine in my book. I'll see where Pop and Alma go next. Watching those two basically; until then - Ubuntu+Lambda.


Keep running Ubuntu for servers; even consider using Pro if the pricing suits your use case. 10 years of support is pretty good these days. As for desktop: Arch Linux if you like tinkering and need "bleeding edge" software; Fedora Linux if you want to be on the "leading edge" and want a system that you install and it just works; Debian for something more stable. Linux Mint is decent too, but their plan for Wayland is not known yet, if that's something that's important to you.


Fedora - stable and predictable. A perfect distro for a home/work PC.


Meh. I'm going back to BSD. Though I have to admit, the ease of configuring third-party proprietary drivers has sort of delayed my departure from Ubuntu.

I'm real close to throwing out my Intel NUC + Ubuntu and starting from scratch. It's just too unstable. I tried going back to SuSE, but alas, it's everything I don't care for in Ubuntu, only with different minor nits. Maybe RYO leenucks where I build all my tools from scratch.


I've moved to Fedora for desktop use in the last little bit. It's been quite nice, I have to say! My server stock is a bit all over the place at the moment, as I'm experimenting with a few things. I've got:

* Ubuntu 22.04

* Debian Bullseye

* Oracle Linux 8 (This was more accidental than anything...)

* Armbian

I haven't settled on where I'll be going but I think in the end I'll mostly be moving towards Fedora for everything. Their server OS is pretty nice too!


> without being ruined by corporate interests

I cannot begin to address the issue of "agendae" of various Linux and BSD distro builders, as any number of disagreements have arisen about almost any of them for many reasons (e.g. systemd, dbus, proprietary software, personality differences, etc.).

I'll just add that if you have time, set up several Linux and/or BSD OSes in VMs and take them each for a spin to find what suits you.


I used to like Ubuntu, but gave up on it for similar reasons. I have an old machine, so I find Ubuntu too bloated. I've been on Debian Stable for a few years now, using MATE. It's fuss-free. I don't find the software too old, but I use a few bits that I've compiled myself. Linux Mint has a good reputation, so I would consider that if I was thinking of moving away from Debian.


I switched to Fedora once the snap-pushing started getting too annoying. It works well, packages are up to date and not modified from upstream.


I’m slowly gravitating to that, although I mostly deploy my servers as usual and remove snapd. My breaking point was being forced to install Firefox via snap.
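
For reference, my de-snapping recipe is roughly the following (the pin mirrors what Linux Mint ships as nosnap.pref):

    # remove snapd and pin it so apt won't pull it back in as a dependency
    sudo apt purge -y snapd
    printf 'Package: snapd\nPin: release a=*\nPin-Priority: -10\n' | \
        sudo tee /etc/apt/preferences.d/nosnap.pref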

Anything with a GUI now runs Fedora (which can be annoying sometimes because it feels unfinished, but largely works), and I’m using Debian for other things.


2 options: 1) Gentoo 2) FreeBSD

Which you choose depends on your needs. I'd try FreeBSD first, and if you find it lacking then I'd go for Gentoo. BSD seems like a well-kept secret, but it won't work for everyone. They do have stellar docs and a friendly community, but it's not Ubuntu.


For me it was Arch Linux. Listen, Ubuntu: you rely on third-party packages and don't really bother to fix them, you just bounce users upstream. Yes, you are based upon Debian. You cannot fix "their" bugs. You should, though, because those are your users' bugs.

Arch Linux is far from perfect, but nearer to it than you, Ubuntu.


Try building a Linux system from scratch! It won't be as functional immediately, but you'll learn a lot.


I always had bad experiences with Ubuntu upgrades to the next major version.

Distros with rolling releases seem to do much better.


Major kernel upgrades occur in major versions.


Linux Mint is the next easiest step from Ubuntu.

Then Debian.

If you want to learn more, try Arch.

If you want to try something avant garde, maybe Guix or NixOS.


Nitpicking: Not for 20 years, but for 18.5 years at most - Warty Warthog, 4.10, came out in October 2004.


I’ve been an Ubuntu user for a long time, but I’m moving to Fedora for my laptops and CentOS Stream for my servers. Canonical was great and added a lot of nice things to Ubuntu and contributes a lot with upstream Debian, but my patience with things like snaps has limits.


> breaking apt upgrades on 20.04 LTS

Could you be more specific? I upgraded from 20.04 to 22.04 without any issues.


Attempts to apt upgrade on my 20.04 workstation are met with a notice that certain security updates require a Pro subscription with "esm-apps" enabled. My desktop has been nagging me daily with notifications that 20.04 will no longer be supported this year. This despite 20.04 being an LTS release that should provide security updates for another couple of years.

Sure, this might be a simple bug in their apt implementation, but it's been present for weeks now. It's a sign of corporate-driven rot, and my 30 years of experience in the industry tell me that this is the tip of an iceberg that could sink their Titanic. I want off the boat before it sinks.


It's really hard to find a clear answer about what's going on, too. After looking at their website [1] and a forum thread [2], I think they're going to hold updates that would have gone into 'universe' hostage until you sign up for Ubuntu Pro.

The thing I really don't get is that I always assumed 'universe' was mainly driven by third parties. Who's writing the security patches? I can't imagine Ubuntu is capable of doing that for 23k+ apps, so I'm guessing the maximum they're doing is packaging and the minimum is simply giving us a new repository name that's paid instead of free.

Plus, how does "best effort" work for LTS updates if the finished patch/update is already sitting in the ESM repo? There's not really any extra effort required to push those into another repository, so I think the support for 'universe' will become arbitrary.

My hunch is that 'universe' updates are going to cease to exist in Ubuntu if you're not paying for Ubuntu Pro. I have no definitive proof of that, but it's not clear what they're trying to do and I tend to assume the worst in those situations.

That forum thread feels like they're being intentionally vague about what they're trying to do which I don't like.

Can anyone here explain what's actually going on with updates in Ubuntu now?

1. https://ubuntu.com/security/esm

2. https://discourse.ubuntu.com/t/why-is-extended-security-main...


Not the original poster, but I just had issues where normal operation of Ubuntu desktop (literally just installing updates after installing a new distribution) completely broke the OS - something crashed during apt update/upgrade and nuked /var/lib/dpkg and all its data, which broke everything in a non-recoverable way.

It did it twice, but I couldn’t figure out why.


Just putting in two cents for OpenSUSE. They have one of the best rolling release distros along with regular releases. I used it for a while after Manjaro ate itself. It's been around almost as long as Red Hat, so they've managed to survive.


RHEL/Rocky for server / Enterprise.

Debian is another option that I also use for my personal workstation. Debian is nice in that there won't be any crazy profit driven decisions. It is super stable. I think it is a much saner option than Ubuntu.


Gentoo solved literally all my problems with Ubuntu. There's a bit of a learning curve to use it properly on a server farm, but it's worth it, especially if you have any even slightly atypical performance constraints.


Debian.

From Debian we came and to Debian we will return. It "just works" without any corporate plans which need to be implemented and for which users are to be used as pawns in the chess game of market domination.


I'm going to throw my hat in the Debian ring like a lot of other commenters. The whole snap thing annoyed me too. I use Debian solely on desktops now; I do have some Ubuntu servers, but I'm slowly moving what I can to Debian.


Another vote for Debian. Depending on the workload, FreeBSD is excellent as well.


NixOS.

Not going back to imperative, non-atomic updates, a.k.a. "I don't know what is configured on this machine" or "it just broke completely apart in the middle of the update and I need to clean up the mess right now".


An alternative to Ubuntu will emerge when competent people have reached the limit of their patience. Personally, I am more and more tempted by Debian. Ubuntu has become a dangerous tool.


Debian. It's what Ubuntu uses under the hood (as you very likely know) so the transition will be smooth. For the GUI, my absolute favorite is i3wm. But LXDE is more user-friendly.


I was in a similar situation and was distro hopping for a while. I ended up running Debian on most of my systems: testing with KDE for desktop, and stable on servers.


A lot of people mentioning Debian. Great distro. If you're comfortable with Debian, you might as well check out Arch too. Can't go wrong with either one.


Because of this thread, I switched back to Debian on my laptop after 8 years on Ubuntu, and everything works nicely. Thanks HN!


I moved to Debian after getting fed up with the fact that I had to upgrade the server on an almost daily/weekly basis. Couldn't be happier!


> ... breaking apt upgrades on 20.04 LTS in order to push their Pro agenda

I'm running 20.04, and this is news to me. What is the nature of the breakage?


Have a look at FreeBSD. The documentation and the fact userland and kernel are developed and released together changed my whole admin experience.


I like Debian for servers and Mint for desktop.


Red Hat for production. CentOS for personal. Actually, I also install snapd for those few sod apps that need frequent updates.


I don't see the problem with snaps. They just work for me. The Firefox snap is very fast and doesn't have issues.


Same here. Chromium, Firefox, Beekeeper Studio, they are all okay.

There was a bug in the Node.js snap involving the vm module, but it seems like a pretty edge case to me.

For most normal user use cases, snaps are more useful than cumbersome.


I use Debian for my personal workstation. I can't get basic shit like Bluetooth to work, but that's all right lol.


Do you have any issues with Debian? It is very stable and is the base for Ubuntu and many other OSes.


Debian. It's always been Debian.


I run my personal servers on Debian and OpenBSD. My OpenBSD machine just hit 12 years of uptime.


If you want control of and trust in the system, you're choosing between Arch and NixOS.


Debian has always been the solid choice I ultimately ended up with over the last two decades.


Gentoo. No, I'm serious.


Arch Linux


To add to the noise: I like Pop!_OS. It's been a great desktop distro.


Red Hat for work, Debian for my RPi and personal cloud services.


Why Ubuntu in the first place, when there's Debian?


What about Linux Mint?


Pop!_OS for the desktop? I think Debian for the server?


ChromeOS is the future of general-purpose computers, IMO. Browsers only get more capable, other OSes continue to bloat, and it's fundamentally easier to run securely.


Just skip to the end and use macOS.


Debian or RHEL for the server

Fedora for the desktop.


Arch, Gentoo


You could try NixOS if stability is that important. It requires time investment, but it saves time too.


I'm going to bet that whatever you choose, you'll be using Ubuntu in six months' time.


I use Alpine Linux on my servers. It's actually quite good.


Debian for server

Pop!_OS for everyday use


Hannah Montana Linux


Fedora/CentOS


openSUSE Leap


Fedora?


Debian.


NixOS


Kubuntu.


I've used Kubuntu for years and really like KDE, but Kubuntu has the same frustrating snap issues, so it's not a solution here.
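
For what it's worth, Kubuntu (like Ubuntu) can be "de-snapped" if you're willing to fight the defaults. A rough sketch; the Mozilla Team PPA here is one common source for a deb-packaged Firefox, not the only option:

    # see what's currently installed as a snap
    snap list
    # remove the firefox snap and snapd itself
    sudo snap remove firefox
    sudo apt purge snapd
    # install Firefox from a deb source instead, e.g. the Mozilla Team PPA
    sudo add-apt-repository ppa:mozillateam/ppa
    sudo apt install firefox
    # note: without an apt pin preferring the PPA, the archive's transitional
    # firefox package may try to pull snapd back in

Whether that's worth the ongoing hassle versus just running Debian with KDE is another question.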


macOS has been breaking my heart lately. Some of it is a consequence of being on ARM instead of x86, but some of it is the OS itself. For example, I don't have to tell this group that Docker on macOS is the worst (versus Linux and Windows). The theory behind getting a Mac is that it lasts ~5 years before I really feel the need to buy another one, but this time my 2020 M1 MacBook has only made it about 2 years before I'm now trying to unload it. 16GB just isn't enough for Docker, the IDEs, and the other apps I have running. It sucks because I followed the "max out the Mac because you can't upgrade it later" rule. 16GB was as high as I could go, and hell, 16GB was enough for my 2015 Intel MacBook.

Anyway, because of the above and the lack of Nvidia/CUDA support for ML/NLP stuff that I do for work, I built a desktop and run Linux on it. I’ve been jumping around distros over the last few months and here’s what I’ve learned:

Ubuntu Desktop has always been on the buggy side for me, going all the way back to 10.04 LTS. I guess taking Debian and GNOME and plastering your own brand on top will do that. The most stable desktop experience has been Fedora Workstation, which uses pure GNOME as the desktop environment. I wanted a distro with a stable desktop environment (most of my woes pertain to having a 5K display) that supports fractional scaling (200% is too big for a 4K/5K display, IMO) and runs all the stuff I want to run. A rolling release would also be nice, because I'm tired of running into issues where a guide I'm following works for Ubuntu 20.04 but not 21.04, for example, or a package (like the AWS VPN Client) is made for 20.04 but doesn't work out of the box with any newer version of Ubuntu.

So, brass tacks: a stable (and modern-looking) desktop environment out of the box plus a rolling release = Manjaro (Arch) and openSUSE Tumbleweed, both with KDE Plasma. There are other distros that meet these criteria, but they're niche or haven't been around for very long.

I’ve had a good experience with both Manjaro and OpenSUSE Tumbleweed, but OpenSUSE had some annoying UI bugs that I don’t experience with Manjaro, so… I’m just sticking with Manjaro + KDE Plasma.

The only downside is packaging: everyone makes DEBs, but on Arch you'll often have to download a DEB or RPM, convert it, and hope it works without any further action on your part. This also means you can't have apps auto-update like those that live in APT or Yum repos. Arch's AUR technically supports updating the apps you have installed; however, for unofficial packages (e.g. those that someone just converted from a DEB or RPM and uploaded to the AUR), you'll be left behind the latest version until the maintainer of that AUR package manually updates it with a newly converted DEB/RPM. I think this is an acceptable trade-off considering I'm free from Ubuntu and get a rolling release for the OS/desktop environment. Heck, what's the point in having your apps auto-upgrade if the OS and DE only get updates a couple of times a year?
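
On the conversion step: the usual tool is debtap, which itself lives in the AUR. A rough sketch, assuming an AUR helper like yay is already set up and "some-app.deb" is a placeholder for whatever you downloaded:

    # debtap converts a .deb into an Arch package
    yay -S debtap
    # sync debtap's package-name databases first (once, and after big updates)
    sudo debtap -u
    # convert, then install the resulting package with pacman
    debtap some-app.deb
    sudo pacman -U some-app-*.pkg.tar.zst

It's interactive and occasionally needs manual dependency fix-ups, which matches the "hope it works" experience above.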


Debian (server, workstation), Arch Linux, Manjaro (desktop)



