Coming from Windows, I'm a little confused about how this works. Is he reinstalling Linux/Ubuntu every week on all his machines? Or is it possible to “upgrade in place” just the kernel and leave your files/data alone? If the latter, is there a good guide for how to do that, and for a homelabber would that be a good idea to avoid security bugs?
In a classic Linux distro, the OS components are much less tightly coupled than in Windows. You can easily update the Linux kernel without updating all the system libraries, daemons, configs, tools, and applications, much less your user configs and data. This happens every time "apt-get upgrade" installs an upgraded kernel package, which can be more often than monthly, depending on the distro. All the other components can be updated separately too: the OpenSSL libraries, the init system binaries, tools like git, etc. You can swap in your own alternative for any component, if you know how. The Linux kernel is one of the easiest components to swap because the Linux syscall ABI is very backwards compatible. (Linux compatibility challenges are about user-land libraries, which are all separate and subject to the development policy of their authors, modulated by the update policy of the distro.)
I find modern Windows and macOS updates frustratingly opaque and slow. Linux distro updates, on popular distros like Debian and Arch, are one of my favorite parts of using the system. It'll just install updates for, say, 200 separate packages (libraries, tools, etc.): bang, bang, bang, less than a minute, done. And with the absurdly high speed of today's CPUs and storage, why should it take longer? What are Windows and macOS even doing? I will accept that Linux has some drawbacks and disadvantages, but the system package managers have been fantastic for about two decades now.
Installing a kernel on Ubuntu is simple: given that you already have a kernel .deb, it's a single "apt-get install" command. You can create your own apt repo or use the one provided by the author: https://github.com/zabbly/linux#installation
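To make that concrete, here's a minimal sketch of the workflow on a Debian/Ubuntu box. The package filename is hypothetical; use whatever .deb you actually downloaded (or install from a configured repo such as the one linked above):

```shell
# Install a standalone kernel .deb -- apt resolves dependencies
# and updates the GRUB menu for you.
sudo apt-get install ./linux-image-6.8.0_amd64.deb

# The new kernel only runs after a reboot:
sudo reboot

# After rebooting, confirm which kernel you're actually on:
uname -r
```

Your files, configs, and all other packages are untouched; only the kernel (and its modules) changes.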
Be aware that upgrading the kernel often messes with graphics drivers, especially Nvidia's. In the best case you'll have to unload and reload the Nvidia driver; in the worst case your driver just stops working.
In the _most common_ case (with nvidia), your graphics stops working entirely and you spend (at least) an hour in a virtual console trying to undo what you did. Fun times.
As the other replies state, yes, it's standard practice to update a kernel by installing the newer one, without having to re-install or update all the other packages. Kernel updates are handled the same as any other package update, with the only difference being that a system reboot is required. (There's also the possibility of live kernel patching, so that a reboot isn't needed, but that's typically a paid-for service with enterprise Linux.)
Also, you can have many kernels installed concurrently and select which one to boot from at the GRUB boot screen. This is mostly used when you update the kernel and suddenly find on rebooting that something has gone wrong (e.g. necessary drivers not included in the initial ram disk - initrd), so you can reboot and select the previous working kernel to boot the system and resolve whatever issue you had.
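A quick sketch of how you'd inspect that situation on a Debian/Ubuntu system (paths assume GRUB; package names below are examples):

```shell
# List every kernel image currently installed -- each one is
# independently bootable from the GRUB menu:
dpkg --list 'linux-image-*'

# See the boot entries GRUB generated (older kernels usually sit
# under an "Advanced options" submenu):
grep -E 'menuentry ' /boot/grub/grub.cfg

# Once you're confident a new kernel works, old ones can be
# removed like any other package, e.g. (hypothetical version):
# sudo apt-get remove linux-image-6.5.0-15-generic
```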
> Or is it possible to “upgrade in place” just the kernel and leave your files/data alone
Yeah. I've never heard of reinstalling the OS just to get a newer kernel.
> is there a good guide for how to do that
I think it's best to continue to use your distro's package manager to handle actually installing that kernel, so the instructions would all be distro-specific. Some distros make working with a custom kernel easier than others.
> for a homelabber would that be a good idea to avoid security bugs?
Depends on your distro. I would expect major ones and enterprise-oriented ones to do a good job of backporting security fixes to the older kernels they run. If they do a really good job of this, it might even be more secure on the whole.
But yeah, just using the latest stable kernel is probably the simplest way to ensure that you have the latest security fixes. (It'll also ensure you have the latest undiscovered security bugs. ;)
I was also confused since to my understanding "mainline" is usually the release candidates, i.e. Linus's master branch. Here "(stable) mainline" appears to mean the latest stable release without distribution-specific patches, what I'd call a "vanilla kernel".
"Mainline kernels" also gets used to refer to kernels built from Linus's repo in general, as a way to distinguish them from kernels built from the OS vendor's / hardware vendor's branch.
For example, when people talk about phones or tablets having mainline support, they mean that Linus's tree has all the drivers etc. for that hardware, and using the hardware vendor's arbitrary kernel drop isn't needed. They don't necessarily mean that the support is only in master and not in a stable branch. E.g. https://mainline.space/ and https://not.mainline.space/
The thing is you don't want Linus's kernels unless you are a kernel developer or testing something. Greg KH maintains the stable kernels. You want his branch.
I wonder if he has any insight on why the ubuntu kernel is so heavily patched. How much of it is legitimate customization for things like ZFS, and what are in his opinion unnecessary changes?
I run the stable kernel, which I build myself. There is still the occasional regression. About a year ago it had a regression in the Intel graphics driver which broke graphics for my Haswell chip. A patch was available but this wasn't merged for months. Luckily Gentoo makes it super easy to apply custom patches so I did. IMO if you want to run a stable or mainline kernel yourself you might as well build it yourself too.
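For anyone curious, the Gentoo mechanism being referred to is the user-patches directory: any patch dropped into `/etc/portage/patches/<category>/<package>/` is applied automatically at build time. A sketch (patch filename is hypothetical):

```shell
# Gentoo applies user patches from this directory automatically
# when the package is (re)built:
mkdir -p /etc/portage/patches/sys-kernel/gentoo-sources
cp fix-i915-haswell.patch /etc/portage/patches/sys-kernel/gentoo-sources/

# Rebuild the kernel sources; the patch is applied during emerge:
emerge --oneshot sys-kernel/gentoo-sources
```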
The Debian wiki explanation[1] is technical, but it's definitely shorter than a book, and if you're running your own kernels it shouldn't be too difficult.
It gives you the "here, run these commands" version if that's what you want.
The truth is, it doesn't necessarily have to be a book long, but much like setting up PKI, it certainly can be.
What we really need is simple tooling that handles just the case of "I want to sign kernels for my own machine(s)".
Of course, some tools do exist for this case, but I'm not aware of one that is totally generic. Lanzaboote for NixOS seems interesting (disclaimer: have not tried.)
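One existing tool in this space is sbctl, which targets exactly the "sign kernels for my own machine" case. A rough sketch of its workflow (I'm not claiming it's the only or best option, and the kernel path is an assumption):

```shell
# Generate your own Secure Boot keys:
sbctl create-keys

# Enroll them in the firmware, keeping Microsoft's certificates
# so option ROMs and dual-boot Windows keep working:
sbctl enroll-keys --microsoft

# Sign the kernel; -s remembers the file so it gets re-signed
# automatically on future updates:
sbctl sign -s /boot/vmlinuz-linux

# Check what's signed and whether Secure Boot is happy:
sbctl verify
```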
Wait, wtf. There's this in addition to lanzaboote? But no mention of it? (And at one point there was another new list with bootspec support, maybe still is)
Please, people, if you release software that overlaps or competes with another existing in the space, take the 3 minutes to write a comparison note or "why this exists". Please.
I will admit though, it being from Determinate Systems makes me wonder why they built it. They're not some hobbyists, they're a company built around making Nix tooling - and one of their most publicized tools, the Determinate Nix Installer, is actually a tool that, obviously, overlaps with existing tools, but has a very clearly stated objective and reason to exist. It seems very likely to me that there is, in fact, a reason for why they built this. If I had to guess, it's probably meant to be simpler, more robust, more elegant, etc. but I'd love to hear about it.
Unfortunately, unlike many of their projects, they don't seem to have a blog post yet for it.
I mean, at the very least, I do this because I make/release things because there's a gap to fill or an itch to scratch. And if I'm releasing something it's because I'm hoping it will be useful to others.
Spending a fraction of the energy it took to author something on detailing its reason to exist seems like a natural conclusion.
Anyway, no one owes me anything but I trust that Determinate Systems has a grander vision than I can see, I just want to be clued in, to be honest ;).
Ideally, signing a Unified Kernel Image (UKI) and enrolling its key in the UEFI makes more sense: having Secure Boot verify only the kernel is okay'ish (and it does work: I tried modifying a single bit of my kernel and the UEFI refused to boot it), but it's not that great if the attacker can still modify the initrd etc.
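A sketch of that approach using systemd's ukify plus sbsign: the kernel, initrd, and cmdline get bundled into a single EFI binary, and the signature then covers all of it. Paths and key filenames here are assumptions:

```shell
# Bundle kernel + initrd + cmdline into one UKI:
ukify build \
  --linux=/boot/vmlinuz \
  --initrd=/boot/initrd.img \
  --cmdline="root=/dev/sda2 rw" \
  --output=/boot/EFI/Linux/my-uki.efi

# Sign the whole image with your enrolled db key, so a modified
# initrd fails verification just like a modified kernel would:
sbsign --key db.key --cert db.crt \
  --output /boot/EFI/Linux/my-uki.efi.signed \
  /boot/EFI/Linux/my-uki.efi
```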
I used to use https://github.com/berglh/ubuntu-sb-kernel-signing and the mainline tool from the cappelikan PPA; I think it worked even with DKMS modules such as the Nvidia driver. I've since switched to xanmod with Secure Boot disabled, so my memory is a bit hazy on that last point.
Ah, yes, I'm familiar with this. That's how "vanilla" kernel source supports Debian packaging. Lintian predictably complains, but I suppose that's to be expected.
No offense to the maintainers of the 'performance' and 'gaming' kernels, but they're usually not that well maintained. To be fair, I haven't tried ALL of them, but I have tried a lot of the popular ones. They usually configure 'odd' defaults that risk stability (in the 'staying up' sense as well as the data-integrity sense).
Same. I tried some, and even in games they didn't provide much of an improvement. Sometimes performing worse, even.
I just put some niceties like irqbalance and add a few lines to sysctl.conf for better latency at a cost of throughput (doesn't matter if you're just using the system as a desktop and not as a server). And that gives me a very nice experience overall.
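For illustration, the sysctl lines in question are typically of this flavor. These values are examples of the latency-over-throughput trade-off being described, not the commenter's actual config:

```
# Example /etc/sysctl.conf tweaks for a desktop workload.
# Flush dirty pages sooner and in smaller batches, trading peak
# write throughput for smoother interactive latency:
vm.dirty_ratio = 10
vm.dirty_background_ratio = 5
# Keep swapping to a minimum so the desktop stays responsive:
vm.swappiness = 10
```

Apply with `sudo sysctl -p` or on the next boot.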
Edit: Please, be careful. This is under a very well performing machine, take a look at replies for more information.
Oh, I was mostly describing network latency and file IO (cough), though it should apply to mouse polling as well; but bear in mind what another user has said. For mouse-specific concerns, I would take a look at the following tutorial:
By decreasing latency, you decrease throughput. Thus if you have a slow, old computer, you could end up dropping inputs entirely by decreasing latency, or make the system feel sluggish/laggy as it works through a backlog queue.
For older devices, the most gain would probably come from using a lower resolution input and a lower resolution output (to match what was available at the time). Throwing a 2k monitor on a graphics card from 2014 is probably going to be a barely passable experience (buffers weren't as big back then), same with mice.
Here's an intro to his project Incus (TIL: a fork of LXD):
https://stgraber.org/2023/08/10/a-month-later