Stable Linux mainline builds for Ubuntu (stgraber.org)
106 points by mariuz on Aug 25, 2023 | 57 comments



Excited to see what Stéphane builds next.

Here's an intro to his project Incus (TIL: a fork of LXD):

https://stgraber.org/2023/08/10/a-month-later


See also https://github.com/bkw777/mainline which offers a UI for installing kernels from the Ubuntu mainline kernel PPA.


You can install mainline builds natively from here: https://wiki.ubuntu.com/Kernel/MainlineBuilds


Don't the mainline builds lack ZFS support, though?


It's covered in the article.


It would be great if he gave some examples of the issues he encountered. I've just used the standard Ubuntu out-of-the-box experience and never had any issues.



Coming from Windows, I’m a little confused about how this works: is he reinstalling Linux/Ubuntu every week on all his machines? Or is it possible to “upgrade in place” just the kernel and leave your files/data alone? If the latter, is there a good guide for how to do that, and would that be a good idea for a homelabber trying to avoid security bugs?


In a classic Linux distro, the various OS components are much less tightly coupled than in Windows. You can easily update the Linux kernel without updating all the system libraries, daemons, configs, tools, and applications, much less your user configs and data. This is done every time "apt-get upgrade" installs an upgraded kernel package, which can be more often than monthly, depending on the distro. And all the other components can be updated separately, like the OpenSSL libraries, init system binaries, tools like git, etc. You can swap in your own alternative for any component, if you know how. The kernel is one of the easiest components to swap because the Linux syscall ABI is very backwards compatible. (Linux compatibility challenges are about user-land libraries, which are all separate and up to the development policy of those library authors, modulated by the update policy of the distro.)
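For example, on Ubuntu a kernel-only update looks roughly like this (the metapackage name is the stock Ubuntu one; just a sketch):

    # the kernel is just another package; see what's installed
    dpkg --list 'linux-image-*'
    # pull in the newest packaged kernel without touching anything else
    sudo apt-get install --only-upgrade linux-image-generic
    sudo reboot   # only needed to actually run the new kernel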

I find modern Windows and macOS updates to be frustratingly opaque and slow. Linux distro updates, on the popular/common distros like Debian and Arch, are one of my favorite parts of using the system. It'll just install updates for like 200 separate packages (libraries, tools, etc) bang, bang, bang, less than a minute, done. And with the absurdly high speed of today's CPUs and storage, why should it take longer? What are Windows and macOS even doing? I will accept that Linux has some drawbacks and disadvantages, but the system package managers have been fantastic for about two decades now.


Same. A long time ago, I envied Windows and Mac users. For the past decade, I mostly just pity them.


Installing a kernel in Ubuntu, given that you already have a kernel .deb, is simply a single "apt-get install" line. You can create your own apt package repo or use the one provided by the author: https://github.com/zabbly/linux#installation
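For example, with locally downloaded .debs (filenames are illustrative):

    # apt resolves dependencies for local files when given a path
    sudo apt-get install ./linux-image-*.deb ./linux-headers-*.deb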

Be aware that upgrading the kernel usually messes with the graphics driver, especially Nvidia's. In the best case you'll have to unload and reload the Nvidia driver; in the worst case your driver just stops working.


In the _most common_ case (with nvidia), your graphics stops working entirely and you spend (at least) an hour in a virtual console trying to undo what you did. Fun times.


This case ain't very common anymore; these days it's mostly Nvidia proprietary driver users who still suffer from it.

Intel and AMD don't have significant breakage when upgrading the Linux kernel.


Yeah, I should have mentioned this is specific to nvidia. Update: fixed.


As the other replies to this state, yes, it's standard practice to update a kernel by installing the newer one, without having to re-install or update all the other packages. Kernel updates are handled the same as any other package update, with the only difference being that a system reboot is required. (There's also the possibility of performing live kernel patching so that a reboot isn't needed, but that's typically a paid-for service with enterprise Linux.)

Also, you can have many kernels installed concurrently and select which one to boot from at the GRUB boot screen. This is mostly used when you update the kernel and suddenly find on rebooting that something has gone wrong (e.g. necessary drivers not included in the initial ram disk - initrd), so you can reboot and select the previous working kernel to boot the system and resolve whatever issue you had.
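For example, on a GRUB-based Ubuntu/Debian setup (the entry name is just an example and varies per machine; grub-reboot assumes GRUB_DEFAULT=saved in /etc/default/grub):

    # list the kernels currently installed
    dpkg --list 'linux-image-*'
    # boot an older entry just once on the next reboot
    sudo grub-reboot "Advanced options for Ubuntu>Ubuntu, with Linux 6.1.0-old-generic"
    sudo reboot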


> but that's typically a paid-for service with enterprise Linux

Ubuntu Pro is free for five machines and includes live-patches for security updates. Non-security updates still require a reboot.
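Roughly, from memory of the Ubuntu Pro client (double-check the docs; the token is a placeholder from your free personal subscription):

    sudo pro attach <your-token>    # attach the machine to your subscription
    sudo pro enable livepatch
    canonical-livepatch status      # see what has been live-patched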


> Or is it possible to “upgrade in place” just the kernel and leave your files/data alone

Yeah. I've never heard of reinstalling the OS just to get a newer kernel.

> is there a good guide for how to do that

I think it's best to continue to use your distro's package manager to handle actually installing that kernel, so the instructions would all be distro-specific. Some distros make working with a custom kernel easier than others.

> for a homelabber would that be a good idea to avoid security bugs?

Depends on your distro. I would expect major ones and enterprise-oriented ones to do a good job of backporting security fixes to the older kernels they run. If they do a really good job of this, it might even be more secure on the whole.

But yeah, just using the latest stable kernel is probably the simplest way to ensure that you have the latest security fixes. (It'll also ensure you have the latest undiscovered security bugs. ;)


The only reason I’d use Ubuntu these days is that their shipped kernels come with ZFS. Otherwise I’m a Fedora/CentOS boy.


Checking your GitHub repo tells me: "This branch is 2273 commits ahead, 14587 commits behind torvalds:master." ????


I was also confused since to my understanding "mainline" is usually the release candidates, i.e. Linus's master branch. Here "(stable) mainline" appears to mean the latest stable release without distribution-specific patches, what I'd call a "vanilla kernel".


"Mainline kernels" also gets used to refer to kernels built from Linus's repo in general, as a way to distinguish them from kernels built from the OS vendor's / hardware vendor's branch.

For example when people talk about phones or tablets having mainline support, they mean that Linus's tree has all the drivers etc for that hardware and using the hardware vendor's arbitrary kernel drop isn't needed. They don't necessarily mean that the support is only in master and not in a stable branch. Eg https://mainline.space/ https://not.mainline.space/


The thing is you don't want Linus's kernels unless you are a kernel developer or testing something. Greg KH maintains the stable kernels. You want his branch.


If anyone is interested in this but for CentOS/EL 7, 8 & 9 there's the awesome elrepo repo.

http://elrepo.org/tiki/kernel-ml


I wonder if he has any insight on why the ubuntu kernel is so heavily patched. How much of it is legitimate customization for things like ZFS, and what are in his opinion unnecessary changes?


I think he means stable rather than mainline?

I run the stable kernel, which I build myself. There is still the occasional regression. About a year ago it had a regression in the Intel graphics driver which broke graphics for my Haswell chip. A patch was available but this wasn't merged for months. Luckily Gentoo makes it super easy to apply custom patches so I did. IMO if you want to run a stable or mainline kernel yourself you might as well build it yourself too.


I didn't realize that Canonical had lost Stéphane. Huge loss for them. I'm incredibly excited to see what comes from his period of self-employment.


If I wanted to keep Secureboot enabled, would signing with a MOK and enrolling that key be sufficient?


You say that as though it's easy. I've yet to find an explanation that's shorter than a book.


The Debian wiki explanation[1] is technical, but it's definitely shorter than a book, and if you're running your own kernels it shouldn't be too difficult.

It gives you the "here, run these commands" version if that's what you want.

[1] https://wiki.debian.org/SecureBoot#MOK_-_Machine_Owner_Key
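The short version is roughly this (key names are arbitrary; sbsign comes from the sbsigntool package):

    # generate a Machine Owner Key (MOK)
    openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
        -subj "/CN=my kernel signing key/" -keyout MOK.key -out MOK.crt
    openssl x509 -in MOK.crt -outform DER -out MOK.der
    # enroll it: sets a one-time password, confirm in the MOK manager on next boot
    sudo mokutil --import MOK.der
    # sign the kernel image with it
    sbsign --key MOK.key --cert MOK.crt --output vmlinuz-signed vmlinuz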


The truth is, it doesn't necessarily have to be a book long, but much like setting up PKI, it certainly can be.

What we really need is simple tooling that handles just the case of "I want to sign kernels for my own machine(s)".

Of course, some tools do exist for this case, but I'm not aware of one that is totally generic. Lanzaboote for NixOS seems interesting (disclaimer: have not tried.)


It so happens I'm running NixOS... so thanks a lot for the reference! :D


Another option for NixOS is bootspec-secureboot, I'm using it with no real complaints: https://github.com/DeterminateSystems/bootspec-secureboot


Wait, wtf. There's this in addition to lanzaboote? But no mention of it? (And at one point there was another new list with bootspec support, maybe still is)

Please, people, if you release software that overlaps or competes with another existing in the space, take the 3 minutes to write a comparison note or "why this exists". Please.


Why should any project have to justify its existence? Maybe it just scratches an itch?


I will admit though, it being from Determinate Systems makes me wonder why they built it. They're not some hobbyists, they're a company built around making Nix tooling - and one of their most publicized tools, the Determinate Nix Installer, is actually a tool that, obviously, overlaps with existing tools, but has a very clearly stated objective and reason to exist. It seems very likely to me that there is, in fact, a reason for why they built this. If I had to guess, it's probably meant to be simpler, more robust, more elegant, etc. but I'd love to hear about it.

Unfortunately, unlike many of their projects, they don't seem to have a blog post yet for it.


I mean, at the very least, I do this because I make/release things because there's a gap to fill or an itch to scratch. And if I'm releasing something it's because I'm hoping it will be useful to others.

It seems like a natural conclusion to spend a fraction of the energy it took to author something on detailing its reason to exist.

Anyway, no one owes me anything but I trust that Determinate Systems has a grander vision than I can see, I just want to be clued in, to be honest ;).


Ideally, signing a Unified Kernel Image (UKI) and enrolling its key in the UEFI makes more sense: only having Secure Boot verify the kernel is okay-ish (and it does work: I tried modifying a single bit of my kernel and the UEFI refused to boot it), but it's not that great if the attacker can still modify the initrd etc.
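A rough sketch with systemd's ukify (needs a recent systemd; paths, cmdline, and key names are placeholders):

    # bundle kernel + initrd + cmdline into a single signable EFI binary
    ukify build \
        --linux=/boot/vmlinuz-6.4.12 \
        --initrd=/boot/initrd.img-6.4.12 \
        --cmdline="root=UUID=... ro quiet" \
        --output=/boot/efi/EFI/Linux/linux.efi
    # sign the whole thing, so kernel, initrd and cmdline are all covered
    sbsign --key MOK.key --cert MOK.crt \
        --output /boot/efi/EFI/Linux/linux-signed.efi /boot/efi/EFI/Linux/linux.efi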



Indeed, that is what the parent is referring to when they say this:

> Ideally, signing a Unified Kernel Image (UKI) and enrolling its key in the UEFI makes more sense

(It's much more useful to have a link to it, so thank you!)


https://wmealing.github.io/signed-kernel-modules.html

Maybe I need to write it a little more clearly, but perhaps that will get you there.


Seconded.

Step 0: deploy your own PKI, install certificates on your motherboard firmware, sign your kernel, sign your modules.

Step 0.5: Sign your DKMS modules from Broadcom, Nvidia, and Intel (sketch below).

Step 0.75: Re-sign everything because you missed a step.
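For reference, the module half of that boils down to something like this (same MOK as for the kernel; the module path is just an example):

    # sign an out-of-tree / DKMS-built module with the kernel's sign-file helper
    sudo /usr/src/linux-headers-"$(uname -r)"/scripts/sign-file sha256 \
        MOK.key MOK.der /lib/modules/"$(uname -r)"/updates/dkms/nvidia.ko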


I used to use https://github.com/berglh/ubuntu-sb-kernel-signing and the mainline tool from the cappelikan PPA; I think it even worked with DKMS modules such as the Nvidia driver. I've since switched to xanmod with Secure Boot disabled, so my memory is a bit hazy on that last point.


It would seem so. I've been doing that for a while on my laptop for locally compiled stable kernels from kernel.org.

My hacky script has more lines to fetch the signer name from the kernel (once it's been signed) than to just sign the vmlinuz image.


Can anyone see how these Debian packages are being built? Asking because the last time I looked at Canonical's packaging, it was a little hairy.


It's quite trivial actually. It's just `make deb-pkg`. It may complain about a cert. https://github.com/torvalds/linux/blob/master/scripts/packag...
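For anyone who hasn't done it, the whole flow is roughly this (starting from the running kernel's config):

    cp /boot/config-"$(uname -r)" .config
    make olddefconfig
    # the cert complaint is usually this, when reusing a distro config:
    scripts/config --disable SYSTEM_TRUSTED_KEYS --disable SYSTEM_REVOCATION_KEYS
    make -j"$(nproc)" bindeb-pkg   # the .debs end up in the parent directory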


Ah, yes, I'm familiar with this. That's how "vanilla" kernel source supports Debian packaging. Lintian predictably complains, but I suppose that's to be expected.


How is this different than xanmod, liquorix or pf-kernel? I see zfs mentioned, is zfs not included in the gaming/performance project kernels?


No offense to the maintainers of the 'performance' and 'gaming' kernels, but they're usually not that well maintained. TBF, I haven't tried ALL of them, but a lot of the popular ones. They usually configure 'odd' defaults that risk stability (in the 'staying up' sense, as well as data integrity sense).


Same. I tried some, and even in games they didn't provide much of an improvement. Sometimes performing worse, even.

I just add some niceties like irqbalance and a few lines to sysctl.conf for better latency at the cost of throughput (which doesn't matter if you're using the system as a desktop and not as a server). And that gives me a very nice experience overall.
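Something along these lines, for example (values are illustrative, not a recommendation):

    # apply at runtime; persist by putting the same key=value lines in /etc/sysctl.d/
    sudo sysctl vm.swappiness=10
    # start writeback earlier and cap dirty pages lower, trading throughput for latency
    sudo sysctl vm.dirty_background_ratio=5 vm.dirty_ratio=10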

Edit: Please be careful. This is on a very well-performing machine; take a look at the replies for more information.


How do you improve the latency? Especially reacting to mouse and keyboard, I have an old PC which could use that.


Oh, I was mostly describing network latency and file IO (*cough* it should apply to mouse polling as well), but bear in mind what another user has said. For mouse-specific concerns, I would take a look at the following tutorial:

https://www.quakeworld.nu/wiki/Smooth_Quake_in_Linux

Best of luck!


Thanks! Curious: does changing network/disk IO latency noticeably help in online games?


It does, even when navigating the web.


By decreasing latency, you decrease throughput. Thus, if you have a slow and old computer, you could end up dropping inputs entirely by decreasing latency, or making it feel sluggish/laggy due to it processing a queue.


You're correct, I should add something to my post.


For older devices, the most gain would probably come from using a lower resolution input and a lower resolution output (to match what was available at the time). Throwing a 2k monitor on a graphics card from 2014 is probably going to be a barely passable experience (buffers weren't as big back then), same with mice.


I wonder what instabilities he has experienced?



