GRUB2 UEFI SecureBoot Vulnerability: 'BootHole' (debian.org)
195 points by edward on July 29, 2020 | 103 comments



I wish the process of restoring a non-booting Debian-derived system were easier. After needing to do it on several dual-boot systems, I keep the procedure handy now. Here it is for posterity (a consolidated script sketch follows the list):

1) Boot Linux from your distro CD or DVD

2) Get a shell

3) Mount your normal Linux partitions. (Make sure you know where they are. The following example assumes / on /dev/sda1 and /boot on /dev/sda2.) E.g. mount /dev/sda1 /mnt. Note: if you have a separate partition for /boot, then mount it too: mount /dev/sda2 /mnt/boot

4) Mount the special nodes: mount --bind /dev /mnt/dev && mount --bind /dev/pts /mnt/dev/pts && mount --bind /proc /mnt/proc && mount --bind /sys /mnt/sys

5) Change your shell's root: chroot /mnt

6) Re-install grub: grub-install /dev/sda

7) Update the grub boot menu: update-grub

8) Undo chroot: exit

9) Unmount the special nodes: umount /mnt/dev && umount /mnt/dev/pts && umount /mnt/proc && umount /mnt/sys

10) Remove media and reboot
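
For convenience, the same steps can be rolled into a small script. This is a sketch only, assuming the same layout as the example above (/ on /dev/sda1, /boot on /dev/sda2, GRUB on /dev/sda); verify the device names with lsblk and adjust the variables for your own disks:

    #!/bin/sh
    # Rescue sketch: re-install GRUB from a live system via chroot.
    # ROOT_DEV/BOOT_DEV/DISK are placeholders; check lsblk first.
    set -e
    ROOT_DEV=/dev/sda1
    BOOT_DEV=/dev/sda2
    DISK=/dev/sda

    mount "$ROOT_DEV" /mnt
    mount "$BOOT_DEV" /mnt/boot        # skip if /boot is not a separate partition

    for fs in dev dev/pts proc sys; do
        mount --bind "/$fs" "/mnt/$fs"
    done

    chroot /mnt /bin/sh -c "grub-install $DISK && update-grub"

    for fs in sys proc dev/pts dev; do
        umount "/mnt/$fs"
    done
    umount /mnt/boot /mnt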


After several Win10 upgrades nuked grub on a dual-boot system, I have found a much simpler process. Create a rEFInd [1] USB boot key once and for all. Then whenever grub is hosed, boot from this key: rEFInd will find the Debian partition and let you boot it directly. Then once in Debian, just run "update-grub" to fix things.

[1] https://www.rodsbooks.com/refind/getting.html


You could just install rEFInd and make it the default bootloader with its auto find mode enabled.


Win10 installs sometimes nuke it as well. It's a PITA.


Have you found a way to deal with this? I swear I have rEFInd on the non-Windows disk, but Windows updates sometimes replace either the partition or the default boot device. Mega frustrating!



Like boomboomsubban, I have never had this happen. My current laptop shipped with Windows 8 and has had Windows 10 installed since the weekend after the public beta was released. It's also had an Ubuntu dual boot since exactly the same time. rEFInd was installed a few months later when I discovered it was still a thing (I had been a rEFIt user on Macs years ago).

Never once has the Linux boot manager/loader (either the original GRUB or rEFInd) been broken, by a Windows update or otherwise.

Windows of course has been well documented to happily overwrite any MBR bootloaders when you install it, but even that doesn't happen in a UEFI environment. Windows, rEFInd, GRUB, and whatever else you may have installed all have their own folders. It might change the default, but restoring your own choice is literally a matter of copying a single file.

EFI has plenty of problems of its own, but this is one it solved pretty well.


Windows repair likes doing it.

I've had it nuke my bootloader a few times back when I still dual-booted (now full Linux).

One time the Windows installer confused the encrypted LVM disk with a corrupted NTFS partition (because it had an NTFS partition on it at some point and the underlying data wasn't wiped properly) and tried to fix it. Suffice it to say, that required a restore from backup, so now all my encrypted disks have a 100MiB NTFS buffer partition for Windows repair to fuck around in (until I removed dual-boot, I occasionally found remnants of an attempted repair in them).


Is it sufficient to install Windows to a completely separate disk instead of a separate partition to prevent this? I can't imagine Windows starting to overwrite partitions or boot sectors on a drive it is not even installed to


I always install Windows to a different disk, and even then Windows will try to recover partitions on other disks, especially when it finds multiple ESPs and the Windows one isn't the active one (upon which it sometimes installs its bootloader there, checks whether that disk has a Windows partition, confirms that yes, Windows is installed on that disk because it has the Windows-like ESP it just created, and goes for a repair).


I've seen it do it as well, and I'm pretty sure now that it's part of the "automatic repair" process that Windows runs when it detects 3 (?) failed boots. What happened pretty often to me is that I'd miss the boot screen and Windows would start booting (it's the default because it's a shared PC), and in order not to waste time I'd force a reboot. After a few times, the next time Windows actually booted it would go into rescue mode and start poking around the EFI partition and efivars, doing god knows what, which ended up with my GRUB not even showing up in the F11 boot menu.


I've had this happen too, but I think it just changed something in the efivars, grub itself was still there and still operational, even though it didn't show up in the available boot options.

The workaround for me was the "boot from file" boot option, where I would be able to walk the directories in the efi partition and start grub. Once this was done, it was back in the boot option list.


Yeah, I've only had it remove the actual files once, when I had to use the recovery USB thing to fix a broken update. Unfortunately, my motherboard is from the very early days of UEFI and doesn't allow me to select files or even manage keys so I have to keep a microSD card with a bunch of bootloaders handy in case this happens (I still haven't gotten around to learning UEFI shell in order to manually boot).


Over two years of dual booting with Windows 8 I never had it screw up my UEFI boot. The MBR I had tried in the past never lasted but with UEFI it just left everything on the ESP alone.



This strongly depends on the issue though. In most dual booting cases I've had to deal with recently, a single efibootmgr command was enough.

Windows and some Linux distros like to replace the fallback EFI bootloader, and the two can conflict every now and then. Properly configuring the UEFI boot order to pick the right OS at boot (sudo efibootmgr -o 0001,0000 or something like that) should be enough.
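
For reference, a typical session looks something like this (the entry numbers below are placeholders; check your own efibootmgr output before reordering anything):

    # List current boot entries and the BootOrder
    sudo efibootmgr -v
    # Put entry 0001 (e.g. the shim/GRUB entry) ahead of 0000 (e.g. Windows)
    sudo efibootmgr -o 0001,0000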

If you're still using MBR or you have a shitty motherboard with bad UEFI support you're right that you're still forced to do a full Grub reinstall.

I've had to do recovery on a non-booting Windows machine that broke after a recent update and it was just impossible to recover. Some file on the FS was corrupted, but SFC couldn't recover it and wouldn't report what file was corrupted. Rebooting into recovery takes forever, boot recovery constantly fails and the only saving grace is the "reinstall Windows" that uses a complete reinstall as a fix for a broken OS.

Yes, fixing Linux is difficult, but at least it's doable. On Windows you're basically stuck with the two or three auto-recovery options or an OS + software reinstall.

I can't speak for macOS, but I can't imagine it being much easier (although the limited variety of macOS hardware probably makes catastrophic OS failures less likely, since it's easier to test for).

It shouldn't be too hard to create a Linux boot ISO that lists installed operating systems and then automounts them for recovery. It's only a matter of parsing GRUB config files + fstab + crypttab and some predefined mount patterns for distros after all.
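
As a very rough sketch of the mounting part, assuming the target root partition has already been identified (the device name below is a placeholder, and encrypted entries from crypttab would need extra handling):

    # Mount the chosen root, then mount the other fstab entries under /mnt
    mount /dev/sda1 /mnt        # placeholder: the detected root partition
    awk '$1 ~ /^(\/dev\/|UUID=|LABEL=|PARTUUID=)/ && $2 ~ /^\// && $2 != "/" {print $1, $2}' /mnt/etc/fstab |
    while read -r dev dir; do
        mount "$dev" "/mnt$dir"
    done
    # Then bind /dev, /proc, /sys and chroot as in the procedure further up.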


Mac lets you reinstall the OS from the recovery partition without losing data

https://support.apple.com/en-us/HT204904

If the recovery partition is toast you can do a clean install over wifi - Option-Command-R or Shift-Option-Command-R

(would only work over an open wifi the last time I did it)


Does this include programs and settings? Windows lets you reinstall with data as well, but installing every single program and getting it set up all over again takes forever, that's my main problem with the Windows recovery process.

The install over the internet is a nice touch, though I don't expect that feature to ever make it into normal computers because it would probably force manufacturers to put a minimal Windows installer in their UEFI.


>Does this include programs and settings?

Yes, I believe so - I'm guessing this is due to things being more siloed

I should also mention it won't let you choose an OS version, although it does say the name of the one that will be installed


It also works over Ethernet!


I tried it via an Ethernet to USB adapter and the laptop didn't see the wired connection

Maybe a proper rj45 would work


rEFInd can automatically detect linux, Windows, and mac partitions and boot them. You could just install it to a usb stick and use it whenever necessary.


That's a very neat tool! I'm kind of turned away by this, though:

> Warning: Your kernel and initramfs must reside on a file system that rEFInd can read.

Can it boot fully-encrypted disks (by chainloading GRUB, for example)?


What you mentioned is rEFInd's auto-detect feature, which by design doesn't work with encrypted disks. rEFInd will automatically detect other .efi binaries, so you can just install your other bootloaders alongside rEFInd, make rEFInd the default, and chainload into your other bootloaders when you need to boot an encrypted disk.

I have a setup where I moved /boot off the encrypted LUKS partition to the ESP and boot from there using a custom entry in rEFInd.
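
For illustration, such a stanza looks roughly like this in refind.conf on the ESP (the ESP mount point, volume name, file paths and kernel options below are placeholders, not my exact setup; the options depend on how your initramfs unlocks the disk):

    # Append a hypothetical manual boot stanza to rEFInd's config
    cat >> /boot/efi/EFI/refind/refind.conf <<'EOF'
    menuentry "Debian (encrypted root)" {
        volume   "ESP"
        loader   /debian/vmlinuz
        initrd   /debian/initrd.img
        options  "root=/dev/mapper/cryptroot rw quiet"
    }
    EOF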


This is essentially the process used for installing the base system in Arch. As usual, the Arch Linux wiki page on this topic [1] is a good resource, even if you're using a different system.

If you want to learn more about how a typical Linux system is organized, but don't have the time or patience to go through the Linux From Scratch book [2], installing and configuring Arch once is pretty insightful and also kind of fun.

[1] https://wiki.archlinux.org/index.php/Installation_guide

[2] http://www.linuxfromscratch.org/lfs/


I mean, it's just a chroot; it's the basic working "run a system without booting from it" that underlies a lot of base system tinkering and fixing.


There are lots of people who have used Linux for years but have never actually gone through this exercise. It's not very fun to have to do it for the first time when an important system fails to boot.

It's great that the modern Linux install process is so easy, but one drawback is that it also makes it easy to gloss over how the system is put together.

I just meant that it can be insightful to put together a toy system from scratch, so that you can learn (at your own pace) how chroots work, learn the conventions around manually recreating your root hierarchy in /mnt, learn what systemd services you actually need because you need to manually turn them on, rather than having a bunch of stuff you don't understand enabled by default, etc.


I was going to say Gentoo, for much the same reasons.


I usually do the bind mounts a little differently:

for ii in dev dev/pts proc sys ; do mount --bind /$ii /target/$ii ; done

Unmount: umount /target/dev/* /target/* /target

systemd-nspawn can also be useful on newer systems where the systemd-based target needs to be booted rather than just chrooted:

systemd-nspawn --directory /target --boot -- --unit rescue.target


UEFI makes this problem much easier: even if you still use a bootloader (no need!), there's not much reason to update it after it starts working.


Why is the Debian-derived system non-booting?

Many years ago I had a disk error and could not boot. I tried a popular, Linux-based "system rescue" CD and it could not boot without accessing the disk. I tried my own "live" NetBSD USB stick and booted up no problem. No disk access needed.

I stayed away from Linux for many years as a result of experiences like that. I have been using Linux lately and I continue to find more stuff like this that would just never happen on BSD.

I do not understand why people use grub, let alone grub2. There are plenty of other bootloaders. What is wrong with syslinux?


GRUB2 is the most flexible bootloader. I use it for a multi-boot flash drive, used for booting 64-bit FreeBSD on a 32-bit-EFI machine (earliest Mac mini + 64-bit CPU)…

But yeah for normal desktop usage… there's rEFInd.


Flexibility would have been my first guess. Are there any other reasons besides flexibility?


I'd say support in distros. With grub you can throw pretty much any distro onto your machine and it will show up (except Solus). Also, I don't think you can just choose which bootloader you want in most distros, except Arch and Debian if I remember correctly. You can do that manually of course, but grub is easy and it works.


In Arch you can choose the bootloader as well. The recommendation I hear a lot and that I use is systemd-boot (formerly gummiboot), which strangely doesn't have much to do with systemd.

It can pick up Windows for multiboot and is straightforward to configure (it can even be configured from a Windows live disk if you mount that FAT partition).
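
For anyone curious, the whole configuration is just a couple of small files on the ESP. A minimal sketch (the entry name, kernel file names and root UUID are placeholders):

    # Install systemd-boot into the ESP, then drop in a loader config and one entry
    bootctl install

    cat > /boot/loader/loader.conf <<'EOF'
    default  arch.conf
    timeout  3
    editor   no
    EOF

    cat > /boot/loader/entries/arch.conf <<'EOF'
    title    Arch Linux
    linux    /vmlinuz-linux
    initrd   /initramfs-linux.img
    options  root=UUID=<root-uuid-placeholder> rw
    EOF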


Well, I just had a look at the Arch Wiki article on Boot Loaders [1]. GRUB is by far the most widely supported and also the most widely compatible bootloader.

[1] https://wiki.archlinux.org/index.php/Arch_boot_process#Boot_...


The table is somewhat misleading. The filesystem support for example refers to being able to boot a kernel that is on that FS from disk.

In most gummiboot setups, your kernel will be on the ESP partition along with everything else, so it doesn't matter what FS the root is.

The lack of support for MBR or BIOS doesn't matter much either: systemd-boot requires a 64-bit system, and most 64-bit systems that are still around and widely used (or actively sold) have a UEFI that supports GPT. If you absolutely need to, systemd-boot supports booting from a GPT that has an MBR wrapper.

So while the table makes it look like systemd-boot lacks support for a lot of things, the reality is that when you set up systemd-boot, a lot of these columns simply don't matter.


So does that mean I can't use those filesystems for my /boot/efi partition, and everything else is OK? Then it's really not that bad. I haven't played around with other boot managers, as you might be able to tell.


That's pretty much your limitation: your Linux kernel + initramfs can't be on filesystems other than VFAT (i.e. they must be on the ESP, though there are some ways you can have it work across disks).

Hence the table being slightly misleading.

For example, it also mentions that EFISTUB means you can't boot on btrfs and friends anymore. But it's the same limitation: initramfs and kernel need to be on the ESP; everything after that is up to the initramfs to bring up.


Obviously you can with Gentoo. That's basically its "thing" (choice).


Oh yeah, I forgot about Gentoo.


This isn't applicable to a UEFI Secure Boot system, where the shim and grubx64 binaries are distribution signed and installed from the package - not via grub2-install, which on UEFI produces an unsigned binary and will not boot under UEFI Secure Boot.


Congratulations, you just learned how to install Gentoo.


This is essentially identical to the procedure I did 15 years ago. What's the issue?


I don't think GP is complaining because the procedure changed in the last 15 years, but precisely because it didn't. Surely, there's room for improvement.


I think the issue boils down to the fact that it depends almost completely on exactly how the system is put together. I mean, the process is:

1. Boot into a working-enough live system - only way to improve this is to put a recovery system on the same system, but then you risk making it unbootable as well

2. Mount needed filesystems - depends completely on what filesystems there are, which is extremely variable; my only thought would be to use labels and hope that the installed system labels root/boot/efi the way you expect

3. Mount special filesystems - this is possible to automate; ex. arch-chroot (https://wiki.archlinux.org/index.php/Chroot#Using_arch-chroo...) does it, but that requires that you know what you need, so it's going to be brittle unless you stick with distro-provided tools (hence, arch chroot)

4. Fix the system - again, totally depends on what happened and how it should be set up; yes, in the trivial case you could make a "just-reinstall-grub.sh", but it'll fall apart for non-trivial setups

If every system looked the same, then yes you could make a livecd that automatically booted, set up mounts, chrooted in, fixed grub, and rebooted out, but this is the world of Linux-based systems so even within a single distro there's worlds of difference.


Semi-automating the common cases with a GUI would probably help many of the less sophisticated users. E.g. scan the existing system for partitions and ask them which one they want to repair.

Windows' repair mechanisms also only work in the common case, and you have to resort to quite similar CLI steps when they don't.


Well, he still doesn't use UEFI. He could; it has been available for more than a decade (i.e. the better part of those 15 years), so the procedure didn't change because he chose not to change his setup.

Not that it is a bad thing, some people do prefer that kind of stability. But then, why complain about that?

Btw, Intel did plan to remove CSM (legacy BIOS compatibility) with Tiger Lake and require UEFI class 3. We will see if they will follow through. Then the procedure WILL change, and we will hear from people who didn't expect it.


Wouldn't UEFI make it more complex, since you now potentially need to mount / and /boot and /boot/efi all separately before you can fully recover grub?


Not more complex, but different.

It does not have magic sectors on disk: no MBR for the boot loader, no dummy area of the filesystem for grubenv; everything happens on the EFI partition and with EFI variables (stored in NVRAM). Creating bootable EFI media means having a vfat-formatted filesystem and copying files there. No need for tools like Rufus.

Your firmware will find it at boot time and allow you to boot from it; most UEFI implementations have a boot manager built in.
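
A sketch of what creating such media looks like in practice (the device name /dev/sdX and the my-bootloader.efi file are placeholders; double-check the device with lsblk before writing anything to it):

    # Create a single EFI System Partition on the stick, format it, copy a loader
    sgdisk --zap-all --new=1:0:0 --typecode=1:ef00 /dev/sdX
    mkfs.vfat -F 32 /dev/sdX1
    mount /dev/sdX1 /mnt
    mkdir -p /mnt/EFI/BOOT
    cp my-bootloader.efi /mnt/EFI/BOOT/BOOTX64.EFI   # the fallback path firmware looks for
    umount /mnt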


Man, I’m glad ArchLinux has arch-chroot to do this for you!


For anyone looking at how to mitigate this, a "defence in depth" approach using grub-mkstandalone [1] has always been wise. If you're building an appliance-style system, or just want to prevent abuse of GRUB features on a Secure Boot system, a standalone image lets you "lock" the grub config file inside the signed binary. Combined with non-default Secure Boot signing keys, this attack would appear to be prevented, since the config file can no longer be changed independently of the signed binary. The config file can also be adjusted to prevent using edit mode in GRUB at runtime.

I currently have a standalone grub image set up, with fixed path/filename kernel and initramfs in use, meaning I don't need to update the grub image unless changing the config or updating grub itself. You can then combine this with full disk encryption (dm-crypt + luks) over the entire disk including /boot [2], and get a pretty safe setup, that would mitigate against this in the first place, as well as any other attacks trying to tamper with modules/fonts/config files for grub (as they get wrapped into the signed grub binary).

[1] https://wiki.archlinux.org/index.php/GRUB/Tips_and_tricks#GR...

[2] https://cryptsetup-team.pages.debian.net/cryptsetup/encrypte...
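
A rough sketch of the build step described above (all paths and key file names are placeholders; the "boot/grub/grub.cfg=" argument embeds the given file into the image's memdisk, which is where standalone GRUB looks for its config):

    # Build a standalone GRUB EFI image with the config baked in, then sign it
    # with your own Secure Boot key (db.key/db.crt are placeholders).
    grub-mkstandalone \
        -O x86_64-efi \
        -o grubx64-standalone.efi \
        "boot/grub/grub.cfg=/path/to/locked-down-grub.cfg"
    sbsign --key db.key --cert db.crt \
        --output grubx64-signed.efi grubx64-standalone.efi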


Note that secure boot trusting Microsoft's key is a completely useless feature.

In addition to countless holes like these (since Microsoft signs software written in C), and the fact that you need to have already compromised the system, all that Secure Boot does is ensure that an unmodified kernel is running. You can, however, have it run arbitrary user space, including for instance running the user's previous OS in a VM or emulator and altering its behavior arbitrarily, so it effectively provides no protection whatsoever.


Yes, because of Microsoft's key signing program, UEFI security is already fatally flawed even without this new issue. See for example https://habr.com/en/post/446238/

> In this article we proved the existence of not enough reliable bootloaders signed by Microsoft key, which allows booting untrusted code in Secure Boot mode. Using signed Kaspersky Rescue Disk files, we achieved a silent boot of any untrusted .efi files with Secure Boot enabled, without the need to add a certificate to UEFI db or shim MOK.


Linux does have mechanisms to prevent changes to userspace (in particular the Integrity Measurement Architecture), but you’re right that distributions don’t generally implement these in a useful way. Some more locked-down distributions like Google’s Container Optimized OS do use these to prevent offline userspace changes.


There are some Android features that have been upstreamed to the mainline kernel, like dm-verity, that allow you to boot into a read-only, verified userspace.
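
A minimal sketch of using it by hand with veritysetup (device names are placeholders; the root hash is printed by the format step and has to be stored somewhere trusted, e.g. signed along with the kernel command line):

    # Build the hash tree for a read-only data device, then open and mount it
    veritysetup format /dev/sdX2 /dev/sdX3            # data device, hash device
    veritysetup open   /dev/sdX2 vroot /dev/sdX3 <root-hash-from-format>
    mount -o ro /dev/mapper/vroot /mnt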


Secure Boot provides the initial part of the chain of trust. The rest of the chain has to be provided by the distribution.


I've never liked "secure boot", neither its "security" nor its user-hostility.

> All of the Linux distributions shipping with Microsoft-signed copies of shim have been asked to provide details of the binaries or keys involved to facilitate this process.

It's sad to see Linux distributions, even the more "principled" and presumably non-corporate ones like Debian, essentially bowing down to MS. As Linus Torvalds said when this whole secure boot thing started: "I will not change Linux to deep-throat Microsoft."

(Linus hates UEFI too, and I agree with him on that point as well.)


I’ve seen this on Twitter a couple times today. Is it even possible to involve GRUB in a secure boot setup in a way that’s actually secure? I’ve never encountered a Linux (other than gentoo, but that’s not exactly normal) where the initramfs wasn’t in plaintext, and unsigned. You can get whatever arbitrary code you want running as PID 1 from there. If you want secure boot, the way that makes sense is to use your own keys and combine the initramfs and command line with the kernel, and sign that with your secure boot keys. I don’t know why there isn’t a super slick way of doing that, but it is definitely smooth and more secure. Wasn’t EFI supposed to free us from things like GRUB anyway?


This is how my Arch box is set up. I've done it by following a page on the Arch wiki [0].

There is only one binary containing the kernel itself, the kernel command line and initrd that is signed and booted directly by the EFI. There's no bootloader in the grub sense.

That being said, I can see how one could argue that that's "not exactly normal", in the same sense that gentoo isn't.

I'm surprised this isn't more widespread, especially since most UEFI PCs I've seen have a very practical way of choosing which OS to boot.

[0] https://wiki.archlinux.org/index.php/EFISTUB
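
For anyone wanting to reproduce this, one common way to build such a unified binary is to glue the command line, kernel and initramfs onto systemd's EFI stub with objcopy and then sign the result. A sketch only; the stub location, file paths and key names are placeholders and may differ from what the linked wiki page describes:

    # Combine cmdline + kernel + initramfs into one EFI binary, then sign it
    objcopy \
        --add-section .cmdline=/etc/kernel/cmdline      --change-section-vma .cmdline=0x30000 \
        --add-section .linux=/boot/vmlinuz-linux        --change-section-vma .linux=0x2000000 \
        --add-section .initrd=/boot/initramfs-linux.img --change-section-vma .initrd=0x3000000 \
        /usr/lib/systemd/boot/efi/linuxx64.efi.stub unified.efi
    sbsign --key db.key --cert db.crt --output unified-signed.efi unified.efi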


You can create a version of grub which requires gpg signatures for everything it loads, including any modules, the initramfs and grub.cfg, then sign that version of grub with a secure boot key. You'll possibly need to then sign your kernel twice (once with sbsigntool and once with gpg).

I don't know if this meets your definition of "actually secure" but it does mitigate the issues around initramfs etc.
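
As a sketch of the signing side of that setup (kernel/initramfs names and key files are placeholders; GRUB looks for a detached .sig next to each file it loads once signature checking is enforced):

    # Detach-sign everything GRUB will verify, plus Secure Boot-sign GRUB itself
    gpg --detach-sign /boot/vmlinuz-linux
    gpg --detach-sign /boot/initramfs-linux.img
    gpg --detach-sign /boot/grub/grub.cfg
    sbsign --key db.key --cert db.crt --output grubx64-signed.efi grubx64.efi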


This is along the lines of what I’ve been wondering as well. Secure boot is really the only line of defense here and it seems to be highly underutilized, to the point of making this grub issue look less worthy of press releases. And even if we did have a nice UI or workflow to get more people using secure boot, securely, there will certainly be bugs in the EFI implementations to compromise the boot chain there as well.


Besides power management, this is one of my biggest pet peeves with Linux distributions now.

I don't understand why there isn't a single distribution that offers a full Secure Boot implementation or LUKS Encryption with a password sealed by the TPM out of the box.

Also, there seems to be a lot of misconception about what Secure Boot does: unlike what the name implies, Secure Boot doesn't inherently provide any extra security or protection. It's just a mechanism to verify signatures on the software that runs on the system.

To make the most out of Secure Boot, the distributions would need to sign and lock the boot loader, kernel, and initrd. Then they could seal the LUKS encryption passphrase using the TPM, so if anybody tries to run any unauthorized software, they wouldn't be able to access the data on the drive.

It would be very similar to what Windows does with BitLocker: your hard drive is automatically decrypted on system boot without entering any passwords.
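
For the TPM-sealing part, one existing building block is clevis; a sketch (the LUKS device is a placeholder, PCR selection and policy details matter a lot in practice, and the matching clevis initramfs hook is needed so unlocking actually happens at boot):

    # Bind an existing LUKS volume to the TPM: it unlocks automatically as long
    # as the measured boot state (PCR 7 = Secure Boot state) is unchanged.
    sudo clevis luks bind -d /dev/sdX3 tpm2 '{"pcr_ids":"7"}'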


There is - Heads[1].

[1] https://github.com/osresearch/heads


> Most vendors are wary about automatically applying updates which revoke keys used for Secure Boot. Existing SB-enabled software installations may suddenly refuse to boot altogether, unless the user is careful to also install all the needed software updates as well. Dual-boot Windows/Linux systems may suddenly stop booting Linux. Old installation and live media will of course also fail to boot, potentially making it harder to recover systems.

That means: with Secure Boot enabled, once the machine's firmware is updated, all currently existing Linux install media will stop working. Users have to either wait for new install media to be released, or disable Secure Boot. At least it's still possible to disable Secure Boot, or enroll your own keys; will that still be the case once x86 CPUs start being replaced with ARM CPUs? IIRC, Microsoft requires that systems with ARM CPUs not allow disabling Secure Boot or enrolling your own keys (https://www.softwarefreedom.org/blog/2012/jan/12/microsoft-c...).


The requirement to disallow users from disabling UEFI Secure Boot on ARM applies to the Windows hardware certification spec. There's no requirement that a vendor must follow that spec in order for the hardware to run Windows. It is a spec designed to tie co-marketing of Windows and your product (e.g. "made for Windows") to minimum compatibility standards (or at least Microsoft's idea of minimums). For example, it also requires a TPM 2.0 to be present and enabled.


No, that was the case for the old Windows RT devices (see that 2012 in the URL…). With the modern ARM devices like the Surface Pro X, the Microsoft requirement is like with x86: required to allow disabling Secure Boot.


> With the sole exception of one bootable tool vendor who added custom code to perform a signature verification of the grub.cfg config file in addition to the signature verification performed on the GRUB2 executable, all versions of GRUB2 that load commands from an external grub.cfg configuration file are vulnerable.

Perhaps the ability to sign grub.cfg should be added to GRUB2, and this feature should be enabled by default.

Though this would mean that, rather than allowing users to enter arbitrary kernel boot options (and thereby being able to leverage buffer overflow exploits), a bunch of preset menu items would have to be present. Alternatively, this signed grub.cfg could have its boot menu password-protected. (If I recall correctly, individual menu items cannot be password protected.)

Lowering the GRUB2 attack surface area is a good idea, so hopefully these suggestions get deeply considered.
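
On the password-protection point, GRUB already supports a superuser with a hashed password; a sketch of how that's usually wired up (the username and hash are placeholders, and on Debian-style systems the lines would normally go into /etc/grub.d/40_custom followed by update-grub rather than straight into grub.cfg):

    # Generate a PBKDF2 hash interactively; it prints a grub.pbkdf2.sha512... string
    grub-mkpasswd-pbkdf2

    # Lines to add to the GRUB configuration:
    #   set superusers="admin"
    #   password_pbkdf2 admin grub.pbkdf2.sha512.10000.<hash-placeholder>
    # Individual entries can be marked --unrestricted so they still boot without
    # the password while menu editing remains locked down.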


How would that work? If the public key is baked into the signed grub, the only person who can sign the config is whoever built grub. If the keypair is generated locally and the public half put on the ESP, an attacker can just replace it. Signed config works if you never need to modify the config, but for a general purpose OS you need to be able to modify the config.


Sorry, I forgot that typical grub.cfg contains the root partition's UUID (and at least historically, the partition device node). While it is possible to configure GRUB to scan for a root partition rather than using a UUID, this is less secure (eg, GRUB residing on your hard drive could then accidentally select your root partition residing on a USB stick containing Linux live media).

Good point that in general, the operating system vendor does not know the grub.cfg on an installed system, and that an attacker with direct access to the ESP can modify the files that are present there.

A static grub.cfg that selects "the first partition on the device this GRUB bootloader is installed on" as the Linux root partition would work. I don't believe GRUB supports this kind of behavior (maybe it should). It seems worthwhile and possible to design a mechanism where a simple grub.cfg can be signed by the operating system vendor. Disabling the ability to arbitrarily modify kernel boot options on a general-purpose operating system is not a big deal, and could be mitigated with extra GRUB boot menu items.


Does anyone actually use secure boot in the intended way? As in: adding a key to the register and then booting a binary with that key.

I'd guess everyone simply disables secure boot and installs their system. Then, what is the point of secure boot?

I'm pretty ignorant about this stuff, but secure boot makes no sense to me.


Secure boot should not be trusted.

If you want to be secure against evil maid attacks, use full disc encryption and keep the bootloader on a usb drive on your person.


Does anyone else think that LILO was more intuitive?

I feel at the whim of my BIOS with this whole efibootmgr situation. My laptop was suddenly unbootable and I had to repair everything manually. I had not changed the system at all, so something must have changed in the BIOS.

Never happened with LILO, which also was better documented.


I see where you're coming from - to my mind there are three issues here.

1 - The increased complexity of the UEFI stack having many components, and no good simple explanation of it to bring people up to speed with (at least that I'm aware of) - UEFI boot introduces NVRAM which is a fairly big change from the old way, and introduces the ESP partition for storing bootloaders. Fairly significant changes, coupled with not every UEFI firmware (i.e. BIOS) implementing things in the same way - not every motherboard gives the same options to users for managing boot entries in NVRAM.

2 - The introduction of secure boot at the same time, and the confusion around shim and similar for running Linux. Don't start on the complexity of enrolling your own keys and how some motherboards let you do it directly, while others make you use keytool or another efi binary to do it.

3 - Bootloaders becoming more and more complex as a result of secure boot requiring them to sign all their code, pushing them towards external configs and modules, coupled with multi boot becoming a native feature since the ESP-based loader needs to find the right config and load it, then find the right filesystem and go from there.

Much of what's in point 3 was needed for LILO and MBR too, but it feels like fewer moving parts were in play.


Aaaand it's gone ... As in, the ability to run custom/new kernel modules on a system with Secure Boot enabled, without the system considering itself "too tainted" to run certain apps/binaries that require the system to be "immaculate".

This would also mean that the old adage "if you can touch it with your hand, you can run unsigned code on it, given the right tools & time" won't be true anymore. That's why server room doors have access control systems.

But the owner of the device should always be able to modify/circumvent/audit any part of the boot process.

All PCs since the first ones with ME/TrustZone, and all phones in existence, are already locked down to some degree, making some kinds of R&D difficult. I see the proposed changes as something that will ultimately leave power users with even less control over their own systems.

Or am I wrong here? Please, can somebody provide evidence to the contrary?


Can distros maybe consider moving to systemd-boot at some point? Systemd is already built in and can handle things like mounting pretty easily and simply.

It is a lot leaner than grub, doesn't use a billion superfluous modules. That and it is a lot easier to prevent tampering compared with the cumbersome nonsense that is grub passwords.

Oh and it enables distros to gather accurate boot times and enables booting into UEFI direct from the desktop.

It works with secureboot/shim/Hashtool. Also, each distro has its bootloader entries in separate folders to avoid accidental conflicts.


Honest question - is it really significantly easier to prevent tampering with systemd-boot? I had a look at this recently, and ended up having to modify the source (admittedly quite easily though) to avoid relying on important parameters in the config file.

I wanted to disable editing cmdline and similar from the prompt, and ended up simply compiling that feature out (along with others). I'm not sure if there's an easy way to fix this either, since the obvious way to "secure" the bootloader is via a config file, but we really need to assume the config file is editable by an attacker, and therefore compromised.

That you don't have to go build a standalone EFI image to get modules and similar embedded into the binary is certainly safer, but I would say most stock Linux bootloaders are still a fair way from being easy to prevent tampering on.


I think the main advantage here is less about tampering (if we assume neither of these bootloaders have bugs then a GRUB2 password should be as secure as its systemd-boot equivalent) but more about the fact that systemd-boot doesn't have decades of legacy cruft accumulated that's irrelevant for UEFI and thus is less prone to having disastrous bugs.


Totally agreed on the reduced cruft - when I was modifying the codebase I felt quite comfortable with the code and its readability. Despite it being someone else's code, it was understandable and intuitive, and I felt at home with it. I could see what to patch and edit, and it worked as expected without surprises. It's important for something as critical as a bootloader to follow the principle of least astonishment.

I only picked up on tamper resistance based on the GP as I was wondering if I missed something and ended up patching unnecessarily or was misunderstanding something. It's also possible I'm using a stricter definition of tampering, as in my project I considered removing the SSD and modifying the ESP as being "in scope". I recognise for many that's not in scope, and where you fall back to relying on FDE to prevent booting the system anyway.


Isn't this the point of secureboot? Shim/Hashtool both work with systemd-boot, you can/must sign it yourself.

If you are talking about editing config, you just disable it in the config (editor no) and then nobody can just add an entry at random during the boot process.

Seems pretty simple to me.


Secure boot will ensure the bootloader binary itself is signed, but won't do anything for the config itself for obvious reasons. I was working on a high assurance scenario though, so I think my meaning of tamper resistance differed significantly from the above post.

I found the editor option, but the issue was that the config file could be edited offline to enable it again. Stripping the whole feature out of the binary solved the issue for what I needed. I guess it just goes to show there's a broad spectrum of interpretations of tamper resistance. If you're using dm-verity for example, you want to protect your cmdline parameters to at least the level of security offered by secure boot.


Speaking of SecureBoot and how practically no Linux distribution actually makes use of its potential (in terms of increasing security), does anyone here have any experience with SafeBoot?[0] It looked pretty interesting to me, though mounting the rootfs read-only didn't seem to go well with how most Linux distributions these days still require you to change files in / on an almost daily basis.

[0]: https://safeboot.dev


> [...] how practically no Linux distribution actually makes use of its potential (in terms of increasing security) [...]

FWIW, I double-checked with a Fedora developer; the above statement is incorrect. Fedora uses it (Secure Boot) to enforce lock-down on the kernel and then require code signing, etc.


Does that include the initramfs?


> Speaking of SecureBoot and how practically no Linux distribution actually makes use of its potential (in terms of increasing security)

In Ubuntu and Fedora you can’t load unsigned kernel-modules when using secure boot.

How does that not increase security?


Ubuntu's (and to my knowledge also Fedora's) boot chain is not fully validated. An evil maid can easily swap out the initramfs.


I'm pretty sure both Ubuntu and Fedora installers display and have the option for MOK enrollment, when you have SB enabled and you select ~"Install additional drivers", meaning you can install your own modules.


Any protection that makes sense would use a real encryption mechanism, not an obfuscation like UEFI. The "secret" for UEFI SecureBoot is embedded in the "secured" system, so it is already in the wild (i.e. outside of the brain of the user) - not a secret at all.

So I believe that a meaningful modern security setting would be based on some dm-crypt/luks ciphered storage and a passphrase to unlock it.


I saw a similar report from Red Hat not long ago. Like ~30 min ago.

Severity was reported as "Moderate", but it's enough that we'll patch soon.


[1] https://eclypsium.com/2020/07/29/theres-a-hole-in-the-boot/

The mighty GNU moos MÄGYCK! Now hurry, put your seven-league boots on :-)


Meanwhile, I just disable secure boot to be able to use the Liquorix kernel.


So this says it impacts Windows, but it seems to only be an issue with grub?


There are only two references to Windows in the article. The first one says Microsoft might push an update to the UEFI revocation list blacklisting the vulnerable binaries. The second mentions that dual-boot systems are affected, basically a reminder that if you're dual-booting with SecureBoot enabled, you need to make sure you've got the new non-vulnerable binaries installed on Linux before any updates to the revocation list (applied in the previously-mentioned possible Windows update) prevent you from booting it.


And that if the attacker has admin and physical access to Windows they can just install GRUB from there, then exploit that to install a rootkit to persist their access.


Do you really need physical access for that?

Grub is not signed by Microsoft CA, only shims are. So the exploit is installing an old shim and a vulnerable grub.


So this just caused me to google 'arch boothole' and umm... NSFW


I applied this update and my machine didn't boot afterwards

turns out apt-get uninstalled the signed grub2 image


The signed image's package is only removed if you have a conflict you somehow manually created. What command did you actually run?


That's why you should be careful when using dist-upgrade or full-upgrade. safe-upgrade wouldn't remove packages.


[flagged]


C is so old I wouldn't be surprised if harping on it was considered passé by the time Visual Basic was released. Language "elitists" don't comment on C often because everyone knows that it's a terribly designed language. And anyone who actually says C is elegant, or downplays its flaws, is often too deluded or spiteful to be convinced otherwise.

The problem with C is that after so many years of use, many of developers don't have a choice. C is essentially COBOL but with orders of magnitude more code in use. So you can complain about how legitimately horrendous the language is, but it doesn't change anything.


C is a bad language, but it's still less bad than all the others a reasonable person would use to write an OS (so functional languages don't apply).


No, I wouldn't say C is the least terrible option. Languages like Ada/SPARK, C++, D, Rust, and Zig all outclass C in terms of safety and usability without sacrificing performance. And although ATS has many sharp edges, it demonstrates how a functional language can have the exact same profile as a C program.

I think it would be a profound mistake to write a new OS in C. The safety issues are reason enough to abandon the language.


From a language design perspective, C++ is worse than C. From a practical perspective, C++ can be better than C if the programmers are good and define a sane subset of the language to use.

D has garbage collection by default. I know it can be disabled, but when 80% of people use the default, using anything else means you're a second-class citizen.

Ada/SPARK I haven't looked enough into, mainly because Ada's extremely verbose syntax puts me off, and I guess most system programmers have a similar opinion.

Rust and Zig have big potential to become good languages, but they're too immature right now (no standard, only 1 viable implementation, etc). Maybe in 10 years using Zig to write an OS will be a no brainer, but not yet.




