Fedora considers deprecating legacy BIOS (lwn.net)
176 points by bitcharmer on April 21, 2022 | 222 comments



> UEFI is defined by a versioned standard that can be tested and certified against. By contrast, every legacy BIOS is unique.

The "standard" for BIOSes was at first the IBM PC ROS's Reference Manual, and later the PS/2 Reference. Naturally, many vendors failed to implement it correctly. But the problem with EFI is the same. Still hoping that someday, EFI netboot support will be something usable.

I once considered using EFI as a basis for booting some of my bare-metal work, but:

- it requires the GPT partitioning scheme, which in turn requires handling UUIDs. Their endianness is neither little nor big: it's mixed, within the same value. This is a quirk inherited from Microsoft.

- the EFI system partition uses the FAT file system, an FS that is even older than the legacy BIOS, with many gory details. Legacy from Microsoft.

- EFI "executables" are actually windows executables, who still start with the 'MZ' header from MS-DOS. The real mode code for displaying "This program cannot be run in DOS mode" is, depending on the toolchain, sometimes still included. Also legacy from Microsoft.

I don't see why Linux users should be enthusiastic about adopting this.


- previously, x86 MBR-style partition tables (which were the only thing really supported in Linux) gave you no strong semantic information about what a partition was[1]. GPT may involve GUIDs, but in the grand scheme of things that's a small part of the cost of mounting stuff.

- FAT is old, and FAT is well-supported by basically anything, and what features do you want in the partition that contains your bootloader that FAT doesn't support?

- Yeah, in an ideal universe we wouldn't have to deal with PE binaries or with the Windows 64-bit calling convention for jumping into the firmware, but we've solved all of this shit and it just isn't a big deal any more. We can look at any platform and complain about the implementation details, but at least this one is better documented than the BIOS interface ever was.

In summary: Linux has to boot on computers that exist, and most computers that exist have UEFI. The Linux community has had the opportunity to make meaningful improvements to the UEFI spec in a way that wasn't true with BIOS. UEFI isn't ideal, but it's better than what came before in this respect.

[1] Partitions could be identified as "Linux", but that gave you no information about what they were or where they should be mounted. Current systemd-driven development has allowed us to define the partition mount point as part of the GPT data, which means partitions can be automatically mounted in the correct place without static configuration of the partition layout.


> what features do you want in the partition that contains your bootloader that FAT doesn't support?

Well, I'd love to be able to drop a bunch of 8GB LiveDVD disk images (notably, FAT doesn't support files this big) on a bare file system and see them in the computer's built-in boot menu immediately. I already enjoy the fact that I don't need a traditional boot loader (like GRUB) to handle multi-OS, as I can have 2 independent EFI boot partitions (one for Windows and one for Linux) and use the computer's boot menu to choose which to boot from.

Ideally the whole OS should be just a read-only boot image and a traditional partition should only be used for config/data files IMHO.


> I can have 2 independent EFI boot partitions and use the computer's boot menu to choose which to boot from.

You don't need 2 EFI boot partitions for this -- you can have multiple boot loaders in the same EFI partition, each with its own entry in the boot menu. In fact, this is how I boot: the default entry boots the Linux kernel directly from the EFI partition (some UEFI implementations require the kernel to have a .efi extension, others don't), and I have separate fallback entries for rEFInd and shellx64 in case I need to boot with different kernel parameters.
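For anyone wanting to replicate that setup: entries like these can be created from Linux with efibootmgr. A minimal sketch, assuming the ESP is partition 1 on /dev/sda; the labels, kernel file name, and root device are placeholders that vary per system:

    # direct-kernel entry: the kernel's built-in EFI stub acts as the loader
    efibootmgr --create --disk /dev/sda --part 1 \
        --label "Linux (direct)" --loader '\vmlinuz-linux.efi' \
        --unicode 'root=/dev/sda2 rw initrd=\initramfs-linux.img'

    # fallback entry pointing at rEFInd, installed in the same ESP
    efibootmgr --create --disk /dev/sda --part 1 \
        --label "rEFInd" --loader '\EFI\refind\refind_x64.efi'

Both entries then show up in the firmware's own boot menu; no second partition required.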


> Well, I'd love to be able to drop a bunch of 8GB LiveDVD disk images (notably, FAT doesn't support files this big) on a bare file system and see them in the computer's built-in boot menu immediately. I already enjoy the fact that I don't need a traditional boot loader (like GRUB) to handle multi-OS, as I can have 2 independent EFI boot partitions (one for Windows and one for Linux) and use the computer's boot menu to choose which to boot from.

Different systems have different constraints. For example, loading the firmware interface on my systems so it can present a boot menu is way, way slower than rEFInd or GRUB. And it presents an ugly menu in a non-native resolution for my monitors. And it doesn't let me override kernel parameters ad hoc at boot time if needed (though it has been a number of years since I have, I'm reluctant to let go of the option).


Ventoy can do this: small EFI partition with keys you can enroll in SecureBoot and a fat second partition you drop all your ISOs onto.


GRUB can also do this. In fact, I'd be in favour of deprecating GRUB as the default for UEFI on Linux distros, as it's a huge codebase (including lots of legacy things) which is overkill for booting a single OS (the Linux kernel itself can be a UEFI application, no bootloader needed). But for this use-case it's perfect, as it's essentially its own mini-OS and can handle a wide array of storage stacks, including LUKS, Linux mdadm RAID, LVM, etc.
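For reference, the ISO use-case relies on GRUB's loopback device. A hypothetical menu entry looks roughly like this; the ISO path and the kernel/initrd paths inside the image are assumptions that depend on the particular live image:

    menuentry "Some live ISO" {
        # attach the ISO file itself as a virtual device
        loopback loop /isos/some-live.iso
        # boot the kernel from inside it; the iso-scan hint tells the
        # initramfs where to find the image again at runtime
        linux (loop)/casper/vmlinuz iso-scan/filename=/isos/some-live.iso quiet
        initrd (loop)/casper/initrd
    }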


To be fair - some distros are moving away from GRUB as the default.

Pop!_OS uses systemd-boot (formerly gummiboot) by default if you're on a UEFI system, and only falls back to GRUB for legacy BIOS.

Arch is also much easier to set up on systemd-boot.

The issue is that GRUB still has a very compelling support matrix - it'll work basically everywhere, and with almost all configurations. So if you're running a batteries-included distro, where someone else is doing most of the configuration and the downstream systems are hugely variable (old consumer hardware) - then GRUB still makes the most sense.


That's a reasonable desire, but it's also not something that's supported by traditional BIOS - UEFI isn't any worse in this respect


The first and only instruction that runs is on the "read-only" boot image, which can be the whole OS. The solid-state image is read-only because the tab is physically in the read-only position. That is the entire specification for booting.


> notably FAT doesn't support files this big

ExFAT can support those. You could also use the UDF file system.


But exFAT lacks FAT's virtues of being universally supported and royalty-free, yet is still a very dumb FS with no journal (which means unreliable) and no extended attributes (which makes data-metadata separation impossible), so I don't see a reason for it to exist anywhere outside severely resource-limited embedded applications. I would rather use Ext4 everywhere for everything. Is the problem preventing wide adoption of Ext4 the GPL?


Seemingly exFAT is especially unreliable... The Nintendo Switch community's first piece of advice when you buy one is: if you buy an SD card formatted with exFAT, reformat it to FAT, because exFAT will eventually cause your files to be corrupted...


That may have more to do with Nintendo's implementation. The exFAT support that was added to the Linux kernel a few years ago has been fine.


The Switch has a bad exfat driver.


An EFI partition is mostly read-only, so I don't think that's a particularly large problem?

> I would rather use Ext4 everywhere for everything. Is the problem preventing wide adoption of Ext4 the GPL?

The BSDs don't support ext{2,3,4} particularly well, and Linux doesn't support FFS/UFS particularly well. I mean, there's support for these things, but it's far from complete or perfect and it's taken a long time.

It's just a fair amount of effort to implement filesystems well, and there's very little tolerance for errors. Ext4 isn't spectacularly complicated, but it's not exactly a simple FS either. I think that has more to do with lack of support than anything else: lots of effort for not all that much practical benefit.


NTFS hasn't been widely adopted by firmware manufacturers either. I don't think firmware manufacturers want to spend time implementing support for even the NTFS/ext/APFS era of filesystems, never mind anything newer like ZFS or btrfs.


NTFS was also undocumented, and whatever the ntfs-3g folks or others working on alternate implementations figured out, it was via reverse engineering.


Sure, but the manufacturers presumably pay for licenses for the AMD AGESA, etc., so they could pay for an NTFS license if they wanted to.

And ext has plenty of documentation and they haven't implemented that either.

So I think keeping down the software complexity is the more likely explanation. Down to:

1. The limited software investment these companies make (often just buying and reskinning firmware from AMI)

2. The limited space available on the ROMs due to hardware cost savings (e.g. companies have had to drop GUIs or support for less popular APUs to add support for new generations of mainstream CPU in firmware updates)


> NTFS hasn't been widely adopted by firmware manufacturers either.

Although very complex and very undocumented, NTFS has actually been adopted very widely and quite reliably. Most pre-smart TVs and set-top boxes can read FAT and NTFS USB drives, which makes NTFS the only choice if you want movies exceeding 4 GiB.


In fact NTFS is available on most commercial UEFI implementations. You can use NTFS to format your Windows install pendrive...


ExFAT is not supported by EFI.


> what features do you want in the partition that contains your bootloader that FAT doesn't support?

Atomic updates and not getting corrupted if the power is pulled/lost while the filesystem is being written to.


I suspect you're thinking about FAT as a general-purpose filesystem rather than as the EFI System Partition (ESP). For the latter case, one simply serializes access to the FAT/directory and renames files into their final resting place. That is going to be as robust and uncorruptible as any of the more "advanced" file systems' journaling mechanisms. Sure, you might have a lost FAT chain, or a FAT chain mismatch between FAT copies, but it's not going to cause a boot problem, and a FAT "fsck" operation done during boot is going to be the equivalent of throwing the incomplete journal entries away, and probably just as fast, given most ESPs contain less than a dozen files.

So, not a problem, with the huge advantage that FAT can be implemented/validated/etc. in a few dozen lines of code.
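A sketch of that update pattern in shell form (paths are illustrative, not any distro's actual update script):

    # Write the new loader under a temporary name, flush it to disk, then
    # rename it over the old one. On FAT the rename is close to a single
    # directory-entry update, so a power cut leaves either the old file or
    # the new one in place (plus, at worst, a lost cluster chain for fsck).
    cp grubx64.efi.new /boot/efi/EFI/fedora/grubx64.efi.tmp
    sync
    mv /boot/efi/EFI/fedora/grubx64.efi.tmp /boot/efi/EFI/fedora/grubx64.efi
    sync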

KISS


> previously, x86 MBR-style partition tables (which were the only thing really supported in Linux) gave you no strong semantic information about what a partition was[1]

LVM appears to solve this issue as well as the related issue of only supporting a limited number of partitions in BIOS.


I believe that btrfs can be whole-disk and avoid partitions entirely, which would drastically simplify things.

The only snag is swap, which I don't believe can be on a subvolume.

vgchange is a struggle for me, from the first time I saw it in HP-UX.


> The only snag is swap, which I don't believe can be on a subvolume.

Linux can make swap a regular file, even. It doesn't need to be a partition.

Google result shows: https://wiki.archlinux.org/title/swap#Swap_file
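A minimal sketch of setting one up (size and path are arbitrary; on btrfs the file additionally needs copy-on-write disabled, or use `btrfs filesystem mkswapfile` on newer btrfs-progs):

    dd if=/dev/zero of=/swapfile bs=1M count=4096   # 4 GiB, no sparse extents
    chmod 600 /swapfile                             # must not be world-readable
    mkswap /swapfile                                # write the swap signature
    swapon /swapfile                                # enable it immediately
    echo '/swapfile none swap defaults 0 0' >> /etc/fstab   # make it permanent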


And it's something you want to do for security reasons anyway, since Linux by default isn't encrypting swap partitions. Putting swap on a LUKS-encrypted partition is a bit of a PITA, but it allows one to hibernate/resume without fear that one's private keys end up stored on disk in plaintext.
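For the simpler, non-hibernation variant, swap can be re-keyed with a throwaway random key on every boot via /etc/crypttab; the device name here is an assumption. (Hibernation instead needs a persistent LUKS volume so the resume image can be read back, which is where the PITA comes in.)

    # /etc/crypttab: map the swap partition with a fresh random key each
    # boot; nothing readable survives a power-off
    cryptswap  /dev/sda3  /dev/urandom  swap,cipher=aes-xts-plain64,size=256

    # /etc/fstab: point swap at the mapped device
    /dev/mapper/cryptswap  none  swap  defaults  0  0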


And relevant to the OP: you cannot hibernate on Linux with secure boot enabled, I think precisely because Linux doesn't know how to sign/encrypt the RAM dump (no idea what it's actually called).


Well, it's an artificial limitation on secure boot in the Linux kernel, pending some cleanups, and it's fairly trivial to work around if you're willing to comment out the line in question, build your own kernel, and sign it with a key of your own creation that you have enrolled in the firmware.

The problem is less about Linux being capable of encrypting/protecting the swap file and more about being able to assure that this is true. So, like many Linux kernel issues recently, it's less technical, more political.

So, as I mentioned previously, it's entirely possible with off-the-shelf distros to enable encrypted swap; the average user just has to choose between hibernation, assuring the swap is secure, and secure boot. It's a bit irritating, but it seems to be low priority, as the focus appears to be on suspend, or on hibernate without secure boot.


Until kernel 5.0, swap files were not supported on btrfs.

There are also some limitations; see https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#swapf...


I don't think these limitations are that major, and not everybody uses btrfs either.


Most (all?) Linux filesystems work just fine against raw devices. They are simply unaware of the difference. The larger problem is firmware trying to find partition tables/etc. in the middle of a filesystem, but even then it's mostly a non-issue, because random data doesn't tend to look like partition tables or MBRs.


FAT filesystems tend to get their cluster chains corrupted, and then all your files are suddenly truncated to exactly one cluster.

So I'd rather use ext3 or NTFS, because of journaling, and not FAT.


[flagged]


Matthew left Red Hat 10 years ago. If you're going to "full disclosure" people, you should probably do a better job with the details.


WorkED for Red Hat, per Wikipedia: https://en.m.wikipedia.org/wiki/Matthew_Garrett

Right now he's at Aurora.


I don't think this is an important detail; if anything, it may skew people against them, as it seems they have corporate interests.


[flagged]


I'm responsible for a great deal of the UEFI support on x86 Linux systems. You're absolutely free to have concerns about a lot of the political and social positions I hold, but this is a topic where I am literally a domain expert. If you disagree with me on this subject, present technical arguments.


As a newcomer to this thread, just wanted to say thanks for bringing your domain expertise to this discussion even though you're certainly not obligated to do so. And, though I presume you've been paid for at least some of the work you've done on Linux UEFI support, thanks for that as well. I'm thankful that someone is willing to go to the trouble to make Linux relatively easy to boot and install on modern PCs, though I imagine the desktop Linux space isn't very lucrative.


As an outsider to this debate, you’re the hostile party. You’re replying with name calling to someone who gave you a respectful and detailed answer.


blueflow's favorites list is remarkably revealing.


I fixed it! Thanks for reminding me.


Check the screenshot I posted in your sister post and guess why the font colors are off.


I am actually for keeping the legacy scheme around for a good while, mostly because there's plenty of hardware even now that relies on it.

But UEFI is more sane. BIOS systems map the first 512 bytes, which hold both code and the partition table, into RAM. Into that you need to cram enough code to locate stage 2, which (I think) tended to be located in the alignment gap between the MBR and the first partition. Stage 2, once loaded, needed enough logic to find and load your boot partition, load your config from it, parse the kernel list, and finally load one of those kernels. I seem to remember GRUB hard-coding where stage 2 actually was, although I might be wrong about that. Either way, it was a kinda fragile mess.

Sure, every boot process needs to get enough logic loaded to do the next thing, but with EFI this is all taken care of during EFI's various internal loading phases, nicely located in code on onboard flash, where it probably should be. By the time it comes to look for what you want to load, you are already in long mode, and we aren't hiding stuff in places not covered by a partition definition. It is kinda annoying they didn't consider a boot filesystem with large enough file storage for big images (exFAT, UDF, some other reasonable choice; we have the filesystems), but we can cope with that.

The process for detecting bootable partitions and updating boot entries is a lot, lot saner than BIOS systems tend to be. Sure there are details of the design I don't like, and I kinda agree with your points there too, but substitute those for better choices and you still have a reasonable overall framework.


What I like about BIOS is how simple-stupid it is. It reads a bunch of bytes in and, boom, starts running them. Partition tables and multiple stages: yes, that is the convention, but you don't have to adhere to it if you don't want to. Having to change into long mode, yes, that is an important wrinkle, I'll give you that. But either way, the design is so simple that you could write a little hello-world MBR program with a hex editor. It's a handful of bytes.
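To illustrate the claim, here is a complete boot-sector "hello world" in NASM; a sketch only, assembled with `nasm -f bin` and written to sector 0 of a disk or image:

    org 7C00h                   ; the BIOS loads the boot sector at 0000:7C00
    bits 16

    start:
        xor ax, ax
        mov ds, ax              ; DS = 0 so [msg] resolves correctly
        mov si, msg
    .print:
        lodsb                   ; fetch the next character into AL
        test al, al
        jz .hang                ; NUL terminator reached
        mov ah, 0Eh             ; BIOS teletype output
        xor bx, bx              ; page 0
        int 10h
        jmp .print
    .hang:
        hlt
        jmp .hang

        msg db 'Hello from the MBR!', 0

    times 510-($-$$) db 0       ; pad to 510 bytes
    dw 0AA55h                   ; boot signature the BIOS checks for

You can try it without real hardware via `qemu-system-x86_64 -drive format=raw,file=mbr.bin`.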

OK, contrast that with UEFI. There are FAT boot partitions and PE executables and whatnot. That's a lot more complexity.


The legacy BIOS boot system and MBR format has some real issues that could be improved but this just sounds like an ideological battle against things inspired by Microsoft rather than legitimate technical concerns. As far as I know, despite the things you list having Microsoft roots, they are well-documented and have free/libre implementations so Microsoft influences are not really a problem.

The main problem you have is that convincing manufacturers and proprietary OS makers (including Microsoft) to switch to a hypothetical UEFI replacement free of those issues is going to be an impossible task, while UEFI is already here and while not perfect appears to be better than the legacy BIOS boot system.


> - EFI "executables" are actually windows executables, who still start with the 'MZ' header from MS-DOS. The real mode code for displaying "This program cannot be run in DOS mode" is, depending on the toolchain, sometimes still included. Also legacy from Microsoft.

No. They are not Windows executables. They won’t run under Windows at all for multiple reasons. They are PE executables.

PE is the executable format developed for Windows, and yes, it is backwards compatible with MZ executables, so it has the now-confusing MZ header even though that doesn’t make sense under EFI. It also has some other weird details kind of hardcoded in, like some of the data directories, many of which don’t make that much sense under EFI. Worse, it’s a little weird to parse, with many things being deduced by using known structure sizes combined with offsets specified in fields.

However, PE as a format is totally fine. Good, even. In its purest form, it really doesn't have that much baggage. It flat out has some advantages over ELF, whose symbol table nonsense is notoriously complicated. Not so under PE: explicit table of exports, explicit table of imports. The imports go to specific modules. Symbols don't conflict. It's got cruft, but it's very simple to write a parser or loader, and I've done so a number of times before. With PE, the runtime linking is handled by whatever OS is running, be it EFI or Windows; none of the INTERP stuff is needed.
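To give a feel for how little is involved, here is a hypothetical minimal header walk in Python (offsets are from the published PE/COFF layout; error handling omitted):

    import struct

    def pe_machine(path):
        with open(path, 'rb') as f:
            header = f.read(4096)           # plenty for any sane header
        assert header[:2] == b'MZ'          # DOS header magic
        # e_lfanew at offset 0x3C points at the PE signature
        (e_lfanew,) = struct.unpack_from('<I', header, 0x3C)
        assert header[e_lfanew:e_lfanew + 4] == b'PE\0\0'
        # the COFF header follows the signature; Machine is its first field
        (machine,) = struct.unpack_from('<H', header, e_lfanew + 4)
        return hex(machine)                 # 0x8664 = x86-64, 0xaa64 = ARM64

    print(pe_machine('BOOTX64.EFI'))        # hypothetical path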

Frankly, I’d be OK with using PE on Linux too. When I was a kid, I toyed around with making a binfmt for PE. Of course I didn’t finish (most notably, I never figured out how to do linking, since the ELF binfmt didn’t really do that) but it’s quite straightforward to get the actual loader going.

As for the rest of the grievances... As far as I know, FAT and GPT are only required to be supported by EFI, not required to be used. Some EFI implementations support other filesystems for the ESP (like Apple with HFS+), and I've not used one that won't happily use an ESP on an MBR.


> No. They are not Windows executables. They won’t run under Windows at all for multiple reasons. They are PE executables.

Starting with the fact they have a dedicated set of "subsystem" values: https://docs.microsoft.com/en-us/windows/win32/debug/pe-form... - the win32 loader won't load anything that isn't the CUI or the GUI Windows one, although the kernel also knows about the native one. You'll just get the "this is not a Windows application" messagebox.

With NASM, you can also override that linker stub. For example, save this to a file (say, stub.asm):

    org 100h
    
    start:
        mov dx, msg
        mov ah, 9
        int 21h
        mov ah, 4Ch
        int 21h

        msg db 'Are you still running DOS in 2022? Wow! 640K enough for you!',13,10,'$'
and assemble it with

    nasm -fbin -o stub.com stub.asm

Then, if you are using Microsoft's tools, you can use LINK.EXE <usual args> /STUB:stub.com and voila: if anyone _happens_ to (try to) run your code under DOS, they'll get an amusing message.

If you want it to take up minimum space, use this one:

    org 100h
    
    start:
        mov ah,4Ch
        int 21h
which will just exit the program if run under DOS.

Apparently the header is hardcoded in MinGW, but you could easily binary-patch it in the resulting executable.

I think I read somewhere you can also remove the stub completely, but I've never tried it.

Edit: for my last comment, yes, you can definitely drop it: https://stackoverflow.com/a/9659538.


Just for the record: I don't know about Linux, but on Windows (at least up to 10) it is perfectly possible to use MBR-partitioned media to boot in UEFI mode. That is, even if generally speaking GPT is "tied" to UEFI, GPT partitioning is not a requisite unless mass storage larger than 2 TB with 512 bytes/sector is used. (4K-native disks are generally not bootable for a number of operating systems, but they are rare as boot media anyway.)

Vice versa, it is possible with some loaders to boot BIOS from GPT (not easy-peasy or straightforward, but it can be done).

As well, FAT is only a requisite if there is no EFI driver for the filesystem; as an example, some motherboards and Rufus provide an NTFS driver that allows booting from NTFS volumes under UEFI.

Still, while in theory GPT and UEFI have a few advantages, there is not in practice (yet) any meaningful reason to remove support for BIOS. Though lately a number of notebooks have UEFI-only firmware (no CSM, aka BIOS), I cannot see how removing an option/choice from a distro can be a good thing.


IIRC Windows 11 will no longer boot from MBR partitions & you have to convert them to GPT.


It will; there are tweaks out there that explicitly allow for installing Windows 11 on an MBR partition.

Hint: download the latest version of Rufus, point it to a windows 11 iso and select the "Extended installation" mode. The result will happily install itself onto an MBR partition.

I performed an upgrade (!) of a windows 10 installation to windows 11 in this manner; I'm writing from this SSD right now.

  PS C:\Users\alexa> $(get-disk | ? {$_.IsSystem}).PartitionStyle; [System.Environment]::OSVersion.Version 
  MBR
  
  Major  Minor  Build  Revision
  -----  -----  -----  --------
  10     0      22000  0


> UUIDs ... endianness is neither little nor big: it's mixed, within the same value

UUIDs do not have any endianness. They are just a sequence of individual bytes.

It's true that some ways of generating UUIDs work by using the bytes of longer numbers, which I can believe use mixed endianness (I don't know them well enough to remember myself). But no one should be relying on that when reading UUIDs back out, except maybe for debugging purposes.

Or does GPT require interpreting the parts of UUIDs? If so, that is the real problem.


It's relevant for converting them to and from their string representation.

https://en.wikipedia.org/wiki/Universally_unique_identifier#...


Interesting link, thanks, I wasn't aware of that. The link seems to suggest that the odd mixed-endian string representation ("Variant 2") is now quite rare.

With the one that's more common ("Variant 1"), I'd say that I *slightly* disagree that representing the byte sequence 99,aa,bb,cc as 99aabbcc is "big endian". If you don't philosophically think of the latter as a number, then it's just the bytes written out compactly. But I do see that if you think of it as a number then it's big-endian, especially if you're distinguishing it from Variant 2.


Variant is a very abstract concept, mostly for humans; programs pay no attention to it and serialize all kinds of UUIDs the same way.


A UUID is defined as a sequence of 6 integer fields of various sizes. Their conversion to text is fairly straightforward, but serialization to bytes may vary depending on how you do it. "Whatever ends up in memory" is the Microsoft little-endian format.
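Python's uuid module happens to expose both serializations, which makes the difference easy to see (a quick illustration):

    import uuid

    u = uuid.UUID('12345678-9abc-def0-1234-56789abcdef0')
    print(u.bytes.hex())     # 123456789abcdef0... - fields stored big-endian
    print(u.bytes_le.hex())  # 78563412bc9af0de... - first three fields
                             # byte-swapped: the Microsoft/GPT on-disk layout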


EFI doesn't actually mandate FAT for the system partition. The system partition can be any filesystem that the firmware supports.

Of course, pretty much all EFI implementations only support FAT, so it's a bit of a moot point; the only one I'm aware of that supports anything else is the one on Intel Macs, which also understands HFS+.

You can find a huge selection of EFI filesystem drivers at https://efi.akeo.ie/ but they're derived from GRUB and hence GPL, so don't expect the likes of American Megatrends to be bundling these any time soon.


Well, splitting hairs: it mandates that everyone support FAT. Individual system+OS vendors can add their own filesystems, but then it's vendor lock-in.

So, just do FAT and be done with it. Adding other filesystems is just a waste of time because the ESP only needs to store a half dozen files or so.

13.3.1.1: "The EFI firmware must support the FAT32, FAT16, and FAT12 variants of the EFI file system"


I think FAT is a fair baseline. It can be implemented in a few dozen lines of code. Support for it is ubiquitous. It is not encumbered by any active patents. All the alternatives I have seen proposed are both considerably more complex and not ubiquitously supported.


EFI is here, non-specialty Linux distros don't really have the option of not adopting it.

So the choice is between adopting it and keeping BIOS indefinitely, or adopting it and at some point in the future dropping BIOS.


False dichotomy: Not picking EFI does not mean keeping BIOS indefinitely.


What replacement do you propose, then?

Especially what replacement do you propose that works with the kind of typical hardware that non-specialty linux distros like Fedora want to support?

Intel systems make up a large chunk of the hardware Fedora is used on, and as far as I can tell they're all-in on EFI.

So Fedora has to use EFI to be able to boot on Intel systems.


OpenFirmware: https://en.wikipedia.org/wiki/Open_Firmware

Many systems get by just fine with a minimal ELF or multiboot loader and a firmware-provided devicetree specification. Dynamic hardware enumeration is performed by every OS anyway, the only hardware that must be initialized is the bootloader/kernel storage.

For the x86 platform, this could be implemented via coreboot (libreboot always reads to me as lib-reboot) with a multiboot payload.


Do you propose that Fedora not support EFI and tell its users to replace the firmware or only buy devices with it?

That would be a massive step backwards in hardware support.

Like I said: They don't really have the option of not supporting EFI.

Alternatively they can support EFI and OpenFirmware, but that increases the number of supported paths instead of decreasing them.


Actually, I do not understand what Fedora's problem is in this case.

The distribution needs to do almost nothing to support either EFI or the legacy BIOS or any other booting method.

That is the job of a bootloader package, not of the Linux distribution. For example I am using syslinux as the bootloader for all my computers, while grub is another example of a frequently used bootloader (but which seems to be excessively complex in comparison with syslinux).

I assume that by supporting only EFI Fedora means that they will remove all bootloaders from their installation image, so that the Linux kernel will be launched by its EFI stub.

I do not know about other bootloaders, but an installed syslinux package occupies only a few megabytes, maybe 10 megabytes at most, so deleting it cannot provide much space for anything else.

I cannot see how deleting a bootloader package may be claimed to be a significant simplification for the maintenance effort of the Fedora distribution.


1. They no longer need to package bios bootloaders

2. They no longer need to support bios booting in the forums - no asking "how do you boot this? Bios, EFI? Have you tried the other?", no telling people "EFI is required for feature X"

3. They no longer need to maintain the bios boot documentation

4. They no longer need to maintain the bios path in the installer/boot media

5. They no longer need to test the BIOS paths (or hope they don't break, and be ashamed if they do)

6. This makes it easier to switch to a EFI-only default bootloader (instead of Grub 2, which can do both)

Is this massive? No, it's possible to keep maintaining BIOS support. But it's not just freeing space on the package mirrors either.


I agree that restricting the boot method to EFI would reduce the testing time.

Grub 2 is complex to configure, but there are other bootloaders that are much easier to configure, e.g. syslinux.

Restricting the boot method to EFI does not reduce the need for documentation in any way.

The user must still be instructed to enter the BIOS setup and verify that their computer is not configured to boot in legacy mode, which would prevent booting. The user must also be instructed to enter the BIOS setup even if EFI mode is used, because the installation media might not boot anyway if a wrong boot order is configured for EFI booting, and it must be changed.

The most complex part of the installation is not the booting, but identifying the device where Fedora should be installed, which might be needed to be reformatted and repartitioned.

So a lot of documentation is needed in any case, for novice users.

Removing legacy-mode booting increases the chances that the installation media will not boot without the user having to modify the BIOS setup, so it increases the chances of the user having to seek support in the forums.


>Grub 2 is complex to configure, but there are other bootloaders that are much easier to configure, e.g. syslinux.

~~Yes but syslinux is bios-only. So if you have to support EFI too (and you do because there's EFI-only hardware), you now either need to support syslinux and an EFI bootloader, or a bootloader that supports both like Grub 2.~~

Edit: The article seems to suggest that syslinux can be removed if bios boot is no longer supported. I read that as it being bios-only, but it seems to support EFI?

>Restricting the boot method to EFI does not reduce the need for documentation in any way.

You still need documentation, yes. But you no longer need any documentation for booting with BIOS.

You no longer need to say "X is only supported in EFI, if you boot via BIOS you need to do Y" or anything like that, and keep those parts updated.

The part of the docs that says "To boot EFI, do X. To boot BIOS, do Y" can be cut down to "To boot Fedora, ensure X".

That is a reduction in the amount of documentation.


You arguably don't need a bootloader at all with EFI. You could just use the efistub.


Yes, it is easy to make a bootable device that can use either syslinux to boot when legacy BIOS is used or efistub to boot when EFI booting is used, eventually loading the same kernel.

No other bootloader is needed.


Or systemd-boot. I will admit I am using Grub2 on my current Linux installation but I never liked grub. I think grub sucks.


> The distribution needs to do almost nothing to support either EFI or the legacy BIOS or any other booting method.

I think you're seriously underestimating the amount of effort the bootloader and hardware enablement teams who work on Fedora put in to making _systems boot Linux at all_.


No, I don't propose that Fedora not support EFI. I do propose that Red Hat use some of its parent company's clout to push hardware vendors in a more open direction.


"Many systems get by just fine with a minimal ELF or multiboot loader and a firmware-provided devicetree specification"

Not really; none of those ecosystems has a fraction of the device variation that x86 has. When they do (ARM), it's a giant mess of incompatibility and non-working hardware. Modern DTs are basically still tied to the Linux kernel the same way that the old ARM platform descriptions tied firmware IDs to individual kernel configurations. Which is why the answer to so many ARM problems is "match your DT to the kernel revision"; god help you if you're trying to multiboot a *BSD/etc. as well.

PS: OpenFirmware is basically dead. That might have been a valid answer in 1998, but even IBM/etc. provide alternative boot mechanisms for Linux/PPC at this point. The only thing that comes close to a current replacement is UEFI.


But OEMs didn't choose OpenFirmware. They chose UEFI. Therefore, Linux must support UEFI, and that's where developer effort will go.


I have encountered various embedded computers which no longer have the legacy BIOS boot option, so using EFI is indeed necessary.

However, the vast majority of server or desktop motherboards and of laptops still have the option for legacy BIOS booting, even if the option may be difficult to find in the BIOS menus.

I have about a dozen servers, desktops and laptops and I have configured all of them to use legacy BIOS booting, because EFI booting does not have any advantage, only disadvantages.

It would have been very easy to replace the ugly legacy BIOS booting method with a simple and clean method for booting, but unfortunately those who made the EFI specification have failed to achieve this goal.


>because EFI booting does not have any advantage, only disadvantage

For an already-installed system it doesn't really matter, so the point is moot here.

But for new system setups I would always prefer UEFI (with or without Secure Boot), because it exposes HID management of the onboard/expansion-board controllers and a proper way to return to the management/boot menu in the case of a failed boot.

Sure, it doesn't always work well, but neither does BIOS/CSM.


I don't. I'm waiting for something else, until then, we'll stick with BIOS. Supporting EFI means implementing more legacy cruft than we already have with BIOS.


This is literally not an option - systems have shipped without BIOS compatibility for a long time now, so refusing to support UEFI is just not an option.


Except for some embedded computers with Atom CPUs and some enterprise-oriented laptops, I have not seen any systems without legacy BIOS compatibility.

Nevertheless, the option to enable the legacy BIOS booting can be quite hard to find in the BIOS menus, which may deceive many into believing that a system does not support legacy BIOS booting, even when it actually does support it.

On work computers belonging to a company, the BIOS configuration may be locked, so it might not be possible for the users to enable legacy BIOS booting.


From the article:

> Intel stopped shipping the last vestiges of BIOS support in 2020 (as have other vendors, and Apple and Microsoft), so this is clearly the way things are heading - and therefore aligns with Fedora's "First" objective.

You may not have seen many computers yet without legacy BIOS compatibility but this is going to be the norm for new computers very soon.


The last new PC that I have seen, and which still had legacy BIOS booting support, was a Dell laptop purchased in Q2 2021.

However, it was a model launched in the second half of 2020. It is indeed possible that the models introduced since 2021 might omit the legacy BIOS booting support.


I have a couple of AMD ASUS Vivobooks that don’t have CSM/BIOS boot available

Never noticed until I wanted to boot memtest.

It’s possible that I missed it, but Google results didn’t look promising either.


[flagged]


I don't think that's going to result in everyone shipping UEFI without BIOS compatibility suddenly changing their mind.


Let me take a moment and point out: unless you're using hardware from before 2005 or so, or one of the rare devices running a custom firmware, you literally are using UEFI when you think you're booting in BIOS mode.

Others have pointed out that the CSM is no longer being shipped. The CSM is the "Compatibility Support Module", and it's a UEFI shim driver that adds the legacy BIOS INTx operations to a UEFI implementation.

So, most people booting in BIOS mode on hardware built in the past 15 years or so are actually running UEFI with an extra shim.


Intel ME is also here, and newer systems absolutely do not have the option to disable it, despite all its flaws.

ME security vulnerabilities cannot be fixed in many cases, and it is an intolerable risk for some.

If you insist on a system that does not run ME at all, then the best you can run is a Core 2 Quad x9650 on BIOS.

Be mindful of what you are losing when you deprecate that machine.


Less legacy code for them to maintain. I see that as a win.


Fedora does not maintain any code related to legacy BIOS booting.

That is done by the maintainers of the bootloader packages.

What Fedora presumably intends is to remove all bootloader packages from the installation image.

In that case, the Linux kernel can be booted only by using its included EFI stub.

One less package in the installation image might be claimed to imply less maintenance work for Fedora, but in any case such maintenance work has nothing to do with the work done for maintaining legacy code.


Most of the issues you describe sound either trivial or non-issues entirely.

The mixed endianness of GUIDs sounds annoying, but also easily worked around. It seems worth putting up with for the resiliency offered by having the GPT duplicated at the beginning and end of the disk.

I'm not sure the UEFI system partition has any practical need for anything newer or more complex than FAT. It is a very simple file system for which support is ubiquitous, and is far more capable than an MBR boot sector.

The EFI executable format being derived from Microsoft's does not seem like an inherent problem, unless you can point out some meaningful limitation it has compared to an alternative. However, I do not know enough about binary executable formats to debate the merits of PE over ELF, so I could be missing something.

It seems to me the benefits of UEFI far outweigh the negatives here. And with UEFI now being ubiquitous in consumer and commercial hardware for about a decade now, I do not think a bleeding edge distro like Fedora dropping BIOS/MBR is a huge deal.


> - EFI system partition is the FAT file system, which is an FS that is even older than the legacy BIOS. Many gory details. Legacy from Microsoft.

I've always wondered why it was required that my EFI partition be FAT. I've had systems where that partition has gotten messed up and required an fsck from a recovery boot, and it seemed like it should be possible to do that automatically in some way; being able to use a filesystem that self-corrects would be nice too (although maybe overkill for the 1 GB or whatever I give it). I guess it would only be possible to perform the fsck if I booted from a different partition, which could then develop the same issue. This does make me wonder how silly it would be to have a duplicate boot partition on each of my machines that I only mount to sync over changes to the real one, so I could avoid needing to grab my flash drive when stuff like this happens...


You don’t like it because Microsoft has a hand in developing the standard?


I don't see why Linux users would care about this. It boots the system, nobody cares it uses some old MS tech.


It doesn't just "boot the system". The full EFI specification includes support for runtime services, i.e. proprietary code that keeps running even after your FLOSS operating system has booted. That's something Linux users should care about: it provides a backdoor for clinging to closed-source drivers on a pretend-open platform. There have already been proof-of-concept EFI viruses; it's only a matter of time until the first antivirus EFI service appears.


The UEFI runtime services provide functionality that is not generically exposed through any other OS environment. Linux could just refuse to provide those runtime features, and things would roughly work[1] - we'd still need to call some UEFI features in the boot stub, but the same is true of BIOS (look at what the 16-bit code does in terms of obtaining information that the kernel uses after init). If you want to drop all access to runtime services after kernel boot, you can by simply passing efi=noruntime to the kernel arguments.

[1] Some features would be broken, like recognising whether the system had booted successfully
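On a typical GRUB-based distro, passing efi=noruntime persistently looks something like this (a sketch; file locations vary by distribution):

    # /etc/default/grub: append the flag to the default kernel command line
    GRUB_CMDLINE_LINUX_DEFAULT="quiet efi=noruntime"

    # then regenerate the config (Fedora paths shown; Debian uses update-grub)
    grub2-mkconfig -o /boot/grub2/grub.cfg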


Even back in the BIOS days Linux overrode lots of BIOS stuff such as partition size limits once it booted.


Can efivarfs still be accessed after booting with efi=noruntime? If so, I might add that on my systems and see if anything breaks.


No, all runtime services are gone. You don't /strictly/ need them in order to boot - most Linux distributions will install a loader in the fallback path and the firmware will run it even if there are no boot entries.


Because "removing legacy" was one of the key arguments of EFI enthusiasts, and its still plenus bovis stercus.


Your point is a bit moot, because removing BIOS support will actually allow the maintainers to stop maintaining the BIOS parts, regardless of the fact that some other old things are kept.

Meanwhile, not removing BIOS support won't allow you to get rid of the aforementioned MS tech, such as FAT support.

Having said that, I think this is a wee bit early.


You made a factual mistake: MBR/BIOS based boot does not depend on a specific filesystem, nor does it depend on a specific executable file format except for the boot sector. You can safely remove FAT support from your kernel config, except when you need it for EFI.


> You made a factual mistake: MBR/BIOS based boot does not depend on a specific filesystem, nor does it depend on a specific executable file format except for the boot sector.

I never said that.

I said that removing BIOS support allows you to remove anything that provides BIOS support, and that choosing not to remove it won't let you get rid of the EFI dependencies, so it is moot to talk about old tech used by EFI.


"Full of faeces from cows" is and remains vulgar no matter how it's spelled.


There's nothing "vulgar" whatsoever about cow dung. It is an eco-friendly, renewable material and commonly used for many sacred rituals in India.


The GP literally used it as a weird coded insult to UEFI (for whatever reason), India doesn't really enter this discussion.


My short summary of this: pretty much every x86 client system since 2012 has shipped with working UEFI support (because Microsoft required it for new Windows 8 systems), and from a compatibility perspective Linux works Just Fine with basically all of them. Servers took a little longer (HP, especially, wanted to do things like just add GPT support to their BIOS implementation), but even that's in a good position now. The biggest concern I have is around cloud, where many providers still don't offer UEFI support.

I was on the Fedora technical committee in the past, and if I were still there I feel like I wouldn't go for this now. But I've also been very marginally involved in Fedora in recent years, and I don't think I have a good sense of what the tradeoffs are here. There are legitimate points where it's reasonable to say goodbye to the past, and maybe this is one of them.


Drawing the line at 10 years old is ... way too close for comfort. I could understand to drop i386 because it basically amounts to another entire architecture, and besides it seems with i386 you are also generally RAM limited which makes it harder to use a recent DE.

But a 10-year-old computer is perfectly capable of running even the latest version of the heaviest DEs.

Also, "working UEFI supports" means working enough to boot Windows and not much else. Kernel bugzilla still has lots of UEFI bugs open for early UEFI firmware, and even not-so-early UEFI firmware (cough efi=no_disable_early_pci_dma ).


My current desktop's motherboard and CPU are from 2009. I have no idea if the MB supports UEFI; it definitely boots to BIOS by default. I've upgraded the storage, memory, and GPU, but there's been little cause to waste money on a new MB/CPU. But I guess I wouldn't install Fedora on this machine anyway, so shrug.


Eep! efi=no_disable_early_pci_dma only does anything if CONFIG_EFI_DISABLE_PCI_DMA is set, and distros should not be setting that by default (it's a thing that works Just Fine in theory, and specific implementations may fail hard with it - eg, if ExitBootServices() triggers a callback that assumes that a PCI device is able to DMA, things may explode). It's a useful security feature (I mean, I wrote it, I would say that) but it can break even if implementations follow the spec perfectly.


You (and they) forget the part where graphics cards play an intimate role in MBR or UEFI boot. And there are still innumerable quite functional graphics cards that will never, ever allow a system to boot under UEFI. GPUs are deeply integrated into the boot-up process.

This is easy for IBM/Fedora to forget because GPUs probably don't matter much to them. They just take their intel integrated graphics on their workstation (or server) and go. And other linux devs are probably rich enough to buy a modern GPU.

But it is a huge issue, and it'll not have become irrelevant for at least another decade. Not being able to boot MBR will break (and prevent) much more than Fedora, LWN, or most of the threads here are aware of.


What graphics cards make it unable to boot via UEFI?


I have a UEFI system right here under my desk that provides legacy BIOS services for running option ROMs, video cards, SCSI host bus adapters, network adapters, anything. It even scans out their legacy video outputs in a little window. There's nothing incompatible between UEFI and legacy option ROMs. Perhaps you are thinking of Secure Boot.


>Perhaps you are thinking of Secure Boot.

Nope. I think there must be a miscommunication here so I'll be more explicit.

If you have CSM/BIOS mode on your UEFI-default motherboard and you set it to CSM/BIOS mode, of course you can run BIOS-based video cards.

But if you switch your mobo to UEFI boot (say, because you want to run future Fedora, or maybe boot off an NVMe storage device), you can't use your BIOS-firmware video cards (unless, like some Gigabyte cards, they shipped for a few years with both BIOS and UEFI firmware). It has nothing to do with Secure Boot or signing or any of that. GPUs are intimately involved (INT 10h in BIOS, and something cursed in UEFI GOP) in the first few operations on boot in both systems, and the firmware on the GPU has to be able to fulfill that role.

So Fedora removing BIOS boot effectively removes the ability to use most video cards ever made. Anything designed before 2015 has a decent chance of causing trouble.


"you can't use your BIOS firmware video cards"

Actually you can; you just don't get firmware boot support. Plenty of those boards work just fine in Linux/etc., because the ati/nouveau/etc. drivers reprogram the entire board using AtomBIOS/etc. when they load. The ARM/PPC/RISC-V people are all running the same PCIe boards as everyone else, and outside of a few cases they are doing just fine not running the x86 option ROMs.


A lot of firmware supports using CSM to do GPU init and then providing UEFI interfaces on top of that. Of course, this is incompatible with Secure Boot.


In fact, you can't really run any other legacy option ROMs without legacy VGA support. You can run legacy SAS option ROMs, for example, and still do UEFI boot because of similar support for Int13, but even running them requires VGA text mode to be working.


The majority of x86 computers I have are BIOS only, only one of them is UEFI and I only have it because I found it discarded on the side of the road. I don't use the UEFI mode though, since my current install doesn't support booting in UEFI mode (no GPT) and wouldn't support booting in BIOS mode if I switched it to GPT and UEFI.


> My short summary of this: pretty much every x86 client system since 2012 has shipped with working UEFI support

That is AFAIK not true for industrial boards. For example current PC Engines APU2 boards just have Coreboot-based BIOS without UEFI.


> Linux works Just Fine with basically all of them.

Do you remember this?

https://news.ycombinator.com/item?id=11008449


Yeah, turns out representing EFI variables as a filesystem was a mistake (sorry, that was my fault)


Oh god. So after Windows drops support for basically any PC that is 4 years old, a Linux distribution entertains the idea of following closely and dropping support for any PC that is 10 years old? Is this corporate influence?

I cannot believe BIOS boot support is even close to the amount of code required for UEFI boot.

Before you say "UEFI is much older than 10 years old", please remember that UEFI only started being a default ever since Windows 8 (10 years old). Windows 7 can't even boot on UEFI (it requires BIOS emulation to be active). I have a system from 7 years ago that while it claims to have an UEFI BIOS, it is so full bugs it is unusable except to boot Windows. I mean, critical bugs, as in, you boot Linux with UEFI just once, you risk the UEFI var storage getting corrupted and the motherboard becoming a paperweight.


Look, even if RHEL 10 is going to drop BIOS support sometime in the future, there are going to be some retro enthusiasts who would happily fork it with BIOS support restored.

Also, why are you so focused on the corporate decisions of a distro that is obviously catered to future devices?


"10 year old hardware" is retro ? Ivy Bridge is retro these days?

This hardware is not just usable, it is _perfectly usable_, up to the point you'd barely distinguish it Gnome-performance-wise from something from last year if you pair with an SSD (which is not so strange).

This is not a 486 with 4megs of RAM that requires a special DE. These are machines with many gigs of RAM that have graphic cards which are still better than some of today's low-end cards. If any change like this is ever accepted, drawing the support line at "10 year old hardware", it's going to be hard to ever again claim that Linux is good for the environment.


Yeah, it's funny: my workstation is a 12-year-old HP Z600, upgraded for like $30 to 2 CPUs/16 cores and 47GB of ECC RAM, and now someone tells me it's retro, but it's still often faster than new hardware.


You can use several GNU/Linux distros which support BIOSes, even libre ones.


I agree, and that's the strength of linux, but here's a slight devils advocate:

If someone is heavily invested into RH for their BIOS only infra, this could be an issue for them. I'm sure there's large operations that would need fixing up if these change happens.

Not that RH is obligated to cater fully to that crowd, of course. But 10 years is a bit silly, especially with how expensive hardware is now due to COVID.

All that said, it's not going to be an issue like everyone is making it out to be.

Individuals should have few problems forking mainline to keep BIOS support, and large organizations will be given ample time to upgrade (at least years; RH isn't stupid with its corporate customers).


RHEL has a 3 year cadence, so if (if) RHEL 10 makes this change it won't have been 10 years, it will have been 13 years.


Yes Fedora is "a distribution of linux". A distribution that is an upstream of an enterprise distribution. Why do you care if enterprise markets don't care about legacy support? Just use another distribution. That's one of the boons of linux.


Honestly, at some point we should think about deprecating old code and old stuff, and I think 10 years might not be such a bad number. Of course, a good deprecation policy would tell you a few releases in advance that something will get removed, but it's Linux, so even if they said "we are removing BIOS support in Fedora 40", everyone would cry.

Also, it takes about the same amount of code to boot via BIOS and UEFI.


>a Linux distribution

I think it matters quite a lot that the distro in question is Fedora, which is by design forward-looking and kind of a testbed. They have been shipping Wayland as a default since 2016, when it still had countless hardware and software problems. Part of the diversity of the Linux ecosystem is that distros can be opinionated.


Yes. I mean, if it were CentOS/Alma or Debian, I would be scared we were entering an era of discarding perfectly capable hardware, but Fedora has always been packed with "the latest, but stable enough".

I'm not a huge UEFI fan (I've used boot-repair a lot of times since its creation), but I can see why maintaining a legacy bootloader could be distracting.


Wayland! Ye gods. I tried. I really did. I understand that from a developer standpoint, it's the future. I understand that from a security standpoint, it's more secure. From a user standpoint? It completely breaks so much functionality (copy/paste, screen share, blue-light filters, etc.). Video players crash and sometimes take down the whole OS with them. It puts so much responsibility that used to be the job of the X server onto the window managers.

This shift to wayland has been one of the most user hostile decisions I've seen. Every distro that defaults to wayland pushes me that much closer to simply running linux in a WSL container on windows.


Wayland has changed since you last tried. I haven't got copy paste or blue light issues with it since Fedora 35 went with Wayland-by-default. Screen share is iffy on Electron, but that behemoth is slow to pick up new technologies.

I don't know about video players crashing either.

When talking about a fast moving new piece of technology, it's good to give it a try once in a while and not let your 4 year old experience colour your judgement.


I gave up last week. VLC and Celluloid were crashing under GNOME. They worked under Sway, but I couldn't find a way to install a Redshift Wayland clone that worked. There are a whole lot of things that still simply 'don't work', especially if you venture outside of GNOME and KDE.


VLC doesn’t support wayland. It’s an X application.

take a look at https://arewewaylandyet.com/ to find some alternative app for your X-only software

Also make sure to alias vlc to your new alternative; I keep forgetting I don't have VLC installed anymore...


I tried using Celluloid, which is a GNOME wrapper around mpv, but it stopped working. Running it on the command line, I'm presented with an exception and admonished that Wayland support is 'experimental' and known to be unstable.


I use Celluloid on Wayland; it has never crashed. It'd be helpful to know what distro you are using. Fedora has the best Wayland experience of them all; others, not so much.

You wouldn't have a nice Wayland experience on Ubuntu 20.04 LTS for example.


>You wouldn't have a nice Wayland experience on Ubuntu 20.04 LTS for example.

That's a bingo! :^(

Looking to move to an Arch flavor, but I need to figure out how to back up ~500GB of stuff I don't want to lose.


If you were using GNOME, then it has a redshift clone built in.


Yes, GNOME has Night Light, but unfortunately VLC crashes under Wayland and takes the entire OS with it. I haven't discovered any smoking guns, but I assume it's some sort of OOM issue, which is funny on a system with 8GB of RAM.

Switching to gnome on xorg, it's rock solid stable.

I will wait to switch back to wayland when I'm forced to a few years down the line. I'm done bleeding on the edge.


> vlc crashes under wayland and it takes the entire OS with it. I haven't discovered any smoking guns, but I assume it's some sort of OOM issue

As someone already pointed out, VLC doesn't support wayland natively, so it's running under Xwayland. Ironically the issue could very well be X related.


It's Red Hat; they dropped their support for community-based Linux. Why wouldn't they also drop support for old hardware?


That's not true... CentOS Stream is more of a "community" than CentOS was. If you had a bug in CentOS, all you could do was file an issue in the Red Hat bug tracker and wait, because there was no way to contribute.

With Stream, it's open not just to bugfixes but to community-contributed enhancements.


The three businesses I knew that used CentOS aren't anymore. They're looking for alternatives. They didn't pick CentOS because it was cutting-edge. They wanted a stable environment that wouldn't change.


"Is this corporate influence?"

Of course? Fedora Linux is an asset of the company Red Hat, whose prevailing revenue model seems to have something to do with adding their own brand of complexity to otherwise simple open-source systems; see systemd.

Since its acquisition by IBM (NYSE:IBM), Red Hat has entered the conglomerate of publicly owned companies; its profits are linked with those of other public hardware manufacturers like Intel (NASDAQ:INTC) through intraday traders and index funds.

In summary, removing support for older hardware increases sales of new hardware, and hardware and software companies coordinate profit sharing through stock markets. Pretty simple.


This is a bit of a tone-deaf idea to explore right now. If anything, the environment and chip-shortage issues should be pushing people to promise extended support for older systems for another decade.

That said, while there are loads of systems out there that could still use BIOS, I like to look at the second hand market.

There are a lot of Dell R710s for sale on eBay still (700+ listings). These are commonly suggested as a good "intro to homelabbing" server in a variety of places. These systems were released in 2009, often featuring CPUs from 2007-2008. They support UEFI 2.1.

Contrasting this, some UEFI implementations are horrifically buggy and largely seem designed to only work with Windows.

I think 10 years is probably too close a cutoff, but eventually it becomes a lot more work to maintain (like i386 builds, x32 builds, etc.) and stops being worth it except for niche distros.

If we factor in things like environmental impact, it makes a great deal more sense to continue to support older systems for some longer period of time.


> That said, while there are loads of systems out there that could still use BIOS, I like to look at the second hand market.

I have lots of UEFI hardware with never-to-be-fixed bugs that are mitigated by just using BIOS boot. These are from Dell and Supermicro.

The Dells, when set for UEFI, would boot 50% of the time, versus 100% of the time in BIOS mode. The workaround was well known and is the only reasonable thing to do. The hardware is EOL from Dell; it will never be patched.

The Supermicros would work great in UEFI, but the factory default was BIOS boot. The trouble is that these particular systems had a reasonably high chance of losing BIOS setup data if they took a power hit (APC UPSes are complete garbage, FWIW). We'd have to send someone to a remote location to switch them back to UEFI, or just reinstall in BIOS mode. Remarkably, by default they would turn back on when power was restored and would recover fine if they could boot.

This stuff isn't useless or even THAT old. It's just unreliable in UEFI mode. While most newer stuff from these brands has more reliable UEFI, we're going to keep using hardware that is good enough. We work around hardware/firmware problems because there's nothing out there that doesn't have quirks. "Old" hardware has known quirks with workarounds that have already been "paid for" with expended labor. New hardware has new quirks that you have to pay off.

Of course it's all moot for this discussion as we don't use rolling quirk factories like Fedora.


> Contrasting this, some UEFI implementations are horrifically buggy and largely seem designed to only work with Windows.

UEFI specifications are like Web specifications: they're good to wipe your ass with, but on the Web when the rubber meets the road, the only "standard" that matters is "does it work in Chrome" (in the past, IE). Similarly the only standard that a UEFI implementation need comply with is "does it boot Windows".


Side note: Red Hat disabled the kernel code for the PERC 6/i RAID card, which is standard in the basic R710 chassis, starting with RHEL 8.

Unless you have the higher-end H710 RAID controller or similar, Red Hat rendered this chassis's controller inoperable with their custom in-house kernel patching process.


UEFI emulation is a thing.

For people with legacy-only firmware systems: you too can run UEFI on lazy cloud providers and legacy hardware. I do this on my 14-year-old Dell laptop just so that all my x86_64 systems have the same boot EFI-stub Linux kernel images. All you need is one of the EDK DUET bootloader builds such as BootDuet¹, or, for an easier user experience, CloverBootloader². My only complaint is that Secure Boot emulation can be a pain. But I imagine with Windows 11 around the corner requiring TPMs (which might require emulation on older hardware that doesn't have one) and Secure Boot, UEFI firmware emulators with these features will probably get more popular and more accessible.

¹ https://github.com/migle/BootDuet

² https://github.com/CloverHackyColor/CloverBootloader


Windows 11 isn't going to work with emulated TPM stuff (unless you run Windows in a tiny Linux hypervisor with all PCIe space forwarded and an emulated virtual TPM, perhaps), but this approach should work perfectly fine for all other operating systems.

In my Windows 10 setup, BitLocker refuses to boot without a recovery key when I don't use GRUB to load Windows; presumably, Windows recorded the system state when I enabled BitLocker, at which point it had been booted via GRUB. In similar fashion, I expect Windows 11 to actually work just fine on systems with a TPM as long as GRUB is used every time.


> But I imagine with Windows 11 around the corner requiring TPMs (which might require emulation on older hardware that doesn't have one) and Secure Boot, UEFI firmware emulators with these features will probably get more popular and more accessible.

Unfortunately, there's no appetite and no point: in a significant break, Windows 11 only runs on new processors, which all happen to have both UEFI and a TPM (not always enabled by default, but it's there).


Apparently you can officially shut off the restriction that forces Windows 11 to only run on new CPUs and the newest TPM by just setting a registry key, AllowUpgradesWithUnsupportedTPMOrCPU, on install.¹ But I never use Windows and don't plan to start anytime soon, so it's not something that I will try.

¹ https://support.microsoft.com/en-us/windows/ways-to-install-...
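
If I'm reading the linked page right, it boils down to one DWORD value under the MoSetup key, e.g. from an elevated command prompt (a sketch; note the page reportedly still requires at least TPM 1.2 even with this set):

    reg add "HKLM\SYSTEM\Setup\MoSetup" /v AllowUpgradesWithUnsupportedTPMOrCPU /t REG_DWORD /d 1 /f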


Interesting... I'll check if it can emulate a 64-bit UEFI on top of a 32-bit one (on a 64-bit CPU): finding anything to boot on my old tablet PC is becoming a pain.


GRUB handles this relatively easily (without needing emulation); I too have done it before on an old tablet. I am not sure what OS you are using, but it should work for both Linux and Windows, although only the former is easy and just works. On Debian and its derivatives you just need the grub-efi-ia32 package, then the regular GRUB install process, and a 64-bit OS loads fine without anything special.
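
Roughly, with the ESP mounted at /boot/efi, something like this (a sketch; the package and target names are the real Debian ones, the rest is illustrative):

    # 32-bit-UEFI build of GRUB (replaces grub-efi-amd64 on such machines)
    apt install grub-efi-ia32
    # install a 32-bit EFI GRUB; it can still load a 64-bit kernel
    grub-install --target=i386-efi --efi-directory=/boot/efi
    update-grub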


The problem is less booting the OS once installed than finding an ISO I don't have to mess with too much to install it. Most of them assume a 64-bit CPU means a 64-bit UEFI...


I don't think you can emulate a TPM via software. You very likely need a hook in the firmware and the use of System Management Mode (SMM).



I mean "in a way that would allow you to boot Windows on top of it".


Perhaps via a virtual machine?


Leave things as they are. Code continues to rot.

What is this ever-present BS about "rot"!? Why do people think continual changes are even needed? Code should become more stable over time, an ideal that I wish much more software would follow. The way BIOS boot works has basically remained unchanged ever since the first IBM PC, and it's incredibly simple. Linus Torvalds's opinion of EFI is worth reading:

https://yarchive.net/comp/linux/efi.html

7C00h forever! ;-)


The intersection point between simple and useful is this:

- Firmware understands some sort of minimal filesystem and how to talk to a device containing it.

- Firmware has a configuration store that holds a few variables and a device tree.

- Firmware has the ability to load binary images into RAM from the minimal filesystem above.

- On boot, configuration is checked, the kernel and initrd are binary-loaded into RAM, and the firmware jumps to the kernel with a pointer to the commandline, initrd, and device tree.

U-Boot more or less does this and it's beautiful. The bootloader loads your OS and then gets out of the way. No multistage crap or overengineered firmware interfaces.
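
For a sense of how small that contract is, a typical U-Boot boot sequence is only a handful of commands (a sketch of the arm64 flavour; the addresses are board-defined environment variables, and the file names are illustrative):

    # pull kernel, device tree and initrd off the first MMC partition
    load mmc 0:1 ${kernel_addr_r} Image
    load mmc 0:1 ${fdt_addr_r} board.dtb
    load mmc 0:1 ${ramdisk_addr_r} initrd.img
    setenv bootargs "root=/dev/mmcblk0p2 ro"
    # jump into the kernel (booti for arm64; bootz for 32-bit zImages)
    booti ${kernel_addr_r} ${ramdisk_addr_r}:${filesize} ${fdt_addr_r}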


I don't think firmware should care at all about filesystems, because that's already halfway towards being an OS. It should just load the first sector from the selected boot device and jump to it.


- You're already getting into "halfway towards being an OS" territory if you want to netboot.

- Eliminating variable code from the boot process makes it more secure and reliable. Why do I need to load a separate loader to load my OS, which is overwritable by my OS, if the firmware can do it in a standard, fixed way?

- At least have some mechanism to load an arbitrary number of sectors.


While Fedora definitely is not my cup of tea, the first thing I do when I get a new system (desktop or laptop) is disable legacy BIOS/CSM, whatever it is labeled as. This is to prevent the myriad of issues BIOS brings, including its horrible MBR. GPT is a million times better, and speaking as someone who dual-boots on many machines (and has a drive with 14 different operating systems as a test): to all you legacy BIOS and MBR fans, MBR doesn't cut it and is the cause of many headaches if you dual boot.

Remember how some people have had issues with Windows destroying Linux's boot setup? That is due to the MBR being a useless piece of antiquated design and breaking. It wasn't Windows's fault; its update utility just had to overwrite a part of the MBR, and since the MBR was already basically a house of cards, all it took to break it was a little mishap and your Linux boot was gone. (It was the same in the other direction; fewer people were crying about it because, well, they blamed Windows for breaking itself.)


This is fine for you and your use case, but some of us (as pointed out in the article) are forced to stay with BIOS either due to owning legacy hardware that is still fully functional and even necessary, or because we use VMs and/or hosted services that require BIOS and don't support UEFI, or both. I'm one of those; I use a few legacy machines locally and I have VPS instances hosted with Vultr.

Granted, I don't use Fedora so this doesn't directly affect me yet, but the Linux community has a history of too-early adoption of ideas started at Fedora (systemd, pulseaudio) that take years to reach production-ready status, if ever. At some point those of us who still use legacy hardware at home/work will be forced to either throw out perfectly good machines, or switch to a holdout distro like Slackware or Void (not that there's anything wrong with either of those) and lose valuable time moving our workflow. We'll also be at the mercy of our hosting providers as they decide whether to overhaul their entire hosting backend, or else drop Fedora and any other distro that follows their lead.

I get that UEFI is the future of bootstrapping, but it's too early to pull the plug on BIOS.


At this point we should have learned the lesson from systemd. I think Red Hat now has a bad enough reputation that anything with their brand is an instant rejection, and any suggestion they make is taken as a suggestion of what not to do.

Red Hat flatlined when it was acquired by IBM, a consequence of a free-as-in-free-beer model of software.


Ironically, the first thing I do is the opposite. BIOS boot has been simple and reliable for decades, and EFI remains a horrible mess.

Remember how some people have had issues with Windows destroying Linux's boot setup

The MBR is tiny (you could even write out all the entries on paper as a backup, like I became accustomed to doing many years ago whenever I partitioned a disk) --- and restoring it is equally straightforward. I don't even know where to begin with troubleshooting EFI's horribly overengineered boot entries and NVRAM variables...


To all you legacy BIOS and MBR fans, MBR doesn't cut it and is the cause of many headaches if you dual boot.

That's neither here nor there. You can still perform BIOS boots from a GPT-partitioned disk; in fact, it works even better: while in MBR mode the stage-1 GRUB loader must be placed in the unused 31kB between the partition table and the first partition's start, GPT allows you to explicitly allocate a partition for the stage-1 code, as sketched below. This means no more borkage because Windows overwrote an officially unused part of the disk, and the stage-1 payload is no longer limited to 31kB.

Point being, your rant has nothing to do with BIOS vs UEFI boot.
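
For the curious, setting that up takes two commands (a sketch; ef02 is gdisk's "BIOS boot partition" type code, and the device name is illustrative):

    # carve out a 1 MiB BIOS boot partition at the start of the disk
    sgdisk --new=1:2048:+1M --typecode=1:ef02 /dev/sda
    # grub-install detects the ef02 partition and embeds core.img there
    grub-install --target=i386-pc /dev/sda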


I agree in general, but not with this: "It wasn't Windows's fault; its update utility just had to overwrite a part of the MBR". If they cared, they could have spent some time designing and documenting a nice way for systems to redirect to the next entry. They could've used GRUB and chainloading. They could've invited others to collaborate. It was very much in MS's interest not to care, and I totally blame them for doing just that.


Recovering from MBR boot corruption on CentOS is relatively easy.

Boot from the install media in rescue mode, chroot into /mnt/sysimage, then grub2-install onto /dev/sda.

I've never had to do this outside of the Red Hat realm, but the procedure is not complex once you've run through it a few times to commit it to memory.
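
Something like this, for anyone who hasn't committed it to memory yet (a sketch; the device name is illustrative):

    # from the install media's rescue shell, once it has mounted the system:
    chroot /mnt/sysimage
    grub2-install /dev/sda    # rewrites the MBR boot code
    exit
    reboot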


> GPT is a million times better

Agreed.

> It wasn't Windows's fault; its update utility just had to overwrite a part of the MBR

It was a reasonable assumption that whatever drive Windows was installed on already had a working bootloader, otherwise it wouldn't have been able to boot itself to run Windows Update. Windows did NOT have to overwrite it.

This was not the MBR's fault. AFAIK Linux did NOT do the same kind of overwriting, which is why it has separate `grub2-install` and `update-grub` commands. Once GRUB is installed to a drive, `update-grub` only changes the simple config file it reads at boot. Perhaps both of them did overwrite themselves when a bugfix or new feature was available, but I don't think Windows's bootloader changed much after any OS release.
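
On Fedora/CentOS-style systems the split looks like this (Debian and friends wrap the second command as update-grub):

    grub2-install /dev/sda                   # writes the boot code; run once, rarely again
    grub2-mkconfig -o /boot/grub2/grub.cfg   # regenerates only the config file read at boot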

Now, GPT/UEFI is a million times better in that respect because it easily allows multiple different bootloaders that don't have to know of each other's existence. But I very much blame Windows in the MBR case.

Also, some though not all motherboards allow booting from BOTH old-school MBR and newer GPT, so you may not need to disable CSM and can still eat your GPT cake. This may be useful if you, e.g., have an old MBR drive with Linux and another, GPT one with Windows.


This is the incorrect take on the subject. Dual booting is a feature used by fewer than one in ten thousand users. Obsoleting BIOS will obsolete millions of pieces of hardware.


Nothing against EFI, but Secure Boot as it is is a pain in the ass. Want to install some kernel module to enjoy your Xbox controller? Too bad; you need to follow a convoluted guide about generating and installing certificates to sign your driver... or just disable the whole Secure Boot thingie.

It's like SELinux, hardware edition.
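
For the record, the convoluted guide usually boils down to a machine owner key (MOK) dance like this (a sketch; the key names and the module, here the xpadneo Xbox-controller driver, are illustrative, and the sign-file path varies by distro):

    # create a signing key and queue its certificate for enrollment;
    # the MOK manager asks you to confirm with a one-time password at next boot
    openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 \
        -subj "/CN=my module signing key/" \
        -keyout MOK.priv -outform DER -out MOK.der
    mokutil --import MOK.der
    # after rebooting and enrolling, sign the out-of-tree module
    /usr/src/kernels/$(uname -r)/scripts/sign-file sha256 MOK.priv MOK.der hid-xpadneo.ko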


This is a misnomer, though. Secure Boot and kernel modules are not inherently dependent on each other. However, modern Linux distributions carry out-of-tree patches which throw the Secure Boot keys into the Linux platform keyring and enforce lockdown mode.

This isn't a thing on the stock kernel.


They also do this because it is likely that Microsoft will stop signing their bootloaders/kernels with their UEFI CA keys if they allow arbitrary user modules to be loaded (because it would be trivial to abuse those kernels to break Windows' full disk encryption).

And if Microsoft stops signing your bootloaders it is an automatic death sentence for your distribution, as you can no longer boot the LiveCD without "scary prompts" and/or fiddling with the BIOS setup.


Secure Boot allows you to load your own keys. That's the way some Linux distros actually recommend you set it up: sign your own bootloader, kernel, kernel modules, everything, and tell your motherboard to trust that. It's arguably even more secure than Microsoft's approach because anyone can boot a Windows install disk, but getting a boot drive with your signature on it requires breaking into your system. This could be a little challenging if you try to update firmware through manufacturer-supplied boot images that expect their Microsoft signature to work, but it's not impossible to work around that.
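
With the sbsigntools package, for instance, signing your own bootloader looks something like this (a sketch; the key/cert file names are illustrative, and the matching certificate has to be enrolled as a db key in the firmware first):

    # sign the EFI binary with your personal db key
    sbsign --key db.key --cert db.crt \
           --output /boot/efi/EFI/BOOT/BOOTX64.EFI grubx64.efi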

For dual booting you'd need to load both sets of keys (your own and Microsoft's) or configure your primary bootloader to trust Microsoft's signature and chainload.

There's nothing inherently Microsoft related about secure boot, except for that on some Microsoft devices where the ability to use your own keys has been taken away from you. Don't buy a Microsoft Surface without checking its Linux limitations, basically, but that's a Microsoft problem, not a secure boot problem.

If you don't like being restricted, just turn off secure boot. Or turn off any verification that happens after secure boot; it's the Linux kernel that's enforcing drivers it loads to be signed, not the secure boot standard. Patch out the verification routine with a return true if you have to.

Everything will boot and load, which may or may not be a good thing, depending on your requirements.


> There's nothing inherently Microsoft related about secure boot,

Microsoft is the root of trust for ~100% of OEM secure boot implementations.

Theoretically, you can implement Secure Boot with an alternative root of trust... but you'd have to get the OEMs on board... to the tune of many millions of dollars. Per OEM.

The only alternative is to get users to install their own keys, which is fiddly and technical.

Therefore, for all intents and purposes, Linux on the desktop is only a thing at all because Microsoft deigns to allow it for the time being.


> The only alternative is to get users to install their own keys, which is fiddly and technical.

It's a bit worse than that actually; it's actively scary and dangerous, as with much EFI stuff. Quoting the Arch Wiki:

> Warning: Replacing the platform keys with your own can end up bricking hardware on some machines, including laptops, making it impossible to get into the UEFI/BIOS settings to rectify the situation. This is due to the fact that some device (e.g GPU) firmware (OpROMs), that get executed during boot, are signed using Microsoft's key.

The key creation process itself is extremely manual and finicky, and probably prone to error.

The process of enrolling your platform key involves deleting all enrolled certificates. Let's hope your hardware provider implemented this properly so you didn't just brick your system.

> Once Secure Boot is in "User Mode" keys can only be updated by signing the update (using sign-efi-sig-list) with a higher level key. Platform key can be signed by itself.

So any loss of your platform key (e.g. by cosmic ray flipped bit, or hard drive failure, or simply user error) results in effectively bricking your hardware, right? (Unless and until you can rewrite the firmware with a hardware device.)

I'm sure some folks have worked out a good process for managing all this, but it feels so flaky to me, and I don't have a good handle on what is required to do this right. Back in the day, installing Linux for me used to involve 2-3 cycles of screwing something up with GRUB, having to boot into the LiveCD, and fixing things. Right now it feels like one screw-up could be fatal to hundreds of dollars of hardware. That's before you get to the issue of having to mess with the EFI variables, which has resulted in bricked hardware in the past: https://www.theregister.com/2016/02/02/delete_efivars_linux/

Again, some of the above could be based on my misunderstandings, but that's kind of the point as well - the scary thing about secure boot / UEFI for Linux users is that it's a new area of required knowledge that you seemingly need to be 100% right about or risk burning hardware.


I wrote sbctl to try to make self-enrolling keys completely painless.

https://github.com/Foxboron/sbctl
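
For anyone curious, the intended workflow is only a few commands; something like this (kernel path illustrative, Arch-style name):

    sbctl create-keys                    # generate your own PK/KEK/db keys
    sbctl enroll-keys --microsoft        # enroll them, keeping Microsoft's certs for OpROMs
    sbctl sign -s /boot/vmlinuz-linux    # sign, and remember the file for re-signing
    sbctl verify                         # list anything still unsigned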


I am not saying that you can't fiddle with the BIOS (and/or PreLoader or shim) to work around this; I'm just saying that this is the MS-signed distros' motivation to lock down bootloaders and kernels when you are booting with Secure Boot on.


>And if Microsoft stops signing your bootloaders it is an automatic death sentence for your distribution, as you can no longer boot the LiveCD without "scary prompts" and/or fiddling with the BIOS setup.

Not really?

Several popular Linux distributions simply do not support Secure Boot. Arch Linux is one of them.


That's because the current generation of hardware does not mandate Secure Boot on x86. I expect that will change once Windows 11 has had a few years to make the majority of computers Secure Boot-capable due to its hardware demands.


That would be against the current UEFI spec. I get that people are cynical and expect this to happen but I don't think it will.

There are however going to be a lot more issues self-enrolling keys going forward.


Just for clarification, I believe you mean that it's not something supported out of the box, in the form of a signed kernel / bootloader. It is something Arch Linux users could choose to set up themselves; there's a whole wiki article on it.


Perhaps this is a good time to ask: I'd like to use UEFI for my qemu+libvirt virtual machines, but I need snapshot support. Since QEMU doesn't support pflash internal snapshots <https://gitlab.com/libvirt/libvirt/-/commit/9e2465834f4bff40...> and libvirt can't revert or delete external snapshots <https://bugzilla.redhat.com/show_bug.cgi?id=1519002>, I don't see a way to achieve this. The issue was discussed on virt-tools in 2017 <https://listman.redhat.com/archives/virt-tools-list/2017-Sep...> and the situation appears to be unchanged. Do others have a workable solution?
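
Not a real answer, but the workaround I've seen is to take external, disk-only snapshots and handle the merge/revert steps by hand, which avoids pflash internal snapshots entirely (a sketch; domain and disk names are illustrative):

    # external, disk-only snapshot: the disk now points at a new overlay file
    virsh snapshot-create-as myvm snap1 --disk-only --atomic
    # keep the changes: merge the overlay back into the base image
    virsh blockcommit myvm vda --active --pivot
    # discard the changes instead: shut down, point the domain back at the
    # base image (virsh edit myvm), and delete the overlay file by hand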


The idea was turned down, luckily.

There are too many downsides at this point in time.


I can see that grub2, syslinux, and anaconda are affected in the proposal (syslinux removed, the others simplified).

But then, these are common Linux components, not specific to Fedora. They are old and battle-tested by now; do they really require significant resources to maintain beyond running some automated tests?


I understand why this stuff usually bothers people, but I think it makes sense for Fedora specifically... they're popular and bleeding-edge, so they want to work on the latest popular hardware and not cater to minorities. If a hacker needs something different, there are other distros and DIY; if a company needs something different, they can invest in that maintenance themselves. Yeah, EFI sucks badly and we should be working to replace it with something that's free/open and actually good, but this is the environment we actually operate in, and being in denial of that doesn't help anyone.


Yeah, it seems like a bad idea.

Especially because of:

> Fedora is also installed on cloud servers and virtual machines of various sorts, some of which do not support anything other than booting via BIOS. The proposal noted that at the time of the 2020 discussion, Amazon's AWS did not support UEFI, but that has changed. Marc Pervaz Boocha pointed out that many virtual private server (VPS) providers do not support UEFI, giving Linode and Vultr as examples. Dominik "Rathann" Mierzejewski reported that OVH is also affected

(or just running virtualized stuff locally)


It's easy to use EFI when virtualizing things locally.

libvirt/kvm supports it, Hyper-V supports it, Virtualbox supports it, ESXi supports it.
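
With virt-install, for example, it's a single flag, assuming an OVMF/edk2 firmware package is present (a sketch; everything besides --boot uefi is illustrative):

    virt-install --name f36-efi --memory 2048 --vcpus 2 \
        --disk size=20 --cdrom Fedora-Workstation-Live-x86_64-36.iso \
        --boot uefi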


But change a working virtualized system from BIOS to EFI and there is a good chance a bunch of other things will break. Lots of memory mappings and hardware detection stuff changes if you switch from BIOS to EFI, together with the likely need to repartition or add a virtual disk for the EFI boot partition.


As far as I can see, only Hyper-V enables it by default as of today; this means we can assume most of the VMs being created _right now_ are BIOS rather than UEFI.


> libvirt/kvm supports it,

I recall reading somewhere that using UEFI on qemu/kvm caused problems with snapshotting and/or migration; is that still the case?


libvirt/kvm sort of supports it, unless you want to use snapshots I guess. https://bugzilla.redhat.com/show_bug.cgi?id=1881850

That bug appears to be a duplicate. I use the legacy BIOS type for all my VMs to get around this, since it's been a problem for at least 5 years.


I’m not sure I like that idea. Some older PCs have a very buggy and crappy UEFI implementation. Using UEFI on those can be a nightmare. And upgrading the firmware might not be possible, or only possible via Windows. (Thinkcentre M72e, I’m looking in your direction.)


Fedora didn't even support UEFI on their cloud images before Fedora 35, released 5 months ago.

https://pagure.io/cloud-sig/issue/309


The whole move toward UEFI on client computing systems (laptops, desktops) is the right way forward.

Even if I dislike the UEFI specification and existing implementations, the spec is more than 24 years old. It does fulfill its purpose as a bootloader/OS interface.

The UEFI BDS interface, the one most people see and the one the operating system talks to, is now standardized across many operating systems and OEMs, making it easier to integrate and maintain. It is finally possible to use standard security mechanisms (verified boot, measured boot) to secure your device. So we can use such technologies to reasonably ensure device security, as we do at my company, www.immu.ne.

The custom certificate provisioning is a mess and hard to use, but that can be possibly made easy by projects like: https://github.com/Foxboron/sbctl

I don't believe the UEFI interface is beneficial for data-center or embedded devices. The UEFI BDS interface was developed for client platforms and requires physical presence; it leads to complexity in the DC and embedded world. I feel a more suitable approach is www.linuxboot.org, which is already used by many hyperscalers and embedded companies.

That's my 50 cents on the story. My background: I am a coreboot developer and security architect.


In my admittedly limited time using Linux (switched to it full-time in ~2001), the total amount of effort I've spent troubleshooting issues has probably been dominated by two categories:

1. Getting audio to behave.

2. Boot problems. (waves at LILO)

I don't know enough details about how UEFI and BIOS and MBR and GRUB and all of this works -- which probably contributed to my difficulties. But the problems certainly didn't end when I left LILO behind.

So it always makes me wonder: Have the people responsible for the bootloaders ever used a Mac? My goodness it could not be simpler. Hold the right key combo at boot time and you get a nice interface that shows you all the bootable devices you've got plugged in. USB, SATA, NVMe, whatever, it's there. And when you select one and boot, it actually works.

Recognizing that we don't actually need a GUI for this, and the challenge is different because Apple gets to use firmware and only worries about one type of OS, what would it take to make a PC bootloader work just as nicely?


> what would it take to make a PC bootloader work just as nicely?

If we take Apple's bootloader as a baseline:

Drop GPU support, except for blessed models, from the initial boot stage.

Remove RAM customization options. Can't change speed and timings. Can't support ECC. Might as well disallow customizing RAM altogether and solder it to the board or into the SoC.

Remove CPU customization options. Can't change speed, or enable or disable supported features. Can't change ACPI tables.

Remove storage customization options. Can't change storage speed or protocol support. Can't even add or remove internal storage devices at all, if we go by the latest trend.

Implement a Bluetooth stack in the bootloader firmware, because that's for sure safe and could never go wrong. Force OSes to write their pairing keys into the firmware storage.

Implement a wifi stack in the bootloader firmware, because that's for sure safe and could never go wrong. Force OSes to write their network credentials into the firmware storage.

Implement internet-based network boot support from a centralized, blessed location. Because that's completely safe and could never go wrong.


> Apple gets to use firmware and only worries about one type of OS,

And a very, very narrow set of potential hardware compared to the entire PC ecosystem.


I understand the need to get around some of the quirks inherent in BIOS, but the idea of rich pre-OS applications just seems completely backwards. The goal for the early boot environment should be to hand off to the real operating system as soon as possible. Just let me select which disk partition to boot from and the operating system will take it from there, thanks.


> the idea of rich pre-OS applications just seems completely backwards.

is at odds with:

> let me select which disk partition to boot from

As soon as there's any UI at all, one has to deal with much of the complexity of modern UI. For one thing, I'm sure modern firmware has to support modern HID input devices, such as USB keyboards. But also, ideally, that UI should be accessible to people who can't use a minimal screen-and-keyboard UI implementation, such as blind people. This is an area where, even with UEFI, PCs fall short. I like the way Apple has resolved this on its Apple Silicon Macs [1]. As I understand it, as soon as the Mac has to display any UI at boot time, it boots into a minimal version of macOS itself, where VoiceOver and any other accessibility tool can run.

I agree, though, that in the common case where the default OS is booted, the path from power-on to OS kernel should be as short as practical. Booting into a BIOS in 8086 emulation mode hardly seems like the best way to do this. But then, a design-by-committee solution like UEFI might not be optimal either. It pains me somewhat to say this, but Apple's proprietary, vertically integrated solution might be near optimal.

[1]: https://github.com/AsahiLinux/docs/wiki/Introduction-to-Appl...


It's all about UX. If you have an environment with lots of menus and settings (something even the BIOS had, to control certain aspects of your system like virtualization support), you want good UX; that's why UEFI was created. Now, if you want to change those settings from the OS, you can, but you need a technology many people hate and want removed from their systems and hardware: the Management Engine. Yes, this is the purpose of the ME: allowing UEFI settings to be changed from the machine's own OS, or from another machine, without requiring IPMI (iDRAC, iLO, and similar implementations) and a reboot to get at them. An ME-like implementation is also what allows you to overclock your CPU from the OS. So AMD has some sort of ME as well (of course it's not called ME, since that's just the name of Intel's implementation; how wide its functionality is, is another question), and so do GPUs.


I wish people would stop sharing LWN subscriber links for karma. They're not meant to be shared in social news websites. https://lwn.net/op/FAQ.lwn#slinks

20 of the poster's last 30 submissions have been LWN subscriber links.


They're not meant to be shared in social news websites.

From your link:

"Where is it appropriate to post a subscriber link?

Almost anywhere. Private mail, messages to project mailing lists, and blog entries are all appropriate. As long as people do not use subscriber links as a way to defeat our attempts to gain subscribers, we are happy to see them shared."


Having access to nearly every paid LWN article on a news aggregator certainly feels like "a way to defeat our attempts to gain subscribers". But others said these links are subscriber-specific, so I guess LWN is fine with it if they aren't taking the links down.


As well as LWN's declared policy, the subscriber links are also linked to the user that generated them - if LWN felt they were being abused, they could disable that user's access to them.


At a time when the world is producing tens of millions of tons of e-waste each year, it's bad that this was even entertained. I suppose it's fine to make toxic trash mountains in the third world to get rid of a small amount of code bloat?


Anyone who says things like this should be the one who has to support the legacy code. They should be sentenced to a 15+ year old computer and to explaining to their colleagues how we just can't do anything new, because we need GCC 4.7 to build the world, because <legacy CPU>, now used by 20 non-paying users, won't work with anything newer.

Fedora is only one relatively fast-moving distro, which gives you its software for free, and which didn't even make this decision; but who could blame them if they did, in their position?


> They should be sentenced to a 15+ year old computer

I bought my computer about 15 years ago, with what at the time was a pretty beefy setup. I still use it for my everyday work because, as it turns out, you don't need much more than 4GB of RAM to surf the internet.

Due to buggy UEFI support, I cannot reinstall Windows 10 on that PC. If Linux stopped supporting BIOS, I'd have to throw it away. But more importantly, this computer has better performance than the computers of many of my relatives in South America.

I wish every developer in the first world had to explain at least once to their nieces that no, Roblox will not work on their Chinese Android tablet with 1GB of RAM.


But Linux isn't ending BIOS support; Fedora considered it but won't, yet. If Arch Linux decided not to support BIOS, we'd say, "Well, they are a bleeding-edge distro", but guess what: "Shhhh... so is Fedora."

Old hardware is great. Expecting people to support it forever for free is folly. This is exactly why people pay IBM/RedHat for support.


Windows 10 doesn't require UEFI. None of my computers have it and they all boot Windows 10 just fine.


You will have to pry that BIOS from my cold dead hands.

Burnt EEPROM still remains the ultimate security.


> UEFI is defined by a versioned standard that can be tested and certified against

But is it actually tested against, or does it merely have the potential to be?


A whole lot of the maintenance headache could be solved by choosing five or fewer BIOS implementations to fully support and letting everything else go by the wayside. Choose the ones from VMware, VirtualBox, and the like, plus a couple of popular server and desktop ones.


Fedora is often the first distro to adopt radical changes like this; then all the other distros follow, and everyone has to live with their decisions, which often turn out to have been made too soon.

Not looking forward to having to deal with this one.


It's not really happening, or at least not yet. I don't see why this even got posted the way it was.


Good. More time to work on new stuff.


According to Wikipedia, Intel’s 945 chipset for Core 2 processors ships with UEFI. This was originally released in 2006.

Note that Google Chrome, the web browser with the highest adoption rate, requires a CPU with SSE3 support, which was introduced in 2004.

Which platform older than a Core 2 can even run the latest operating systems? The very latest Pentium 4 range (2004-2006)? So we would just be dropping support for them? That’s OK.


OpenBSD 7.1 was just released and supports my G5 Mac. It also supports i386, UltraSparc, and Digital Alpha.

NetBSD supports i386, Alpha, Amiga, 32-bit Sparc, 32-bit MIPS, ARMv6, StrongARM, sun2, sun3...

LinuxMint still supports i686.

Debian supports i386, i686, armel, armhf, s390, and MIPS.

Slackware supports i586 and s390.

Kali supports i686.

Void Linux supports i686, ARMv6, ARMv7.

FreeBSD supports arm, armel, i386, ia64, mips, mipsel, sparc64, pc98, powerpc, powerpc64, ps3, xbox.

Alpine supports i486, ppc64le, s390x, armhf.

Devuan supports i686.

Gentoo supports i486, i586, i686, alpha, arm, hppa, ia64, mips, powerpc, ppc64, sparc64.

Which of these don't have recent releases?


According to discussions on the Gentoo [1] and Fedora [2] forums, i686 without SSE2 could be broken in some packages in subtle ways. One is that the stack alignment changed from 4 bytes to 16 bytes; another is long NOPs; another is that some software, like rustc or Qt, seems to require SSE2 on i686.

Even for those platforms that claim to build for i686, I'm not sure everything works correctly on pre-SSE2 (Pentium Pro, Pentium II, Pentium III) CPUs. And I don't think any of the Linux distributions will actually run on an i386, i486, or i586, as i686 is almost required to have multithreading without busy waits.

The BSDs I don’t know about, but it’s a fair number of distributions that seem to support older x86 CPUs you posted. I didn’t know there would be so many. I couldn’t find Linux Mint though.

[1]: https://forums.gentoo.org/viewtopic-t-1087434.html?sid=4b682... [2]: https://lists.fedoraproject.org/archives/list/devel@lists.fe...


I found that the latest BIOS-only devices were released in 2013 and are based on the AMD Bobcat processor (AMD's version of the Intel Atom). I wouldn't know what operating system to run on those systems.



