Hacker News

The UEFI environment is a given unless you're in a position to replace the firmware -- using GRUB doesn't avoid it in any way. But for the most part, the security properties of the underlying firmware don't matter much if the attack surface it exposes can only be touched by trusted code, which is the case when secure boot is enabled. (And if secure boot isn't enabled, there's no real reason to bother attacking the firmware -- you can already just replace the OS.)



The UEFI environment does not exist on older PCs.

UEFI started to become mainstream around 2013 -- that is, an increasing number of PC motherboard manufacturers started shipping it (rather than the older BIOS) around that time.

It should be pointed out that on some motherboards the UEFI software may be stored on an IC (EPROM, EEPROM (Electrically Erasable Programmable Read-Only Memory), Flash, NVRAM, ?) -- which may be writable, or writable under certain conditions (i.e., if the boot process doesn't load software which explicitly blocks this when the system starts up, or if such blocks, once in place, are bypassed by whatever method...)

If the UEFI-storing IC is writable (or replaceable, via socket or soldering), then the UEFI firmware (again, under the proper conditions) is subject to modification -- call it modifiable, changeable, updatable, programmable, etc.; use whatever linguistics you deem appropriate...
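If you do go poking at a writable firmware IC (tools like flashrom can dump the chip's contents on many boards), the first sanity check is that repeated reads of the flash agree with each other before you trust any dump or modification. A minimal sketch, assuming hypothetical dump file names -- the hashing is standard, the filenames are made up:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks
    so large firmware dumps don't need to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: compare two separate reads of the same chip
# (e.g. two flashrom dumps) before trusting either image:
#
#     if sha256_of("dump1.rom") == sha256_of("dump2.rom"):
#         print("reads are consistent")
```

The same digest function also lets you confirm, after writing a modified image, that a fresh readback matches what you intended to flash.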

>"The UEFI environment is a given, unless you're in a position to replace the firmware"

If what I've written above is the case -- then any such UEFI environment (aka "firmware") under such conditions is very much replaceable!

And if it is replaceable, then that firmware code can be made simpler by someone "rolling their own" -- and replacing it!

Now that I think about it, I'm going to have to do more research for the next motherboard I buy... if it has to have UEFI on it, if I am compelled to buy a UEFI motherboard, then I want that UEFI firmware to be overwritable/customizable/modifiable/auditable -- by me!

Also -- I'd never trust "trusted" code implicitly...

Didn't Ronald Reagan so eloquently say, "Trust, but verify"?

It's the "but verify" part -- that's key!

Anytime a vendor (security or otherwise) -- or any authority or "authority", for that matter -- tells me to trust or "trust" something, my counterquestion is simply as follows:

"Where is the proof that the thing asking for my trust is indeed trustworthy?"

In other words, "How do I prove that trust to myself?"

In other words, "Where is the proof?"

And let's remember that proof by analogy (as Bjarne Stroustrup put it) and proof by polled social consensus ("4 out of 5 dentists recommend Dentyne for their patients who chew gum") -- are basically fraud...

Anyway, your assessment, broadly speaking, is not wrong!

It's just that there are additional "corner cases" which require some very nuanced understanding...

Related:

https://en.wikipedia.org/wiki/Open-source_hardware

https://en.wikipedia.org/wiki/Right_to_repair

https://en.wikipedia.org/wiki/Non-volatile_memory

https://libreboot.org/

https://www.coreboot.org/

https://en.wikipedia.org/wiki/Open-source_firmware


UEFI didn't exist at all on older systems; instead you had BIOS, which provided no security assertions whatsoever and exposed an even larger runtime attack surface (UEFI at least has the boot-time/runtime distinction, and after ExitBootServices() most of the firmware code is discarded -- BIOS has no such distinction, and the entire real-mode interface remains accessible at runtime).

In terms of how modifiable UEFI is - this is what Boot Guard (Intel) and Platform Secure Boot (AMD) are intended to deal with. They both support verifying that the firmware is correctly signed with a vendor-owned key, which means it's not possible for an attacker to simply replace that code (at the obvious cost of also restricting the user from being able to replace it - I don't think this is a good tradeoff for most users, but it's easy to understand why the feature exists).
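The verification chain Boot Guard anchors can be sketched in miniature. Heavily simplified and hedged: real Boot Guard fuses a hash of the OEM's public key into the platform and verifies RSA signatures over a boot policy manifest; this sketch collapses the signature step into plain hash comparisons to show only the shape of the chain, and every name and byte string in it is hypothetical:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_firmware(fused_key_hash: bytes, manifest: dict,
                    firmware_image: bytes) -> bool:
    # 1. The key embedded in the manifest must match the hash
    #    fused into the platform at manufacturing time.
    if sha256(manifest["oem_key"]) != fused_key_hash:
        return False
    # 2. The firmware must match the digest the manifest vouches for
    #    (in real Boot Guard this step is a signature check, not a
    #    bare hash comparison).
    return sha256(firmware_image) == manifest["fw_hash"]

oem_key = b"hypothetical OEM public key bytes"
firmware = b"hypothetical vendor firmware image"
fused = sha256(oem_key)  # burned into fuses; cannot be changed later
manifest = {"oem_key": oem_key, "fw_hash": sha256(firmware)}

assert verify_firmware(fused, manifest, firmware)
assert not verify_firmware(fused, manifest, b"attacker-modified image")
```

Because the fused hash is immutable, an attacker can't substitute their own key and manifest -- which is exactly why the same mechanism also locks out the legitimate owner.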

If you want to be able to fully verify the trustworthiness of a system by having full source access then you're going to either be constrained to much older x86 (cases where Coreboot can do full hardware init without relying on a blob from the CPU vendor, i.e. anything supported by Libreboot) or a more expensive but open platform (e.g., the Talos boards from Raptor). If you do that then you can build this entire chain of trust using keys that you control, and transitively anyone who trusts you can also trust that system.

But there's no benefit in replacing all of the underlying infrastructure with code you trust if it's then used to boot something that can relatively easily be tricked into executing attacker-controlled code, which is why projects like this are attempting to replace components that have a large attack surface and a relatively poor security track record.


Anytime something in technology (a stack layer, software component, blob -- in this case, hardware initialization/boot code, which went from simple BIOSes to larger and more complex BIOSes to the still more complex UEFI, a much larger attack surface by size) is replaced by something new and complex that is touted as "more secure" -- it's usually less secure (bugs and attack vectors are found in hindsight), and at minimum, less transparent and less well understood.

Anyone interested in the subject can search for "UEFI vulnerabilities" on their favorite search engine -- and find no shortage of issues/problems/security holes.

Am I saying that an old BIOS is perfectly secure?

No!

But older BIOSes are an order of magnitude simpler, better understood, and better documented -- than UEFI is at this point.

If UEFI morphs into something else more complex in the future -- which it probably will, given the track record of hardware boot/initialization code specifically and of software generally -- then my advice at that point in time (10+ years in the future) will be "go back to UEFI; it's simpler, better documented, and better understood than what we have now".

But not until that day -- and even then, not unless every computer on the planet has become absolutely incapable of initializing/booting from the older code.

As a generalized pattern in software engineering, older code / older software / older codebases (of whatever form -- firmware, etc.; in this case BIOS hardware init/boot handoff code) are generally smaller, simpler (less bloat for approximately the same functionality), more vanilla, more spartan, better understood, and have had more of their security issues found and fixed than their present-day overcomplex and bloated counterparts... (and, did I mention, better documented?)


Older BIOSes are much simpler, and also offer no security boundary at all - nobody talks about BIOS vulnerabilities because it wouldn't give you anything you don't already have!


It may be argued that a Ford Model T (one of the earliest and probably the simplest of all mass-produced vehicles of the early 20th century: https://en.wikipedia.org/wiki/Ford_Model_T ) had no "security boundary" at all, and that conversely, the most modern vehicle with the latest radio-frequency-based remote lock and key (aka "smart key") is more "secure" (has more of a "security boundary")...

...but if so, is that asserted "security boundary" really an actual security boundary(?)

If the security boundary or "security boundary" is opaque in how it functions -- if it is a "black box" (https://en.wikipedia.org/wiki/Black_box), and no one (other than perhaps a few people at the manufacturer, or at the company subcontracted to build the smart-key component) understands exactly how it works -- then is it really "secure"?

(If so, then that sounds eerily similar to the "obscurity is good" (aka "transparency bad") side of the "security through obscurity" debate the Internet had 5, 10, 20+ years ago: https://en.wikipedia.org/wiki/Security_through_obscurity#Cri...)

Why not read the following:

"Gone in 20 seconds: how ‘smart keys’ have fuelled a new wave of car crime":

https://www.theguardian.com/money/2024/feb/24/smart-keys-car...

And you tell me?

My conclusion:

Perhaps less "security" (less of an asserted "security boundary") is actually more actual security -- at least in some cases -- at least in the case of the Ford Model T...



