Producing a trustworthy x86-based Linux appliance (mjg59.dreamwidth.org)
236 points by todsacerdoti on June 2, 2021 | 138 comments



While I find this post and the ideas presented very interesting on the technical level, work in that direction ("remote attestation", making devices "tamper-proof") tends to give me a dystopian vibe - foreshadowing a world where there's no hardware left that you can hack, or build and flash your own firmware onto: complete tivoization, to re-use lingo from when the GPLv3 was drafted. That would neuter all the benefits Free Software provides.

What good is having all the source code in the world if I can never put my (or anyone else's) modifications to it into effect?


You're not wrong, but unfortunately those of us on the side of free software aren't the ones driving most technology decisions. This technology already exists, and if people want to use it to lock us out of our own hardware, they can already use it to do so. Right now we're partially saved by remote attestation on x86 systems just being too complicated (and, basically, too privacy-violating) to be deployed in a way that could be used against users, but this is basically what things like Safetynet on Android are doing right now.

When the proprietary software industry is already using this technology, I don't think we benefit by refusing to touch it ourselves. We can use this tech to lock down devices that are better off locked down, and where we're not violating user freedom in the process. We can use this to make it harder for activists to have their machines seized and compromised in a way they can't detect. Refusing to do the good things isn't going to slow down the spread of the bad things.


I am completely against any form of this technology, just like DRM, because it breaks the concept of physical ownership.

> We can use this to make it harder for activists to have their machines seized and compromised in a way they can't detect.

This argument is often made and I hate it, because it advocates destroying the freedom of many just for the needs of a tiny minority --- and if a nation-state is going after you, it's pretty much game over unless you can create your own hardware.

> Refusing to do the good things isn't going to slow down the spread of the bad things.

Maybe to you it's a good thing, but to many of us, that is the equivalent of giving Big Tech the noose and saying "don't put it around my neck!" (The saddest part is how many will happily work in these noose-factories, either oblivious to or convinced that what they're doing is "good".)


>it breaks the concept of physical ownership.

Am I missing something? This seems to be incorrect: this is explicitly a case where you, the hardware owner, control the signing keys. It's nothing like DRM, which is a case where an outside party controls the keys.


The problem is that now that this exists and is easy to set up, it's easy for the manufacturer to make a device where they're in control of the keys forever, instead of the eventual owner gaining control.


So don't buy that device? Those devices already exist, that doesn't prevent you from buying other devices where you control the keys. If manufacturing an unlocked device becomes unprofitable and stops happening everywhere, then we can talk about what to do, but I don't think the existence of secure boot on Linux is going to make much of a difference either way.


>If manufacturing an unlocked device becomes unprofitable and stops happening everywhere, then we can talk about what to do, but I don't think the existence of secure boot on Linux is going to make much of a difference either way.

You mean... the last decade or so? Pretty much all of mobile, period, sans the Librem 5 and I think maybe one other? Anything with an ARM chip that'll run Windows must use secure boot with code signed by Microsoft.

Or how about Nvidia (mostly)/AMD (to a lesser degree) video cards, where the entertainment industry increasingly relies on cryptographic attestation to constrain what people can do with hardware they bought? There is no "fully unlocked" buying option, and divesting yourself of Nvidia is impossible if you want to be able to use your card to the fullest.

Or John Deere with their crippled hardware as a service model?

I'm all for charging for convenience. That's a value add. I'm not cool with intentional crippling, or extortionate practices whereby the manufacturer maintains ultimate control after first sale, either legally or practically, through privileged access to signing keys.


So.... don't buy those devices? I have a pinephone myself, I don't really use GPGPU, and all the farmers I know buy tractors from other companies.


It's a race to the bottom - or the top depending on how you look at it. Unless right to repair laws cripple their attempts, companies like John Deere will be able to use their lock in to wring ever more revenue out of their customers. That extra revenue, even if marginal in the grand scheme of things, will drive improvements in their hardware and software that competitors will have to match, most often by implementing the same tactics as John Deere (why not, when JD has already proved them? an MBA will say). Agriculture is already an industry forever on the razor's edge so any short term competitive advantage in yield or even up front capital cost will rapidly outweigh any medium to long term maintenance issue for farmers that live from loan to loan.

Just look at how fast the industry flipped over from dumb TVs to smart TVs, a similarly competitive and low margin market. I haven't been able to find a dumb TV at a big box store in years and the only options left are for commercial signage displays that command a large premium - largely made by the same brands that make the smart TVs.

I have a Pinephone myself but Android and iOS have already sucked all the oxygen out of the room - it's still nowhere near ready to be a daily driver and it's a decade plus late to the party. I got a small open source 3G cell tower for development years before the Pinephone even hit the drawing board.


That's just making their argument for them. What is the argument in favor of them making unlocked devices? Can it be done while increasing their profits?


More and more services require Android/iOS, either with Google services, or they even detect and block users with rooted devices/custom firmware such as LineageOS (for example Revolut). So far, one can go without them, with some inconvenience, but it's getting progressively worse.


There's some startup (maybe a ycombinator one) that "provides online meeting places for apartment communities." The managers of the building I live in have been pushing it pretty hard. I went to join it the other month because it's always good to socialize some.

Absolutely no web presence outside of (essentially) a brochure and email complaint form. I thought about complaining but I really don't care enough.


Sorry, but the future of computing is secure attestation of everything the CPU runs -- from the boot firmware to end-user applications. In the open source world we have two options -- we can either get on board with this or we can fight it and lose.


Development work is expensive. A cheap turnkey way of building a locked-down, remote-attesting distribution is going to make the bad things cheaper and more common. I'm sure proprietary developers would get there eventually, but this is one class of software where I think publishing a stable, bug-free implementation that anyone can use does more harm than good.


It's been already done many times. You can lock down your own machine from scratch in a couple of days with no prior knowledge. There's really nothing to hide and all elements of it are useful in their own right.


How? You can use this to secure your own devices from tampering. Lots of (cheap) devices are already locked down like this, would it really help to deprive yourself of the capability to secure your own devices too?


Many people want contradictory things and loudly ignore the contradictions.


I personally don't dislike the concepts of trusted computing. As much as I love to tinker with things, the last thing I want is some data appliance being remotely exploitable.

I think all the devices that provide more security by being heavily locked down should basically have a tinker switch. If you really want to write your own firmware for your phone or your dishwasher, flip it to tinker mode, which locks you (maybe permanently) out of the software it shipped with and lets you flash whatever onto it. The manufacturer gets to waive all responsibility for your safety (digital, physical, etc.) from that point onward.

Bonus points if it just blows away the keys to the onboard software so you can use the security mechanisms for your own code.


That's still very consumer-hostile. How am I supposed to write a replacement firmware for my dishwasher if I have to brick it first? I have dishes to wash! I'm not gonna buy a second dishwasher for development purposes. Why can't I test it against the native firmware?

(And by the way - remotely exploitable dishwashers? What the heck is the world coming to?)


I was just thinking of the extremely litigation happy culture in the US. We literally have thousands of highway signs in cities saying that if you were in an accident, call us and we'll sue the person who rear ended you.

All I imagine is someone running their own firmware on an appliance, doing some unseen damage, and then reverting to the onboard firmware and getting hurt. It would be a field day for lawyers.

I completely agree that this sucks and it's the same reason robotics has proceeded at a snail's pace. God forbid making a Roomba with a more powerful vacuum, what if it ran over someone's toe.


Yes, and you only need this at the root hypervisor level, once peripherals can be abstracted in a new way (maybe DMA at a different privilege level, certain hardware features would be required).

I am not super mad if I have to run my custom kernel in a VM. It substantially reduces the surface area exposed.


Heh, Safetynet is consistently fooled by Magisk's hide-root option, so is it really doing that?


Yeah, on devices without hardware attestation. Which is now the new normal on all phones sold. When the software route inevitably gets disabled and you can no longer fool Google into believing you don't have hardware attestation, you are done for good.


I think the technology itself is great to have in the open source sphere. There are many valid reasons to want to have a system that is both open source AND cryptographically proven to run exactly the software you think it runs.

For example voting machines should be done in this way. Open source software such that outsiders are able to verify + a secure boot process such that anyone can verify that the machine is really running the code it is supposed to run.

Of course we should all still be very careful of what we accept in terms of control of our hardware. And I agree with you that things are not moving in the right direction there, with locked ecosystems everywhere.


But nothing here is cryptographically proven. Remote attestation a la Intel SGX is an opaque black box that comes down to trusting Intel.

I think most people would prefer no voting machine software at all, seeing how most people can not "verify that the machine is really running the code it is supposed to run" but can indeed verify a paper ballot.

And of course signing a huge code bundle is the farthest possible thing from "run exactly the software you think it runs". Console manufacturers keep learning that. You really wanted to run that WebKit version that turned out to instead be a generic code execution widget? Think again.


TPM-based remote attestation doesn't involve SGX at all. If Boot Guard is enabled then you're trusting the ACM to perform its measurements correctly, but that's not an overly large amount of code, and it can be disassembled and verified.
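For anyone unfamiliar with what "measurements" means here, a toy sketch of the measured boot idea (assuming a SHA-256 PCR bank; the component names are made up):

    import hashlib

    def pcr_extend(pcr: bytes, event: bytes) -> bytes:
        # TPM PCR extend: new_pcr = SHA256(old_pcr || SHA256(event))
        return hashlib.sha256(pcr + hashlib.sha256(event).digest()).digest()

    pcr = bytes(32)  # PCRs start out all-zero
    for component in [b"firmware", b"bootloader", b"kernel", b"initramfs"]:
        pcr = pcr_extend(pcr, component)
    print(pcr.hex())

Changing any component, or the order it's measured in, changes the final value, and there's no way to "un-extend" a measurement afterwards - which is why the resulting PCR values can stand in for "what actually booted".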


Sure, you can prefer paper ballots, but that's just one example.

The reason I bring it up is that one of the benefits of open source that is often mentioned is the ability to verify that it does what you think it's doing. Doesn't matter whether it's a voting machine, a self driving system or an ATM or whatever. It's still good for open source to have the capability to do this kind of proving in cases where you want it.


> I think most people would prefer no voting machine software at all

The majority of people, with normal sight and no mobility impairment, may be fine with paper ballots. But for some of us, an accessible voting machine is more than a convenience, as it enables us to independently (and therefore privately) cast our votes.


Keep it as a secondary option then? Or, at worst, have every state or county independently write the software for their machines so they won't all be compromised. A scalable way of breaking an election is dangerous.

Even a mobile app to guide blind users through the ballot would be more secure.


A machine could fill out the paper ballot in these cases.


If you don't trust the CPU vendor, a solution there would be to buy multiple CPUs from multiple vendors, run the same thing on all of them, and compare the results. You would still want them all to have the equivalent of SGX.


I have yet to see a digital voting system that a grandma with 0 digital literacy can dream of trusting. That's my standard for digital voting.

Basically excludes any black-box machines, blockchain, cryptography, and any existing computers.


Why can’t we have both? General-purpose computers for whatever-we-want, and single purpose devices from a trusted source which we can be more confident are untampered with?


Copyright owners will push for the latter, maybe even take a page out of Google's Play Store licensing and outright prohibit hardware manufacturers from even dreaming about producing a device that doesn't enforce their draconian requirements. So hardware manufacturers get the hard choice of either going for the general market that expects things to work or going for a tiny market that is happy with some of the most popular services not working by design.


You can still do both. Just allow the user to blow a fuse, and from then on the locks are removed.


> So hardware manufacturers get the hard choice of either going for the general market that expects things to work or going for a tiny market that is happy with some of the most popular services not working by design.

Software is already here, particularly in social media (mastodon/SSB vs Facebook). That hardware eventually gets there seems to me an inevitability (arguably we're already at least partially there, as evidenced by the fact Purism/Pine64/etc exist).

I still don't see it as a problem, though, because an individual can have different technical interfaces (devices, OSes, etc) for different purposes.

Generally, I put my personal stuff on systems I understand/control.

For some things, like watching TV, I'm okay with going to Netflix because that transaction is expected to be transitory. If Netflix disappears or declares themselves a new world order tomorrow, I can simply unsub and no harm done.

Where things get problematic is when so much of someone's life is wrapped up in a mono-corporate cocoon (e.g. Amazon shipping things to your house and running your servers, or Google serving you search results + mail + maps).


> For some things, like watching TV, I'm okay with going to Netflix because that transaction is expected to be transitory. If Netflix disappears or declares themselves a new world order tomorrow, I can simply unsub and no harm done.

So much for your $1000 TV that had Netflix and only Netflix built in, and will refuse to boot when the cryptographic check fails because you changed the string that points to http://www.netflix.com to http://www.notnetflix.com


Or when Netflix refuses to continue supporting their app on your device and so you are forced to upgrade, despite your device still being fully capable of running video streaming (like the Wii, and probably soon the PS3 and Xbox 360)


> I can simply unsub and no harm done.

But your TV manufacturer still wants to provide Netflix to other users, and Netflix decided to require all their devices to run its trusted code if they want to provide Netflix to anyone, whether you in particular want it or not. So your choice is to trash your existing TV and track down a manufacturer that doesn't have any support for Netflix, Hulu, Youtube, Amazon Prime, etc. at all to buy a new TV that doesn't ignore your choice. With TVs you might be lucky, since there is a large market for dumb displays that avoid any TV-related functionality anyway; of course there might be restrictions in the license between Netflix and the TV manufacturer to close that loophole too, maybe limiting sales of dumb displays to specific types of users.


I've not thought this through extensively, but couldn't you just flash signed firmwares onto those "single-purpose trusted devices"? If they were open to flashing that is.

What more to want from a security perspective? In-device protection from flashing? Sounds similar to security through obscurity. I'd prefer easy ways to check what a device is flashed with. Something like a checksum calculator device. Not sure if that's a reasonable idea.


There's a bunch of cases where you want these devices to be resilient against attackers who have physical access to the system. That means it has to be impossible to simply re-flash them given physical access, unless you can detect that this has happened. That's what the trusted boot side of this is - it gives you an indication that that reflashing has occurred.

Out of band firmware validation is a real thing (Google's Titan sits in between the firmware and the CPU and records what goes over the bus, and can attest to that later), but that's basically just moving who owns the root of trust, and if you don't trust your CPU vendor to properly record what firmware it executes you should ask whether you trust your CPU vendor to execute the instructions you give it. Pretty much every option we currently have is just in a slightly different part of the trade off space.
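To make the "attest to that later" part concrete, here's a toy verifier-side sketch. It assumes the device reports an event log plus a quoted PCR value; in a real deployment the quote is signed by the TPM (or Titan), and that signature check is omitted here:

    import hashlib

    def replay_event_log(event_log):
        # Recompute the PCR value a boot that measured these events would produce.
        pcr = bytes(32)
        for event in event_log:
            pcr = hashlib.sha256(pcr + hashlib.sha256(event).digest()).digest()
        return pcr

    # Stand-in for the signed quote a real device would send.
    golden_log = [b"firmware v1.2", b"shim", b"grub", b"kernel 5.12"]
    quoted_pcr = replay_event_log(golden_log)

    # The verifier replays the reported log and checks it reproduces the quoted value.
    reported_log = [b"firmware v1.2", b"shim", b"grub", b"kernel 5.12"]
    if replay_event_log(reported_log) == quoted_pcr:
        print("event log consistent with the quoted measurement")
    else:
        print("log and quote disagree - possible tampering")

The individual log entries can then be checked against a known-good list; the quote just proves the log hasn't been edited after the fact.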


> There's a bunch of cases where you want these devices to be resilient against attackers who have physical access to the system.

I looked into TPM stuff a few years ago, and it all seemed pretty useless to me.

First of all, the entire key-protection house of cards relies on the assumption that if you've booted the right OS, the keys can safely be unsealed. But the TPM does nothing to protect from security issues beyond that point, which is where the vast majority of security issues are.

Second of all, if you're worried about someone snatching your laptop or phone, full disk encryption where you type the password at boot gets you 99% of the protection with much less complexity. And the much lower complexity means many fewer places for security bugs to be accidentally introduced.

Third, if you're worried about evil maid attacks where someone dismantles your laptop and messes with its internals without you knowing then gives it back to you, then the TPM isn't sufficient protection anyway. They can simply put in a hardware keylogger, or get direct memory access, in which case it's game over anyway.

And fourth, the TPM doesn't have a dedicated hardware button (making it a shitty replacement for a U2F key) and doesn't have an independent clock (making it a shitty replacement for TOTP on your phone) so it's not even a good replacement for other security hardware.

About the only use I can see for this stuff is if you're some huge multinational company, and you think even the authorised users of your computers can't be trusted.


Just some small notes/nitpicks...

>the TPM does nothing to protect from security issues beyond that point, which is the vast majority of security issues.

I hear this type of thing often but it's the wrong mindset to take when dealing with this stuff. Security holes in one part of the stack are not an excuse to avoid fixing security holes in other parts -- if you do that, you now have multiple security bugs that are going unfixed.

>And the much lower complexity means many fewer places for security bugs to be accidentally introduced.

This doesn't seem to make any sense; avoiding securing the boot process does not mean the boot process is any less complicated or somehow has fewer parts that can be compromised. TFA is just describing how to secure the parts that are already there.

>They can simply put in a hardware keylogger, or get direct memory access, in which case it's game over anyway.

I'm not sure how this is related, building a tamper-proof case seems to be outside of the scope of this. This seems to cover only the software parts.


> avoiding securing the boot process does not mean the boot process is any less complicated

Of course it does: Not only does secure boot add an extra point of failure, it's a point of failure that's specifically designed to be highly sensitive, and to fail locked, and that hardly anyone in the kernel development community is testing with.

> I'm not sure how this is related

From a computer owner's point of view, the TPM's secure boot functionality exists only to protect against attackers with physical access to the device. After all, if a malicious attacker making a remote attack has the ability to replace your bootloader or reflash your BIOS, they've already got everything.

In other words, secure boot is there to protect against an evil maid [1] removing your hard drive, replacing your bootloader with one that logs your full disk encryption password, then subsequently stealing your laptop and password at the same time. Or something of that ilk.

However, the TPM is insufficient to protect against such attacks.

As such, secure boot fails to provide the one thing it claims to provide.

A serious system - like the Xbox's security system - (a) has the functionality on the CPU die, and (b) has the hardware for full-speed RAM, bus and disk crypto, all with keys that are inaccessible to the OS.

[1] https://en.wikipedia.org/wiki/Evil_maid_attack


I don't understand what you mean by extra point of failure. There is a boot process; you can't get rid of it, because then you can't boot the machine. So you either secure it or you don't. I get the concern that the hardware implementation could contain bugs, and that's a real concern, but your system is not going to be less secure by having this -- at worst, it seems it can only be as insecure as it would be without it.

>However, the TPM is insufficient to protect against such attacks. As such, secure boot fails to provide the one thing it claims to provide.

I don't think anyone is saying TPM or secure boot alone is going to prevent against such attacks. It needs to be combined with some other physical protection measures, e.g. a tamper-proof case of some kind.


Likely what they are referring to is how UEFI has greatly complicated the nature of a computer's boot process by essentially inserting a firmware runtime into what was a simpler-to-understand chain of POST -> hand off to the program at a fixed address.

I had issues wrapping my head around this as well with regards to things like Network Boot etc, where I could not for the life of me understand or justify a boot process having a runtime capable of doing all this extra cryptographic/network nonsense when all I bloody wanted was my OS, up, now.

Not to get nostalgic, but that magical era for a user around Windows XP with a <5 second boot was just that: magic.

I know all the oldtimers will come out of the woodwork with horror stories of competing, sloppily specified BIOS implementations, the pain of malware hiding in CMOS, the threat of rootkits, etc... And the admins will chime in with "How do you expect me to power on the thousands of servers in my datacenter without network access during boot"?

Those are valid and real situations that, in isolation, I can stomach. I cannot, however, stomach a boot process whereby a non-owner arranges things in a way where it is guaranteed that they get the final word in deciding how hardware you paid for is used, which requires the composition of those services.


> It needs to be combined with some other physical protection measures, e.g. a tamper-proof case of some kind.

Xboxes and iPhones don't need tamper-proof cases.


Note the term "appliance" in the submission title, and "single-purpose trusted devices" in the comment chain you replied to. General end-user desktop devices like your laptop indeed aren't that high on the list of use cases, at least not without further development in other areas. (although I think you are somewhat skipping over the part of being able to protect secrets stored in the TPM against being requested by an unverified system)


People will not willingly buy a locked-down device over an open device, all other things being equal. So general purpose devices will not be made available, so that locked-down devices will sell.

edit: the only people who think that being locked-down is a feature are rationalizing technologists who indirectly profit from that arrangement. It's not even more secure. The methods used to control locked-down devices (namely constant network connections and cloud storage/control) are the most vulnerable attack surfaces we have, and the source of virtually all contemporary security disasters.


I sorely wish you were right, but the success of companies like Apple seem to indicate otherwise. I won't buy a locked-down device, but for every person like me there are thousands who don't care.


This is a perfect example of survivorship bias. OSes are secure now, so the attacks that succeed attack the security system.


Being able to verify that your program runs on authenticated firmware does not mean you can't modify the firmware or the software running or replace it with something else.

It just means that you can be sure no one else has tampered with your device.

To me it seems very silly to not follow this line of thought just because someone in the future might use it to lock out hackers. This is like leaving bugs unfixed because someone might have a use for them.


The user being able to verify things isn't the issue, the issue is somebody else being able to verify things, perhaps even requiring verification. This can even be extended (and already has been) to where binaries can be supplied in encrypted form and run on a processor that never reveals the unencrypted code to its user.


Who cares? Then you just won't hack that device.


The question with secure boot is who has the keys. As long as that's the end user it is awesome.


Which is why every trusted computing platform with free software on it should have an owner override. (The total count of such platforms is, as far as I know, unfortunately zero.)


Pretty much every x86 system with a TPM and UEFI Secure Boot can have the secure boot keys swapped out by the owner.


I recently went to write a kernel module on my Ubuntu system, only to discover the boot loader now defaulted to "secure boot" nonsense and I couldn't insmod a non-signed module.

I tried to simply disable "secure boot" in the BIOS settings and then the boot loader just did absolutely nothing. Hot fucking garbage.

Apparently, if you have "secure boot" available during the install it will use "secure boot" without any way to opt-out.


Did you try disabling it in shim-signed instead of the BIOS (method 2 on this page [1])? I'd expect that to be more consistent and/or reliable since BIOS quality can vary a lot from vendor to vendor.

You might also try signing the kernel module yourself (the manual method at the bottom of that page)?

[1] https://wiki.ubuntu.com/UEFI/SecureBoot/DKMS


I don't want to do any of that. I just want to insmod a module like I've been doing since 1995.


While the use case of secure boot is often anti consumer/end-user, there are many applications where attestation makes sense (yubikey-type embedded projects, etc.).

Without free software implementations of secure boot et al. all this would just happen behind closed doors. At least with this the field progresses and you'll have the tools to secure your own applications when the right project comes.

> What good is having all the source code in the world if I can never put my (or anyone else's) modifications to it into effect?

Well, it'll be more difficult to get pwned, for one.


I know this is going to be an unpopular opinion, but if a service provider wants to support only specific clients which have a rigorously-managed supply chain of their hardware/software, then that's up to them.


If I buy a computer, I own that computer; there should be no strings attached.


Yes; sure.

But you don't own (for example) Netflix. So Netflix can exclude you if you use your computer in certain ways, right?

i.e. if you refuse to use a secure boot infrastructure and do remote attestation of the trust chain to Netflix, they can refuse to provide you service - obviously based on the assumption that this was all made clear to you when you signed up.


The 'ole "they're a private business, they can do what they want" defense.

I try not to whip that out because it's all fun and cool 'til they do something you don't like.


> Complete tivoization

Ironically TiVo is long gone from living rooms.

The iPhone is locked down, consumers buy 5.5 Android phones for every 1 iPhone. But rich users buy iPhones, and they also buy software, so...


It's interesting that there's no attempt to solve the actual problem here - telling the difference between the owner of the device (who should be able to do what they like to their stuff) and an attacker (who must be prevented from making changes to the device).

Both are presumed to have extended physical access to the device, any requisite knowledge, tools, etc.

The normal solution to this is to have a password that only the owner knows. I'm assuming that that hasn't been used in this case because the intention here is actually to lock the owner out and only allow the manufacturer access to change the device. Is that the case?


Where do you put the password? If you're just protecting the firmware setup, then I can just rewrite the secure boot key database without using the firmware setup. If you're using it for disk encryption, I just buy one myself, dump the disk image after it's decrypted, modify it so it'll boot whatever password you type in, intercept your device while it's being shipped or spend some time alone in a room with it, and drop my backdoored image on there.

Please don't get me wrong - I would love a solution to this problem that wasn't isomorphic to a solution that can be used to lock out users. But I've spent the last 8 years or so of my life trying to do so and haven't succeeded yet, because the hardware we have available to us simply doesn't seem to give us that option.


Is it really that easy just to overwrite the secure boot key database? I recently set up secure boot on a Lenovo laptop according to these instructions https://nwildner.com/posts/2020-07-04-secure-your-boot-proce...

...and deploying the keys involves setting UEFI in the Setup mode, which is protected by the firmware setup password.

Granted, I didn't verify where the keys are stored and in which format once deployed. But it would be pretty disappointing if they were just copied to another place without any encryption or signing authorized by the password.


The UEFI variable store is typically just a region of the flash chip that stores the firmware. It's not impossible that some systems perform some sort of integrity validation, but I haven't seen that in the wild.


Isn't Intel BIOS Guard supposed to protect against this very attack?


Well, you're comparing hashes to ones online, can't you put the password hash online? (sorry, I am very ignorant of the situation here and am asking stupid simple questions)

I mean, surely the same problem exists for any data stored on the device (keys, hashes, whatever)? If there's a way of storing a key chain securely on the device so it can't be modified by an attacker, can't a password be stored there instead?

> ... the hardware we have available to us simply doesn't seem to give us that option.

Is that because the manufacturers don't give the option, or because technically there isn't a way of giving the option?


> Well, you're comparing hashes to ones online, can't you put the password hash online? (sorry, I am very ignorant of the situation here and am asking stupid simple questions)

If the system is compromised then it can just report the expected password hash. You can't trust a compromised machine to tell the truth.

> I mean, surely the same problem exists for any data stored on the device (keys, hashes, whatever)? If there's a way of storing a key chain securely on the device so it can't be modified by an attacker, can't a password be stored there instead?

Various bits of TPM functionality can be tied to requiring a password, but it's hard to actually turn that into something that's more freedom preserving while still preserving the verifiability. What you want is the ability to re-seal a secret to new values only as long as you have the password available, and I don't /think/ you can construct a system that does that with the features the spec gives you.

> Is that because the manufacturers don't give the option, or because technically there isn't a way of giving the option?

Unclear. There's little commercial incentive for people to come up with elegant solutions to this, and I can't immediately think of a model that would be significantly better than the status quo.


> If the system is compromised then it can just report the expected password hash. You can't trust a compromised machine to tell the truth.

I think the GP's point is that you assume the system can't tell the truth so you do the validation server side rather than client side. Sure, the system could send a different password hash, but as long as you don't publish the correct hashes it's not going to matter if the client sends alternative hashes since that validation happens server side and thus the client wouldn't know what hashes are valid.


Thanks for the clear answers and info :) It's an interesting subject!


This distinction is only useful if additional conditions are satisfied:

The user must not be asked to enter that password repeatedly to do everyday tasks, otherwise it is easy to trick them into entering the password into a malicious input field.

More important, the user must not be expected to routinely waive all protection for required tasks. For example, the user is often expected when installing software on a computer to give administrator privileges to an installer script which comes from an untrusted source. The user is expected to "make sure the source is trusted". This does nothing but put the blame on the user, who cannot do that in a meaningful way at all, yet is expected by other factors to install and use the software.

The user must not be forced by other factors to enter the password and take measures which are not just unsafe but are actually known to attack the system, exfiltrate data or similar.


I agree totally.

But we haven't come up with any better methods of doing this. Some tasks require making changes to the machine that could be evil, so we have to ask the user for permission to make those changes (to stop evil people making those changes with no permission). The more parts of the device we protect, the more we have to ask for permission, and the more routine the granting of that permission is and the less effective the whole permission-granting mechanism is.

Coming up with a decent, workable, solution to this would be great. As you say, in an ideal world the onus would not be on the user to verify that the software they're installing is not malicious (with no way of effectively doing that).

hmm, sounds like a problem that could be lucrative to solve...


> I'm assuming that that hasn't been used in this case because the intention here is actually to lock the owner out and only allow the manufacturer access to change the device. Is that the case?

No, the intention is to lock out functionality: either program code that can't be decrypted unless secure boot actually boots securely, or access to remote networks. That second one is where the controversy comes in, because it means if the owner of the device is Netflix (or FB/Youtube/HBO/...) and the network is owned by Netflix, you cannot change anything on the device in your house and still watch Netflix.

Because of this locking out functionality it is referred to as "rendering your device useless". It can of course still do everything it can do, at the owner's request, just not with Netflix' data.


Trusted computing, and TPMs by extension, are treachery on a chip without a user override. And the fact that even the most tech-savvy of us don't care (looking at security researchers with Macs) makes me super pessimistic about the future of computing. Can't wait for the time when I won't be allowed to access a website because I have unsigned drivers running...


You're conflating trusted computing with there being an OS-manufacturer monopoly over TCB boot keys.

Trusted computing is great if you're an IT admin (or even an "IT admin of one") and you order devices with an open/unsealed boot-key store from your hardware vendor. You can install your own boot keys on your fleet of devices, seal the boot key store, modify OS images/updates to your heart's content, and then sign those modified images with those same keys. Now only the images you've created will run on those computers. People won't even be able to install the original, unmodified OS on those machines; they'll now only ever run the version you've signed.

This isn't just about employees not being able to futz with the MDM. Even the OS vendor won't be able to push updates to your managed devices without your consent. You'll truly have taken control over what code the devices will (or more importantly, won't) run.

In any situation where the person intended to have physical access to the device is not the same as the owner-operator of the device, this kind of thing is essential. Obviously, public-use computers in libraries et al. But also, ATMs. Kiosks. Digital workplace control panels and dashboards. On all of those, nobody's around to monitor the hardware to ensure that someone doesn't just open the back and swap the hard drive out for their own. With TCB, swapping the hard drive out just makes the device not boot.


>You're conflating trusted computing with there being an OS-manufacturer monopoly over TCB boot keys.

Because the two go hand in hand in the wild already. Just look at all Android phones, where you have to make a pact with the devil to get your unlock keys (if possible at all), and that still doesn't give you full override capabilities and also marks your device as "tainted", so you can say goodbye to banking apps and increasingly more stuff.

I'm not concerned about enterprise deployments because companies had a lot of tools to (rightfully) lock down devices given to their employees since the dawn of computing.

However I am concerned about the philosophy coming over from phones to the desktop space with announcements like https://news.ycombinator.com/item?id=25191319


You and I have very different definitions of “in the wild”, it seems. To me, “the wild” is embedded civic-infrastructure / industrial systems. This is a place where people will encounter trusted computing involuntarily.

Consumer electronics — phones, game consoles, etc. — these are places where people choose to buy the dang thing in the first place, despite the restrictions the manufacturer imposes. Trusted computing isn’t the problem, it’s a tool used by an abuser against their Stockholm-syndrome victims. The abuser themselves — and society having no social norms that protect against developing this particular Stockholm-syndrome — are the problems.

The stuff people want from these devices — the reason they buy into these locked-down ecosystems — is almost always just the platform-locked software that runs on them, not the distinctive hardware (that the software doesn't usually even take advantage of!) But that fact is great for the end-user: it means that, as long as one person figures out how to defeat the platform's software DRM, and another person figures out how to write an emulator for the hardware, then there'll never be a reason to lock yourself into these ecosystems. When software is the USP, users can just use an alternative conformant implementation of the same platform (from the software's perspective) that isn't locked down, to run said software.


> Can't wait for the time when i wont be allowed to access a website because I have unsigned drivers running...

If that happens, it will create a market opportunity for websites without DRM or such checks. If you fuck with the ergonomics, you necessarily always create a market opportunity for competitors IMO. That being said, I also would rather use open computing platforms where I can easily install whatever OS, drivers, hardware or userland software I please.


Will it though? From what I see anecdotally, people will just accept it as the new normal sooner or later. Just like when Android rolled out a feature that enables apps to prevent you screenshotting them. At first it was annoying but now nobody cares.


I tend to agree. This ended up longer than expected, sorry.

There's the theory of how incentives should work in free markets, and then there's the practice of exactly how savvy consumers can really be, and whether non-consumer interests can organize themselves in a way that easily overpowers the consumers.

I've thought about this recently regarding hardware DRM in Android phones. Google has Widevine which has different levels of support, and Netflix, for example, will only send high definition streams if your device supports L1 Widevine which means it will only be decrypted in "secure" sections of hardware and the user cannot access these areas. This is intended to stop user access to the unencrypted media.

This hardware is widely available in Android devices already, so why would Netflix* do otherwise? And if you want to stream HD from Netflix then you'll get a device that supports it because Netflix require it. However, how did our devices end up with this technology to begin with? If consumers acted in their own best interest, why would they pay to have this extra technology in their devices that protects somebody else's interest? If this technology wasn't on our devices already, do we think that Netflix wouldn't be offering HD streams anyway? Basically, if consumers could organize as effectively as corporate interest, would this technology have made it to our devices at all?

It's possible that it would have. Perhaps overall people would deem it worthwhile to acknowledge and protect corporate rights holders so that they can continue to produce the media they want to consume and stop people consuming it for free. Personally, I would not have accepted this bargain and I would have left it up to the media companies to manage their own risks and rewards, and I strongly suspect that they would have found profitable ways of doing so that would include non-DRM HD streaming. I think it's tough to say what an educated consumer would think on average because so few consumers think about these things and those that do may have a strong bias that led them to research it in the first place.

* I'm saying Netflix here because it's easier, but in reality I'm sure a lot of the content they licence will require DRM so it's not entirely up to them


The Widevine situation puzzles me even more, because music and film rights holders will be at a perpetual disadvantage as long as you can point a camera at the screen and plug your DRM device into the line-in port on your motherboard.


I don't know...websites refusing to play video in Firefox because I have DRM disabled is accomplishing something I want right now.


My generation was taught: "Once you have physical access, the game is over." I believe this is still true. Pretending otherwise feels like snake oil.


This idea came about in the days when we kept disks unencrypted. Ripping the disk and editing /etc/shadow or pulling data was trivial. Physical access was the _only_ requirement to do this.

Disk encryption then became practical, and defeating it requires a running, vulnerable machine (for cold boot) or tampering+user interaction (for evil maid).

Secure Boot makes those even harder - you will have to compromise the TPM itself.

All of this is to say it is still possible to attack a machine with physical access, but you now have to engage in further security breaks. It's not really "game over" anymore, as there are further defenses to defeat.


Disk encryption is only really practical when you never need to reboot. Found that out the hard way, like most others did at some point :-)


Yup. Can't remote manage a system with encrypted boot disks.



Doesn't look like it. Those are some complex solutions, and I might have missed something, but do note that I said 'boot disk'. From what I can tell, those solutions still have to leave /boot unencrypted, which was my point.


It's merely a rationalisation for the companies to maintain control and lock users out of what they should rightly own.


This only applies to parties with significant resources.

The same way that locking your front door or being home or parking in a guarded parking lot will be effective in deterring opportunist thieves and even most regular thieves.

Absolute security doesn't exist but that doesn't mean security measures are futile in general.


Maybe not exactly futile, but too many security measures can become a serious problem in certain scenarios. Imagine you are being chased and need to get inside your front door as quickly as possible, the more locks you need to operate, the higher chances of not making it inside.

Now I'm going to propose a more likely future scenario. Imagine you are on another planet and there is a software flaw that needs to be patched. Do you want to have to beg some billionaire millions of miles away for the privilege of modifying the code, or do you want the ability to fix it as soon as possible? If it's a time-sensitive issue, consider the consequences of having to fumble around with deploying signed patches, or installing a new key and updating everything to work with the new key.

None of these trust devices are perfect: they might leak data that can be used to infer keys through side channels, or can be physically inspected, etc. I think we can all agree they are not perfect. So if the security model breaks down with physical access, then the only adequate solution is physical security. Putting your faith in some crypto function that will be obsoleted by time is not a winning defense. It's also possible the keys may have been obtained by a malicious adversary, rendering this mathematical security layer ineffective at best. If I can install new keys, then an attacker can also install new keys. If the imperfect security measure causes more problems than it solves, then it's worth rethinking.

What problem would it ACTUALLY solve in this scenario? Maybe it could be used to mitigate a breakaway civilization by only allowing patches through remote update channels, keeping new outposts fully dependent on Earth. Potentially at the cost of real lives if that update channel is broken somehow. Though someone always figures out a way to break the lock :)

Please I don't want to hear about some hypothetical dissident of a tyrannical regime, they should have destroyed the device before capture if they had any wits. The regime would probably be forcing its citizens to run fully validated stack anyway and would be all up in their business for (red flag) installing their own key instead of government approved key... Corporate security? Lol do you think your petty secrets are worth restricting humanity by normalizing imperfect security measures? Some of these corporations need to be kept in check somehow, they have too much power already.

The only future I'm on board with is one with a robust, well-tested software ecosystem controlled by its users/operators/owners, not a careless conglomeration that prioritizes the well-being of its shareholders, or crushing its competition.


Hypotheticals aside, there are very real and practical reasons for wanting to protect firmware and prevent tampering; and none of them are nefarious in nature.

One such reason is simply compliance: if a device is only capable of safe operation when running within a certain spec and dangerous otherwise, regulatory frameworks might require manufacturers to ensure said specs are honoured.

While a manufacturer can and must provide the "robust and well tested software ecosystem" you yourself desire, Mrs random tinkerer might not. A device running with modified firmware might therefore lose its operating license and - best case scenario - get damaged in the process, or, being dangerous when operated outside of spec, endanger the life, health and/or property of its owner/user/operator.

There's no need to dream up hypothetical scenarios on distant planets when all it takes is a typo to turn an MRI machine into a waffle iron for people or an insulin dispenser into a super effective suicide device.

Not every device is intended, or safe, to be tampered with by unqualified laypeople.

It's also pretty arrogant to assume just because you know how to program to also have sufficient knowledge to be able to understand and make meaningful changes to every conceivable device in existence that uses some kind of firmware.

There's a reason certain jobs require qualification and there's also good reasons certain devices are not to be messed with by unauthorised/unqualified people either.


It's true you shouldn't tinker with many types of hardware without extensive knowledge and experience. You're going to want to staff your off-world outpost with competent engineers and scientists that can develop and test new task-specific software or patch existing software, not some random entry-level programmer or tinkerer. IMO it would be extremely arrogant for a manufacturer to claim their remote updates "must" be tested adequately in every conceivable environment, up to and past the operating requirement extremes because some imaginary regulation dictates it. From {m,b,tr}illions of miles away because AFAIK the best they can do is simulate what might happen, not actually test operation in a real environment as their life actually depends on it. If I cannot audit the encrypted bytestream it sure as hell is not going to be piped into any of my spacecraft, life support, communication systems, or anything TBH. With me, trust starts with being able to read the source code, and modify it when it needs to be fixed. Giving a specific crypto function too much trust could leave you vulnerable or even lock you out of fixing critical flaws that "the manufacturer" sees as perfectly fine and within spec.

Maybe I've just had too many horrific experiences with software updates and that is causing some bias. Or seen too many science fiction films.


What a pile of authoritarian paranoia-scaremongering BS. That's the same argument used by John Deere and such to destroy right-to-repair, yet the automotive aftermarket survived for literally a century without that shit.

"Those who give up freedom for security deserve neither."


The frustrating flaw in these setups is disk integrity. It's pretty consistently {speed, integrity, mutability}: choose two. dm-crypt, dm-integrity, and dm-verity cover all the pairs, but none of them completely solve the problem. If you have a fairly static configuration, I imagine you can set up a blend of, say, dm-verity (for binaries/static files) and dm-integrity (for mutable files) and get something workable.

Caveat: I seem to recall dm-integrity being somewhat flawed and vulnerable to rollback.
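For the dm-verity piece specifically, here's a toy version of the underlying idea - a hash tree over fixed-size, read-only blocks, where only the root needs to be protected (the image filename below is made up; 4096 is dm-verity's default data block size):

    import hashlib

    BLOCK_SIZE = 4096

    def hash_blocks(data):
        return [hashlib.sha256(data[i:i + BLOCK_SIZE]).digest()
                for i in range(0, len(data), BLOCK_SIZE)]

    def merkle_root(hashes):
        # Combine hashes pairwise, level by level, until one root remains.
        while len(hashes) > 1:
            if len(hashes) % 2:
                hashes.append(hashes[-1])  # duplicate the odd node out
            hashes = [hashlib.sha256(hashes[i] + hashes[i + 1]).digest()
                      for i in range(0, len(hashes), 2)]
        return hashes[0]

    image = open("rootfs.img", "rb").read()  # hypothetical read-only image
    root = merkle_root(hash_blocks(image))
    print(root.hex())

Only the root hash needs to be protected (dm-verity takes it on the kernel command line, which can itself be covered by the signed/measured boot chain); every block read is then verified against the tree, which is also why this only works for read-only data.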


You could possibly use ZFS with sha256 checksums for that purpose? You would have to somehow sign the merkle root each time you write it, not sure how easy that would be. Perhaps write it to another partition and hope it's atomic enough? Or ZFS encryption would probably do it already if you don't need the system in cleartext.

https://blogs.oracle.com/bonwick/zfs-end-to-end-data-integri...
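A sketch of the "sign the Merkle root each time you write it" part - not ZFS's actual on-disk format, just the idea, using the third-party pyca/cryptography package and a stand-in root hash:

    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Stand-in for the root hash produced by whatever checksummed storage
    # you use (a ZFS uberblock checksum, a dm-verity root, etc.).
    root_hash = bytes.fromhex("ab" * 32)

    signing_key = ed25519.Ed25519PrivateKey.generate()
    signature = signing_key.sign(root_hash)

    # Later, a verifier holding only the public key checks the stored root.
    public_key = signing_key.public_key()
    public_key.verify(signature, root_hash)  # raises InvalidSignature on mismatch
    print("root hash signature verified")

The catch, as the reply below notes, is where the private key lives: if the system signs its own root after every write, the key has to be present on that same system.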


The tricky part with modifications is described in the article: you would have to have the signing key available on the system, which usually means it could be extracted from that system, and then it loses all protection.


You then have write amplification from the Merkle tree. Ignoring performance, something like this should be possible though. For atomicity, there's going to be some clever journaling-based solution.


Relevant, HEADS firmware: https://github.com/osresearch/heads

Definitely worth reading the Wiki: https://osresearch.net/

Can be run on a variety of laptops, including a ThinkPad X230. Ships by default on Librem laptops. Uses the second-to-last approach described by the article (TOTP-based).


> Let's say you're building some form of appliance on top of general purpose x86 hardware. You want to be able to verify the software it's running hasn't been tampered with. What's the best approach with existing technology?

Why can we not use something like Guix by declaratively setting up a system [0] and for extra safety have it run in a container [1]?

[0] https://framagit.org/tyreunom/guix/-/blob/99f47b53f755f0a6cb...

[1] https://guix.gnu.org/en/blog/2017/running-system-services-in...


You can't. The post is still somewhat valuable, but should really not use "trustworthy."


Dealing in absolute terms does not help security. It depends entirely on your threat model.

If you consider NSA or similar agencies a problem then you are in a world of pain anyway and using an entry level guiding blog post is certainly not appropriate.

For everyone else, this puts already quite a big defense layer to your arsenal even if not unhackable in absolute terms.


I know. Thus I said:

>The post is still somewhat valuable, but should really not use "trustworthy."

The posted article is dealing in absolutes, not me.


Why can't you? The article explains how.


Depends on your level of paranoia and the age of the CPU.

The ME has had many security vulnerabilities and probably more to come. For an appliance some old CPU might be good enough, but it does not get security updates. Some claim the ME might contain an NSA backdoor. That the ME can do networking certainly doesn't give confidence. The US government can order CPUs without ME, but nobody else can. Does not raise confidence either.


Please don't call it "paranoia": a whole lot of vulnerabilities have been found in CPUs together with plenty of undocumented functions that look just like backdoors.

On top of that, it is well known that governments research or buy 0-day hardware and software vulnerabilities and keep them secret to be used as weapons.

ME is just a fraction of the attack surface. When I read the title of the article I thought "trustworthy" was about mitigating hardware vulnerabilities.

At this stage it's practically impossible. :(


What's ME? The article doesn't seem to mention this.



Every Intel CPU made after 2008 contains a coprocessor which runs at a higher privilege level than the normal CPU, and therefore the OS. Its primary function is DRM for video, and it's theorized to provide backdoor access for governments.


It should be noted that AMD has an equivalent management engine.


But ARM hasn't. Or have they added something to their server range of designs?


The short version is "It's complicated". Most ARM cores have a feature called TrustZone. Effectively, there's a set of system resources that are allocated to TrustZone and not accessible from the normal world. Various events can trigger the CPU to transition into executing code in this "Secure world", at which point the core stops running stuff from the normal world and instead starts running an entirely separate set of things. This can be used to do things like "hardware" key generation, DRM management, device state attestation and so on. Whether a specific platform makes use of TrustZone is largely up to the platform designers, but there's plenty of room to hide backdoors there if you were so inclined.


Hmm, I have never seen TrustZone as comparable to the ME.

TrustZone is a secure execution environment, mostly isolated from normal CPU operation. Wasn't it the case that it cannot even access main memory?

Is the ME really more privileged than the CPU?

I have not heard of TrustZone doing networking, but the ME can supposedly even do WLAN while the CPU is not running.

Disclaimer: I am not a hands-on expert at that level, more like an armchair pilot...


TrustZone is a CPU mode, hence it is not fully isolated from normal CPU operation. The CPU chooses to enter it and the current CPU state gets saved/restored. It contains the highest exception level, so it is able to access all memory. It does not usually have networking because that would invite complexity, but there is nothing to stop a vendor from putting a full network stack in there and assigning a network peripheral. Typically, it would rely on the main OS to send and receive packets.
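
To make the "CPU mode" point concrete, here is a rough, hypothetical sketch of how normal-world kernel code (e.g. in a driver) calls a secure-world service via the standard SMC Calling Convention helper in Linux; the function ID here is made up for illustration, since real IDs are defined by the platform's secure firmware:

    #include <linux/arm-smccc.h>

    /* Hypothetical vendor-specific function ID; real values come from
       the platform's secure firmware documentation. */
    #define EXAMPLE_SIP_GET_STATE 0x82000042

    static unsigned long query_secure_world(void)
    {
            struct arm_smccc_res res;

            /* The SMC instruction traps to the secure monitor (EL3), which
               runs the secure-world handler and then returns, restoring the
               saved normal-world state. */
            arm_smccc_smc(EXAMPLE_SIP_GET_STATE, 0, 0, 0, 0, 0, 0, 0, &res);

            return res.a0; /* value returned by the secure world */
    }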


Console and phone manufacturers have chased this dream for decades, and each and every one has been hacked to run arbitrary code and applications that are supposed to ‘only run on trusted hardware’.

You can make it difficult, but defeating an attacker who can touch the hardware is, for all intents and purposes, impossible.


Where are the hacks that let you run arbitrary code on an Xbox One running current firmware?


Do you think they will never exist?

Edit: I found that Microsoft did the smart thing, like Sony did with the original PS3, and allowed people to run their own code (but not Xbox games) on their consoles, removing a large incentive for people to hack the console.

That doesn’t automatically make the security watertight though.


"Never" is a strong word, but given that they're already previous generation devices and haven't been properly compromised yet, it wouldn't surprise me.


It's right there in the article: "In the general purpose Linux world, we use an intermediate bootloader called Shim to bridge from the Microsoft signing authority to a distribution one."

So you need to trust Microsoft for the first keys :)


We do that for convenience, so you can boot Linux without having to hunt through firmware menus to reconfigure them. But every machine /should/ let the user enroll their own keys[1], so users are free to shift to a different signing authority.

[1] Every machine I've ever had access to has. If anyone has an x86 machine with a Windows 8 or later sticker that implements secure boot but doesn't let you modify the secure boot key database, I have a standing offer that I'll buy one myself and do what I can to rectify this. I just need a model number and some willingness on your part to chip in if it turns out you were wrong.


I have been trying to improve the usability of secure boot key management on Linux for the past year by writing some libraries from scratch, along with sbctl. I have even started writing full integration testing with tianocore/ovmf!

https://github.com/Foxboron/sbctl

It should hopefully end up being an improvement on efitools and sbsigntools. I've tried posting about this on HN, but somehow it's a topic with little to no interest. Strange world!
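
For anyone wondering what the workflow looks like, it is roughly the following (a sketch of the intended usage; exact subcommands and flags may change between releases, so check the README):

    sbctl status        # show Secure Boot / setup mode state
    sbctl create-keys   # generate your own platform owner keys
    sbctl enroll-keys   # enroll them (firmware must be in setup mode)
    sbctl sign -s /boot/vmlinuz-linux   # sign a kernel and track it for re-signing
    sbctl verify        # check that all tracked files are still signed

The /boot/vmlinuz-linux path is just an example; sign whatever your bootloader actually loads.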


Most Surface Pro x86 devices do not let you enroll user keys through the firmware. In fact, the original Surface Pro doesn't even have the MS UEFI key, so it can't even boot Shim. Later Surface devices do allow you to enroll the MS UEFI key through a firmware update (requires Windows), and starting from the Surface Pro 3, iirc, the MS UEFI key is built in (but there's still no option to enroll your own keys through the firmware).

However, they all do have the option to disable Secure Boot entirely (and you get a permanent red boot screen for the privilege).


“ Dan would eventually find out about the free kernels, even entire free operating systems, that had existed around the turn of the century. But not only were they illegal, like debuggers—you could not install one if you had one, without knowing your computer's root password. And neither the FBI nor Microsoft Support would tell you that.”


Source: The Right to Read, a short story by RMS, 1997.

https://www.gnu.org/philosophy/right-to-read.en.html


You should not do that since there is no reason to disallow the user from doing what they want.

But if you really want it, writing a custom boot ROM and OS is probably the only way you can have an actually secure system (you might need a custom CPU as well).

Given the lack of skill and discipline of most programmers, the whole TPM/secure-boot/UEFI/grub/Linux/dm-verity stack is likely full of holes, so just assuming that it works as you'd expect will probably end in disappointment.


> You should not do that since there is no reason to disallow the user from doing what they want.

For desktop computing ("personal computing", if you like), anyone here will agree.

But the article is specifically talking about securing appliances, and when talking about appliances, you're generally talking about rackable machines sold to the enterprise. There, customers don't care a jot about being able to muck around with the machine - the whole point of an appliance is that you plug it in and go, and that it more or less manages itself.

And for many of these customers, and of course any operating in a high-security environment (e.g. defence), this level of security is a feature, not a hindrance.


If one can measure the whole boot process and verify the attestation "remotely", why would one need secure boot on top of that?


You need secure boot to be able to ensure that the boot process is the one you set up. Otherwise an attacker can observe it once and then replace it with their own version that does whatever they want while saying "yup, here's your magic number, I totally generated it in a legit way and didn't just read it from a saved store".


"Unless you've got enough RAM that you can put your entire workload in the initramfs, you're going to want a filesystem as well, and you're going to want to verify that that filesystem hasn't been tampered with."

I have enough RAM. More than enough.

About 10+ years ago I was easily running the entire OS in 500MB of RAM on an underpowered computer, booting from a USB stick, with tmpfs (or mfs) and no swap. Today, the amount of RAM on a comparably-priced equivalent is even greater, at least double, more likely triple or quadruple. I was not doing this to avoid some "anti-tampering" scheme; I just wanted speed and cleanliness, and not having to worry about HDD failures.

However, I was using NetBSD, not Linux. One thing I have learned about Linux is that it is not nearly as nice in the way it handles memory exhaustion, or, in the case of this RAM disk setup, what we might call "running out of disk space". IME, NetBSD anticipates and handles this situation better by default. I could routinely run out of space, delete the overgrown file at issue and continue working. I noticed an interesting comment recently from a Linux developer on resource usage under NetBSD: "Lastly, redbean does the best job possible reporting on resource usage when the logger is in debug mode noting that NetBSD is the best at this."

I like this RAM disk setup because it forces me to decide what data I truly want to save long-term. Any data I want to preserve I move to removable storage. The system is clean on every reboot. In a sense, code is separated from data. The OS and user data are not stored together.

Anyway, putting the entire OS in the initramfs makes sense to me. The initramfs, or at least tmpfs, is a filesystem, so the "you're going to want a filesystem" comment seems strange. Also, I think the reason(s) one might want a more optimised, lower-overhead HDD-based filesystem could be something other than "not enough RAM".


Trusted computing is almost never about making the device trustworthy to the owner-operator. Quite the opposite, generally. It just gets marketed that way in the hope that folks don't ask too many questions.


The next step beyond this would be to put the actual workloads in a Trusted Execution Environment (Intel SGX) to add another layer of integrity.


No one wants this. (The people who understand it don't want it, and the people who don't care don't care.)


This seems obviously false? I want it and I understand it? Corporate IT departments would want it to mitigate risk?



