Authenticated Boot and Disk Encryption on Linux (2021) (0pointer.net)
54 points by iscream26 5 months ago | 72 comments



This is of course a slippery slope argument, but not necessarily wrong: I think systemd is moving a FLOSS operating system towards one that has source but comes in binaries. You get an initrd that's not built by you, tucked into a kernel also not built by you, running a userspace from an immutable image also not built by you, which can be updated with binary deltas also not built by you, etc...

This makes sense from the perspective of making a kitchen appliance for the home consumer market. It makes no sense at all from the perspective of building a Unix-like operating system with the four freedoms from scratch and ensuring it stays that way.


This mechanism makes perfect sense from the POV of every user (especially developers, who are often high-value targets) who isn't currently working on or actively contributing to low-level OS development. This is not a concern, unless your personal definition of computing freedom is equivalent to running Gentoo. If you want to tinker, there's always an escape hatch. Even macOS freely allows you to disable FileVault or System Integrity Protection (at your own risk).

Your freedom to tinker is not in conflict with my need to stay secure; in fact, when you're finally done with your tinkering, you too may appreciate the feeling of your data being secure against the most basic/common threats.

(I'm rarely in agreement with Poettering, but he's 100% on point here.)


I have no idea why having freedom would not include running Gentoo.

Gentoo, as a matter of fact, offers lots of freedom. Its package manager has built-in capability to distinguish licenses. You can choose between systemd or openrc. Musl or glibc. You can disable all sorts of configure options you don't want or need. You can use it stand-alone or inside another distro. You can specify cpu flags for the compiler globally and per package. You can drop in your own patches for any package (and yes, I use that too). You can more easily modify just about anything in the entire system than most distros.

Using Gentoo lets you build a useful system for whatever you do, from sources or binaries, tailored to your needs, without the burden of having to learn all of the different build systems, their dependencies, and weird quirks you'll come across as a package maintainer of any distro. Ever looked at the rpmspec of things you use? Or the patches in a Debian source package? Those details are all taken care of, but with portage still customizable on a high level.


I think the person's point was that, for the average user, freedom requires a lot of technical knowledge and fiddling. Gentoo is an example of a free system that needs a lot of technical knowledge and fiddling.


I picked on Gentoo because there's a vocal group of people who believe that unless you can trivially swap PID 1, your operating system is holding your freedom back. (And yes, I am saying this as someone who surgically swapped PID 1 to runit when Debian switched to systemd. I had more free time and less perspective.)

Let's put things differently. ssh-keygen(1) gives you the complete freedom to NOT have a passphrase on your private key, but asks you to provide one BY DEFAULT, which is the more secure choice. What you do with that choice is entirely up to you, but defaults matter, especially in security.
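
For reference, a minimal shell session showing both the default and the explicit opt-out (key type and path are just examples):

  # Default: ssh-keygen prompts for a passphrase (pressing Enter opts out)
  $ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
  Enter passphrase (empty for no passphrase):

  # Explicit opt-out: your freedom, your risk
  $ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""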


I don't quite get the arguments against the topic at all: if you don't want the added security, you can continue as you do now; and if you do want it, you can compile and sign the entire software chain yourself, or get the precompiled one. There don't seem to be any downsides here, or are there?


The downside is that one company holds the keys to the castle for this particular security scheme.

Also, saying freedom requires technical knowledge and fiddling is a non sequitur. Technical knowledge and fiddling is possible with freedoms 1 and 3. Without technical knowledge and fiddling you still benefit from freedoms 0 and 2. Thus, software freedom applies to everyone irrespective of skill level.


> The downside is that one company holds the keys to the castle for this particular security scheme.

And how exactly does that take away any of your freedom? You can still disable any or all parts of the verification chain at will, or enroll your own keys. No privilege has been taken away from you.
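
As a concrete sketch of the self-enrolment path (assuming your firmware is in setup mode; the tool choice and paths are illustrative), the sbctl utility automates it:

  # Generate your own Secure Boot keys and enroll them in the firmware
  $ sbctl create-keys
  $ sbctl enroll-keys          # add -m to keep trusting Microsoft's keys too
  # Sign your own bootloader/kernel and check the result
  $ sbctl sign /efi/EFI/Linux/my-kernel.efi
  $ sbctl verify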

If you truly cared, you'd advocate for a way to make managing a self-signed trust chain less cumbersome; instead, you're advocating for the user to choose whether to compromise their security entirely. That's a lose-lose situation for a free software platform; ideally the user shouldn't have to choose any compromises.

The tech world is full of mono/oligopolies. You're running an x86 CPU from one of two vendors, using a browser engine either made by Google or paid for by Google, etc. Not depending on any "one company" is as simple as not using a computer at all. Is that a compromise that you'd be ready to suggest?

> Thus, software freedom applies to everyone irrespective of skill level.

Only if your definition of freedom is as narrow as the fundamentalist "four software freedoms". To someone else, their definition of computing freedom may go more like "I want to play my favourite computer game, but I only have one hour left this evening". At that point, "irrespective of skill level" is an utter lie: most games are significantly more difficult to run on free OSes.

Unless you mean Steam, but isn't that a platform owned by a single company?...


You're missing the point entirely and brought a plate of red herring to the table.

I could roll keys for my own computer, but freedom 3 falls flat on its face when everyone else's private key is kept secret by one company. People unknowingly trust one company for their "security", while in fact the "security" in this entire scheme boils down to securing stock gains. You can hardly blame the consumers for buying computers that come pre-compromised with vendor-specific keys, as the change was touted as "more secure". Secure, again, in the sense that it secures even more money in already deep pockets. Those who can't change their OS or can't easily tick a box on a security checklist will stay on the prerolled platform.

Not being dependent on any one party is an effect of having freedom. Not a prerequisite.

And you conflate software freedom with personal freedom. The four freedoms you call narrow and fundamentalist apply to software. You argue no privilege is taken away from me, which is correct, but that also applies to the four software freedoms. I choose not to buy games that don't work on the OS I run. That's personal freedom. The software I write is free on its own to end up on anything from a roll of toilet paper to critical mission control systems. I don't care, because it's free as in freedom on its own.


> I choose not to buy games that don't work on the OS I run.

That's your personal choice. All I ask is that you don't advocate for narrowing down the personal choice for others.

> I don't care [...]

Yeah, that's the real problem here. When your needs are met, you don't care.


You have it backwards. One company holding the keys hurts personal choice for everyone.

And again you conflate software freedom with personal freedom. The needs of any particular piece of software are outlined in its license. My personal choice is to prefer hardware that works fine without non-free software, because I need a different level of trust than you.


> One company holding the keys hurts personal choice for everyone.

I understand why you like to hate on Microsoft (they have a long track record of playing dirty), but the actual keys that are preloaded into hardware that ships with UEFI are ultimately the choice and responsibility of individual OEMs (Lenovo, HP, Dell, etc.), and some of them are directly accountable for major screw-ups in this area, while others ship systems preloaded with a free OS and go the extra mile to verify that you have the means to install your own. Microsoft could have given zero fucks about cooperating, but rather than making this an impossible problem to resolve between every individual OEM and every individual distro/OS, they chose to sign a shim, so that everyone can play with everyone. I do not dismiss this as a possible threat vector, but please consider the wider picture.

What I don't understand is why you're hyperfixating on hating Microsoft (which, 13 years in, still hasn't made an aggressive move in this area), while Intel[0] puts an entire dedicated core, with its own (completely opaque and unauditable) OS, network interface, and a long track record of security holes, into every CPU they've shipped in the last 15+ years, with no user choice/control over that whatsoever.

[0]: https://en.wikipedia.org/wiki/Intel_Management_Engine

> [...] because I need a different level of trust than you.

Do you trust your CPU vendor - Intel? AMD? Apple? Qualcomm? Broadcom? Any other piece of silicon (hint: PCIe) that has unrestricted R/W access to your entire RAM? (Or did you even check if your system has an IOMMU, let alone who made it, how it's configured?)

I'm not dismissing the issue you're hyperfixated on, but the points you're raising are irrelevant in light of much more direct threats. You can't trust the software if you can't trust the hardware.

"Reflections on trusting trust" by Ken Thompson[1] is a 40yro classic, we are a looong way from that even if you dismiss hardware entirely and only consider trivial software-only supply chain attacks[2], and yet all you can see is the source code.

[1]: http://genius.cat-v.org/ken-thompson/texts/trusting-trust/

[2]: https://research.swtch.com/nih

My own need for trustability includes the need to continue trusting my laptop after I've left it unattended for one minute. Secure Boot & co. are currently the most practical way to even detect boot chain tampering. Evil Maid[3] was described 15 years ago, which is centuries in the black hat world, and free software developers (yes, you and me) are the most valuable targets, because of our work's potential far-reaching impact on the community.

[3]: https://en.wikipedia.org/wiki/Evil_maid_attack

If you develop software, and dismiss this class of problems, you become a liability to your users and/or employer - they can no longer trust you.

> And again you conflate software freedom with personal freedom.

I do not conflate them, I recognise software freedom as an aspect of personal freedom - but ultimately it is your own personal choice, which freedoms do you value the most. The vast majority of people using FOSS are anything but interested in compiling their own bootloaders/kernels, because we don't do boot-chain development work and instead we want this part of the OS to be stupid, simple, reliable, and secure, so that we can be free to focus on our actual work.

The "stupid, simple, reliable, and secure" part is the very thing that's missing from the entire Linux ecosystem and why I'm usually a vocal opponent of everything-Poettering, choosing to run OpenBSD where I can - their FDE[4] is orders of magnitude simpler/easier to audit than the bloody mess that is UEFI-shim+GRUB+Linux+initrd+cryptsetup. Again, if you actually cared, you would be advocating for software that is easier to audit. Source code that you can't read/comprehend is no better than a binary blob.

[4]: https://www.openbsd.org/faq/faq14.html#softraidFDE; the entire disk decryption code fits directly into the bootloader, thus even the kernel is encrypted.


You're free to empty your wallet into the corporation you mentioned, but really it's just pissing into the ocean.

The point I'm making is that software freedom can be hobbled by so-called "security" measures. When a bootloader can reject something you or a friend built yourselves, on the basis that it didn't come from a large corporate software vendor, the computer places more trust in its manufacturer than in its owner. This is especially problematic with smartphones and tablets.

There was a time when computers weren't pre-programmed to judge what the user is doing. You could load up any program and the machine would execute it. You could say this is insecure, but that depends on how you look at the problem. Secure Boot has proven to be ineffective at securing the boot process, but effective at thwarting attempts to replace MS Windows with Linux. Apple and Google are even more hostile, with the former openly admitting they isolate their users by calling attempts to bypass their scheme "jailbreaking". It doesn't matter which multi-trillion-dollar company is doing it; they're all hostile towards software freedom in my opinion.

Your argument on Intel is just a red herring to me. Two wrongs don't make a right. It's like saying corporate greed is okay because there's always bigger corporate greed out there. Whereas I'm de facto against all moves that hinder software freedom.

I haven't taken the time to read the documents you linked, but it appears you're making a strong case against supply-chain attacks. For now, I think there's still plenty of room for disagreement much like how computers in military zones can be made deliberately insecure by our petty citizen standards, simply because their threat model is something else entirely.

I don't want blobs because, as you alluded to, there's a real need to be able to inspect hardware and software for correctness. Any blob that gets in the way of that, I want gone forever.

Even if you don't do bootloader programming, I think it's good practice to build as much as you can yourself. Gentoo happens to be a good fit for this on the GNU/Linux side, as you can build everything but also intervene only for those few packages you really want fixed a particular way.


> I haven't taken the time to read the documents you linked [...]

Then I also don't have the time to read and address your response. There's no further discussion to be had where one side is no longer willing to display basic courtesy.


Basic courtesy such as being honest? You think you're smart because you can copy and paste a bunch of URLs from a search engine or your bookmarks file?

I didn't even ask for a discussion. I think I've made my point clear by now but feel free to keep being a jerk on the Internet.


The other person is debating and giving you references, not "copy-pasting URLs". It looks to me like you don't bother with counter-arguments and just repeat the same thing over and over. I don't think you'll convince anyone new like that, and you won't learn anything new yourself either.


Thank you for clearing that up, OP.

Thing is, can you send any random person a bunch of documents and expect that said person will read them at your whim? In their spare time? I can't afford to XKCD 386 ;)

I wish the both of you well.


AFAICT, you would be able to build and sign everything yourself.


If you're interested in this topic, the 5th "System Boot and Security" LPC microconference is on Sep 18: https://lpc.events/event/18/sessions/201/#20240918

  Developing trustworthy Linux-based systems in an open-source way
  Common git repo for hosting Boot-firmware
  Accelerating Linux Kernel Boot-Up for Large Multi-Core Systems
  Leveraging and managing SBAT revocation mechanism on distribution level
  Using U-boot as a UEFI payload
  Measured Boot, Secure Attestation & co, with systemd
  Secure Launch - DRTM solution on Arm platforms
  no more bootloader: please use the kernel instead
  OF != UEFI


> no more bootloader: please use the kernel instead

This was posted on HN before and I didn't find the arguments terribly compelling. I'm curious what security advantages they might be able to claim exist, though.


IIRC from a presentation, the main point behind NMBL is to not reimplement an entire OS in the bootloader, like GRUB does. Instead you use the kernel with an initrd and kexec if you want to boot into a different kernel. That way you only really need to take care of the existing kernel and userspace security.
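
A rough sketch of that kexec flow, replacing a round-trip through firmware and bootloader (kernel/initrd paths are illustrative):

  # Stage the target kernel/initrd from the running system
  $ kexec -l /boot/vmlinuz-6.9 --initrd=/boot/initrd.img-6.9 --reuse-cmdline
  # Jump straight into it, skipping firmware and bootloader
  $ systemctl kexec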

The problem with that is that it starts to muddy the TPM PCRs (read: it makes PCRs that should be predictable unpredictable) once the kernel gets kexec'd, and it makes the boot process needlessly more complicated. Not to mention that when the kernel/initrd fails to boot you are kinda SOL, since you can't really do any meaningful boot-count logic: it could be a faulty kernel that never even reaches the initrd.

I also haven't been able to be convinced that NMBL is better than a simple EFI bootloader that chainloads a kernel.


The last two paragraphs are my thoughts exactly. NMBL acts like it's solving a problem but I just see it creating more.



Have you been able to do a review of the code?

If yes: please share.

If not: why not?

Thanks!




(THAT, a citation of tptacek fulfilling someone's request, downvoted, with such an empty*, hypothetical argument?? So far it's libel; at least make some effort to prove otherwise in that case, Mr FSF-discreditor..

*"easier to hide backdoors": so you don't need source to check the _real_ code, only more effort)

vs. attack surface? (having no reasons for powerful adversaries)


> Instead of stealing your laptop the attacker takes the harddisk from your laptop while you aren't watching [...] makes a copy of it, and then puts it back.

I've never understood why people keep making this incredibly weak argument for secure boot.

Secure boot makes sense for a college computer lab, where any disk encryption is better than nothing, and you can't give everyone the password or it'd defeat the point.

Secure boot makes sense if you're a Microsoft-only company, as it's a closed-source OS anyway and Microsoft have the code-signing keys. It means your users only have one password to type in - and helpdesk can reset it remotely if a user forgets.

Secure boot makes sense if you're making something like an xbox or tivo where you want disk encryption but you can't give the owner the password, as they're the adversary you're trying to protect against.

And yet people instead ignore these benefits, and go for this spy thriller nonsense as if people are going to be crawling through the air vents and abseiling from the ceiling to interfere with my computer? If you're going to pretend to be James Bond you'd better also be learning ballroom dancing, kung fu, skiing and foreign languages.


What do average people do when something goes wrong with their secure boot? Search the web and apply whatever they find, hoping that their system boots again. They want a solution in under 1 minute, and don't care whether they expose their system. Secure boot is utterly complicated, a mess, badly documented, and people don't know shit about it.

And we want this to be the default for users? I like Lennart's work, but this further complicates things A LOT. What happens in case of hardware failure? If parts of the drive become unreadable and you need to retrieve as much data as possible? Oops, you forgot to enroll your recovery key...

What will people do to avoid data loss and to avoid learning how the system works as a whole? Create backups, and those will be stolen by nefarious entities instead.

Linux is mostly not so complicated. But this latest post... if this becomes the norm, oh god, it's an unnecessarily complicated way to protect against an imaginary threat. How widespread are these hard disk removals in the wild? I know of maybe 1 case in the last 10 years that was publicised.

People are paranoid about things they can't control and don't understand at all, and these measures calm their nerves. Whew, I'm so important, my data is so important, now I'm protecced. Meanwhile, the ones who really want your data can already waltz into your system anytime they want and you can't do shit against it, because you are an expert in at most one domain. The threat modelling already tells you that the compromise you have to accept is that there are peepz you can't defend against.


In a data loss situation you image the drive and decrypt it with your recovery key.

That has nothing to do with secure boot. You won't lose access to the drive; the issue is just that you don't want to be using that recovery key all the time.


With normal full disk encryption, every user has memorised the secret needed to recover the disk, because you get reminded of it every time you boot.

The TPM is intentionally designed to make sure this is no longer the case.

Seems like a downgrade to me, from a disk recovery perspective.


Storing infrequently used private documents safely is something everyone in the modern world has familiarity with.

Very few people have any familiarity with the risk model of encryption, even if they need or should have encryption (where "should have" includes providing cover for people who need encryption, by making encryption common). And even more people write down passwords rather than remember them.

For example: disk encryption keys basically never change, even if you change the password. So intercepting an image of the encrypted disk at time point A, and then intercepting the user typing the same password at time point A+N, gives you the password to decrypt the disk. You can also reverse the order of this.

If you cold-boot your laptop in any public area and enter your encryption password, then there's a high probability a local security camera has just captured the password. So the attack model can be "get a shot of someone typing on the keyboard in public" and then later "image the drive and crack it at your leisure".

If someone gets a copy of your drive image at an earlier point in time, then you change the password, then you mention what your old password was (because it's now "safe" right?), then you've just given them the ability to decrypt the old disk image, and probably the current one too (since they still have a copy of the encryption headers and thus the master keys, which didn't change).
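
With LUKS, for instance, this is easy to see: changing a passphrase only rewraps the same volume key, and actually rotating the volume key requires re-encrypting the device (the device path is an example):

  # Changes the passphrase; the underlying volume key stays the same
  $ cryptsetup luksChangeKey /dev/nvme0n1p2
  # Rotating the volume key means re-encrypting the whole device (LUKS2)
  $ cryptsetup reencrypt /dev/nvme0n1p2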

With TPM-based factors, these attacks become worthless: the drive separated from the computer, even if you know the user's password, can't be decrypted. The user changing their day-to-day password on the drive is a secure event, because the password only works with the computer it's attached to, not independently.
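
On a systemd-based distro, a sketch of that enrolment (the PCR choice is a policy decision, and you'd keep an offline recovery key too):

  # Bind a LUKS slot to this machine's TPM, sealed against Secure Boot state (PCR 7)
  $ systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2
  # Add a recovery key for when the TPM or the PCR policy changes
  $ systemd-cryptenroll --recovery-key /dev/nvme0n1p2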


Put a security system with a camera on it that sends an alert if someone gets close to it. Surely it would be a bizarre situation to have the highest level of cyber security and then not even be able to tell if someone physically broke into your house?

Quick survey. How many of you cyber security people out there "would not" be able to tell if someone broke into your house? :-) I'm betting a lot.


> Secure boot makes sense if you're a Microsoft-only company

Yes, the author is actually working at MS: https://en.wikipedia.org/wiki/Lennart_Poettering


Makes sense. Big Tech are the only ones pushing for Secure Boot because it gives them control over our machines. With that they will be able to garden-wall our PCs the same way they do with our phones.


Fortunately, my phone (Librem 5) doesn't obey Big Tech. Neither does my laptop.


And the author worked at Red Hat when he wrote the article in 2021.


Yes, but there were suspicions that he was involved with MS at that time, I heard.


> And yet people instead ignore these benefits, and go for this spy thriller nonsense as if people are going to be crawling through the air vents and abseiling from the ceiling to interfere with my computer?

CBP and other countries' "border control" routinely force people to let them examine their devices. That's bad enough; I'd at least be happy if there were an attestable way to verify these pigs don't install malware on people's devices.


I'm surprised that the author doesn't mention Pureboot [0] or even Heads [1], the most user-friendly [2] way to use the TPM on Linux and authenticate the boot process along with the root and /boot directories.

Also, there is no Microsoft involved in my laptop, i.e., the author's statement

> Microsoft's certificates are basically built into all of today's PCs

is wrong. I enjoy the coreboot with Heads on my Librem 14 with my own keys.

[0] https://docs.puri.sm/PureBoot.html

[1] https://github.com/osresearch/heads

[2] https://puri.sm/posts/pureboot-101-first-boot-first-update-a...


He's generally correct in that statement (I suspect for >99% of machines, probably another 9 by volume).


You are right; however, the existence of alternatives is extremely important and should always be mentioned.


If you're Lennart, the existence of alternatives is a nuisance, so no need to give them free publicity.


I wanted to avoid making this an "it's Lennart" point.

Even then, IME he's building software for 99% of users, which this covers. It can be quite annoying when he makes life hard for the remaining 1% (or a fraction thereof), but I'm not as antagonistic towards him as others are.

Also, he kinda mentions it in the "Anything Else?" section. Not the firmware that doesn't ship with the MS keys at all, but the ability to insert your own keys and distrust the MS ones.


Haha, I didn't realize the article was from our beloved friend.


Good point. He's trying to widen the extent of systemd (and Microsoft?) yet again.


Why is this downvoted?


The way to do this IMO is:

Have the bootloader boot automatically into an encrypted guest OS, and have it obtain the key transparently from the TPM. This way the hard drive cannot be read outside of the machine. The guest OS allows easy login, can be used to let people borrow your PC in a trusted way, and can also serve as plausible deniability when you're asked to log in in front of authorities or are otherwise intimidated or coerced.

Then configure the bootloader to boot an alternate OS or show a boot menu on a specific key combo, and require a passphrase to boot into the real, 'hidden' OS.
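
If the bootloader were GRUB, a sketch of the menu-hiding half could look like this (the TPM auto-unlock of the decoy entry is a separate setup):

  # /etc/default/grub -- boot the default (decoy) entry immediately;
  # holding Shift (BIOS) or Esc (EFI) during boot reveals the menu instead
  GRUB_DEFAULT=0
  GRUB_TIMEOUT=0
  GRUB_TIMEOUT_STYLE=hidden
  # then regenerate: update-grub, or grub-mkconfig -o /boot/grub/grub.cfg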


Lennart is technically doing good work. But while his tools are less complicated than the current, hilariously convoluted standard boot process, they are still too complicated to use well.

He also misses the point with the attack scenarios. If you LUKS-encrypt your data and choose a good passphrase, the brunt of the work against theft is already done. Protecting against bad passwords is futile in the long run. (Will elaborate if requested.) That someone images your drive for offline bruteforce or manipulates your boot binaries is rare. The true benefit of a signed boot chain is to make security patches work retroactively: "compromise recovery". Automated attacks and malware from the internet side are way more common.

Imagine one of your daemons is compromised. As long as it does not escalate privileges, it can only gain persistence via corruptible data files or config accessible to itself. Now a patch comes along that closes the hole through which the daemon gets reinfected. The malware will not start on daemon restart.

With signed booting you can bring that to the kernel and root.

Signed booting with rollback protection into a known good state. As long as the malware is not part of that system it won't run on launch.

But who signs my stuff, especially my own scripts and automation? Me of course, if I had good tooling.

If that became normal malware would just steal the key.

A TPM or other keybearer device lets you conditionally unlock a signing key.

So to sign, you can boot your system into a runlevel / target / ... that does not run auxiliary scripts from writable locations. If that state is measured by the TPM, you can sign.

With good enough tooling this is workable.
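
One existing building block for this (a sketch, not the author's proposed tooling) is systemd-creds, which can seal a secret to the TPM and bind it to measured PCR state:

  # Seal the signing key to this TPM, bound to the measured boot state
  $ systemd-creds encrypt --with-key=tpm2 --tpm2-pcrs=7 signing.key signing.key.cred
  # Decryption only succeeds on this machine while the PCRs still match
  $ systemd-creds decrypt signing.key.cred signing.key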

If implemented well, this even helps maintenance of the system.

In the current state of things, it's a horribly convoluted mess that doesn't give extra security, just 10 more points at which you can break your boot.

Plus, UEFI itself is again a complexity monster, full of holes on very many machines. The whole x86 preboot stack, AMD or Intel, is a horrible complexity monster.


> But who signs my stuff, especially my own scripts and automation? Me of course, if I had good tooling.

There's already a mechanism, provided for DKMS - you enrol a 'Machine Owner Key' which only root can access, and any time you update your kernel (requiring you to recompile a kernel module), it gets signed with the MOK. Which of course means any malware that gains root access can sign itself too.
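For reference, the enrolment and signing steps look roughly like this (key filenames are examples):

  # Queue the Machine Owner Key for enrolment; MokManager asks for
  # confirmation on the next reboot
  $ mokutil --import MOK.der
  # Sign a locally built module with the matching private key
  $ /usr/src/linux-headers-$(uname -r)/scripts/sign-file sha256 MOK.priv MOK.der my-module.ko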

An alternative is that any time you update your kernel and reboot, things like the nvidia drivers get disabled until you perform some special ceremony. Not that great for usability; we want people to install updates in a timely manner, after all, so we don't want to make it too inconvenient.

Another alternative is to only load code blessed by a Microsoft-approved Linux distro - the Ubuntu Core approach. But this requires abandoning the open source ethos.


Shoutout to Heads for even attempting to make this. But even that is far from complete. At least the UX is a little better than standard.


> the attacker takes the harddisk from your laptop while you aren't watching

> You'll never notice they did that.

Won't you be safe if you apply colorful nail polish to your laptop's screws and take a picture of the pattern? Then you regularly compare the actual pattern with your picture.


No. Anyone trying to tamper with a laptop that had that would do the same as you: take a photo, then restore the pattern.


It's practically impossible to repeat a pattern with tiny colorful particles.


I realize now you are talking about using the nail polish to detect whether a screw has been removed, as opposed to checking whether the screws had been taken out and put back in a different order.

In that case, I would say 1) nowadays, with high-res photos and various types of printers, I do think a pattern could be printed back onto a screw head, 2) there is no way you would be checking this every time the laptop was out of your sight, let alone reapplying the polish, and 3) there are numerous significantly simpler methods that achieve a better result.


> I do think a pattern could be printed back onto a screw head

I've never seen anything like that and don't believe it's practical. 3D-printed patterns will not look the same.

> there is no way you would be checking this every time

This entirely depends on your threat model and how much you suspect a tampering at specific conditions. In principle, you could even (automatically?) take a picture of all screws regularly and compare it with the original using some other, trusted device. In the worst case, you will find out about the tampering later, but it's a very different case than not knowing at all, forever.

> there are numerous significantly simpler methods that achieve a better result

What is simpler depends on the threat model and a person. But I don't disagree. For me, Secureboot is not a better method anyway.


> I've never seen anything like that and don't believe it's practical. 3D-printed patterns will not look the same.

I'm not talking about 3d printers specifically, just high precision printers. It's absolutely practical.

> This entirely depends on your threat model and how much you suspect a tampering at specific conditions.

I was talking about you personally, who I assume is a pretty average developer that doesn't have state actors after them.

> What is simpler depends on the threat model and a person.

No, it doesn't. This screws method you describe is inferior for all threat models and persons. It's basically security theater.

> For me, Secureboot is not a better method anyway.

You might not prefer it, but it is objectively a superior method.


> just high precision printers

2D printers just won't cut it: https://i.pinimg.com/originals/90/7a/2e/907a2ece23d412d28b66...

https://3.bp.blogspot.com/_BkvigWu1n1A/S8YrMyV_kTI/AAAAAAAAB...

(and so on)

> I was talking about you personally, who I assume is a pretty average developer that doesn't have state actors after them.

This is why I wrote below about eventual discovery of a possible tampering and low priority of checking it in principle.

> This screws method you describe is inferior for all threat models and persons. It's basically security theater.

This is a strong claim without any evidence. You didn't show how to overcome it.

> You might not prefer it, but it is objectively a superior method.

It isn't: https://forum.qubes-os.org/t/discussion-on-purism/2627/187, and https://forum.qubes-os.org/t/discussion-on-purism/2627/158, and https://news.ycombinator.com/item?id=41072929, and https://news.ycombinator.com/item?id=41071708, and https://news.ycombinator.com/item?id=35843566


> 2D printers just won't cut it:

Not your off the shelf consumer stuff, no, but there are printers that could do it, for sure.

What's more, I really have no idea what point you are trying to make by linking those images. Printing designs on nails doesn't require the level of resolution your screws idea would, so it isn't really relevant.

A quick search shows an especially high-resolution 3D printer released in May of last year that can print at a 20-nanometer resolution, the D4200S[1]. That's basically cutting edge, and way, way more precision than required to fool you after tampering with your device.

> This is why I wrote below about eventual discovery of a possible tampering and low priority of checking it in principle.

It's a given that how often someone would check something like that (not that it would be used in practice) depends on their threat model, but you used yourself as the example originally. The point was that you wouldn't be doing this, and in the context of the original comments and conversation it didn't make sense as a suggestion.

> This is a strong claim without any evidence. You didn't show how to overcome it.

The problem here is your assumption that the screw patterns are not easy to reproduce, when in fact they are. It's a false assumption. I showed that capable printers exist; in addition, there's the level of precision the world's best counterfeiters are capable of, and yes, state actors have access to such people.

> It isn't:

It absolutely is.

All your arguments, or the links you gave that imply the arguments you didn't make, are limited to using preexisting keys, which is not a requirement, or existing flawed implementations, which are not a requirement either. Secure Boot is a standard, and you are free to use your own keys and your own implementation. If you can't write or manufacture your own, there are still open solutions you can trust, like those from Purism, and software like coreboot.

It would really be better if you made an actual argument and referenced URLs, rather than just spamming a bunch of links, FYI. I shouldn't have to open 10 tabs to understand your reasoning.

[1] https://3dprinting.com/news/nano3dprint-launches-highest-res...


> 3D printer released in May of last year that can print at a 20-nanometer resolution, the D4200S[1]

This is impressive indeed. I agree that if you expect your adversary to spend this many resources on you, nail polish won't be sufficient.

> Secureboot is a standard, and you are free to use your own keys, and own implementation

Show me a FLOSS implementation of this standard and you will have a point. At the moment, I would have to trust a megacorporation obeying NSA, so I don't see it as a good defense against real adversaries. Your threat model may vary.


> Show me a FLOSS implementation of this standard and you will have a point

I've had a point from my first comment and it hasn't changed in validity. It's just taking time to convince you, but I think I'm making progress :)

I referenced several open implementations in my last reply, and a cursory search reveals more [1] [2]. Besides, this still doesn't help you trust the hardware, even if that hardware is entirely open like some sort of RISC chip. Can you verify every step in the supply chain? At every stage of assembly? No? Or, assuming a trusted device, can you be 100% confident something wasn't added, a simple keylogger? Most keyboards can be removed from laptops without leaving a trace; so can screen casings, speakers, batteries, etc. Plenty of places to hide something tiny.

> At the moment, I would have to trust a megacorporation obeying NSA,

That's less likely than the software you use having been compromised, for example by an obfuscated bug being introduced, or by someone MitMing you as you perform a software update (many software update mechanisms have notoriously weak security; search some DEF CON talks on the subject).

> Your threat model may vary.

No, what I'm saying applies to all threat models, and I challenge you to name one to disprove that.

Secure boot is an open standard and can be implemented in a trustworthy and secure way; you just need to put in the work. It's entirely possible to do so.

Of course if you are putting in all that work, if you are that at risk, you would need to switch your software stack entirely as well and use something like seL4 as a starting point.

[1] https://github.com/prplfoundation/prpl-secure-boot

[2] https://www.coreboot.org/


I have a laptop with a soldered-in disk! Checkmate, mofo! All for our safety, of course!


Hello, this is a security auditor. Could you please confirm items on the following checklist?

1. A BIOS password exists, with at least 8 characters, not based on a single dictionary word or keyboard-run sequence, and not easily guessable in other ways.

2. Booting an OS from any non-default device requires entering the BIOS password.

3. GRUB entry for bringing up the firmware configuration does not bypass the password.

4. GRUB itself has a password defined, with similar password strength requirements.

5. Editing the kernel command line or accessing the raw GRUB command prompt requires the password and, likewise, cannot be used to boot kernel/initrd pairs from external media.
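
For items 4 and 5, a minimal sketch of what a GRUB superuser setup looks like (the username and snippet placement are illustrative):

  # Generate a PBKDF2 hash of the password
  $ grub-mkpasswd-pbkdf2
  # /etc/grub.d/40_custom -- require the password for menu editing
  # and the GRUB console (entries without --unrestricted need it to boot, too)
  set superusers="admin"
  password_pbkdf2 admin grub.pbkdf2.sha512.10000.<hash>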


I don't trust bios passwords. There's probably some jumper on the board that bypasses them or a previously planted hardware keylogger. I always boot off separately stored immutable rescue media whenever the machine has been out of my custody and checksum the whole boot device. Your move.


Sorry, this is not a valid answer: you can checksum the boot device as much as you like, but I have not seen any procedure that ensures you know the correct checksum. What could have helped you is not just a checksum, but a signature, as used with dm-verity.

What I do on my laptop is:

* BIOS password, of course. Note that this also prevents the attacker from resetting or turning off Secure Boot.

* Secure Boot enabled, with my keys only (no Microsoft keys).

* No GRUB; I use systemd-boot (part of systemd) and UKIs signed with my own key. Although, as I don't dual-boot, this might be an unneeded step. In any case, with Secure Boot enabled, systemd-boot does not allow editing kernel command line arguments at all, so my UKI cannot be tricked into doing anything other than what it is supposed to do.

* The main SSD partition is encrypted (with the passphrase that I have to type).

* The USB storage driver is not in the initramfs, so the storage device found by the initramfs is guaranteed to be my internal SSD.

* The Secure Boot keys are stored inside that encrypted partition and are used to dynamically sign new sd-boot releases and UKIs. I guarantee that I don't sign anything except UKIs that ask for my SSD password, and any shell-out possibility is treated as a bug.

* There is a separate set of keys for signing the custom rescue media, which is also a big UKI.

This way, the attacker cannot boot anything other than my UKI (first, because of the BIOS password, and second, because Secure Boot won't allow anything else), cannot obtain a shell by running something before I enter the password, and, therefore, cannot clone or overwrite my disk.
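
The signing step in such a scheme can be as small as this (sbsign from sbsigntools; key/cert names and paths are examples):

  # Sign a freshly built UKI with the owner's Secure Boot db key
  $ sbsign --key db.key --cert db.crt --output /efi/EFI/Linux/uki-6.9.efi uki-6.9.efi
  # New sd-boot releases get the same treatment
  $ sbsign --key db.key --cert db.crt --output systemd-bootx64.efi.signed systemd-bootx64.efi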

Note that this setup has also been criticized as insecure ("you don't use TPM, so you can't be secure, it's a theorem"), but I don't understand this argument.

As for the hardware keyloggers, you are, of course, right.


> * The USB storage driver is not in the initramfs, so the storage device found by the initramfs is guaranteed to be my internal SSD.

Do note, if you are using systemd-gpt-auto-generator with the DPS[1], it only searches for a root partition on the drive containing the kernel that was booted[2] (and any other DPS partitions are searched for on the drive with the root partition, so if you somehow specify a different drive than the one with the boot loader, it will search on that different drive).

[1] https://uapi-group.org/specifications/specs/discoverable_par...

[2] https://www.freedesktop.org/software/systemd/man/latest/syst...


:DDD And you're gonna check it every time? Nah, humans are not wired for this.


Yes, if I expect such an expensive, targeted attack on me. This is a Snowden case, not something for normal folk.


And that is 1 user in a million. So we force people to do everyday things in a bulletproof vest because of that 1-in-a-million use case?


If you don't expect such an attack, then with your threat model you don't need to do that.


Is this for real? Is the initramfs not signed and authenticated? If that were the case, it would be a very serious and obvious flaw, I would have thought.


Since 2018, heads builds and ships with its own initramfs: https://heads.dyne.org/news/2018/03/release-04.html



