I am not a huge fan of relying on this. I can't count the number of times I've had to recover my files from a live USB or by plugging my drive into another PC, both of which get difficult when my true password sits in the TPM on the (potentially bricked) motherboard. But I guess that if it can be used in addition to a regular password (e.g. use the regular password when the TPM is missing, use the PIN when it works) then it's fine.
Also, correct me if I'm wrong, but TPM shifts the password prompt from the traditional LUKS prompt during boot to your XDM or whatever display manager you're using?
And I seem to recall a recent vulnerability where people were able to bypass the display manager login on some graphical desktop Linux distro.
I just don't feel comfortable shifting the threat to this part of the OS, because desktop Linux has not had the extensive testing required yet. It reminds me of when I was in school and you could bypass NT4 or Windows 98 logins by mashing the Enter key or doing weird user input combinations. Basically manually fuzzing.
If TPM only adds convenience then I'd prefer to continue entering my LUKS password, and my login password, separately.
You could bind your disk unlock to also include a PIN entered at boot-time (TPM+PIN).
This gives you the benefit of system integrity verification at time of boot, but also requiring your input to release the keys (and meaning you can't get to the system lock screen without the PIN).
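With systemd, for example, that binding might look roughly like this (a sketch only; run as root, the device path is an example, and the flags are those documented for systemd-cryptenroll):

```shell
# Sketch (run as root; /dev/nvme0n1p2 is an example LUKS partition):
# enroll the TPM with a PIN required at every boot, bound to PCR 7
# (Secure Boot state) so a tampered boot chain won't release the key.
systemd-cryptenroll --tpm2-device=auto \
                    --tpm2-pcrs=7 \
                    --tpm2-with-pin=yes \
                    /dev/nvme0n1p2
```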
Side note, does anybody know why TPM schemes use a numeric PIN for pre-boot unlock rather than a full alphanumeric password? For a given length (say, 8) a password would presumably be more secure than just a PIN. It’s not like this is a phone where you need users to be able to quickly enter the code on a limited input surface.
With systemd you can enroll any string you want as the "PIN" for TPM. There are no restrictions. It can be long, can be alphanumeric, can contain weird chars; up to you.
I don't think it is that much more secure, given that in this case "something you have" is bolted onto the machine you're attempting to protect and that a sufficiently long password provides enough space to make brute forcing impossible anyway.
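To put rough numbers on the PIN-vs-password question above, a quick back-of-the-envelope comparison (pure arithmetic, nothing TPM-specific; "62" assumes an [a-zA-Z0-9] alphabet):

```shell
# Back-of-the-envelope: search space of an 8-digit PIN vs an 8-character
# alphanumeric password (62 symbols), expressed in bits of entropy.
pin_bits=$(awk 'BEGIN { printf "%.1f", 8 * log(10) / log(2) }')
pw_bits=$(awk 'BEGIN { printf "%.1f", 8 * log(62) / log(2) }')
echo "8-digit PIN:      ${pin_bits} bits"   # ~26.6 bits
echo "8-char alphanum:  ${pw_bits} bits"    # ~47.6 bits
```

So length-for-length a password does win on raw entropy; the TPM argument is that its rate-limiting makes the smaller keyspace survivable anyway.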
In my opinion all the TPM achieves in this case is ensuring you lose your data if the machine dies (or if some OS update fucks up and doesn't properly ensure the TPM acknowledges the new version as valid).
That said it does help against the so called evil maid attacks, given that it would lock itself out if anyone modifies the OS, so if that's part of your threat model then it is useful, I guess.
Even with a TPM the disk is still fundamentally encrypted with a key that you can make a copy of and put in your drawer for recovery purposes. It just offers a way to do FDE with no or just a low-entropy passcode. This protects against most data loss incidents (laptop getting stolen) without producing massive overhead.
Because there are different kinds of keyboards out there, with different layouts, so if you switch keyboard you could find yourself unable to input the password. Imagine typing a "non-English" letter in the password and then switching to a US layout keyboard without that letter. Sure, a rare scenario, but with hundreds of millions of users you will hit it.
This is just the default configuration BitLocker used for a long time; there is no hardware constraint that causes this. However, PINs are easier to remember, usually guaranteed to be typeable at the pre-boot stage where the TPM needs unlocking, and since the TPM ships with anti-hammering abilities, even a low-entropy PIN offers sufficient protection against attacks.
There is no limit on the password. Using a 4-digit numeric PIN is just easier to remember, and because of the TPM's brute-force resistance it doesn't decrease security.
> Also, correct me if I'm wrong, but TPM shifts the password prompt from the traditional LUKS prompt during boot to your XDM or whatever display manager you're using?
It doesn't "shift" it, it's not the same password. It doesn't work the way it does on a mac, where your user password unlocks the drive so you don't have to enter it again to unlock your user session.
Assuming you don't have autologon turned on, without this scheme, you'd have to enter two passwords: LUKS first during boot, then your user password in the XDM. With this scheme, you skip the first password.
But yeah, if you don't enable any other protection, like TPM+PIN, your PC will boot on its own all the way to the login manager. So if you don't trust your login manager, that's an issue.
If you rely only on TPM for key storage, yes, the disk is unlocked automatically and any sufficiently broken userspace application you can get your hands on will let you access it. You can still combine TPM+passphrase/PIN though, at the cost of having to enter it at boot.
With properly functioning secure boot and no bugs in the entire software stack, it doesn't matter if the disk is decrypted automatically, since you can't access the system without OS-level authentication. If you tried to replace system files to let you get in anyway, the secure boot measurements would no longer match up and the decryption fails entirely.
Then again, an attacker can read the decryption key from RAM (freeze and remove the modules, then dump the memory on another system) and decrypt the disk offline.
So, data on a stolen laptop which has an unprotected TPM (no PIN to boot) can be considered compromised.
I use a very long and inconvenient password for LUKS, and a simpler one for login and root. My lock screen is more a convenience in a trusted environment and not security. The TPM only solution sounds like it would require my very long password every time I leave my desk to get coffee.
Relying on no bugs in the entire software stack makes the attack surface quite large.
If a laptop is stolen the thief can wait sufficiently long for some vulnerability to be discovered somewhere in the stack. With LUKS only the LUKS encryption has to be good and full disk encryption protects the data.
> You mention cost, but what is even the benefit of encryption that's unlocked by just booting?
Ideally, your login screen is secure and allows no bypasses into a shell or similar, so you cannot really access any files on the hard drive.
And if you modify some system files or boot another operating system to get around this, you are required to know the disk encryption password to get to them.
Ubuntu's implementation will also allow you to export a backup passphrase in case the TPM gets zapped (by a firmware update for example) after which you can re-enroll.
Annoyingly, this is a random 16-byte value that's translated into a numeric string that only Ubuntu's cryptsetup wrapper seems to use. You need to convert it into an actual key if you want to re-enroll your TPM or add a key file/passphrase to another slot.
If you don't back up the recovery key, your data is lost.
systemd has similar logic, i.e. a recovery key concept, but we made sure you can type it in wherever a LUKS password would work too, even on systems where systemd is not available but LUKS is. The recovery key is output in YubiKey's modhex alphabet, which means you can type it in on many keyboards even without setting a keymap first, and it will work. We also output it as a QR code, in case you want to scan it off. All in all it should be as robust as a recovery key can be.
Yes, a tpm2 enrollment takes up one slot, the recovery key another, a fido2 yet another, a pkcs11 key yet another and a password yet another in any combination/subset you like.
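For a sense of why the modhex alphabet helps: it only uses the characters cbdefghijklnrtuv, which sit in the same key positions on most keyboard layouts. A toy sketch of the idea (this mirrors the concept, not systemd's actual implementation):

```shell
# Toy sketch, NOT systemd's actual code: hex-encode 8 random bytes, then
# map the hex digits 0-f onto Yubico's modhex alphabet (cbdefghijklnrtuv),
# whose characters occupy the same positions on most keyboard layouts.
key_hex=$(head -c 8 /dev/urandom | od -An -tx1 | tr -d ' \n')
key_modhex=$(printf '%s' "$key_hex" | tr '0123456789abcdef' 'cbdefghijklnrtuv')
printf 'recovery key (modhex): %s\n' "$key_modhex"
```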
That's a highly unusual attitude for systemd. Most of the systemd architecture requires you to run systemd for everything if you use it at all. What changed?
With modern file systems like BTRFS you could take a(n incremental) snapshot and send that over the network using btrfs send.
Filesystems without snapshots can't be disk cloned that easily, but tools like Timeshift will try to replicate snapshot behaviour using hardlinks, so I assume backups of Timeshift snapshots should also suffice.
Timeshift can be configured in various ways, including making daily snapshots. I mostly use it for periodic snapshots and snapshots taken before the package manager runs (as a poor man's System Restore alternative).
I don't think Timeshift supports automatically sending snapshots just yet. For that, you'd need to set up a cronjob or systemd timer.
Last time I tried to replace my `rsync` backups, `btrfs send` had grave limitations - something like "you can't actually use the destination filesystem for anything other than the one computer's backups", I think?
btrfs receive has some limitations for sure. I don't think having the volume mounted is a problem, but during the transfer the snapshot is read/write. If you modify the volume, subsequent differential updates will fail. If you have some other process messing with the snapshot being received, you also risk messing up the send/receive. There are also limitations w.r.t. the way the volume is mounted; if you alter the default mount point or mounted a subvolume somewhere deep in the chain, receive will also fail.
If you have a BTRFS volume on a server that exists purely to receive backups and separate the subvolumes sufficiently (i.e. using specific subvolumes for the different backed up devices), I don't think send/receive should be a problem.
I wouldn't back up to the root partition of a running system, I think a dedicated data volume makes a lot more sense design-wise.
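As a concrete sketch of that setup (hostnames and paths are examples): take a read-only snapshot, then send only the delta against the previous snapshot into a dedicated subvolume on the backup box:

```shell
# Sketch (run as root; host/paths are examples). Snapshots must be
# read-only (-r) to be btrfs-sendable; -p transfers only the delta
# against a parent snapshot that already exists on both sides.
prev=2024-05-01                     # date of the last sent snapshot (example)
today=$(date +%F)
btrfs subvolume snapshot -r /home "/home/.snapshots/home-${today}"
btrfs send -p "/home/.snapshots/home-${prev}" \
           "/home/.snapshots/home-${today}" \
  | ssh backup-host btrfs receive /backups/laptop/
```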
TPM chips are not open source, so an end user cannot simply check them for backdoors. Manufacturers of these chips are under heavy scrutiny from intelligence agencies. Governments around the world try their best to prevent people from using strong cryptography (e.g. limiting effective key lengths in various ways).
Why should we trust such chips more than purely software based approach?
Using a TPM for remote attestation of device state is basically worthless in the generic PC UEFI world because there is pretty much unlimited surface area for holes to leverage: motherboards that falsely report Secure Boot state, firmware bypasses, bringing your own TPM, or just exploiting a vulnerable kernel driver that has a valid Windows code-signing signature.
In practice places that want "attestation-like" functionality, like Riot do for anti-cheat, just load mandatory kernel modules instead and require e.g. Type 1 hypervisor-based security in Windows to be enabled. Or they just obfuscate everything and run blobs. These can still be bypassed but it's still difficult and in contrast fully controlled by their software stack, which is more usable for them for more players. It's good enough, in other words.
To be fair, firmware-based TPMs which are part of the CPU/SoC should be invulnerable to that?
However, TPM-backed DRM on general-purpose OSes is impossible in the current world and is fear-mongering. The TPM can attest to a remote party that you've booted a certain OS, but this OS would need to maintain that chain of trust all the way for it to be effective. That's impossible due to the number of device drivers (which need kernel-level access by design) and various other privileged components a general-purpose OS requires (security software, etc), not to mention the attack surface of all that.
For a hypothetical TPM-backed DRM to work, you'd need Netflix to ship you an entire OS image that you can boot (which the TPM will prove to them that you've booted), and that OS image needs to magically have all the drivers for every potential PC their customers might have, and you need to convince people to reboot their machine every time they want to watch it.
This is all unnecessary considering the current status quo of DRM is good enough, Widevine does not require a TPM and relies entirely on security by obscurity and it's considered good enough by the industry.
> Widevine does not require a TPM and relies entirely on security by obscurity and it's considered good enough by the industry.
Widevine's highest security level, L1, does not rely on security by obscurity.
But it doesn't rely on a TPM either; rather, it relies on a trusted execution environment, like ARM TrustZone or Intel CSE[1], that is more highly privileged than the OS itself. Microsoft PlayReady is similar.
Apple FairPlay, in contrast, is worse: it's disabled if you turn off Secure Boot on Macs.
> However, TPM-backed DRM on general-purpose OSes is impossible in the current world and is fear-mongering. The TPM can attest to a remote party that you've booted a certain OS, but this OS would need to maintain that chain of trust all the way for it to be effective
Why don't you think that can work? Windows already limits drivers to ones signed by Microsoft - or else you have to be in "test mode" where - among other things - DRM-protected video playback is disabled!
And Windows already contains a "protected media path" component which protects encrypted DRMed video data.
Sure, there will probably be bugs in drivers sometimes, allowing for a signing bypass and then a DRM bypass if the DRM is done in software. Once they're discovered, those drivers will be blacklisted, and yes, that will mean you can't use that hardware. And the masses will blame the pirates.
Because these things are all much, much harder than just passing the decryption to the graphics card which does not permit a userland OS to see what's going on, avoiding the OS entirely. And that already happens. And has happened for years.
A TPM is useless for DRM on a general purpose computing platform because it is significantly inferior to existing widely deployed solutions.
Right now, 720p video is generally not protected because not everyone has a Hollywood-approved graphics card and monitor. If DRM becomes a software thing, they can protect all video.
> Windows already limits drivers to ones signed by Microsoft
I don't believe driver signature enforcement restricts what the driver can do, intentionally or as a result of a vulnerability. Maybe your driver intentionally allows user access to privileged kernel memory, or has a vulnerability that allows the same? Also, signing keys leak every so often.
fTPMs are invulnerable to interposing attacks, but if you add a second TPM and program it as Windows (e.g.) would and redirect all remote attestation requests to that, the remote site has no way to know that this has happened.
My understanding is that TPMs have vendor-embedded certificates/keys, so I guess eventually all discrete TPMs' keys will no longer be accepted by remotely-attestable services to protect against this attack - fTPM will be the only way.
This is all theory though - workable TPM-backed DRM would require so much vendor cooperation, secure programming and break legitimate use-cases that it won't happen any time soon on conventional hardware. It would be cheaper for the media industry to just sell streaming boxes or exclusively target more locked-down platforms (iOS, Android) than try to make it work on generic Windows PCs.
The TPM allows Hollywood to verify that you're running an Approved(TM) operating system. It's also very useful for WEI, which is web DRM - it allows Google to verify that you're running an Approved(TM) operating system and browser.
Linux will never be approved, unless it's heavily locked down, by the way.
NOTE: This proposal is no longer pursued.
Thank you for all the constructive feedback and engagement on the topic. An Android-specific API that does not target the open web is being considered here.
The Android-specific proposal only adds support for it to WebView, which developers could already do by combining the WebView and Play Integrity APIs, so, as much as I don't love it, it doesn't seem too terrible if it is just saving developers from writing some boilerplate code to connect the two. Here is the recent discussion about the WebView changes: https://news.ycombinator.com/item?id=38118627
TPMs have been widely deployed for around a decade. Widevine doesn't use them - it makes use of the protected video path hardware built into the GPU or motherboard chipset.
Windows 11 was the first release to enforce anything at the OS level, but Windows client sticker certification has required them since 8.1, released in 2013. As a result almost all Windows client systems in the wild have some form of TPM by default (in most cases running on the ME or the PSP)
Yeah, but that goes to show you how limited in know-how people complaining about TPM are, and they keep spreading FUD like this and others keep buying it.
And this is a supposedly tech focused platform where the tech literacy is at its highest. I wouldn't be surprised to read comments that TPM is used by Bill Gates to give you Covid.
My rule of thumb is that I don't listen to anyone who can't write exploits for modern software stacks (OS kernels, browsers.) And even then, you can still ignore 50% of those people too, as they are cranks. In contrast, most of the people here in every one of these threads couldn't overflow an integer if you let them write the for loop.
> I wouldn't be surprised to read comments that TPM is used by Bill Gates to give you Covid.
One time on here, someone linked an academic paper describing an HTTP PCIe accelerator. One of the (eight) authors was from Tsinghua, and so naturally someone in the comments started freaking out about how this was a plan by the Chinese Deep State to fund this for mass surveillance. When I asked him how this would work and what threat he expected, he described an elaborate Tom Clancy plotline where these PCIe cards will actually be snooping every key exchange, keeping them in memory, and they would be secretly equipped and manufactured with short wave radio devices that would allow Chinese agents to "exfiltrate private keys" (whatever that means) by posing as janitors and technicians in the datacenter and beaming those radio messages to them.
That was his threat model for "thing you literally plug into your fucking server and put on a shared memory bus."
Now, how this is expected to work when most HTTP accelerators don't do key exchange (and never ever see long term keys), or how these keys would benefit them when presumably many encrypted comms do not go through cables controlled and spliced by the nefarious, evil-loving CCP -- well, that's left as an exercise for you!
The TPM is a black box that has no business in hardware. It simply takes control away from you. Make excuses for 'only when watching a video' or 'only when you play a game', the overwhelming trend in technology is that any practical means to lock things down WILL be explored by capital, to maintain social dominance.
MS already has TPM and secure boot requirements. What's stopping them from colluding with the Copyright Cartel and embedding keys in the firmware that, if removed, disable functionality on the device or on property networks?
You are giving FAR too much trust to entities proven to have an interest in limiting computing freedoms.
> MS already has TPM and secure boot requirements. What's stopping them from colluding with the Copyright Cartel and embedding keys in the firmware that, if removed, disable functionality on the device or on property networks?
That Nvidia, Intel and AMD already put a better black box with a smaller attack surface in the GPU years ago for them to use.
Depending on what the definition of unencumbered is. Clicking on some link and getting your computer encrypted sounds like its own means of encumbrance.
Is it a free market if 99.9% of companies copy the single most profitable approach to stay afloat (see smartphones), or the entire market is dominated by content rights holders that would make you pay license fees to use the toilet if they could?
There is more DRM free video available in 2023 than literally any other point in human history.
That doesn't entitle you to a specific video unprotected. But the video market is pretty unquestionably not dominated by content right holders (if by that we mean major studios).
It's not the licensing, but rather that copyright is literally a government-enforced monopoly on intellectual property, that makes it not a free market.
Because the vast majority of content is owned by a tiny number of copyright owners (studios, IIRC, but it doesn't really matter who, just how many), who then impose artificial constraints. Netflix didn't decide to implement DRM on its own, it passed through requirements that would be imposed on anyone else too.
LUKS or eCryptfs work perfectly fine for the "stealing your laptop" use case, and are purely software implementations.
There's a poster below who provides a more reasonable use case for TPM: a headless server where asking for a password on boot is undesirable (e.g. after a power failure).
The only problem with software implementations is that they rely on the strength of the user-supplied passphrase, so there's an inverse correlation between user experience (ease of typing the passphrase) and security (easy passphrases are also easy to brute-force).
A TPM provides hardware-backed bruteforce protection which means even an easy passphrase can be made secure as the number of attempts is rate-limited by the TPM (to a level much slower than even the hardest hashes).
The advantage of a TPM embedded in the platform is that it is aware of the platform's state - the hashes of the firmware, option ROMs and the bootloader/UEFI binary that you're loading. Discrete TPMs are vulnerable to bus interception attacks but newer firmware TPMs are embedded in the CPU/SoC and are invulnerable to this so it's pretty secure.
A YubiKey/external TPM in comparison has no way to know whether it's being fed true PCR readings from the host or fakes from a malicious attacker, so at this point it will be no different (in the context of full-disk-encryption) from just having a dumb USB storage device with your LUKS keyfile on it.
Why is a Yubikey more trustworthy? You don't have the source code, you don't have the hardware design. In addition, a Yubikey is given no information about the system boot state, so is in no position to identify that the system has been tampered with.
Yubikey gets paid by people who want to do security things. Board manufacturers get paid, indirectly, by Microsoft allowing their boards to run Windows. The incentives are different.
It doesn't need to be the NSA; if your chip is vulnerable and exploits published online I don't think it's too far fetched that a resourceful thief could try to unlock your disk if he thinks it has valuable content (bank accounts, cryptocurrency, passwords, etc.)
Why take unnecessary risks when you can just use GRUB+LUKS and type the passphrase at boot?
GRUB+LUKS can also have exploits published online? If that happens, you upgrade to a newer version.
TPM is the same - any known vulnerabilities would generally get patched as part of firmware updates released by your vendor.
If the TPM is known vulnerable and a patch doesn't exist then fair enough you can stop using it depending on your threat model, but the same can be said for software implementations.
> when you can just use GRUB+LUKS and type the passphrase at boot?
This is a user experience drawback and opens you to other avenues of attack like passphrase bruteforce if you don't choose a strong (and inconvenient/slow to type) one. It's also impossible for servers or embedded devices which you want to be able to boot unattended.
It's not really the same: LUKS is an open standard and GRUB is free software that implements it. With a TPM you're likely to get a chip that is 100% a black box, with the exception of a few people who signed NDAs and researchers who have the knowledge and tools to poke into it.
> generally get patched as part of firmware updates released by your vendor
If it can even be fixed with a software update. Also, you're very lucky to get maybe a couple of years of firmware updates; then you're on your own.
> with the exception of a few people who signed NDAs and researchers who have knowledge and tools to poke into it.
This is good enough for the 90% of people who currently operate without full-disk-encryption at all. It would be a huge improvement.
Obviously, it's up to each individual user to evaluate their threat model and proceed accordingly. Nobody is forcing you to use a TPM, you can still use LUKS/etc and completely ignore the TPM.
That's not really true all of the time, some AMD fTPMs have physical vulnerabilities for example, that allow compromising the entire thing with some tools and a few hours of time. Software can't fix this since it's a flaw in the hardware itself, you'd need to buy new hardware to fix this.
No, the true question is why we should switch to TPMs, and what we are giving up if we do.
In my experience, TPMs are worse than worthless since they prevent rescuing hard drives from bricked systems. Non-TPM LUKS can be unlocked on any system.
You can enroll a backup key if your full-disk-encryption system supports it. Both LUKS and Bitlocker do, so the TPM key can be one of many (the second one being a backup key stored away securely).
It'll only prevent you from recovering a hard drive if you configured it that way - i.e. it's doing its job as designed.
How is the TPM protecting me if my laptop with the TPM chip is stolen?
I understand that I will be protected from removing the HDD/SSD and putting it into another machine to read the data, but does it really protect me from anything else?
Preventing your storage from being put in another device is what it's meant to be protecting you against. If it's still in the original device then an attacker is still blocked by your login password.
No ASIC is ever going to satisfy your desire to audit. Even if they publish the RTL, you can't verify that this is what they synthesized, that this is what the fab actually produced, etc. You have to trust hardware because it's your everything. Even with relatively obvious attacks like backdooring the bootrom (which can be done by modifying just the metal layers!), practically speaking no one is going to ever catch it because modern processes are simply too fine for all but the fabs themselves to physically inspect. This isn't even to mention more clever attacks which attack the doping of the silicon [1] for individual transistors (which cannot be generically detected post fabrication by any publicly known technique).
Embrace the suck :) Sure, you can't truly trust hardware, but there's nothing to be done about it anyhow and so you may as well reap the benefits of trusting it.
We need the ability to create computer hardware at home just like we can create software at home. Until we can make chips in our garages, we'll be at the mercy of giant corporations and the governments looking to subvert our security through them. Decentralization of production is key to our technological emancipation.
There is no layered security approach that can fully mitigate malicious hardware. Even if you have a perfect cryptosystem where everything is running fully homomorphic encryption so that plaintext literally doesn't exist, you're still susceptible at the boundaries and to metadata leakages.
On real computers you're also susceptible to the hardware simply not executing your code properly and a hundred other things.
Since you can't fully mitigate malicious hardware, you have to state a more limited threat model for there to be any useful discussion.
What security boundary can you hope to hold if the CPU is running malicious code straight out of reset (such as through a bootrom implant)? No amount of layers and defense in depth can paper over the fact that your hardware is physically compromised.
It's the whole trusting-trust thing. We don't really have a way to solve it, so it's best not to burn yourself out on it.
For some reason in these threads nobody seems to bring up or realize that modern AMD CPUs' fTPM is completely broken and not fixable via ucode updates. The AMD TPM exploit is exploitable by a "regular" person too; it doesn't have to be a state-level actor, just some techy person with a few tools and physical access to the system.
We can't audit these things at all and I don't think we should just trust that they are safe, and we know for a fact that many of them aren't safe, even against "regular" threats.
This is a massive overstatement of the situation. Sure, a TPM might not protect you against the NSA or the Chinese government stealing your computer. If configured correctly it probably will protect you against anything less than a major nation state stealing your computer.
TPM based encryption is hugely useful in embedded devices, where you really don’t want to have to despatch someone to type the passphrase in. By driving it via the TPM you’re protected against someone removing the storage and being able to read the decryption key off it, and you’re also protected against someone modifying the components required to get to a decrypted disk to inject code you weren’t expecting - if the kernel or bootloader is modified then the device simply won’t boot.
At least the embedded devices I work on don’t expose any (reasonable) method to persuade them to do anything other than their intended job, so sure, steal the device if you like. With enough effort you can probably turn it into a general purpose computer, but you’re not getting any of the secrets off it.
I sell a device. There are lots of them in hands. You still aren’t getting my device key off of it. Yes, you can use that device all you like, but you aren’t going to clone it and make more.
> DRM is safeguarding other people's keys on your device.
I think you misread the GP. Your definition of DRM is exactly what GP described as his use of the TPM.
>>I sell a device. There are lots of them in hands. You still aren’t getting my device key off of it. Yes, you can use that device all you like, but you aren’t going to clone it and make more.
What the GP describes is not safeguarding the keys of the devices' nominal owners. The GP is restricting the nominal owner of the device on behalf of himself as the manufacturer. This is leveraging the TPM for DRM.
Other possible less nefarious uses of the TPM are irrelevant to this point.
Oh, I indeed read that as "I [re]sell a device". In the case of them selling devices with embedded non-extractable keys, it could really be DRM.
Or something else! Payment cards work like that – they hold keys that nobody is supposed to be able to duplicate; certainly not third-parties, but also not the legitimate account-/cardholder.
There are plenty of uses for trusted computing other than DRM; they're just usually not as disruptive/frustrating to legitimate customers as DRM (and as a result are not as contentious), so they don't get as much attention.
What moral right do they have to put data on devices you own?
TPMs jeopardize the ownership of a device. Consumer devices are already sold where you can't (easily) inspect or change anything.
Even a user-controlled TPM can be used as storage for keys used to usurp control of the device. The existence of IME and PSP enables manufacturers to have more control over computing devices than the people who buy and operate them.
There is no moral defense for sabotaging ownership.
I genuinely wish things were this binary. I’m a hacker (using the original definition) at heart, I want to turn random devices I own into other things, which would be so much easier if it weren’t for encrypted storage and signed boot chains.
That’s all very well until you start talking about hardware that does things like providing video of the inside of someone’s home. I’m still behind being able to modify them and use them as you please, but if you do so it should be self-evident, because otherwise *someone else* can do so, and you won’t know until video appears on the internet of you walking back from the shower.
TPMs don’t necessarily mean that you can’t modify a device you’ve bought, it just means that things like the API key which connected it to the backend will be invalidated and you’ll have to set the device up some other way. That’s a safety mechanism, and even I want it to be present in hardware I buy.
Likewise I don’t want someone to be able to take a doorbell from the front of my house, extract my wifi credentials from it, and pivot from that to compromising the entire network.
I’m sure some people will say “well just don’t have those things”, and I’m inclined to agree, despite having worked on such devices I’m not sure they’re actually hugely useful to most people. People are buying them though, they’re in the wild, let’s make them as secure as possible.
Convenience, faster boot. Or if you have a headless server with disk encryption, but you want it to come back online without intervention after a reboot or power failure.
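For the headless case, a hedged sketch of how this is typically set up with systemd (the device path is a placeholder, and the exact PCR policy is a choice, not a given):

```shell
# Enroll the TPM as an additional LUKS key slot, sealed to PCR 7
# (the Secure Boot policy). /dev/nvme0n1p2 is a placeholder.
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2

# Then tell the initrd to try the TPM before prompting, via
# /etc/crypttab:
#   root  /dev/nvme0n1p2  -  tpm2-device=auto
#
# The original passphrase stays enrolled in its own key slot, so you
# can still recover from a live USB if the board (and its TPM) dies.
```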
In the headless scenario, unless I’m missing something, this only prevents the drives from walking off. If the whole machine walks off, aren’t you in just as much trouble as with no encryption at all, unless you really trust your OS permission system?
If you take some basic precautions - disable interrupting the boot process, serial console, etc. - then bypassing that requires significant effort. As an attacker you need to know the versions of the software running on the server, know some exploit, and then have the expertise to use it.
For example I know that the police in my country use off the shelf disk cloning devices and then some basic forensics software for analyzing the disk image. This can be done by an average computer technician, and such a TPM scheme would totally prevent them from extracting data. Of course for bigger cases they can invest some more effort, but they would have to be sure that there is some important data there to justify the cost.
When someone steals your device and tries to guess a user password, a TPM can just throw away its keys. (Or something similar.)
Try it on a secure boot and tpm2 enabled Bitlocker encrypted drive, the only way to get in is via the recovery key after a configurable number of attempts.
The whole machine walking off is more detectable though. You can use TPM as one factor (among many, such as the presence of the machine on the expected network and no unexpected downtimes) to obtain storage keys from a separate trusted server, using TPM remote attestation to assert the machine hasn't been tampered with in-place (by merely booting it off a compromised OS).
The separate authentication server can be configured to only hand out the storage encryption key on expected reboots, so if the machine unexpectedly walks off and then powers back on that server would refuse to hand out the key, thus the stolen machine is now useless.
Also, if it's able to phone home in this state, it can be managed remotely including remote erase (whether of the storage drive or the TPM keys) if needed.
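The "separate trusted server hands out the key" scheme described above can be approximated with clevis and a Tang server (network-bound disk encryption). A hedged sketch - the device path and URL are placeholders, and note Tang only proves network presence, not full TPM remote attestation:

```shell
# Bind an existing LUKS volume to a Tang server: the volume only
# auto-unlocks while the machine can reach that server, so a stolen
# machine booted elsewhere stalls at the passphrase prompt.
clevis luks bind -d /dev/sda2 tang '{"url": "http://tang.example.internal"}'

# Or combine pins with Shamir secret sharing so BOTH the expected
# network AND an untampered boot state (PCR 7 = Secure Boot policy)
# are required to unlock:
clevis luks bind -d /dev/sda2 sss \
  '{"t": 2, "pins": {"tang": {"url": "http://tang.example.internal"},
                     "tpm2": {"pcr_ids": "7"}}}'
```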
Eh no that's not true. If a device uses a discrete TPM and isn't using authenticated sessions it's trivial to sniff the bus and extract the encryption key. If it is then it's less trivial because you need to interpose the TPM but that's still in "Can solder SMT parts" territory, not major nation state.
You can only extract the encryption key if using the unencrypted APIs. You can establish and require an encrypted channel with the TPM if you know which TPM you are expecting.
The majority of people don't want to type a long passphrase every boot (or would pick a trivially-bruteforceable one) and FDE-backed TPM (even if vulnerable to state-sponsored attacks) is better than what it would otherwise be which is no FDE at all.
I dislike TPM personally (for a very different reason), but...
They reduce the attack surface. Not only in terms of the actual interface, but in terms of who has the ability to subvert the security. Using TPM may or may not be effective security against governments, manufacturers, and the like, but they are effective security against a whole host of other bad actors.
It could be argued that's still a worthwhile improvement. You can engage in other layers of security regardless of whether TPM is being used to help cover the areas TPM may not.
They're dangerous because the same effective security can be turned against the legitimate owner of the machine. The TPM effectively belongs to whoever configured it first.
TPM just stores crypto keys. It does so in a way that the user can't get them, which is my objection to the scheme (because it lets entities set up encrypted channels that I can't monitor).
However, that's the only way it can be turned against the user, and the other user security gains are real.
Slight segue, have you had any luck with distros besides PureOS? I want to make my own LFS for my Librem but I'm not sure what my custom system would need to support the EC firmware and pureboot, etc.
I'm using Qubes OS. AFAIK Pureboot should work fine with any distro, but I don't know the details. There are also EC firmware installation instructions for Qubes on their forums.
I've considered that one as well, especially once I upgrade to 64GB of RAM. Aside from hardware-accelerated needs like streaming, gaming, browsing, etc, I could do most of my software experimentation inside VMs and keep my system clean(ish).
I'm very happy with Qubes OS, having used it as a daily driver for many years. It does indeed help to organize your digital life and gives a great sense of security and control over your computer. I tried to list its advantages here: https://forum.qubes-os.org/t/how-to-pitch-qubes-os/4499/15
Because the alternative, not using SecureBoot, is strictly worse in all scenarios.
Just like HTTP vs HTTPS with LetsEncrypt. Even if an attacker can compromise a web-server rendering HTTPS protection useless, it's still better to always use HTTPS.
You're framing this as a choice between SecureBoot and InsecureBoot, but it's actually a choice between SecureBoot and DRM, or InsecureBoot and no DRM.
Besides, SecureBoot isn't strictly better in all scenarios. The most obvious one is the one where you forgot how to log in, and want to retrieve your data.
> The most obvious one is the one where you forgot how to log in
SecureBoot is separate and does not imply full disk encryption. If you choose to use full disk encryption, yes, you can be locked out, with SecureBoot or not.
> I've had SecureBoot enabled for 10 years and I didn't have more DRM because of this.
Maybe stop pretending to understand what the parent comment is talking about.
> SecureBoot is separate and does not imply full disk encryption. If you choose to use full disk encryption, yes, you can be locked out, SecureBoot or not.
There's no purpose to Secure Boot if I can just put your hard drive in a different computer without Secure Boot.
Some TPMs are FIPS-140 compliant and go through 3rd party audits and extra screening. If you purchase a device that is compliant you can then retrieve a certificate of compliance. Some TPMs can be made compliant with the appropriate firmware update.
If you don't trust FIPS then you could support open source TPMs and help audit them.
Other than that if you care enough about it and had the prerequisite skills and money, you could develop your own TPM (the interface specifications are online) and run your own trusted and audited Linux system.
Because it isn't a silver bullet but rather a countermeasure against rootkits, bootkits and certain physical tampering by regular and insider threat actors.
This feature isn't really for people like you and me. It's for big companies like Red Hat and Google to utilise on their servers. Google have their own TPMs they can trust. Red Hat probably have the paperwork to say their TPMs are safe.
But worrying about your TPM being backdoored is pointless anyway. If they can backdoor your TPM, they can backdoor anything.
There are hundreds of components in a computer and the TPM would be one of the hardest to backdoor. Bear in mind where this stuff is manufactured. The manufacturer would want nothing to do with funny business like this because it could harm their sales, so it would have to be done under their nose, or they would have to be compelled by the government.
US intelligence agencies don't generally go around mass backdooring things. It's too risky. They carefully target their attacks.
It's... extremely easy for the government to compel mass backdoors in TPMs. Don't you remember Clipper? And the backdoor would be completely invisible until they used it, since TPMs aren't auditable.
But it’s easy for governments to compel backdoors into any component. Even in open source, for any non-trivial code it’s easy to introduce obscure bugs that are exploitable.
How would an end user simply check them for backdoors if they were open source? Security auditing is a skilled job, and the people best capable of identifying security flaws or backdoors are generally able to do that given a binary blob as well.
I'm curious if open source TPM would mean easier side-channel attacks. What's the reason everyone is super hush-hush around how their Hardware Security works?
I am not a big fan of systemd, but I am impressed by how they put this security feature all across the boot process. I also used some of the tools shown in the presentation and they are pretty well made, kudos to the devs. (Also, most of them work fine without systemd running, they just happen to live in the systemd monorepo???)
The problem with systemd isn't that it's not packed with impressive things, it's how inextricable the things are from each other. systemd distros are practically whole new distros on the inside, designed by committee. Every part of systemd really wants to integrate with every other part, so while you can technically compile and run just part of it, it really wants you to use the whole thing and throw away all the things those parts replace.
In other words the complaints are political, not technical.
I agree that there are some very cool components of systemd (or rather, very cool programs living in systemd repo) - networkd, resolved, repart, boot and stub come to my mind first, they have no alternatives that come even close to them.
Also, systemd made writing service definitions easy, hats off to them. With cgroups, namespaces and capabilities support out of the box - great.
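The easy-service-definition point can be shown with a minimal hardened unit; the binary path and names here are illustrative, not from the talk:

```ini
[Unit]
Description=Example sandboxed service (illustrative)

[Service]
ExecStart=/usr/local/bin/example-daemon
# Transient, unprivileged user allocated at start; no account to manage.
DynamicUser=yes
# Private /tmp and a read-only view of the rest of the filesystem.
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=yes
# No way to gain privileges, and an empty capability bounding set.
NoNewPrivileges=yes
CapabilityBoundingSet=

[Install]
WantedBy=multi-user.target
```

Getting the same cgroup, namespace, and capability isolation from a shell init script would take dozens of fiddly lines.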
However, every time I have to debug why some target wasn't reached (because some dependency of dependency failed), or why systemd mounted something wrongly, or just want to know how often and why some service restarted, I feel like the core part of systemd (which is, the systemd init and service runner) is working against me.
And let's not get started on journald, I declare it my enemy.
I've kept my eye on the project from afar. What's impressive about systemd aside from its ability to manifest conformity and avoid social accountability for how it gained influence?
Boot charts and NIH syndrome for system level stuff ain't it for me. Binary journals ain't it, either. Services suck, you're never sure the difference between units, services, sockets, and other similar INI style files. Extra BS like $HOME management? Not necessary.
In over ten years, I've not found a single good reason to lean into systemd or like it. It runs my current computer, but not by choice and I will remedy that when I build my custom distro.
This thing isn't really complicated, because all the heavy lifting is done by the design of TPM, not even the TPM chips. Measurements and integrity checks are proven to be mundane tasks, and, for security, that's a good thing. Even without systemd, one can learn and integrate TPM in one or two months. But the systemd team has been working on this for something like 2 years now.
I'm quite curious what they've been doing. Security audits, maybe? Preparations for other integrations? I'm not criticizing here - I've been away from that scene for a year, and I wonder if there have been some new interesting developments or ideas that I'm not aware of.
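The "measurements" being called mundane above boil down to PCR extends, which are just hash chaining. A toy sketch (hashing hex strings directly for simplicity; a real TPM hashes the raw bytes and has fixed PCR banks):

```shell
# Simulate a TPM PCR "extend": new_pcr = SHA256(old_pcr || measurement).
# A PCR starts at zero and can only ever be extended, never set, so the
# final value depends on every measured component AND their order.
extend() {
  # $1 = current PCR value (hex), $2 = measurement digest (hex)
  printf '%s%s' "$1" "$2" | sha256sum | cut -d' ' -f1
}

zero=$(printf '0%.0s' $(seq 1 64))   # 64 hex zeros = a reset PCR
m_boot=$(printf 'bootloader' | sha256sum | cut -d' ' -f1)
m_kern=$(printf 'kernel'     | sha256sum | cut -d' ' -f1)

a=$(extend "$(extend "$zero" "$m_boot")" "$m_kern")
b=$(extend "$(extend "$zero" "$m_kern")" "$m_boot")

echo "boot-then-kernel: $a"
echo "kernel-then-boot: $b"
# The two values differ: change or reorder anything in the boot chain
# and keys sealed to the expected PCR value will not unseal.
```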
Just a heads up for anyone using an AMD CPU's fTPM: it is completely broken and cannot be relied on even for the "someone stealing your laptop" scenario.
A few hours' time and a few tools can fully compromise the TPM's secrets. This can be performed by anyone who has a little experience tinkering with electronics; someone who can use a soldering iron, for example, can almost certainly pull this off.
From my understanding all of the "Zen" CPUs are affected, except perhaps the very most recent models.
I think the topics of Lennart's talks are usually very interesting, but his delivery assumes the audience knows much more about the subject than I do, at least. I have an easier time following most other talks I watch.
The videos are mostly a condensed form of the blog posts he's been writing on these topics (filesystem layout / Boot Loader Specification / Secure Boot / Measured Boot / Unified Kernel Images) for a couple of years now, so you can read those if you prefer.
Do you have an example for stuff that the audience is expected to know? Or the other talks that do a better job? What kind of background do you have? Assuming an average full stack dev, of course he addresses a different audience, since this is a talk addressed to Linux OS developers at a conference with the description:
"All Systems Go! is a conference focused on foundational user-space Linux technologies. Its goal is to provide a gathering place for both contributors and users of projects that make up the foundation of modern Linux systems."
So, summing up: maybe you are not part of the target audience?
I think this one is especially bad, because TPM is a piece of hardware and is for security. The combination of the two alienates a lot of software engineers, like >90%. I've never met anyone in real life who already knew how to use TPM, though many do know what it is and what it's for.
So, it's really a bad (or s**ty) topic to talk about in front of an unsuspecting audience, but certainly there are people who would appreciate the work, especially in embedded/robotics fields (which have been rising slowly for a long time).
I am not a huge fan of relying on this. I cannot recall the number of times I had to recover my files from a live USB or by plugging my drive into another PC - both of which will get difficult to do when my true password sits in the TPM on the (potentially bricked) motherboard. But I guess that if it can be used in addition to a regular password (e.g. use the regular password when the TPM is missing, use the PIN if it works) then it's fine.