Intel x86 Root of Trust: Loss of Trust (ptsecurity.com)
486 points by bcantrill on March 5, 2020 | 232 comments



The "trust" here is a complete misnomer. "Trusted computing" should be called "traitorous computing", where your computer has a module which might be controlled by (fundamentally antagonistic) remote third-parties. _They_ can trust your system to _betray_ you in their favor.

Traitorous computing should not exist and a pox be upon the heads of everyone who let such modules make it into our computers.


Who is "you" in this scenario?

I want to be able to secure my computer (an ATM, say) against people with physical access to it. A root of trust (that the original purchaser of the device controls) allows for that.

Or, to be slightly more dark, I, as an enterprise IT administrator, don't want the employees fucking around with the hardware I deploy, even when they have all day to poke and prod around. I'm the root user of those workstations, not them. I need to be able to enforce enterprise security policies on them, and I can't do that if they can "jailbreak" the company's computers. (They want to run arbitrary code for personal reasons? They can do it on their own arbitrary personal devices, then, for which I have conveniently provided them a partitioned-VLAN guest network to join.)


> I want to be able to secure my computer (an ATM, say) against people with physical access to it. A root of trust (that the original purchaser of the device controls) allows for that.

Unless you're running something like a Raptor Talos II, you don't really have the root of trust. You have a branch off of the manufacturer's root.

That may be better for some enterprises, but in this modern age is that really enough? Consider how the PLA was involved in the hacking of Equifax.

Until you can review the code yourself and verify the binaries, you don't really have the root of trust. Someone else does. (I'm barring other types of shenanigans here, but it's the next logical step.)


I mean, for the kind of highly-trusted "ruggedized" scenario represented by an ATM, one would hopefully get their hardware from a manufacturer that exists under a political regime they have no enmity with, or are perhaps even allegiant to. (That's half the reason many US government officials and contractors used Blackberries: the US government could—given the political realities of the time they live in—trust a device whose chips were verifiably made in Canada.)

For the workstation scenario, though, you don't really care who has the "ultimate" root, just so long as you can get whoever that is to help you to stop a particular class of attacker (e.g. your own employees, contractors, and any "visitors" in the building) from getting root. It's fine if the PLA has root on the boxes, because the boxes aren't actually storing trade secrets or anything; the point of having pseudo-root on the boxes is, in fact, to enforce a security regime that ensures your employees don't store any trade secrets on the boxes!

See also: being an "organization owner" in an enterprise SaaS service. Sure, I can't stop Google from snooping my GSuite data—but I'm also paying them to host that data for me, and e.g. selling it would be a violation of the contract. Even though they can, in theory, do it, they're economically incentivized against doing so (and doubly so, because if they did it once and got found out, they'd never make any GSuite money again.)

> Unless you're running something like a Raptor Talos II, you don't really have the root of trust.

Mind you, there are "multiply-descendant root-of-trust" setups that are quite common these days. In modern Apple devices, you've got an Intel processor doing most stuff, but then the Apple-controlled T2-chip domain doing encryption stuff, with its own boot chain completely isolated from the Intel one.


You are still just a trust leaf, not the root. The root is a ROM you cannot read or change on the Intel side, so there is no trust control there (only delegated trust, which for some people is no trust at all). With no ability to verify it, an exploit like this is not something you can detect, and as such it breaks the trust chain.


If you're talking about SGX the root is really the Intel-run remote attestation servers, no?


Yes and no. The SGX flow models a dual-root scheme in which two PKI roots must be trusted for attestation to succeed, and at the same time SGX is only as safe as the microcode, which can be overridden at this point. SGX attestations carrying both the CPU signature and the Intel signature can still be faked when you can compromise the CPU and SGX yet still run the signing routines.

Edit: I wonder if there is any feasible way one could do this without trusting the CPU at all, but I suppose that completely defeats the point.


Hmm. I didn't realize the key, or at least the signing function, was in the microcode. So if the microcode can be changed by an attacker, they can have it sign <fake code signature> but go ahead and run <evil code> instead?


Yes, and that was supposed to be protected by the CSME and ACM, but since those can be circumvented, so can microcode updates.

Normally a microcode update would be verified as well, but since that has been hacked in the past and CSME verification can now be bypassed, it is probably only a matter of time.

It used to be the case that an internal ROM with a unique one-time-programmable entry that can only be read internally was safe enough, but with decapping, glitching, and broken PKI chains, that assumption is getting weaker and weaker.


Except, these companies are made up of a huge range of people. So, you’re not just trusting Intel but Mike, Mary, Bill, Bob, ... etc who don’t really care about you the way these companies in theory should.


With the way things are set up currently, Intel is your root user, not you.


And in a monarchy, businesses exist at the behest of a writ from the monarch saying they can. That doesn't mean that the business doesn't "own" stuff. It just means that the king can capriciously revoke their asset-ownership, in much the same way that a tornado can capriciously revoke their building. It's a rare natural disaster that you insure against.


I'm confused, are you claiming it's OK that my x86 Intel platform (and whatever data passes through it) exists in a "monarchy" of sorts (along with other phone-home drones), and that I should find it acceptable?

("I" being a business, or individual, any proxy for society at large)

The fact that a tiny few human beings held the power of a "tornado" over others' lives ended fairly abruptly in some circumstances, with apparently good enough reason that it stayed that way.

Note: you're referring to absolutism, which is one mode of monarchy (also found in totalitarianism and dictatorship). By contrast, most monarchies still 'alive' today operate in a largely symbolic mode under their country's constitution.


I mean, monarchy wasn't really a key element of what I was saying. You don't have "root" on your own body in a rule-of-law democracy, either. You can't just decide to not go to jail, if a court says you must.

This is kind of a recapitulation of the argument that forked Ethereum into Ethereum Classic:

• There was a system, partially founded on a guiding principle of its participants having final say in what happens in their in-system interactions, through contracts they enter into the system. Those contracts were each supposed to "have root" over all state-changes made to their own private storage.

• Something went wrong with a contract, in a way that corrupted its private storage and made things worse for pretty much everybody, since it was a very popular contract. The maintainers of the system decided to violate the guiding principle in the name of making things better for everybody, by just reaching in and overriding the rules of the popular contract, so that it would retroactively have done the "right" thing, and not gotten corrupted.

• Some people thought that there shouldn't be any entity (consortium or otherwise) with power to override the rules of their contracts, even if those changes are "to the good", so they left and started their own alternative system, mostly the same other than the guarantee that they'd never violate the guiding principle.

• The market decided that the alternative system has about 1/10th the economic value of the original system. Most developer effort, userbase, etc. sided with the original system, and with the concept of there being a political entity with the power to overrule individual contracts. The contract creators themselves seem to want the "safety net" implied by this entity having the power to overrule them.

Interesting, no?


> You don't have "root" on your own body in a rule-of-law democracy.

Maybe if you live in that Face/Off prison where everyone wears metal boots on a magnetic grid. But in the real world, the rule of law is quite fragile precisely because everyone has "root" over their own body. The obvious benefit is the public's ability to revolt against a corrupt regime.

Regardless, it's a fatuous metaphor because nobody is advocating for mandatory metal boots nor the little Harkonnen heart plug thingies from Dune. Some people are, however, quite vehemently demanding that all software/hardware be designed by law to give root to a trusted third party. That's fatuous and dangerous and therefore requires different metaphors.


> Some people are, however, quite vehemently demanding that all software/hardware be designed by law to give root to a trusted third party.

My point with my analogy was that, in the real world, hardware/software already do give root "by law" to the state. Not explicitly, but rather implicitly, in the fact that the government can tell you what to do, and so, if they want to control your computer, they can make you do it. (What else do you call a National Security Letter?) That's kind of what "by law" means—the law doesn't need to say that the law supersedes X; the law can supersede X in any specific case, any time it likes. If a court rules you need to open up your computer, there doesn't need to be a law making that request legal. A court said it; it's legal by definition.

And even then, a government only needs (the public conception of the machinery of) the law on their side if they aren't all that motivated to deal with you. If they really care—think "motivated enough to go through the trouble of putting someone into a witness-protection program, but... the opposite of that"—then there are plenty of private places they can disappear you to, and paint over your life with fabrications of criminal behavior they caught you in, including several months of evidence from a parallel running manufactured narrative of your life.

If you think there's a difference between dying by having your heart unplugged, and dying by having a SWAT team show up at your door with munitions you can never hope to have parity with, I don't know what to tell you. Either way, you're going to end up doing what they say, or dying trying to get away. (And if they really really care, you won't even be able to do that. You'll just wake up in a straitjacket in a padded room with all your teeth pulled out, so that you can't even bite your tongue.)


You definitely make a very compelling and interesting argument. The ultimate market value is certainly a function of many things, notably "first-mover" / OG value (chief among them Bitcoin versus all its subsequent forks, e.g. BCH), sentiment (polarizing people, places, etc.), and so on.

I would tell you that this kind of choice [being root in your 'own' system, having a 'root mechanism' in a collective network: pick either, both, or none] fundamentally comes down to ethical values, and to one's personal hierarchy of them. The case of cryptocurrencies, if we ignore the speculative aspect, is interesting insofar as it demonstrates quite fundamentally opposed political views — on one side a certain idea of "benevolent interventionism" (however dictatorial or democratic its legitimacy, the very existence of the mechanism), versus a certain ideal of "the hand of God" (cf. Adam Smith's invisible hand), a 100% non-distorted entity (from there to Conway's Game of Life is, I reckon, but a matter of vocabulary: it's a fascinating object for nerds indeed).

Again, this is if we ignore the speculative aspect — which distorts everything, and makes all of the above possible on paper but so far not at all observed in real life.

On topic, I think there's a certain tension forking towards the "free" domain — think Linux, open source, and free software, but actuated: now it's RISC-V (a big mover in the embedded sector, GPUs...), OpenPOWER; think also of companies like Tesla or Amazon making their own 'forks' of everything in-house, not because of NIH (they are too efficient to fall prey to such emotional traps) but because they do squeeze out some degree of efficiency (like Google with their TPUs and what-have-you). Think of how nearly all programming languages of the last 20 years have gone open source and mostly collegial in governance, and most major projects by way of consequence; etc.

This [x86 RoT] is just one in a list of reasons that justify this much deeper trend, as I reckon; and there is indeed a certain idealism of "the hand of God" as in being root on your own system (God = me = root). I very much subscribe to this ideology myself, if only for the practical reason that a backdoor is a backdoor is a backdoor.


I really like that analogy, but I do have to point out that many countries have come to realize a monarchy robs people of their freedom, just as the untrustable trust module does.


> I want to be able to secure my computer (an ATM, say) against people with physical access to it. A root of trust (that the original purchaser of the device controls) allows for that.

Why not generate a unique key for each device on first boot? Isn't it a fact that the original purchaser does not have control over the trusted computing platform? "Trusted" as in the OEMs trust that they can control your computer.


Better not use a computer with Thunderbolt, FireWire or PCIe ports then. This kind of protection is mostly smoke and mirrors.


There's nothing wrong with trusted computing as a technology. The problem is users don't own the keys, fundamentally antagonistic companies and government agencies do. It'd be great if we could install our keys in the system and use the technology to make sure only software we have signed ever executes.

The problem with trusted computing is in how Apple, for example, uses it to prevent users from running software they don't like on their own phones. The problem is the fact that Apple is in control instead of the user, not the technology it uses to maintain the control.


This is precisely the security model of Chromebook, and it's pretty great.


Trusted computing enables you to prove to a remote party what your machine is executing. This would be useful to cloud providers so they could prove to their users that their servers are only running their users' code without snooping on it. People would no longer have to choose only from well-known cloud providers to find a trusted host. You could imagine a marketplace where anyone could sell the compute power of their home computers (undercutting cloud providers' prices to make up for their lesser network connectivity) and use remote attestation to prove that they're not spying on or modifying their customers' compute workloads. The people selling their compute power like this can use sandboxing to protect their own system from the customers' compute workloads, and use trusted computing / remote attestation to protect the customers' compute workloads from their own system. I think it's extremely good for users when a technology removes the need for trust in big brands and allows anyone to compete.


Part of full ownership of a computer is reserving the right to "lie" to any code running on it about its execution environment. This could be for examining malware, testing, reverse-engineering, or anything else.

There are still plenty of people unconcerned with the economics of multi-tenant cloud hosting, DRM, or prevention of cheating in videogames.


There's benefits to a system where you have full control and visibility into everything that executes, and there's benefits to systems where that isn't true. I hope 100% open CPUs exist and that there continues to be a strong software ecosystem that works with open CPUs. But it's a fact of the matter that CPUs with stuff to support DRM exist, and we might as well see if we can get some strong pro-user / pro-privacy functionality out of it.


To me that's twisted logic.

"I don't want to have to trust my cloud provider."

"Ok, we'll absolutely pinky-swear by this API you can access that our machines are running a trusted setup."

"Ok! I'll just trust the API you provide."

Even if the API is an x86 instruction, even if you do timing checks and side-channel checks in your code, you're still just in an arms race with your cloud provider while they hold all the power.


I’m not a cloud provider. Nobody has a right to know what’s happening on my box. If I want to mislead the servers I interact with, that is a right I have.

As a BE developer, I can tell you FE should always be untrusted. Always. Anyone who tells you otherwise is insane.

Yes, Netflix wants to live in a fantasy land where they control the display of content. Tough. At the end of the day, I can record my iPad and put it on YouTube.


The cloud provider would be able to provide a message signed by Intel that says "this computer is running only code with hash XYZ at the top privilege level, and the code with that hash has produced public key ABC". The cloud provider would publish the code with hash XYZ, which would be open source and publicly studied to verify the claims that it runs the users' compute workloads securely without allowing other processes to spy on them. The code would create a keypair upon startup, keeping the private key private even from the cloud server operators, and therefore you can know that if you encrypt a message to the public key in the signed message, then only the server's code running with hash XYZ can read the message. You could encrypt your compute workloads (or messages to your own running workload) to that public key and know that the cloud server operators can't read the message.

This does require trusting the CPU manufacturer (Intel) though. It's possible that Intel could collude with a server operator to create fake remote attestation messages, and trick you into encrypting your compute workload to them instead of the securely-sandboxed code of hash XYZ. That's not any worse than the current setup where your submitted compute workloads can't at all be encrypted in a way that the cloud operators can't read. Currently, if I want to use a no-name cloud host, then I have to trust the no-name cloud host to not be conspiring against me. With remote attestation, I just have to trust that the no-name cloud host and Intel aren't working together to conspire against me. I trust Intel relatively, so it doesn't matter how shady the no-name cloud host is.
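
To make that flow concrete, here is a minimal client-side sketch in Python (using PyNaCl): verify the vendor-signed quote, check the attested code hash against the published one, then encrypt the workload to the enclave's key. The quote layout, field names, and helper here are illustrative assumptions, not Intel's actual SGX/DCAP formats.

    import json
    from nacl.public import PublicKey, SealedBox
    from nacl.signing import VerifyKey

    def check_quote_and_encrypt(vendor_verify_key: VerifyKey, quote: bytes,
                                signature: bytes, expected_code_hash: str,
                                workload: bytes) -> bytes:
        """Verify a vendor-signed attestation quote, then encrypt the workload
        so that only the attested code (holder of the enclave key) can read it."""
        # 1. Only accept quotes signed by the CPU vendor's attestation key.
        vendor_verify_key.verify(quote, signature)  # raises BadSignatureError if forged
        claims = json.loads(quote)
        # 2. Only accept the exact open-source, audited code we expect ("hash XYZ").
        if claims["code_hash"] != expected_code_hash:
            raise ValueError("host is running unexpected code")
        # 3. Encrypt to the key pair the enclave generated at startup ("public key ABC").
        enclave_pub = PublicKey(bytes.fromhex(claims["enclave_pubkey"]))
        return SealedBox(enclave_pub).encrypt(workload)

Step 2 is what moves the trust from the no-name host to the published code plus the vendor's key.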


> I trust Intel relatively

This seems like a bad idea. Probably the number one reason for a host to go snooping in your VPS is that they have been requested to by a government. A government also has the power to ask Intel to let the host do this. So no security has been gained here.


Without this scheme, the government or the host alone could inspect my VPS. With this scheme, the host acting alone is no longer able to inspect my VPS. That seems like a solid win.


You don't trust the cloud provider, you trust the hardware vendor. They are the root of trust in this scenario.

Of course, if that trust, due to malice or implementation defects, is misplaced, you're not better (but also not worse) off than without something like SGX.


I addressed that when I said "x86 instruction."

An x86 core with SGX can be emulated...

(Edit: SGX can't be emulated. I stand corrected. Perhaps a better argument would have been arguing that verifiable builds give the user software freedom by granting them the ability to run the same code everywhere. But trusted computing != verifiable builds.)


Actually it can't be, barring a successful attack against the physical hardware, firmware, or underlying cryptography. SGX employs public-key cryptography to authenticate itself to the end user remotely (the same as SSH). The key it uses is signed by the hardware vendor, so you most certainly won't be able to emulate it.

That being said, I have serious misgivings about any hardware I own and use being explicitly designed _not_ to do my bidding. I can certainly see the utility of such an arrangement for a cloud provider though.


I really dislike this take (it's from RMS) — it focuses on a single use case (indie user wanting to have complete control over his/her platform), while ignoring countless other use cases where someone legitimately does need to ensure that the platform can be trusted (e.g. it can be authenticated and runs the code it is supposed to run).


Why just indie user? Companies need to have trust in their property, too.


They need the illusion of trustworthy hardware so they can operate as if it is. If it turns out people need actual control over their own things and information for some reason, like to prevent subversion of our society from the inside, from the peak of power and privilege... we might just be shit out of luck.


Depends. Farmers can't live with that illusion; they prefer something they can fix themselves (a recurring topic on HN). It's possible more enterprises will end up like that as the current wave of outsourcing everything ebbs.


I would say the farm equipment DRM isn't even giving the illusion of control. It's plainly obvious to everyone who owns the equipment that it restricts its use, and not in some lawfully required way. Backdoored or coincidentally-flawed software and hardware, and things like that which are pervasive in tech, don't get in people's way so much. As soon as flaws become more widely known, they get patched. Surveillance doesn't get in people's way, directly. DRM like EME stops some copying but goes unnoticed by most, and can be rationalized more easily than the farm equipment.


> while ignoring countless other use cases where someone legitimately does need to ensure that the platform can be trusted

99% of the time this is DRM :/


99% of internet comments that say "99%" pull that number out of thin air.

In my work, it's mostly about being certain that our embedded devices run the code that we wrote and that the devices themselves are not rogue clones. In other words, about ensuring the security of the entire system.

I get the backlash against DRM, I hate it too. But I think these days the biggest DRM hoopla is largely over, and I am annoyed by the rather childish "treacherous computing" take.


But it really comes down to the same thing: you want to deny users the ability to run their own software, make modifications to the hardware, etc. If you are really the owner of the hardware, why do you need measures to prevent it from being reflashed with other software?


I sell complete systems with hardware and software. I want to make sure that the hardware hasn't been hacked into and that (for example) my cameras or sensors are sending me real data. I could also want to prevent hardware cloning.

There are lots of legitimate cases that do not boil down to "but you want to hurt your users". My users want a system that can be trusted, too.


This assumes the absence of a sandbox. Trusted computing can happen with or without a sandbox, much like "regular" computing.

If your system is running unsandboxed, untrusted third party code, that's pretty bad, regardless of the presence or absence of a trusted platform. As an example: FLOSS systems are definitely capable of running malware.

On the other hand, a reproducible build of open source software might well be what runs in (and relies on the attestation provided by) a trusted computing platform.

I do see one practical concern with integrating trusted computing on a general purpose computer:

If an implementation depends mostly on security through obscurity to achieve the desired attestation capabilities, this makes it much harder to audit it for vulnerabilities or backdoors. But I don't see how that is a fundamental property of a trusted computing system.


A reproducible build is very different than cryptographically attesting the binary state of especially the kernel.

If I can't produce a binary with the same "reproducible state" as the one you had because your _kernel_ was one that I don't run (especially because maybe I don't _want_ to run it), that destroys all the value of a reproducible build.

A reproducible build should not _undermine_ software freedoms, specifically those protected by a Free Software license. But trusted computing always undermines software freedoms: that's by design. It's intentional. It's all about locking 100% of the users into a single monoculture where there is minimal freedom.

And that's fine in a managed IT environment such as a corporation. But it's not ok when I buy hardware and the manufacturer refuses to hand over the certificate chain to me.
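
For contrast, here is roughly all that reproducible-build verification amounts to: independent parties build from the same source and compare artifact digests, with no attestation hardware involved. (A sketch with hypothetical paths; in practice you compare against digests published by independent builders.)

    import hashlib

    def digest(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def verify_reproducible(my_build: str, published_build: str) -> bool:
        # A reproducible build is bit-identical across builders, so matching
        # digests mean the published binary corresponds to the claimed source.
        return digest(my_build) == digest(published_build)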


A kernel would not be something that you would run in a trusted enclave. It's way too big of a surface area (containing your entire operating system and application layer, after all), so what would be the point of attesting that to anyone?

This is the "old" way of using a TPM, and I agree, it does not make sense at all. After all, it never came to be, and that's not only because of the vocal protests against it. It simply does not make sense!

But I would encourage you to read up on how, for example, the Signal foundation is thinking about using something like SGX.


It was the ME firmware that was compromised with this CSME exploit.

Of course, few people notice if the ME firmware is updated, and Intel doesn't often update deployed ME firmware. But it verifies the BIOS, and the BIOS verifies the kernel.

And that's the point where people start caring. Hence I used the kernel as an example.

Signal is thinking about using SGX, but they also have other ways to grant the user reasonable security. After this CSME exploit, Signal may reconsider using SGX.

Either way, my point still stands: verifiable builds do not rely on trusted computing at this point. I hope they never do. It would be twisted logic to tell the user the only way they can have their software freedom is by asking the trusted computing infrastructure to verify it for them. The trusted computing infrastructure being absolutely as opaque and locked-down as possible. Trusted computing is not reproducible! Its designed-in purpose is to be opaque, to hide things from the user.


> verifiable builds do not rely on trusted computing at this point.

Of course they don't. They are orthogonal, i.e. one does not imply the other, but one also does not prevent the other.

The verifiable build serves you, the hardware owner, in knowing that the software does what its vendor claims.

The trusted platform's assertion serves the software vendor, allowing them to trust the environment that their software is running in.

> It would be twisted logic to tell the user the only way they can have their software freedom is by asking the trusted computing infrastructure to verify it for them.

Nobody is saying that. If you want to trust your computer to do what you think it does, you don't want trusted computing; you probably want reproducible builds, trusted boot etc. But trusted computing also does not inherently prevent you from doing that. The two are orthogonal!


> If you want to trust your computer to do what you think it does, you don't want trusted computing ...

I suppose this depends on precisely what's meant by the term. Trusted computing based on a root of trust you control is incredibly useful by providing guarantees to you about remote systems you own and operate. This can be realized at present using Secure Boot and a TPM, but more hardware support (ex VM screening or SGX based on your own keys) would be nice.

You seem to be using trusted computing to refer only to hardware that works against the owner. Instead, I've always thought of it as referring to a device that is capable of providing various guarantees about the code it executes - it just happens that current implementations are primarily designed to provide those guarantees to third parties instead of the owner.


> You seem to be using trusted computing to refer only to hardware that works against the owner. Instead, I've always thought of it as referring to a device that is capable of providing various guarantees about the code it executes - it just happens that current implementations are primarily designed to provide those guarantees to third parties instead of the owner.

This is also my take on it. As with many issues in tech, it’s not about the tech itself, but rather who the tech is working for.


I have an inexpensive Acer WinBook. The BIOS lets you nuke the preloaded public keys and explicitly whitelist the current bootloader(s).

Now, if anyone tries to boot it with something other than Grub, it will fail and prompt for a password.

This seems like a reasonable tradeoff for secure boot, though it would be nice if Grub had a lockdown mode for this use case.


Grub has _password_ and _verify_ settings, which are what you want.

Or take a look at a ready-made configuration: https://github.com/CrowdStrike/travel-laptop


Good point. FYI asterisks can be used to italicize for emphasis.


Or spun differently, _I_ can run private, trusted code on an adversarial, remote EC2 instance without compromising my privacy, while preventing the adversary (Amazon) from tampering with my secure execution.

At least in theory. IIRC, a number of side channel attacks are exploitable on Intel SGX, so the adversary could leak secrets but not tamper with execution.


It's not a misnomer. When you define trust you need to define who you trust.

If X's trust in Intel to provide a platform where the code runs verified doesn't agree with your ethical view, that means you likely don't trust X and Intel. That's all - there's no betrayal, or traitors, or other ethical dilemmas here.

Trust is not universal and you cannot trust everyone.

You're probably more interested in trustless computing, where those modules are irrelevant.


Why would I want my computer to make decisions against my will because of a very carefully defined version of "trust"? When the alternative is software freedom which gives me the power (and yes, the responsibility too) to direct my computer the way I want?

And yeah, that responsibility includes checking for updates and downloading security fixes.

"Trust" is always used in "Root of trust" and "Trustworthy computing" to mean "deny software freedom to the user."

That's not even close to what the dictionary says trust means.


> "Trust" is always used in "Root of trust" and "Trustworthy computing" to mean "deny software freedom to the user."

That's not what "root of trust" means. Root of trust is normally a certificate which signs other certificates. If you trust that top level certificate to be valid, it means you can trust that the certificates signed by it are valid as well. That's all there is to it.

What someone does with the certificates - whether that's signing TLS traffic, or execution attestation is completely separate. I don't think you'd argue that TLS is used to "deny software freedom to the user" - right? (https://en.wikipedia.org/wiki/Root_certificate)
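
For what it's worth, the signing-chain idea is small enough to sketch. Here is a toy version in Python with raw Ed25519 keys (not real X.509 certificates), using the `cryptography` package: if you pin the root key, you can verify anything that chains back to it.

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def raw(pub) -> bytes:
        return pub.public_bytes(serialization.Encoding.Raw,
                                serialization.PublicFormat.Raw)

    root = Ed25519PrivateKey.generate()          # the "root of trust"
    intermediate = Ed25519PrivateKey.generate()  # e.g. a vendor signing key
    firmware = b"some firmware image"            # the leaf artifact

    intermediate_cert = root.sign(raw(intermediate.public_key()))
    firmware_sig = intermediate.sign(firmware)

    # A verifier that pins only the root public key can still check the leaf:
    root.public_key().verify(intermediate_cert, raw(intermediate.public_key()))
    intermediate.public_key().verify(firmware_sig, firmware)
    # Both calls raise InvalidSignature if anything in the chain was tampered with.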

"trustworthy computing" is not a technical term, but a marketing phrase. (https://en.wikipedia.org/wiki/Trustworthy_computing)

You're probably referring to "trusted computing" which is a very specific use of signing and attestation of computing states. It uses the chain of trust in the same way it uses addition (or substitute any higher level concept you want) - you can't say "addition is bad, because it denies software freedom to the user" in this case.


> You're probably referring to "trusted computing" which is a very specific use of signing and attestation of computing states.

If it is so specific, why is it present in every single Intel CPU? And why aren't end users able to delete the built-in root of trust and replace it with their own?

Trusted computing, as implemented by Intel, is actively hostile to the citizens of a free society. So in this case, I really can say that it denies software freedom to the user.


You can disable SGX in your BIOS. It doesn't give you any new features then, but it doesn't deny any freedoms either.


>I don't think you'd argue that TLS is used to "deny software freedom to the user" - right?

No, but if I could not make a website that was not signed by e.g. DigiCert, then I would argue that something was not right.


Yes, trusted computing is currently mostly used to curtail user freedom - generally in the name of security. And yes, implementations make use of a cryptographic root of trust to realize their design goals. It doesn't have to be that way though.

I'm fine with a root of trust I have complete control over, and a trusted computing chain built on that would be welcome. Such an arrangement would do nothing to curtail my freedom, since I could install my own keys and thus firmware. And it wouldn't require any additional trust in the hardware vendor beyond what I already place in them. It could even ship with the vendor's keys installed by default, the same way Secure Boot works today.

I'd still have mixed feelings about something like SGX (in its current form) being standard in consumer hardware though, since at its core it consists of the hardware working against the owner. I can see why such an arrangement is desirable for cloud providers, but widespread consumer adoption would allow a third party (presumably a content provider) to require its use as part of a DRM scheme.

That being said, even SGX could be made palatable if there was no vendor provided key in the hardware at all and it instead attested everything with _my_ key. This would still protect remote systems against myriad forms of physical attack and allow for screening VMs from each other, all without compromising the integrity of my system (from my perspective). The downside, of course, is that it would require actually trusting your cloud provider.


FWIW, SGXv2 enabled machines -- to my knowledge -- have a user-programmable MSR that you can stuff your own key into. So Intel is no longer needed at all for attestation. (I think they even released an SDK so you could more easily implement your own attestation services, too, like the ones they run.) The problem is that very few SKUs ship with v2 at all, and zero of those SKUs are in the Xeon lineup as of now.

Once the first set of patches for SGX support in Linux got rejected specifically because the user couldn't set their own keys, I felt it was inevitable Intel was going to cede control on this; it was only a matter of when the chips appeared. It's just not how they roll; Intel's relatively good and well-prepared-in-advance Linux support is one of their strengths, and they almost certainly don't want to keep out-of-tree patches around for a major feature like SGX any longer than they absolutely have to. (The latest versions of the kernel patches, I think, only support SGXv2-style user-controlled attestation. They have yet to be merged upstream.)


That's... slightly better? I think? Honestly SGX is complicated enough that I'm not entirely sure I have this straight. Am I correct in thinking that:

1. A user-programmable MSR would allow the device owner to falsify the attestation of enclave initialization (when using their own keys).

2. The ME (and other firmware) would still not be replaceable or owner controllable.

3. Due to 2, enclave compromise would have to be premeditated. That is, once an enclave is up and running IIUC it can (potentially) use MRENCLAVE to derive the sealing key. Without control of the ME that enclave would be unbreachable, correct?

So realistically, to maintain control of code running on my device I would have to set my own SGX keys and configure my system to actively compromise all enclaves at launch. From a security standpoint, there's still a black box (the ME) with super-root that could be compromised by attackers that I can't inspect, modify, or disable. And from a political standpoint, there's still a tragedy of the commons regarding DRM if the majority of providers decide to require SGX with Intel root keys as a condition of service.

Have I misunderstood anything?

(While responding, I stumbled across an interesting paper regarding using SGX to protect malware against researchers: https://arxiv.org/abs/1902.03256)

Edit: It appears I was overly optimistic - playback of 4K Blu-ray already requires Intel SGX. (https://old.reddit.com/r/Amd/comments/bw0cwq/will_ryzen_3rd_...)


I don't think SGXv2 is available anywhere. Do you have a link to an SGXv2 datasheet/purchaseable chip?


Flexible Launch Control (setting your own keys) is available in the NUC7CJYH, apparently: https://github.com/ayeks/SGX-hardware#hardware-with-sgx2-sup...

So even worse availability than I thought; I figured more than one SKU was supported. (I think you could at least buy either a NUC or a 1U from Intel for SGXv1 prototyping, but this is all I can find at the moment.)

Honestly, though, given all that it's still hard to find what actually defines SGXv2 in terms of features or errata. But FLC is the major requirement that will allow mainstream kernel support, from my understanding.


It looks like finally users may have complete control over their Intel computers without Intel having the final say. I, for one, am quite happy about this.


This sentiment seems to be rooted in a misunderstanding of what trusted computing is trying to achieve on a fundamental level.

The idea is not to "take control over people's computers", i.e. your trust in your own computer. It is rather to enable somebody to gain some level of trust in the computations that are happening on somebody else's computer.

Yes, this technology is commonly used for DRM, and that was one of its earliest applications. But it's not limited to that. Trusted computing can switch the roles and give you as a user certainty over the computations a third party provider performs in the cloud on your behalf. The Signal team is doing a lot of very interesting experiments there [1].

If your concern is a hardware backdoor or something similar, this is less of a question of trusted computing, and rather one of trust in hardware vendors. Your hardware vendor can screw you over entirely without TPM, TEE, secure elements and the like.

On the other hand, Intel's trusted computing platform being horribly broken does not magically give you FOSS replacements for all the firmware, ROMs and microcode running on the dozens of peripherals in your computer.

[1] https://signal.org/blog/secure-value-recovery/


That's all correct, yet it doesn't consider the politics. The people on this thread are concerned about a huge power imbalance between customers and companies; specifically, customers have zero bargaining power so they should expect trusted computing to be used "against" them far more often than it's used "for" them.


> Your hardware vendor can screw you over entirely without TPM, TEE, secure elements and the like.

Yes they can, but as I understand it they are using TPM to screw us over, hence people celebrating its being popped. No misunderstanding of trusted computing necessary: there aren't, in practice, other vendors to choose from here.


Could you elaborate on the ways that people are being screwed over by TPM?

I do see that the existence of both trusted and untrusted systems could exert some pressure on consumers to adopt the former, due to the unavailability of certain services on the latter (e.g. DRM-protected content, banking apps on rooted Android phones, etc.).

The danger here is a loss of "freedom to tinker", which I do appreciate very much, and I share that concern. But has that actually happened with TPM?


Certain 4K/HDR content is only available on PCs if you use Edge.


That's not TPM.


What about the fact that the newest Intel CPU that can run Coreboot is from 2012 since all the subsequent ones have been locked by Intel? Isn't the TPM directly responsible for loss of that freedom to tinker?


Coreboot runs on the latest Intel CPUs (and work is underway for CPUs that haven't even been released yet) but it uses binary blobs. Those blobs have nothing to do with TPMs, the ME, or whatever.


As already mentioned the coreboot bit is false, but also:

the TPM is a PASSIVE component. It only responds to your requests and you can do cool things with it.

https://media.ccc.de/v/36c3-10564-hacking_with_a_tpm


Just a few months ago I bought a Comet Lake laptop that shipped with coreboot, so I have no idea what you’re talking about here.


I'm curious, where are TPMs being used to screw people over in your opinion?


I think we need to come up with solutions to problems like key escrow (i.e. in Signal's case) that don't require trusted computing because a single root of trust for hardware is a single point of failure and depends on trusting the hardware manufacturer.

There are a lot of possibilities with distributed computing.


Do you have any narrative of how to do key recovery safely without an enclave or a human in the loop?

I’ve spent a lot of time thinking about this and I don’t really know how to do it without one of those two things.

Edit: like I hear you saying there are possibilities in a distributed computing world, but I don’t have any idea what distributed computing enables for key recovery (except possibly k of n schemes but that’s just replication, not safety).

Edit 2: also, presume that users suck at key management and can’t remember long password strings, 24 words, or be trusted to store a key for a meaningful period of time.
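
For readers who haven't seen one, here is roughly what a k-of-n split looks like (a toy Shamir-style sketch over a prime field, not production code). Whether that counts as "safety" or merely distributed trust is exactly the question above.

    import random

    P = 2**127 - 1  # a Mersenne prime, large enough for a 16-byte secret

    def split(secret: int, k: int, n: int):
        """Split `secret` into n shares; any k of them reconstruct it."""
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def combine(shares):
        """Lagrange interpolation at x=0 recovers the secret from any k shares."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = split(secret=123456789, k=3, n=5)
    assert combine(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
    assert combine(shares[1:4]) == 123456789  # ...any 3, not a specific 3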


It would still be an enclave of sorts, but white-box cryptography is generally trying to achieve a similar goal to trusted computing, without relying on trusted hardware.


I don’t think the enclave you’re describing exists, nor do I believe there is an enclave that is untrusted hardware.

Do you have an example of such an enclave, and how it would operate without a remote attestation service, in the model where a user can trust that a distributed network they don’t control is safeguarding their key?


If homomorphic encryption advances to the point where it's usable, that would be an example of security on untrusted hardware.

(But I suppose that just proves your point.)


Yes. I think if homomorphic encryption existed in a way that was super fast, we would be using that. It’s really far off, as far as I can tell.


It's definitely real, but as you stated, incredibly slow (and noisy, though that's dealt with by refreshing). It has limitations currently, but there's good work right now lifting those, so it's definitely something to keep an eye on moving forward.


There is a solution: Fully Homomorphic Encryption (FHE). However, it's unfortunately not practical yet with current hardware (and it's unclear if it will ever be). Meanwhile, some partially homomorphic schemes might address specific applications (say, for instance, a 3rd party provides an encrypted list; I believe there are nearly-practical algorithms to sort this encrypted list without information leaks).

Trusted platforms are pretty interesting IMO in their ability to essentially provide FHE by means of tamper resistance instead of mathematical security. Objections should be directed more at the control of keys resting with Intel; maybe some other orgs should be in charge, maybe there could be a number of trust vendors you could choose from, or at least veto (and allow external users of your platform to choose). Something more in line with TLS authentication: we all need to trust 3rd parties to use the internet, and nobody protests -- with good reason. It's a well-designed, open, decentralized system with good oversight.
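
As a concrete example of the partially homomorphic case, here is a toy Paillier sketch (tiny, insecure parameters, illustration only): anyone can add two encrypted values without ever being able to read them.

    import math, random

    p, q = 1789, 2003              # toy primes; real keys use ~1024-bit primes
    n, n2 = p * q, (p * q) ** 2
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)           # valid because we use g = n + 1

    def encrypt(m: int) -> int:
        r = random.randrange(2, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(2, n)
        return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c: int) -> int:
        return ((pow(c, lam, n2) - 1) // n * mu) % n

    a, b = 123, 456
    c_sum = (encrypt(a) * encrypt(b)) % n2   # multiply ciphertexts...
    assert decrypt(c_sum) == a + b           # ...and the plaintexts add up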


> like key escrow (i.e. in Signal's case)

Is signal moving towards some sort of key escrow policy?


that sounds vastly overgenerous. if that were truly the case, then TEE wouldn't be built into every consumer system. if it were actually to protect against malicious cloud providers, then TEE would be only available for special (read: expensive) processors. see: intel and ECC memory. the goal of TEE is to benefit big corp and fuck the user, and just because it can in theory be used for other purposes is barely a consolation.


TEE is practically used for mainly two things in current smartphones: DRM and hardware key storage.

DRM lets users watch Netflix on their phones while on an airplane.

Hardware key storage significantly decreases the attack surface for malware trying to extract them, compared to storing them on the application processor.

How is the average user being fucked here, exactly?


> DRM lets users watch Netflix on their phones while on an airplane.

No, DRM exists to restrict users, it does not enable anything for them.


That is definitely true. In a way, the user of DRM is the content provider, not the owner of the playback device.

But this is exactly the idea of trusted computing:

"Prove to me that I can trust your hardware to run my software according to my specifications, and I will use it to compute things (for our mutual benefit) that I would otherwise only compute on my own hardware."

DRM is the canonical example, but wouldn't it be nice to be able to actually know that cloud service provider has to adhere to their terms of service, rather than having to take their word for it?

(The big "if" here is that the terms of service are expressible and enforceable in the context of some piece of software.)


> "Prove to me that I can trust your hardware to run my software according to my specifications, and I will use it to compute things (for our mutual benefit) that I would otherwise only compute on my own hardware."

As a user - why should I trust your software with my computer?

There is a great imbalance of power; the companies aren't necessarily the good guys. It already got unfair with DRM: for example, together with the DMCA it effectively blocks fair use, like the ability to make your own backup copy of a purchased medium, or to purchase it once and play the content on multiple devices.

It also prevents one from selling their copy to someone else, which is also allowed by law.


so what you're saying is that without TEE, Netflix would shut down? come on. Netflix would clearly keep operating with or without DRM, all TEE does is make it harder for the user to access their legitimate (non-Netflix) content in anything but the most approved way. it entrenches mainstream operating systems and makes it harder to use FOSS. sure, I'll concede that Netflix is not the most damaging to user freedom, but that's not what OP is about. nobody would give a shit about this vulnerability if it was just Netflix, because Netflix is broken against hardcore attackers anyways. TEE proponents want to expand its use to more user-hostile applications. that's my concern.

hardware encryption is arguably a better use of TEE, but as far as I know, no actual implementations use SGX for that purpose. the TPM is used, but it's not fast enough for actual encryption. the OS loads the keys from the TPM and does the encryption in regular software.


Does this announcement mean it's finally possible to run FOSS firmware like Coreboot on modern Intel hardware? If so, this is a huge finding


Coreboot already runs on modern Intel hardware. This vulnerability doesn't eliminate the blobs or the need for the ME to initialize hardware before Coreboot runs if that's what you're thinking of.


Well, I think people care about whether this can be used to bypass bootguard. If yes, it would enable coreboot support on every laptop.


Could this vulnerability be used to bypass the ME? I'd like to run coreboot on my thinkpad X1 carbon gen7


The ME is irrelevant for you. It's bootguard that is preventing you from running coreboot.


That's what I am wondering too! Perhaps Libreboot can make progress and we could have completely free systems with more up to date hardware after all? That would be so great.


Looks like it could make it easier to get around DRM also?


"the scenario that they feared most", yet the scenario everyone was sure would happen.


I wasn't sure it would happen, but I'm sure happy it has!


> Intel CSME firmware also implements the TPM software module, which allows storing encryption keys without needing an additional TPM chip—and many computers do not have such chips.

And that was the real error. The TPM should be a TPM. It could be on die, but it should be an entirely isolated device with its own RAM, no DMA, no modules, and no other funny business.


Yeah, but that would cost money! Intel hasn't suffered major economic repercussions for any of their other security issues ... so why bother? At ~400 million chips/year, a $0.10 increase in component cost would translate to $40 million in lost profits.


That sure is an interesting decision, given that other big players (Apple with the Secure Enclave, Google with Titan) have been moving into the opposite direction.


Internal TPM is more secure for attestation. You can MitM the LPC bus with an external TPM, faking PCRs.
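
For context, the PCR-faking attack works because a discrete TPM only ever sees the digests the host sends it over the bus. A simplified model of measure-and-extend (not the actual TPM 2.0 command interface):

    import hashlib

    def extend(pcr: bytes, component: bytes) -> bytes:
        # measured boot: new_pcr = SHA256(old_pcr || SHA256(component))
        return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

    pcr = bytes(32)  # PCRs start zeroed at reset
    for stage in (b"boot rom", b"firmware", b"bootloader", b"kernel"):
        pcr = extend(pcr, stage)

    # An interposer on the LPC/SPI bus can replay the *expected* measurements
    # while the machine actually runs modified code, reproducing the same PCRs:
    fake = bytes(32)
    for stage in (b"boot rom", b"firmware", b"bootloader", b"kernel"):
        fake = extend(fake, stage)
    assert fake == pcr  # attestation based on these PCRs would still pass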


>You can MitM the LPC bus with an external TPM, faking PCRs.

not an issue if it's on-die, as the parent suggested.


You are right, of course. My bad!


Intel claims they were already aware of this vulnerability in CVE-2019-0090. ptsecurity believes there is more work to do here though.

To me it sounds like Intel is not thrilled with ptsecurity's work, and may not be awarding ptsecurity a bounty or recognition for this. But that's just my two cents.

------>8------ quoting from the article ------>8------

We should point out that when our specialists contacted Intel PSIRT to report the vulnerability, Intel said the company was already aware of it (CVE-2019-0090). Intel understands they cannot fix the vulnerability in the ROM of existing hardware. So they are trying to block all possible exploitation vectors. The patch for CVE-2019-0090 addresses only one potential attack vector, involving the Integrated Sensors Hub (ISH). We think there might be many ways to exploit this vulnerability in ROM. Some of them might require local access; others need physical access.

As a sneak peek, here are a few words about the vulnerability itself:

1. The vulnerability is present in both hardware and the firmware of the boot ROM. Most of the IOMMU mechanisms of MISA (Minute IA System Agent) providing access to SRAM (static memory) of Intel CSME for external DMA agents are disabled by default. We discovered this mistake by simply reading the documentation, as unimpressive as that may sound.

2. Intel CSME firmware in the boot ROM first initializes the page directory and starts page translation. IOMMU activates only later. Therefore, there is a period when SRAM is susceptible to external DMA writes (from DMA to CSME, not to the processor main memory), and initialized page tables for Intel CSME are already in the SRAM.

3. MISA IOMMU parameters are reset when Intel CSME is reset. After Intel CSME is reset, it again starts execution with the boot ROM.

Therefore, any platform device capable of performing DMA to Intel CSME static memory and resetting Intel CSME (or simply waiting for Intel CSME to come out of sleep mode) can modify system tables for Intel CSME pages, thereby seizing execution flow.

------>8------ end of quote from the article ------>8------


It is telling that not a single comment here sees this as a bad thing. Maybe Intel should take the hint. Users want to own their hardware!


People commenting on this thread are a very self-selected group.


Any consumer who understands what it is for would be against it.

TPM is essentially a device that takes control away from the computer owner; it is protecting some company's software from YOU.


As if that is necessarily a bad thing. It also means that it helps me defend against software posing as me.


People wanting flawed TPM modules certainly are a very self-selected group too. Not really an argument. As long as the module can be disabled I have no problem with it. So I don't see the problem on the side of people wanting control here.


a potentially fair point. Please provide one or two reasons as to how this development is not absolutely great for users


See my comment further down. Not all trusted computing is user-hostile. Don't confuse the technology with its (primary early) applications.


Trusted computing is such an ambiguous term.

Intel alone controls the certificate chain for the CPU I own? I don't trust it, and it's user-hostile. Users won't know that it's because of Intel that, for instance, their legacy apps don't run any more. Or their Mac's NVMe drive cannot be recovered (though, yes, this is Apple's Trusted Computing chip, not Intel's).

I take it as the tech community's responsibility to clearly point out who violated their trust on this one.

Trusted computing could be "not user-hostile," or perhaps that's what "user-friendly" means? But to not be user-hostile the certificate chain must be surrendered at point of sale.

It's ironic that sysadmins for large corporations _are_ enabled by Intel's management tools, and _are_ aware of the purpose of these trusted computing tools. But end users _are_ _not_ enabled, _are_ _not_ aware, and are thus treated hostilely by Intel and cannot do the things they absolutely need to do with their own PC.


The original term, "trusted", is a military intelligence term.

It does not mean the ordinary sense of trust, which indicates complete confidence in the integrity and accuracy of the referent.

It means that you have no choice but to rely on it.


"Trusted" may be military intelligence jargon, but term "trusted computing" originated at Microsoft in the early 2000s. After several particularly nasty internet worms gave the company a (justified) reputation of terrible network security, the they launched the "Trustworthy Computing"[1] initiative to rebuild trust in their platform with several security improvements.

"Trustworty Computing" eventually became the "Palladium"[2] project with more ambitious goals including DRM. Palladium evolved into NGSCB ("Next-Generation Secure Computing Base") when Microsoft joined with other companies to form the TCPA ("Trusted Computing Platform Alliance") that later became the ("Trusted Computing Group").

The term has always been used by Microsoft (and later the TCPA/TCG) to mean a trustworthy platform, from the developer's perspective[3].

[1] https://en.wikipedia.org/wiki/Trustworthy_computing

[2] https://en.wikipedia.org/wiki/Next-Generation_Secure_Computi...

[3] https://www.cl.cam.ac.uk/~rja14/tcpa-faq.html


Microsoft redefines terms to suit them, what a shocker.

5200.28-STD - DoD Trusted Computer System Evaluation Criteria - August 15, 1983 - The Orange Book.


And well, the whole Trusted Platform Architecture is simply about having some kind of root of trust, implemented by an external chip that maintains a set of hashes of what the hell runs on the platform and has physical GPIO ports to ascertain user intent. Then somebody had the bright idea to implement that as a process inside the Intel ME architecture...


Kind of weird coming here after the Crypto AG read if you ask me.


By that standard, all hardware is "trusted" regardless of what Intel does. You have to rely on it, and if it misbehaves or stops working you're SOL.


In that sense, I don't trust Intel. I don't rely on their hardware.


User-friendly secure authentication mechanisms (like Windows Hello or fingerprint readers) were just broken. The TPM keeps the user's own data secure, too, after all.

How is that not absolutely disastrous for users?


This is a valid concern. If you disagree, at least comment when you downvote.


Users will buy hardware they don't have control over anyway so Intel doesn't have to worry about them!

I want nothing more than for Intel to stop acting as awful as it does, but the market doesn't care about what users want for goods that are almost mandatory.


>Maybe Intel should take the hint. Users want to own their hardware!

This reminds me of that change.org petition from years ago addressed to intel to remove management engine et al.

Which was ridiculously naive and disconnected from reality, in my opinion.


If this really does enable breaking the root of trust on the last 5 years of hardware then it is a terrible thing regardless of IME.


I'm curious whether Apple's work on hardening their secure boot process on x86 affects this at all. For those unaware, this video [0] covers it in about seven minutes. Basically, they claim to be enabling the IOMMU with a basic deny-everything policy, so that when the changeover to executing from RAM occurs and PCIe devices are brought up, the IOMMU is able to deny possible malicious access to the firmware image.

It sounds from the end of the article that there are separate DMA/IOMMU processes for the CSME, but I'm not familiar enough with stuff this far down to know for certain.

[0] https://youtu.be/3byNNUReyvE?t=124


It is their proprietary T2 chip that controls things like FileVault (full-disk encryption) and Touch ID. So a vulnerability on a Mac would not be nearly as severe as on Windows, where this can eventually compromise the fTPM used for BitLocker encryption (dTPMs wouldn't be vulnerable, but their integrity protection can be bypassed by messing with their physical connections to the CPU).

The T2 chip has its own Secure Enclave and immutable BootROM, and it supposedly verifies the Intel UEFI ROM before it is allowed to load, and then the CPU reads this from the T2 over SPI. So it would seem that this boot process is not weakened by a compromise of the Intel key, as only Apple can sign UEFI updates to be loaded onto the T2 chip.

Source: https://manuals.info.apple.com/MANUALS/1000/MA1902/en_US/app... (long PDF)


Related: I wrote a (maybe not 100% accurate) low-level summary of the x86 secure boot model here a while ago: https://osy.gitbook.io/hac-mini-guide/details/secure-boot


For my upgrade, I decided this year (after 20 years of only using Intel) to go with AMD, due to it having fewer vulnerabilities. I had my doubts, but this article made me decide it's time to go the AMD route.


Unfortunately, AMD has PSP. [1] ARM has TrustZone. [2] You'd have to get a system with a POWER9 [3] chip, such as the Talos II from Raptor. [4] That has quite a price tag though, on account of not being mainstream.

[1] https://en.wikipedia.org/wiki/AMD_Platform_Security_Processo...

[2] https://en.wikipedia.org/wiki/ARM_architecture#Security_exte...

[3] https://en.wikipedia.org/wiki/POWER9

[4] https://www.raptorcs.com/TALOSII


It should be noted that ME and PSP are both (a) a technology to implement a super-root over your entire system and (b) an implementation of said super-root environment that you do not control and cannot opt out of. TrustZone is only (a). TrustZone just defines a technology that may be used to implement such a thing, but it itself is harmless and does not actually do anything.

There are chips you can buy that do not come with any TrustZone code, and you may write your own to put in there, if you so wish


Thank you for these links, I'll look into them. And yeah, I do plan my upgrade to be quite expensive. For me, the workhorse has to be a beast that supports virtualization, gaming, and quite a bit of editing - so in the end the price tag is not my main criterion.


I have not looked into AMDs efforts in this area recently: Is there an AMD-equivalent to, for example, Intel TXT?

If so, is it actually more secure, or has it simply not been scrutinized as much as Intel's version by security researchers?


Some time ago I had considered (and rejected, due to a bad review that apparently stemmed from a freak bad sample) equipping my unit with POWER9 systems from Talos.

I am now reconsidering the idea.

https://www.raptorcs.com/TALOSII/


For people who know nothing about this and want a tl;dr in video form: https://www.youtube.com/watch?v=5syd5HmDdGU


And for anybody interested in the previous Hacker News discussion on the topic: https://news.ycombinator.com/item?id=14956257


BACKDOOR

5 years of Intel CPUs and chipsets

https://arstechnica.com/information-technology/2020/03/5-yea...


TL;DR: There is a tiny window during bootup when any hardware can DMA code or keys in/out of RAM. That allows complete compromise of all protections offered by the chipset, including secure boot, TPM key storage, etc. It is not fixable via a firmware update.

The researchers have not demonstrated a complete end to end attack, but it seems likely one exists.

While this could likely be pulled off easily as a local attack, in some cases it might also be possible to do as a remote attack depending on being able to program other hardware devices to exploit the flaw during a reboot.


This is great news! Undermining remote attestation is a win for the open web and free society. And perhaps it means we can get Libreboot on something newer than Ivy Bridge.


This was my first thought as well. It seems these management engine tools have only two uses in the real world: enterprise IT, and various forms of DRM.

Both exist to treat the user as a hostile entity.


Well it would be nice for cloud servers, so that you wouldn't have to trust the hosting provider. But given the choice of needing to trust (cloud: intel, home: intel), and (cloud: intel+provider, home: nobody), it would be foolish to not choose the latter!

At the individual level, remote attestation has a particularly terrible end game. Think of all those websites that attempt to enforce their desired business whims client-side, that we rightfully laugh at - browser fingerprinting, image save as, anti-adblock, etc. Now imagine they're successful!


Given the presence of 4K web-DLs (original, not re-encoded content), somebody must have the key, or they must have managed to pwn the DRM on an even deeper level, like tapping the memory (which is even worse).

Another possibility is still a source leak, where 4K content gets lifted off Netflix's own internal content storage.


Or a simple HDMI defeat and re-encode. It only takes one guy to put it out on the net. DRM is an inherently flawed concept.


The content is watermarked by the time it is available on HDMI. The guy who re-encoded it would get a knock on the door.


>The guy who re-encoded it would get a knock on the door.

Assuming there is a watermark, how would you track them down? It's not like you need to register your hdmi capture device.


No. HDMI/HDCP does not do watermarking or any other modification to content.

Cinema DCP packages, however, do - it's either watermarked at the DCP distributor or in the decryption module, but that stuff is out of reach for most warez crews.


That would seem to increase the cost of streaming quite a bit if the provider has to re-encode the content for each streamer to embed a watermark instead of just dumping pre-encoded bits on the wire. And the watermark has to survive a re-encode. All to shut down some guy's account in a foreign country.


You can encode two streams with some detectable difference in them, then switch between them at GOP boundaries. The stream choice per gop gives one bit of data. You only need 33 gops (33 bits) to uniquely identify everyone on earth right now.
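
A rough sketch of the idea (illustrative Python; the IDs and variant names are made up, not any real CDN's scheme):

    # Per-subscriber A/B watermarking: pre-encode two visually near-identical
    # variants of each GOP, then pick variant "A" or "B" per GOP according to
    # the bits of the subscriber ID. 2**33 > ~7.8 billion, so 33 GOPs suffice.
    N_BITS = 33

    def gop_choices(subscriber_id: int) -> list[str]:
        return ["B" if (subscriber_id >> i) & 1 else "A" for i in range(N_BITS)]

    def recover_id(choices: list[str]) -> int:
        return sum(1 << i for i, c in enumerate(choices) if c == "B")

    choices = gop_choices(5_123_456_789)
    assert recover_id(choices) == 5_123_456_789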


> You only need 33 gops (33 bits) to uniquely identify everyone on earth right now.

But then you'd only need ~6 accounts to have a >50% probability of seeing both values of each bit, and be able to combine them at random.


It's not HDMI/HDCP that watermarks it, that's not what I said. But by the time you get to an HDCP stripper, it has already been watermarked.


You could have the player software insert the watermark after decode.


> if the provider has to re-encode the content for each streamer

There's no need for that, the CENC standard has more robust watermarking support, but it's not really used in practice yet because it's not commonly supported by browsers and possibly other clients.


Even if HDMI doesn't do that, the streaming provider just might. It would be feasible to implement, and inconvenience potential pirates quite a bit.


Why? Potential pirates are just gonna steal someone's stolen cc details to open up a netflix account.


XOR frames captured via two different accounts.
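
Something like this, as a toy sketch (assuming both captures are decoded to raw frames; numpy with stand-in data):

    # Diff the same frame captured through two different accounts to locate
    # pixels where a naive per-account mark differs. Well-designed watermarks
    # are built to survive exactly this kind of comparison.
    import numpy as np

    frame_a = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)  # stand-in
    frame_b = frame_a.copy()
    frame_b[500:510, 900:910] ^= 1  # pretend this region carries the mark

    diff_mask = np.any(frame_a != frame_b, axis=2)
    print("pixels differing between the two captures:", int(diff_mask.sum()))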


Unless the watermark is terribly designed, that will not work. There is a lot of information that you can hide from the eye in a video.


Wouldn't you need three? Or, better still, do a bitwise 3 of 5 / integer median on the pixel values.


What would that accomplish? If you want to exclude differences, use AND.


I find it hard to believe that a HDCP stripper would then watermark the content and report who bought the equipment to the media cartels.


No, forensic watermarking would be done by the HDMI source (e.g. a PC in this case). I'm not aware of that being done in reality though.


Not if they are located somewhere like Russia or China where the authorities don't care


Even if it's some guy in Malaysia?


Just curious, how can one distinguish original content from a full-res re-encode without access to the actual bits of the original file?


This may not be exactly what you're asking for, but every re-encode introduces additional noise (generational error). Over many re-encodings (even at the same high bitrate/quality), the noise will accumulate in a predictable manner. See [0] for an interesting deep-dive blog post on the subject.

Now, as for whether or not you can distinguish the re-encode from its original source... difficult, but plausible in certain scenarios? Perhaps if the content was heavily re-encoded, to the point where you can statistically detect the presence of the generational noise. With only a single re-encode it may be impossible to determine.

[0] https://goughlui.com/2016/11/22/video-compression-x264-crf-g...


Hash the file you have, then compare to the hash of the official version.
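
For example (file names are placeholders):

    # A bit-exact copy hashes identically to the original file; any re-encode,
    # even at the same resolution and bitrate, will not.
    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    print(sha256_of("suspect.mkv") == sha256_of("official_release.mkv"))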


Cloud computing (SGX).


What about drive encryption? I'm a little unversed in hardware related to security, but my understanding of the article was that, given the ability to essentially MitM the TPM, anyone could decrypt the contents of an encrypted drive, potentially even remotely.

If so, that is definitely not a good thing.


The problem with drive encryption at the mainboard level is that it comes with a free side order of vendor lock-in.

Apple is a good example here. You can't even touch the internal storage whether you want it encrypted or not, because all access to the onboard storage is gated by their black box security processor.

I'd rather just forego the silicon and do it all in software, since there's no way to ensure that the OEMs aren't being bastards. Another reason is that the hardware vendor is in a great, centralized position to be backdoored either by hackers or the bad guys with badges.


Do you mean Coreboot? Because as far as I know, Libreboot still only supports Core 2-era hardware. Ivy Bridge would be a huge upgrade in comparison.


Oops, yes. What I actually meant is blob-cleaned Coreboot (i.e. ThinkPad X230). FWIW, Libreboot proper does run on Ivy Bridge-era Opteron 6300.


The blobs you have to include in Coreboot for modern CPUs don't have much to do with technical restrictions. A pwnage of the ME does not do the reverse-engineering-and-rewriting effort on the FSP for you.


I, too, prefer not to know when my box has been blue pilled.

Remote attestation is required for many privacy-preserving activities. It isn't just DRM.


I prefer my box to not be blue-pilled, rather than merely knowing if it has been double blue-pilled.

I didn't say it was just DRM. Remote attestation creates a vulnerability whereby remote entities demand that you attest to running a software environment that they control.

It's possible to do boot verification and attestation without baking in privileged manufacturer keys. If the attestation key were generated and installed by the owner, they could prove everything to themselves remotely, without being forced to prove anything to hostile parties.

If this were the case here, I wouldn't be cheering.
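
To make the owner-installed-key idea concrete, here's a minimal sketch of what owner-controlled attestation could look like (illustrative only; it uses the Python 'cryptography' package and a made-up measurement, not any real TPM/ME interface):

    # The owner provisions their own attestation keypair, so only parties the
    # owner chooses can demand (or verify) quotes about the machine's state.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Provisioning: the owner generates the key and installs it in the device.
    owner_key = Ed25519PrivateKey.generate()
    owner_pubkey = owner_key.public_key()

    # At boot: the device signs its measured boot state (a hash of the chain).
    measured_state = hashlib.sha256(b"firmware||bootloader||kernel").digest()
    quote = owner_key.sign(measured_state)

    # Remotely: the owner verifies the quote against the state they expect.
    owner_pubkey.verify(quote, measured_state)  # raises InvalidSignature on mismatch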


Remote attestation is often the only way to do secure computing on platforms with unknown security. For example things like i-voting would immensely benefit from a secure, anti-tamper computing environment. DRM is another thing however, yes.


Individuals being able to choose their operating system when accessing bank websites, and being able to keep secrets to themselves, has much more bearing on every day life than "internet voting".

Also note that voting etc could easily be implemented with smart cards (eg SIMs, credit card chips, etc). These are still trusted computing, but are at least limited in scope. Top-down control has no place in general CPUs if we wish to remain an open society.


Voting using computers is such a terrible idea even in principle that a trust breach like this is irrelevant.


I find that the potential reduction in voter suppression would make the field worth exploring. I mean, I can see the US's terrible governmental IT systems and where the scepticism comes from, but I think it/IT could be done well.


The post states:

>"This vulnerability jeopardizes everything Intel has done to build the root of trust and lay a solid security foundation on the company's platforms. The problem is not only that it is impossible to fix firmware errors that are hard-coded in the Mask ROM of microprocessors and chipsets."

Could someone explain what a Mask ROM is? This is the first I've heard of it. How is it different than a regular ROM?


A Mask ROM is read-only memory that is built from the silicon masks, which, in very simplistic terms, are the images of the chip that get printed to make the CPU. These "images" are called masks, giving the Mask ROM its name. These ROMs are cheap, simple, and small, but they can in no way be changed, since that would require changing the chip itself. Sometimes a chip also includes non-mask ROM that likewise cannot be altered.

Regular ROM can be many things: rewritable types like (E)EPROM or flash, discs like CD-ROM, or chips like PROM (which is write-once, read-many).


It's not different. It's meant to disambiguate "regular" ROM from EEPROM or Flash firmware like the sorts of "roms" you would download for a mobile device.


There is no trust. Apple used to trust their phones' BootROM. And guess what: a buffer-overflow/use-after-free bug called checkm8 was eventually found, and it is almost universal across all iDevices.


Can someone please explain the implications in layman terms?


As I pointed out before, lifting any "secret" key off any chip is quite trivial for a semiconductor professional.

It's part of the job of an IC engineer to be able to tap an arbitrary metal layer on the device with microprobes to "debug" it, and this is quite routine in the process of microchip development.

Any such measures can only deter people without access to an IC development lab.


I think this is simply not true for modern processes. Can you show me any example of such a key being extracted this way from a modern sub-50 nm CPU? I haven't heard of anyone actually succeeding.


You forgot the buses, the IOMMU and so on.


What about them?


So, my spouse was a CPU designer at AMD for many years and now does secure computing work for, well, the US government. I showed her your comment. She laughed. A lot.

This is all completely wrong.


Well, that was a bit of sarcasm. Yes, you have to have quite a serious lab for that, a level above what most fabless semi companies have, and skills on par with a process developer's.

Yet "firmware recovery" people in China use this regularly to make a living. Hardened/encrypted MCU firmware extraction costs under $20k here.


There are plenty of retrocomputing folks who would be heavily interested in ROM/firmware recovery from "hardened" chips, for entirely legal archival and/or interoperability purposes. $20k would be peanuts for this use case if success could be reasonably assured even in the "hardest" cases.


That sounds kind of nifty.

Any pointers to online info for people interested in finding out more, and/or setting up their own gear for this? :)


Contacting Mr. Steele below may be a good starting point, second after getting into process engineering studies


Not sure if I understand correctly, but are you saying secrets kept in hardware like console encryption keys (PS4 etc.) can be trivially extracted with the right tool?


Not really trivially: you need to drill tiny (sub-micron-sized) holes with lasers down to the appropriate wires, then insert probes (either using FIBs or directly) to pick up signals (we do this to debug bugs in chips).

Smart designers will put wires carrying useful information under many other layers, which will disable them if broken.

So yes, it's doable, but you'll likely damage the chip in the process; it's certainly neither easy nor trivial.


Correct.

My company (zeroK NanoTech) has developed and is now selling advanced focused ion beam (FIB) systems with enhanced resolution and machining capability that are well suited to these operations.

We did circuit-edits on 10 nm node chips with Intel and they have given talks about it at several conferences (e.g. ISTFA)


Never expected to see a man like you here :)

The surname Steele rang a bell as something lab-equipment related, and, yes, indeed you are.


Sounds like the sort of resources that most governments could command but few criminals? But of course with criminals there's always just trying to bribe Intel employees.


Bribing Intel employees is probably expensive and might have legal risks. Instead, just hire one of the many skilled technicians in Shenzhen.

For a good discussion of this topic, I recommend Andrew "bunnie" Huang's talk about supply chain security:

https://www.youtube.com/watch?v=RqQhWitJ1As


Yes - but if you could extract (for example) some HDMI-like master keys (so you can pirate first-run movies for resale), or gain eventual access to someone's billion-dollar bitcoin stash, it might be worth the trouble. It's not something you'd do to get cheap Netflix, etc.

It is something a government might do to get someone's crypto keys or iPhone, or to hack into foreign network infrastructure (Huawei/etc) (after all in the past they've built special purpose submarines to do such things)


It’s bonkers that the DMCA has people labeling hardware crackers as criminals. What about the farmers and their tractors?


I think the parent comment was more likely referring to using these devices for personal privacy. For example: can a criminal steal the personal information on my phone vs. can the government spy on me? The government might spend a million dollars on this process to read a terrorist's phone, but a criminal probably wouldn't do so to steal my personal information off a phone or USB drive.


Those people are fine. I'm looking at this from the perspective of the malware that can survive across OS re-installs because Intel put this enclave in your CPU that you can't touch. I'd assume the NSA is using that to spy on people right now but the question is how many other groups.


Not the parent commenter, but I suspect it’s less the act than the motivation.

Criminals who anticipate finding a way to profit on the information would be far more likely to go through the trouble of bribing someone or investing in the resources to snag it.


Yes, but signing can't be defeated unless you modify the IC itself.


If you can get the key, can't you sign whatever you want, in a way that the IC will validate it? It will still check that it's correctly signed, but doesn't that defeat the usefulness of it?


the private key isn't on the chip, only the public key is
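
A sketch of why extracting what's on the chip doesn't help (illustrative only; uses the Python 'cryptography' package, not any real boot ROM):

    # The chip only embeds the *public* half of the vendor's key: it can
    # verify signatures but cannot create them, so dumping it doesn't let you
    # sign new firmware -- you'd have to swap the embedded key (modify the IC).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    vendor_key = Ed25519PrivateKey.generate()   # stays in the vendor's HSM
    burned_in_pubkey = vendor_key.public_key()  # this part ends up on the chip

    firmware = b"legitimate firmware image"
    signature = vendor_key.sign(firmware)
    burned_in_pubkey.verify(signature, firmware)  # passes: boots

    try:
        burned_in_pubkey.verify(signature, b"attacker firmware")
    except InvalidSignature:
        print("tampered image rejected")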


Cool! Out of curiosity, do these debugging tools keep pace with the recent process shrinks? I would imagine it's really hard to connect a logic probe to, for example, a processor built on TSMC's 7nm process.


The metal layer interconnects usually are _not_ that small. I can't share the exact specs for TSMC's 7nm process but here's an example that should give you some idea:

https://web.stanford.edu/class/ee311/NOTES/Interconnect%20Sc...


Gate size on 7nm processes is still 30nm, and even the lowermost M0 metal is way, way bigger.

Even if doing so requires destroying and reconstructing some tracks around the probe, 7nm shouldn't be much different from how it was done a decade ago.


> Any such measures can only deter people without access to an IC development lab.

That's a pretty tiny group, isn't it?


There are at least five eyes in that group, though.

https://en.wikipedia.org/wiki/Five_Eyes


Yeah, but it includes governments and other big adversaries.



Could you elaborate on how tapping a particular metal layer allows someone to extract the key?


From a practical standpoint, how easy would it now be to e.g. create a "signed" bootloader (say, custom GRUB) that will boot on those affected chips with the default secure boot configuration? Or is this just for information exfiltration?


I'm guessing not, but does this affect AMD CPUs/chipsets?


No. The ultimate potential of this attack is the complete compromise of all Intel signing authorities over affected models. Naturally, that signing key does not have any value on AMD systems, nor can this vulnerability in itself be used on them.


I figured as much. thanks!


Update: it has now been confirmed that the T2 chip is vulnerable to the checkm8 vulnerability, which made version-agnostic jailbreaks available for all iOS devices up to A11 CPUs. So it would seem that Apple is in only a slightly better position.

AFAIK, the Secure Enclave stores the actual disk encryption keys and Touch ID data, so that should be safe. But the secure boot validation, firmware password, startup security policy, etc. can now be bypassed (once a full exploit to do so is written). Also, it is quite possible that the Intel ME and UEFI firmware validation can be bypassed by simply disabling that part of the T2's bridgeOS code.


In addition to what's been said below, the early boot process is totally different on AMD. They've got a little ARM core called the PSP babysitting the main core complex(es).


That's not that different. The PSP is basically AMD's ME.


Yes, but the way it boots and is hooked into the system is completely and totally different than ME.

It fulfills the same abstract purpose, but that's where the similarities end.


So there is a different piece that can be inspected for its own vulnerabilities, and that probably doesn't get as much scrutiny because the hardware isn't as popular.

That's not a criticism per se; I am sure it's hard to design these things securely and without bugs.


Totally, although it's under a ton of scrutiny from the PS4 folks where the Platform Security Processor is known as SAMU, and holds most of the decryption keys for the rest of the system including all executables.

Right now the only attacks I know of treat it as a decryption oracle, but it'd be nice to not have to pre decrypt programs on a real PS4 for cases like archiving and emulation.


> it'd be nice to not have to pre decrypt programs on a real PS4 for cases like archiving and emulation

I'm no expert, but that would be the only legal way of archiving these programs, no?


Eh, the legality is pretty orthogonal.

Using it as a decryption oracle involves enough circumvention in the first place that you might already be running afoul of the DMCA if that applies to you.

Meanwhile, institutions that are given more legal carte blanche like Archive.org would probably prefer to have the decryption keys in case there comes a point where they have access to encrypted binaries, but PS4s to decrypt with have become hard to find, or keys are rotated to the point where new applications exist that require a firmware version that aren't subject to the same decryption oracle attacks.

<And, not a lawyer>


In my opinion, critical code like this must be formally verified.


Not sure how formal verification would have helped here. DMA access is allowed at boot up, game over.


In principle, you could consider validating the system, not just the software. It might reveal such a gap.

Note well: I am not claiming that the tools exist currently to do this.


A vulnerability has been found in the ROM of the Intel Converged Security and Management Engine (CSME).

A reference to the specific vulnerability would be nice. CVE? Conference presentation? El Reg? Sketchy blogspam? Maybe I've been living under a rock, but it would still help the reader out.


The article mentions CVE-2019-0090 and Intel acknowledges the author (Mark Ermolov of Positive Technologies) in their advisory. You haven't been living under a rock, this is a primary source and the first public suggestion of the grave severity of the vulnerability.

"CVE-2019-0090 was initially found externally by an Intel partner and subsequently reported by Positive Technologies researchers. Intel would like to thank Mark Ermolov, Dmitry Sklyarov and Maxim Goryachy from Positive Technologies for reporting this issue."

https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0090

https://www.intel.com/content/www/us/en/security-center/advi...



