Arm Introduces Its Confidential Compute Architecture (wikichip.org)
150 points by Symmetry on June 23, 2021 | 67 comments



I really like the idea of secure enclaves. I'd like to use them. The problem I have is that it's unclear to me:

a) How the hell I'm supposed to do so? It seems fairly arcane and in need of some higher level abstractions. fortanix[0] seems like it could be good here?

b) What the implications are. What's it like to maintain an enclave? What do I lose in terms of portability, debug-ability, etc?

It reminds me of GPU to some extent - it's this thing in my computer that's super cool and every time I think about using it it looks like a pain in the ass.


My team has built Conclave, which might be interesting: https://docs.conclave.net. The idea is 1) make it possible to write enclaves in high-level languages (we've started with JVM languages), 2) make the remote attestation process as seamless as possible.

The first part is what most people fixate on when they first look at Conclave. But an equally important thing is actually the second part - remote attestation.

The thing a lot of people seem to miss is that for most non-mobile-phone use-cases, running code inside an enclave is only really valuable if there is a _user_ somewhere who needs to interact with it and who needs to be able to reason about what will happen to their information when they send it to the enclave.

So it's not enough to write an enclave, you also have to "wire" it to the users, who will typically be different people/organisations from the organisation that is hosting the enclave. And there needs to be an intuitive way for them to encode their preferences - e.g. "I will only connect to an enclave that is running this specific code (that I have audited)" or "I will only connect to enclaves that have been signed by three of the following five firms whom I trust to have verified the enclave's behaviour"... that sort of thing.
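As a rough illustration of what encoding those preferences might look like (a hypothetical sketch, not Conclave's actual API - the class and field names are made up), the client-side check is basically just a policy over the attestation report:

```python
# Hypothetical sketch of a client-side enclave-acceptance policy.
# Field names and structure are illustrative, not Conclave's real API.
from dataclasses import dataclass, field

@dataclass
class AttestationReport:
    measurement: str                            # hash of the enclave code, from the RA quote
    signers: set = field(default_factory=set)   # firms that have vouched for this build

@dataclass
class ClientPolicy:
    audited_measurements: set           # "code that I have audited myself"
    trusted_signers: set                # "firms I trust to have verified the enclave"
    required_signer_count: int = 3      # e.g. three of five firms must have signed

    def accepts(self, report: AttestationReport) -> bool:
        if report.measurement in self.audited_measurements:
            return True
        vouches = report.signers & self.trusted_signers
        return len(vouches) >= self.required_signer_count

policy = ClientPolicy(
    audited_measurements={"sha256:aaaa"},
    trusted_signers={"firmA", "firmB", "firmC", "firmD", "firmE"},
)
report = AttestationReport("sha256:bbbb", {"firmA", "firmC", "firmE"})
assert policy.accepts(report)   # three of the five trusted firms signed it
```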


Is a user necessary? I feel like one thing I'd use an enclave for is as a signing oracle for service to service communications.

Like I have service A and service B. A is going to talk to B, and has some secret that identifies it (maybe a private key for mTLS). I'd like for A to be able to talk to B without having access to that secret - so it would pass a message into the enclave, get a signed message out of it, and then proceed as normal.
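Roughly this shape, as a sketch (the Ed25519 key from the 'cryptography' package here just stands in for a key that would live inside the enclave):

```python
# Sketch of the signing-oracle idea (illustrative only): the enclave holds the
# identity key; service A passes messages in and gets signatures out, so
# compromising A's host never exposes the key itself.
from cryptography.hazmat.primitives.asymmetric import ed25519

class SigningEnclave:
    """Stand-in for code running inside the enclave boundary."""
    def __init__(self):
        self._key = ed25519.Ed25519PrivateKey.generate()  # never exported

    def public_key(self):
        return self._key.public_key()

    def sign(self, message: bytes) -> bytes:
        return self._key.sign(message)

enclave = SigningEnclave()
msg = b"hello B, this is A"
sig = enclave.sign(msg)                # A passes the message in...
enclave.public_key().verify(sig, msg)  # ...and B verifies with the public key
                                       # (raises InvalidSignature on failure)
```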

Would that not be reasonable? Or I guess maybe I'd want to attest that the signing service is what I expect?


> Or I guess maybe I'd want to attest that the signing service is what I expect?

Exactly. If you have a threat model where you want to limit access to your secrets to a specific code path, you need to attest that only specific, signed code is running within the enclave that can access the secrets. You might only need this to satisfy your own curiosity, but in practice it is probably something you need to prove to your internal security team, a third-party auditor, or even directly to a customer.


Got it, ok. Yeah, I think that's reasonable, though I do also think that it's "extra". I would consider moving the key to the enclave without attestation to be a win, though I very much like the idea of having that level of authenticity as well.

Thanks for clearing that up.


I may not be fully understanding your scenario but it sounds like A needs to prove to B that it knows a secret but doesn't want to actually reveal/send the secret to B?

And the idea, therefore, is that A sends the secret to an enclave, which inspects the secret and, if correct, signs a message to say "I, the enclave, have verified that A does indeed know the secret". (Apologies if I've oversimplified or got this wrong).

But assuming the above is roughly correct then, without remote attestation, you have a problem, and it comes down to the question of who's running the verification code, I think.

If A is running the checker, why should B believe what it says? If A is running the code, they can just change it so that it signs the statement irrespective of whether it's true.

But if B is running the checker, then A will have just sent their secret to a service run by B, violating the requirement that A doesn't send the secret to B!

You could ask a third party to run it of course. But if you don't want to introduce that third party then this is where remote attestation comes in:

If A is running the checker in an enclave then RA allows B to verify that the "A knows the secret" message really did come from a codepath that has actually done the right thing. In this scenario, B is the "user" of the enclave from the perspective of reliance on Remote Attestation. (Aside: I know it's weird to think of an actor that doesn't interact with a system to be a user of it, so I'm probably using poor terminology when I say 'user'... it's more that, in this scenario, B is _relying_ on properties of the enclave such as its attestation)

And if B is running the checker then RA allows A to verify that it won't just turn around and reveal the secret to B.


Almost but not quite. A needs to prove to B that it's allowed to talk to B. So there's a signing service in an Enclave that A can access. It passes a message in, the enclave signs the message, and A sends the message to B.

The secret never leaves the enclave.

The goal here is that if an attacker can execute code within A's operating system, they cannot exfiltrate the secret. They might be able to get the enclave to sign on their behalf, but that's significantly better than an exposed secret - simply removing the attacker from the box would be sufficient to remediate, vs having to rotate the secret.

To mitigate impersonation, I suppose one could do a number of things involving a second key, but I think this simple version demonstrates the value of having a signing oracle. This is actually not an atypical approach, just not using SGX - I know companies that keep their signing keys in separate processes, mutually seccomp'd so they can only pipe messages to each other, for signing apps before publishing. But in the SGX case you have a much stronger guarantee than just seccomp.
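That pattern, minus the seccomp filters, looks something like this (just a sketch of the process-isolation shape, with an HMAC standing in for the real signing key):

```python
# Sketch of the "separate signing process" pattern, minus seccomp: the key
# lives only in a child process, and the application talks to it over a pipe,
# asking for signatures rather than for the key.
import hashlib
import hmac
import os
from multiprocessing import Pipe, Process

def signer(conn):
    key = os.urandom(32)          # generated and kept only in this process
    while True:
        msg = conn.recv()
        if msg is None:
            break
        conn.send(hmac.new(key, msg, hashlib.sha256).digest())

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=signer, args=(child_end,))
    p.start()

    parent_end.send(b"message to sign")   # the app only ever sees signatures,
    tag = parent_end.recv()               # never the key material itself
    print(tag.hex())

    parent_end.send(None)                 # tell the signer to shut down
    p.join()
```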

So to me the only problem that attestation solves here is if the attacker is somehow in the SGX enclave, but the much more likely scenario is that they aren't, and that they just ask the oracle to sign on their behalf - because B can't verify that A is the one asking to sign. Given that there is a single entity deploying both the software in the enclave and the service that interacts with it, that seems to be the case to me at least - like, in this scenario A, B, and the software in the enclave are all deployed by me, barring malicious action to interfere with that - but again, the most likely scenario is the attacker just owned the box and has a regular user on there.

But... also, A can prevent that kind of impersonation-by-tricking-the-oracle by having another keypair shared between it and the enclave, and then it becomes a matter of protecting that key in memory from an attacker who can almost certainly scrape your memory - a hard thing to do.

So you end up with:

System 0, A: Key 0
System 0, Enclave: Key 1, Key 0
System 1, B: Key 1

Key 0 is used to 'auth' A to Enclave. Key 1 is used to auth A to B (via enclave oracle).
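A conceptual sketch of that flow (plain HMACs standing in for whatever the enclave would actually use, and the "enclave" is just a function here):

```python
# Conceptual sketch of the two-key scheme above (illustrative, not SGX code):
# Key 0 is shared by A and the enclave, Key 1 by the enclave and B. A can only
# obtain a Key-1 signature by first proving knowledge of Key 0.
import hashlib
import hmac
import os

key0 = os.urandom(32)   # known to A and the enclave
key1 = os.urandom(32)   # known to the enclave and B, never to A

def mac(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

# --- inside the enclave ---
def enclave_sign(message: bytes, a_tag: bytes) -> bytes:
    if not hmac.compare_digest(a_tag, mac(key0, message)):
        raise PermissionError("caller could not prove knowledge of Key 0")
    return mac(key1, message)

# --- on system 0 (A) ---
message = b"A would like to talk to B"
tag_for_b = enclave_sign(message, mac(key0, message))

# --- on system 1 (B) ---
assert hmac.compare_digest(tag_for_b, mac(key1, message))   # B accepts A
```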

This is just my perspective on it.


That's helpful - thanks.

We don't presently support it in Conclave but SGX (which we use) does, I believe, support the idea of, in effect, packaging up a secret in a program and then encrypting it so it can only run in an enclave and hence keep the secret safe even when running on malicious hosts. I'd need to check but I suspect you're right that there are situations there where RA isn't required.

But to take your specific example (and maybe I'm still misunderstanding), does your scheme actually work in practice? Let's assume a simple model where the enclave runs on A's machine, so we can assume that requests to sign something come from A. This avoids us having to worry about A having to authenticate to the enclave, which just leads to a circularity (how does A protect the key it uses for authentication, etc.).

And now we introduce the attacker, as in your scenario. As you say, eliminating the attacker removes their ongoing ability to interact with the enclave, since it expects to communicate only with locally running processes.

Except... if the attacker is on your box, they could simply take a copy of the enclave! And simply run it on their own machine. It's possible SGX contains the ability to lock an enclave to a specific CPU, in which case your scheme seems like it should work (to my untutored eye... I lead the Conclave team but am by no means an expert)... but I'm not actually sure it works that way. I'll look into it.


Just checked... yeah... you can arrange so that an encrypted enclave can only run on a specific machine through careful use of SGX primitives. So I think your idea would probably work.


> Except... if the attacker is on your box, they could simply take a copy of the enclave!

Yeah, this is the part I'm assuming isn't possible, perhaps out of ignorance. I believe that, at least in SGX's case, preventing it is possible because SGX exposes per-CPU keys and the ability to derive secrets from those keys. So if you moved the enclave (I actually have no idea how moving an enclave works either, fwiw) it would no longer be valid.
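Conceptually it's something like this (a big simplification, not real SGX sealing - the KDF and secrets are made up):

```python
# Conceptual sketch of why a "moved" enclave loses access to its sealed data:
# the sealing key is derived from a per-CPU root secret plus the enclave's
# measurement, so the same enclave on different hardware derives a different
# key. This is a simplification, not real SGX code.
import hashlib
import hmac
import os

def sealing_key(cpu_root_secret: bytes, enclave_measurement: bytes) -> bytes:
    # stand-in KDF; real hardware uses its own key-derivation machinery
    return hmac.new(cpu_root_secret, enclave_measurement, hashlib.sha256).digest()

measurement = hashlib.sha256(b"enclave code image").digest()

cpu_a = os.urandom(32)   # fused per-CPU secret on the original machine
cpu_b = os.urandom(32)   # a different CPU, e.g. the attacker's machine

assert sealing_key(cpu_a, measurement) != sealing_key(cpu_b, measurement)
# Data sealed under cpu_a's key is just ciphertext to an enclave copied to cpu_b.
```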

But yeah, this all kinda goes to "I have no idea what I'm doing with enclaves" lol, this is just the use case I have - keeping a secret stored in one so that an attacker can not exfiltrate it.


I use the T2 SE on my Mac to generate/store ssh keys. The private key never leaves the enclave. That’s a pretty neat, functional example.


Do you know of a good tutorial or documentation on how to do that?



> The private key never leaves the enclave.

..

> While Secretive uses the Secure Enclave for key storage, it still relies on Keychain APIs to access them. Keychain restricts reads of keys to the app (and specifically, the bundle ID) that created them.

Doesn't that mean the keys leave the enclave?


The Keychain API is how you interface with the enclave key functions. It's just saying there are separate permissions for using the generated key.

https://developer.apple.com/documentation/security/certifica...

"When you store a private key in the Secure Enclave, you never actually handle the key, making it difficult for the key to become compromised. Instead, you instruct the Secure Enclave to create the key, securely store it, and perform operations with it. You receive only the output of these operations, such as encrypted data or a cryptographic signature verification outcome."


I think they meant to say "use" not "read".


Awesome, thank you.


I'm not sure if the push for enclaves on phones is really meant for _you_ per se. IIRC one of the more talked-about use cases is for DRM'd content (e.g. software that you can't crack, banking apps with more resistance to malware, etc).


While it is often useful to understand the motivating goal behind the design of a new system like secure enclaves and what the practitioners' primary concerns are, in the end it doesn't actually matter what the designers intended. Rather, what matters is what it actually does and provides. As long as the new functionality can be used for other useful purposes that are still within the design goals and parameters, it is useful. A tool doesn't care what its creator intended - what matters is whether the tool is useful for whatever purposes it can be put to.

Thus I find most analysis and comments like this, based only on motivation and incentives, very lacking.

Add to that, to be more specific to this particular topic: secure enclaves are designed not only for DRM but for many other critical applications (that are actually far more important than DRM and are/were the key motivating use cases). The enclave, or the general concept, is the basis of the security guarantee for the iPhone's fingerprint and Face ID, and for the confidentiality of the key material in various end-to-end encryption schemes, which allows things like the-phone-as-a-security-key.


The intention is quite important for estimating which workflows would be easy and what a response could be, given an atypical use case. (E.g. if this has a backdoor, when would it be used?)


Given the audience of this site....who do you think writes those apps?


True, but I'm assuming that this is all moving towards enclaves being more standard and cross platform.


There are a lot of projects working in this area to make enclaves easier to manage and deploy, e.g. Veracruz [0], which I work on.

[0]: https://github.com/veracruz-project/veracruz


> How the hell I'm supposed to do so?

You might want to check out Parsec (also from Arm).

https://developer.arm.com/solutions/infrastructure/developer...


This is probably just going to be used so corporations can deploy software onto your device without worrying about you tampering with it (stuff like DRM.)


I see that there's a lot of mindshare on privacy and "computing trust" these days. There are several approaches that I noticed. One is to not send data to untrusted places (e.g. a server), another is to have a trust chain & hardware enclave (either through SGX / trustzone), and lastly to encrypt data (e2e encryption for transport-only, or homomorphic encryption (slow) / distributed encryption) for doing operations on the data. Currently, verifiable OSS on the server isn't in vogue (in practice or literature), but could be another approach.


It's a societal dead end, so there's no technical excitement. The same technology that would let you verify a cloud server is running code you specify, would also be used by websites to demand that your personal computing environment conforms to their whims.

Imagine websites that required proprietary Google Chrome on Win/OSX on bare metal - no VM, no extensions, no third party clients, no automation, etc. Imagine mandatory ads, unsaveable images, client-side form verification, completely unaccountable code, etc.

Protocols are the proper manner of allowing unrelated parties to transact at arm's length, and technologies such as remote attestation would completely destroy that.


> Imagine websites that required proprietary Google Chrome on Win/OSX on bare metal - no VM, no extensions, no third party clients, no automation, etc.

No need to imagine, we already have that: it's called fingerprinting. Mainly via WebGL.


There is a major difference between something being difficult/annoying, and outright impossible.


Memory tagging looks interesting to me.

This paper from LowRISC outlines the possibilities pretty well: https://riscv.org/wp-content/uploads/2017/05/Wed0930riscv201...

Referencing that not to plug RISC-V, but solely because it's a good explanation. I would guess Intel, AMD, and ARM have plans or some existing work going on.


Memory tagging has already been used successfully for years on Solaris SPARC:

https://docs.oracle.com/cd/E37838_01/html/E61059/gqajs.html

As for ARM, Android 11 and later versions are going to support it:

https://security.googleblog.com/2019/08/adopting-arm-memory-...

https://source.android.com/devices/tech/debug/tagged-pointer...

Unfortunately Intel's MPX was a failure, for various reasons; however, the rest of the CPU vendors seem to be embracing the C machine model as the only way left to fix the language.


Verifiable OSS doesn’t stop a malicious operator from just looking at your bits in memory


> I see that there's a lot of mindshare on privacy and "computing trust" these days.

This is the opposite of respecting privacy (unless you mean e.g. the privacy of DRM code), all of this stuff is "trusted" as in TCG.


https://signal.org/blog/private-contact-discovery/ seems like an application of SGX where net privacy is increased.


Or, that looks like a single point of failure that would make a good target for the government to issue a National Security Letter to Intel for.


It’s better than before, where all that stuff was available in plaintext


Signal didn't back up contacts and settings to their servers until they implemented "Secure Value Recovery", which relies on SGX to rate-limit guesses of a weak PIN. If SGX is broken and Signal's database falls into the wrong hands, encryption keys could be brute-forced pretty quickly (most people have 4-digit PINs).
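Back-of-the-envelope, ignoring the real (much slower) KDF - the point is just that the keyspace of a 4-digit PIN is tiny once nothing rate-limits the guesses:

```python
# Illustrative only: a 4-digit PIN has 10,000 possibilities, so without the
# SGX-enforced guess limit an offline attacker just tries them all. The real
# scheme uses a much slower KDF, but that only adds a constant factor.
import hashlib

def derive_key(pin: str, salt: bytes) -> bytes:
    return hashlib.sha256(salt + pin.encode()).digest()   # stand-in KDF

salt = b"per-user salt from the stolen database"
target = derive_key("4821", salt)                          # what the attacker has

recovered = next(f"{p:04d}" for p in range(10_000)
                 if derive_key(f"{p:04d}", salt) == target)
assert recovered == "4821"    # all 10,000 candidates tried almost instantly
```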


It's not perfect, but the more people in on a secret, the harder it is to keep. It's much harder to have to coerce both Intel and Signal (even if you hack Intel, you would still have to modify Signal's servers, from what I understand) than just Signal.


NSLs are a form of subpoena. The government can't order you to include a backdoor. Granted, they have other leverage they could use, like export licenses or purchase contracts.


> The government can't order you to include a backdoor.

Right, but it can order you to turn over your private keys, like it did to Lavabit[0]. Intel might instead voluntarily hand over a signed backdoored firmware, targeted at a specific set of CPUs, rather than the keys themselves, which could do more damage if the government lost control of them.

Similarly I don't suppose it would be much harder for the NSA to get access to the physical servers that Signal is running its software on. Perhaps you could say that merely having the access credentials wouldn't be enough information for the NSA to take control of Signal's servers without them noticing, but any cloud provider that wants to do business with the government surely has special backdoors that make this easy, when presented with an NSL.

[0] https://en.wikipedia.org/wiki/Lavabit


What would that accomplish?


Yep. Privacy is a heavily overloaded term; I meant it as a more industry-style "data access management". Some comments in the other thread do talk about non-DRM uses.


SemiAccurate has a more detailed explanation.

https://www.semiaccurate.com/2021/06/23/arm-talks-security-c...


Oh if Charlie is right about CCA enclaves sharing the same key AND there's no launch control to stop other enclaves from spawning, that's a devastating oversight indeed...


Seems like this and similar ideas are a "poor man's fully homomorphic CPU." The idea is that the CPU has some kind of embedded key that you can use to send data into a trust enclave, perform secret compute on it, and export that data in encrypted form back out such that nobody should be able to see what happened... assuming you trust the vendor and your adversary does not have extreme NSA-level hardware forensic capabilities and physical access to the hardware.

Honestly it's not bad until or unless we get actually fast FHE chips. I think we're a ways away from that.


Although it requires more trust, this also has more functionality than FHE, in that it allows you to deliberately release selected information out of the enclave. For example, you can check a signature. With FHE, the boolean indicating that the signature was valid can only be revealed to someone with the key of the FHE scheme. FHE is not a fully generic way of using third-party hardware in a trustworthy way.


It's not homomorphic in any way.

HE currently looks like a dead end anyway. A >10^6× slowdown is going to be hard to overcome.


Is this going to be perverted as a way to grow DRM on the device and have it owned by the vendor instead of the user?


No.

In order for this to be "perverted as a way to grow DRM", it would need some other main function.

The main purpose of this is to allow vendors, rather than users, to control personal computers.

It will be great for finally allowing truly unskippable ads, ads that track your eyeballs to make sure you are looking, text that cannot be copied, etc.


Sounds like the way things are going, open systems are going to be effectively illegal soon. Tencent/Apple/Google (TAG) will arrange that if you're not on an approved 'secure' device you are 'suspect', and slowly find ways to integrate this in to governance "for social benefit". At that point, if you're not 100% on the system (read: surveilled 24x7x365 - voice assistant anyone? - with a credit card/bank account, state-ID and consumer profile associated) you'll be penalized: your views will be impossible to share (it's almost that way already), and you'll be incrementally excluded or dissuaded from travel and transport, financial services, educational opportunities, events, job markets, etc.

To resist this situation a device manufacturer could emerge with a privacy-first experience and solutions to proxy just the right amount of legitimacy to the outside systems (PSTN, payment, etc.) with an international legal structure optimised for privacy, akin to what wealthy people do already. A sort of anti walled-garden shared infrastructure. Technically I could see an offering providing mobile # to VOIP forwarding, data-only phone with mesh networking > wifi connectivity preference, MAC shuffling, darknet/VPN mixing, payment proxy, global snailmail proxy network, curated (well tested for pop sites) lightweight browser with reduced attack surface and a mobile network only as a last-resort connectivity paradigm (with dynamic software SIM and network operator proxying). Open source of course. Issue being, infrastructure would be expensive to offer and if it were ever popular there'd be political pushback. I guess privacy will increasingly be the exclusive domain of the rich.

We've already lost.


As long as FPGAs are available to consumers, it will be possible to have a computer running on open hardware. On the front page right now is this project: https://github.com/stnolting/neorv32

It's possible right now to have an FPGA running an open-source RISC-V implementation. The software isn't there yet, but I expect this to change as more RISC-V boards get on the market. God knows how useful this 'bootleg' computer will be, but it's a foundation to build on, at least. There's already work on porting Debian to RISC-V.

https://archive.fosdem.org/2019/schedule/event/riscvdebian/a...

And with a free, open ISA, it's not impossible for smaller, independent manufacturers to crop up and produce their own chips, especially given that the US is now investing in chip fabs (if there is demand; if nobody cares that they don't own their machines, we were screwed from the start).

I see a lot more hardware content on Youtube nowadays. Channels like Ben Eater are educating people on how computers work, and more accessible FPGAs may help create a more flexible environment for hardware.


This was a pretty interesting read. Thanks a lot for your considered reply. It is so sad that licensing issues prohibit progress even at the academic level. We should have a public-interest, licensing-free zone around educational and academic institutions IMHO; this would align human interest with actual outcomes (a free market for core ideas) instead of only respecting commercial interest.


Outlook and Teams on Android "copy" an error message if you try to copy text, and force me to install Edge to open links.


As far as I can tell, this is just using EL3 as another hypervisor level with a new, lightweight memory assignment/protection scheme, and (apparently single-key?) memory encryption.

In the end, you still have to trust the root domain, which means you still need to trust system firmware and the secure boot chain. I don't see how this is any different from the existing TrustZone stuff other than having more flexible memory management.

In other words, it just makes it viable to run random third-party code in "secure" mode, but it doesn't do much to increase trust. You still need to trust the device manufacturer and boot chain. You could achieve a similar level of trust on any system with a regular hypervisor as part of the device firmware, and which has standard memory encryption (which we should all be using for everything by now; it's a travesty we aren't).

So e.g. if you're thinking of using this to run VMs in the cloud without having to trust the provider, you have to trust that the device manufacturer has implemented all this properly in the root monitor code, and that they have working secure boot, and that the person who physically owns the machine can't break this security.

Personally, I think this is a dead-end model. Real secure compute in the cloud isn't going to happen. If someone else owns the machine, they own the machine. For non-general-compute use cases, like securing portable devices against theft or seizure, or DRM, the solution is to put the secure stuff in a separate CPU. There's a reason Apple is using a dedicated Secure Enclave Processor to implement all the critical device security stuff, and why every game console DRM scheme has a security coprocessor handling the crypto.


> In the end, you still have to trust the root domain, which means you still need to trust system firmware and the secure boot chain.

You need to trust the hardware, and you also need the firmware to be present. However, there is no need to trust the firmware, because the trust is provided by the hardware and not by the firmware.

And there are certificates that allow you to check whether the components are trustworthy. This means that a malicious user can affect the availability of the realm, but not its integrity or the confidentiality of your data (assuming that the hardware is trustworthy).

That said, there is not much information in the article, so I may be wrong - this is my interpretation.


> However, there is no need to trust the firmware, because the trust is provided by hardware and not by firmware.

No, the trust is provided by the firmware. That's what runs in the root domain. Monitor firmware. It's right there in the article. The Root domain is the ultimate trust level.

> And there are certificates that allow to check if the components are trustworthy.

Unless there are very specific hardware-backed attestation features baked into the chips - none of which has been mentioned, and which also relies on the inability to compromise the system through hardware mechanisms, and on a bug-free monitor, which is an entire difficult-to-solve problem in itself - then those certificates only allow you to check that the certificate owner has, at one point, provided key material to be used by the system, not that the software running on it continues to be what was intended.

You break into the monitor, you steal the attestation keys, and then you get to keep "proving" your "trustworthiness" to everyone else. This is how on-line DRM is broken every time.


Where is the memory encryption done? The L2? The DSU?


Intel and AMD do it in the memory controller. The caches, by virtue of being entirely on-die, are assumed to be inaccessible.

It is unclear what AMD intends to do here for their TSV stacked mega-cache thing. Perhaps they'll declare the TSVs as not practically snoopable, similar to how on-die metal layer wires are treated now...


My concern is less about the die boundary and more about vendor boundaries. In an Arm-based SoC with potentially a bunch of different IP all sharing the coherent fabric, do you want to bet that all that IP will correctly implement CCA? Probably not.


A legit concern. Basically everything that connects directly to the cache controllers has to get CCA right. Coherent GPUs are the big, complicated one.

Intel had an SGX bug where the GPU could access the last 16 bytes of every 64-byte cache line. If you need the GPU enabled, you have no choice but to design your enclave to only use the first 48 bytes of each cache line. Fortunately, if you don't need the GPU, whether or not the GPU is enabled (among other things) is measured and included in the attestation quote, so clients can simply choose to refuse to talk to servers with the GPU enabled...
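A toy illustration of that layout trick (written in Python purely to show the shape - a real enclave would do this in its allocator/struct layout):

```python
# Toy illustration of the mitigation described above: place payload only in
# the first 48 bytes of each 64-byte cache line, leaving the last 16 bytes of
# every line unused so the buggy path never sees secrets.
LINE, USED = 64, 48

def pack(data: bytes) -> bytes:
    buf = bytearray()
    for i in range(0, len(data), USED):
        chunk = data[i:i + USED]
        buf += chunk.ljust(USED, b"\0")   # first 48 bytes carry data
        buf += b"\0" * (LINE - USED)      # last 16 bytes of the line stay empty
    return bytes(buf)

def unpack(buf: bytes, length: int) -> bytes:
    out = b"".join(buf[i:i + USED] for i in range(0, len(buf), LINE))
    return out[:length]

secret = b"A" * 100
laid_out = pack(secret)
assert unpack(laid_out, len(secret)) == secret
assert all(laid_out[i + USED:i + LINE] == b"\0" * 16
           for i in range(0, len(laid_out), LINE))
```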


> ARMv9 will power the next 300 billion ARM-based devices

Stuff like this makes me nauseous. This is not a good thing. Stop it.


Why is it not good?


Maybe it is the unsustainable environmental impacts of never-ending new gadgets?

https://www.hpe.com/us/en/insights/articles/top-6-environmen...


It means roughly 38 new devices per living human being.

That indeed can't be good.


I don't think it's as bad as that statement implies. It's not all smartphones or laptops or TVs or cars. A large number of those devices would be things like smartcards, LED light bulbs, USB cables, ID tags for shipping containers: tiny, low-to-no-power devices, if what you are concerned about is large amounts of electronic waste.


True if you look at processors in general. I'm not sure it's true of Armv9 processors, though. I think they're a bit overpowered for an LED light bulb, a USB cable, or an ID tag.


Not necessarily. A new car can have 100+ processors in it (though I don't know how many of them would be Armv9). One CPU != one device.



