I'm guessing they want to insert themselves into the entertainment content distribution systems and collect some royalties.
AFAIK the initial request for SGX came from the paranoid types at HFT firms who were worried malware would steal their algorithms or something.
The signing mechanism is important because SGX is useless unless you have a trusted way of getting your code into the enclave. The only reasonable way out of this is to have a certificate authority and that's what they're doing. If you think there's a different architecture for SGX that provides the same security guarantees, I'd love to hear about it.
Intel has much simpler mechanisms than SGX for enforcing DRM. They've been baking a ton of keys into their processors for many generations now; there was a proposal to stick a few keys in there that belonged to entities like Netflix and do all the decryption in hardware.
If Netflix is going to send us encrypted streams, this is probably the best architecture to handle it, because you can set up a high-performance, power-efficient hardware streaming pipeline for decryption and decoding.
> The signing mechanism is important because SGX is useless unless you have a trusted way of getting your code into the enclave. The only reasonable way out of this is to have a certificate authority and that's what they're doing. If you think there's a different architecture for SGX that provides the same security guarantees, I'd love to hear about it.
I don't think so. Unless Intel stuck a back door in their Intel-signed-only features, the Intel key serves only to restrict which enclaves are permitted to run. The whole system is designed such that a malicious enclave can neither compromise another enclave nor can it obtain the symmetric key assigned to another enclave.
The remote attestation mechanism may depend on Intel keys, but that's about it.
IOW, the Intel key appears to be almost entirely a business thing.
Anyway, don't get too excited yet. AFAIK Intel hasn't released any signed blob whatsoever (except maybe to MS so they can test their code), so the policy simply doesn't exist right now.
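To make the attestation point above concrete, here's a minimal conceptual sketch in Python. All names are hypothetical and the HMAC stands in for the real quoting scheme: the idea is just that the CPU produces a signed report over the enclave's measurement using a hardware-provisioned key, and the remote party checks that report before trusting the enclave. This is the one place where Intel's keys genuinely matter.

```python
import hashlib
import hmac

# Hypothetical hardware-provisioned attestation key. In real SGX this
# never leaves the CPU, and verification goes through Intel's
# attestation service rather than sharing the key with the verifier.
ATTESTATION_KEY = b"hardware-provisioned-attestation-key"

def make_quote(measurement: bytes, user_data: bytes):
    """CPU side: bind caller-supplied data to the enclave measurement
    and authenticate the whole report with the hardware key."""
    report = measurement + user_data
    mac = hmac.new(ATTESTATION_KEY, report, hashlib.sha256).digest()
    return report, mac

def verify_quote(report: bytes, mac: bytes) -> bool:
    """Verifier side: accept the report only if the hardware key
    vouches for it (in practice, Intel's service does this check)."""
    expected = hmac.new(ATTESTATION_KEY, report, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)

measurement = hashlib.sha256(b"enclave code + data pages").digest()
report, mac = make_quote(measurement, b"session-public-key")
assert verify_quote(report, mac)            # genuine report is accepted
assert not verify_quote(report + b"x", mac)  # tampered report is rejected
```

The `user_data` slot is what lets an enclave bind, say, a freshly generated session key into the quote, so the remote party knows it is talking to that specific, unmodified enclave.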
Suppose you have a program that uses SGX. Perhaps this program requires some public keys, which it uses as a root of trust. Presumably you've baked these public keys into your program; you load this binary, code plus public keys, into the enclave and execute it.
Now, how do you know that malware didn't modify the public key sitting in your binary before your code was loaded into the enclave? You need hardware to ensure that it loads only your code and not the modified code. This is where Intel's signing process comes in. There isn't really any way around it.
Not necessarily. The enclave's symmetric keys are bound to its identity, which is a hash of the memory and permission bits before the enclave starts to run. If the malware modifies the public key in the binary before it is loaded into the enclave, the enclave's identity (and its keys) will be completely different.
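A toy sketch of that identity-binding argument, with made-up key material and an HMAC standing in for the hardware key derivation: the enclave's measurement covers every loaded page and its permission bits, and the sealing key is derived from that measurement, so flipping even one byte of the baked-in public key yields a completely different key.

```python
import hashlib
import hmac

# Hypothetical per-device root key, burned into the CPU at manufacture.
DEVICE_ROOT_KEY = b"example-device-root-key"

def measure(enclave_pages) -> bytes:
    """MRENCLAVE-style identity: hash of every page's contents and
    permission bits, in load order."""
    h = hashlib.sha256()
    for content, perms in enclave_pages:
        h.update(content)
        h.update(perms.encode())
    return h.digest()

def sealing_key(measurement: bytes) -> bytes:
    """Per-enclave key derived from the device key and the measurement;
    no enclave can obtain another enclave's key."""
    return hmac.new(DEVICE_ROOT_KEY, measurement, hashlib.sha256).digest()

code = b"\x55\x48\x89\xe5"                       # enclave code page
pubkey = b"-----BEGIN PUBLIC KEY----- trusted"   # baked-in root of trust

honest = [(code, "r-x"), (pubkey, "r--")]
tampered = [(code, "r-x"), (b"attacker's public key", "r--")]

k_honest = sealing_key(measure(honest))
k_tampered = sealing_key(measure(tampered))
assert k_honest != k_tampered  # swapping the key changes the identity
```

So the malware's modified binary does run, but as a *different* enclave: it can't unseal anything the honest enclave sealed, and its measurement won't pass remote attestation.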