This technology is simple, relatively easy to work with, and only secure against the simplest attacks. The two major problems are that it lacks any sort of authentication and that it doesn't even attempt to protect CPU state. I'll quote myself [1]:
Someone will break it by replaying old data through the VM, either to confuse control flow or to use some part of the VM code as an oracle with which to attack another part.
Someone else will break it by installing a #UD / #PF handler and using the resulting exceptions as an oracle.
A third clever person will break it by carefully constructing a scenario in which randomizing 16 bytes of data has a high probability of letting them pwn your system. (For example, what if the secured VM creates an RSA key and you can carefully interrupt it right after generating p and q. Replace 16 bytes from the middle of both p and q (32 bytes total) with random garbage. With reasonably high probability, the resulting p and q will no longer be prime.)
Depending on how strong [AMD's] ASID protection is, a fourth clever person will break it by replacing a bunch of the VM out from under the target while leaving the sensitive data in place and then will use some existing exploit or design issue to gain code execution in the modified VM.
Also, I really hope that [AMD's] tweakable cipher mode is at least CCA2 secure, because attackers can absolutely hit it with adaptive chosen ciphertext attacks. (Actually, attackers can alternate between adaptive chosen ciphertext and adaptive chosen plaintext.)
And did the SEV implementation remember to encrypt the guest register state? Because, if not, everything of importance will leak out through the VMCB and/or GPRs.
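The p/q-corruption attack described in the quote is easy to sanity-check numerically. The sketch below (my own toy illustration, not SEV-specific code; the Miller–Rabin helper and the 512-bit size are my choices) generates a prime, repeatedly overwrites 16 bytes in its middle with random garbage, and counts how often the result is still prime. By the prime number theorem the density of primes near 2^512 is roughly 1/355, so almost every corrupted copy comes out composite:

```python
import os
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    small = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    for p in small:
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # definitely composite
    return True

def random_prime(bits):
    """Find a random prime with the top bit set."""
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(n):
            return n

def corrupt_middle_16_bytes(n, bits):
    """Model a 16-byte ciphertext block decrypting to garbage."""
    buf = bytearray(n.to_bytes(bits // 8, "big"))
    mid = len(buf) // 2 - 8
    buf[mid:mid + 16] = os.urandom(16)
    return int.from_bytes(buf, "big")

BITS = 512
p = random_prime(BITS)
still_prime = sum(
    is_probable_prime(corrupt_middle_16_bytes(p, BITS))
    for _ in range(50)
)
print(f"{still_prime}/50 corrupted copies remained prime")
```

In other words, an attacker who can randomize one ciphertext block inside p and one inside q almost certainly turns the key into a product of composites, which is exactly the kind of fault that classic RSA fault attacks then exploit.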
Does this, like SGX, require signing by AMD itself?
This requirement is a major letdown for SGX adoption, making it essentially useless for anyone outside niche markets trying to protect IP on cloud services.
If a master key COULD be loaded by the OS early at boot time (and could not be replaced until CPU reset), it would be incredibly useful for creating software-based TPM services that provide trusted isolation where needed.
It seems as if Intel/AMD are doing this 'just because die space is cheap, so why not try "IP-protection-as-a-service"' rather than building a truly generic solution.
> Does this, like SGX, require signing by AMD itself?
I don't know the answer to this question, but AMD does tend to be more "open" than their competitors (look at FreeSync vs. G-Sync). So maybe there is hope here.
I think Intel backed away from the documentation that implied all signed enclaves had to go through them. I think people can attest their own SGX enclaves.
Unless the docs changed from last time I read them, those MSRs aren't one shot.
Also, the fact that anyone at Intel calls the signing system a "root of trust" makes me think that Intel is deluding itself. It's a root of licensing authority, not a root of trust in the system. You could set those MSRs to a public key for which everyone knows the private key and everything would work just fine.
Nice tech, but I'm not sure how quickly it will be adopted. Intel's SGX doesn't seem to have much adoption (if any), even though it does similar things: it enables application/guest isolation and memory encryption, and on paper it seems even more robust (Intel claims complete security even when the OS/VMM/drivers and BIOS/firmware are compromised).
https://software.intel.com/sites/default/files/332680-002.pd...
But given all the side-channel attacks against VMMs these days, it's pretty important that both major CPU vendors have hardware-level countermeasures against these attacks.
Unfortunately, as usual, what Intel says and what actually happens are two quite different things:
> Shortly after we learned about Intel’s Software Guard Extensions (SGX) initiative, we set out to study it in the hope of finding a practical solution to its vulnerability to cache timing attacks. After reading the official SGX manuals, we were left with more questions than when we started. The SGX patents filled some of the gaps in the official documentation, but also revealed Intel’s enclave licensing scheme, which has troubling implications.
> After learning about the SGX implementation and inferring its design constraints, we discarded our draft proposals for defending enclave software against cache timing attacks. We concluded that it would be impossible to claim to provide this kind of guarantee given the design constraints and all the unknowns surrounding the SGX implementation. Instead, we applied the knowledge that we gained to design Sanctum [38], which is briefly described in § 4.9.
> This paper describes our findings while studying SGX. We hope that it will help fellow researchers understand the breadth of issues that need to be considered before accepting a trusted hardware design as secure. We also hope that our work will prompt the research community to expect more openness from the vendors who ask us to trust their hardware.
That's not that unusual for Intel or anyone else. While SGX doesn't protect against all types of attacks, it does protect against some. I'm not seeing anything in SEV that explicitly protects against cache timing attacks, and in almost every implementation, including SGX and SEV, there is potential for the keys to leak through various side-channel attacks.
Overall, if you go through their paper, page 38 has a good overview of various hardware security platforms/frameworks, what attacks they protect against, and what they are still vulnerable to.
SGX seems to be vulnerable to two cache timing attacks and a page fault attack. It does, however, provide mitigation/protection against malicious hypervisor/OS attacks, as well as co-resident malicious applications. And if you use SGX + TXT (and a TPM), in theory you are only exposed to container-level cache timing attacks, which, while not great, is still something.
I do not know about "Unfortunately" and "as usual", but surely Intel cannot be held responsible for researchers getting their hopes up about a system they had yet to examine.
Defending against a software side-channel adversary was never a design objective of SGX, and Intel acknowledges so on slide 115 of the tutorial linked below.
I think it's a bit harsh to criticize SGX for not having very high adoption. It has only become available to the public this year, and it is a pretty new technology so it will take time for existing software stacks to adapt to it. There is a lot of interest in it in academia fwiw.
Am I correct in assuming that this paradigm would make it possible for hardware to, via encryption, arbitrarily segment/obfuscate any of the software sitting on top of it?
But I guess it's better than nothing.
[1] https://lkml.kernel.org/r/CALCETrWAP5hxQeVSwNx-XkO53-X3bX0La...