Intel Software Guard Extensions – Memory Encryption Engine (drive.google.com)
81 points by mrb on Jan 15, 2016 | 52 comments



Note that Intel signs [1] software which runs in SGX enclaves. Windows 10 uses [2][3] SGX.

  Intel got a pretty long patent on SGX a few years ago. 
  In it, they say that the launch enclave will only issue
  launch tokens after ensuring that the enclave's author has
  a business agreement with Intel. The patent also states
  that they expect enclaves to be useful for DRM, so I'm
  guessing they want to insert themselves into the
  entertainment content distribution systems and collect 
  some royalties.
[1] https://jbeekman.nl/blog/2015/10/intel-has-full-control-over...

[2] http://www.alex-ionescu.com/Enclave%20Support%20In%20Windows...

[3] https://software.intel.com/en-us/sgx-sdk/documentation


    I'm guessing they want to insert themselves into the
    entertainment content distribution systems and collect 
    some royalties.
AFAIK the initial request for SGX came from the paranoid types at HFT firms who were worried malware would steal their algorithms or something.

The signing mechanism is important because SGX is useless unless you have a trusted way of getting your code into the enclave. The only reasonable way out of this is to have a certificate authority and that's what they're doing. If you think there's a different architecture for SGX that provides the same security guarantees, I'd love to hear about it.

Intel has much simpler mechanisms than SGX for enforcing DRM. They've been baking a ton of keys into their processors for many generations now; there was a proposal to stick a few keys in there that belonged to entities like Netflix and do all the decryption in hardware.

If Netflix is going to send us encrypted streams, this is probably the best architecture to handle it, because you can set up a high-performance, power-efficient hardware pipeline for decryption and decoding.


> The signing mechanism is important because SGX is useless unless you have a trusted way of getting your code into the enclave. The only reasonable way out of this is to have a certificate authority and that's what they're doing. If you think there's a different architecture for SGX that provides the same security guarantees, I'd love to hear about it.

I don't think so. Unless Intel stuck a back door in their Intel-signed-only features, the Intel key serves only to restrict which enclaves are permitted to run. The whole system is designed such that a malicious enclave can neither compromise another enclave nor can it obtain the symmetric key assigned to another enclave.

The remote attestation mechanism may depend on Intel keys, but that's about it.

IOW, the Intel key appears to be almost entirely a business thing.

Anyway, don't get too excited yet. AFAIK Intel hasn't released any signed blob whatsoever (except maybe to MS so they can test their code), so the policy simply doesn't exist right now.


Suppose you have a program that uses SGX. Perhaps this program requires some public keys which it uses as a root of trust. Presumably you've baked these public keys into your program; you load this binary, code plus public keys, into the enclave and execute it.

Now, how do you know that malware didn't modify the public key sitting in your binary before your code was loaded into the enclave? You need hardware to ensure that it only loads your code and not the modified code. This is where Intel's signing process comes in. There isn't really any way around it.


Not necessarily. The enclave's symmetric keys are bound to its identity, which is a hash of the memory and permission bits before the enclave starts to run. If the malware modifies the public key in the binary before it is loaded into the enclave, the enclave's identity (and its keys) will be completely different.
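
A minimal sketch of that binding (toy Python with hypothetical names; the real derivation happens in hardware via the EGETKEY instruction and is never exposed to software like this):

  import hashlib, hmac

  DEVICE_ROOT_KEY = b"per-CPU fused secret"  # burned into the CPU; software can't read it

  def enclave_identity(initial_memory: bytes, permissions: bytes) -> bytes:
      # MRENCLAVE-style measurement: a hash of the enclave's initial
      # contents and page permissions, taken before it starts running.
      return hashlib.sha256(initial_memory + permissions).digest()

  def sealing_key(identity: bytes) -> bytes:
      # Keys are derived from both the fused secret and the measurement,
      # so tampering with the loaded binary yields entirely different keys.
      return hmac.new(DEVICE_ROOT_KEY, identity, hashlib.sha256).digest()

  original = sealing_key(enclave_identity(b"code || trusted pubkey", b"r-x"))
  tampered = sealing_key(enclave_identity(b"code || swapped pubkey", b"r-x"))
  assert original != tampered  # the modified enclave can't unseal the original's data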


How on earth would that not put them right back into a monopoly position and square in the sights of regulators?


Why is it a monopoly position? All the other SoC makers - Samsung, Apple and AMD - would negotiate similar deals with Netflix.


Thanks. It seems that signifies game over for software cracks, with Intel holding the keys.

Please insert the text without the leading spaces, it's unreadable on my mobile phone.


I really mean both of those.

Regarding the second point, good practice here is to start the quote with a single >, or, if you quote more than one paragraph, simply to put the whole quote inside quotation marks. The leading spaces are meant for code in comments: they produce scrollable boxes, and the text becomes effectively unreadable on smaller screens because it can't be reflowed to the display width.


Can someone explain to me why Intel is implementing these features? Surely no consumer wants them, so I don't see how they give Intel an edge over AMD. Plus, they're leagues ahead of AMD already. The only explanation I can think of is that supporters of DRM are somehow subsidizing Intel's development efforts.


CPU manufacturers have too much chip real estate to know what to do with: the processor cores must be kept physically small to keep them fast. Multicore has predictably hit the software wall at 2-4 cores[1][2]. Caches also have diminishing returns (every doubling of cache size buys a 5-10% perf boost).

Intel has been adding features left and right where they can, and apparently the on-chip memory controller is still simple enough to add more features to without a performance hit. It helps that memory is glacially slow compared to what happens inside the CPU.

The biggest slice of Intel's x86 profits comes from data centers (and that slice of the pie is growing); a major bottleneck for server workloads moving to "the cloud" is the lack of sound technical means to keep your data safe.

[1] Servers can use more cores, but they can also ditch the on-chip GPU, which eats 4-8 cores' worth of area in the latest designs.

[2] Even in games, despite the obsession with performance, heroic optimization efforts, and pressure to squeeze the most out of the ~8 cores in consoles: https://www.rockpapershotgun.com/2015/03/05/quadcore-gaming/


Interesting that they haven't been able to conjure a desktop or mobile CPU which supports ECC RAM.


Most modern AMD processors support ECC. It's purely an artificial limitation: Intel wants businesses to buy more expensive processors from the Xeon line.


> CPU manufacturers have too much chip real estate to know what to do with:

Agreed. Ultimately if Intel continues down this 'closed IP' route, their processor monopoly is going to be replaced with generic logic (FPGA-like processors) which will be dynamically programmable.


let's say that I want to do some secure banking, so I fire up the secure banking app and off I go.

problem: what if my whole operating system is owned, right down to the keyboard interrupt handler? everything I do with this secure banking app is compromised.

okay, so instead what if I could put all the keyboard I/O, all the network I/O, the code, and the data, of the secure banking app into a trusted enclave, where the trusted enclave is cryptographically signed and verified from the BIOS up? now the OS can be totally owned but it can't read anything that goes on inside, or comes out of, this little enclave that's doing my banking.

that would be cool, right? now what if you could do that for all kinds of apps, like your email, and your web browser, and your key agent, and so on. and they're all in different secure enclaves that can't interfere with each other, and the host operating system can't reach in and touch their memory.

wouldn't that be cool?


If I'm understanding this correctly, it doesn't provide any kind of protection against a compromised operating system. How could it? The encryption is transparent to applications, so the OS could just lie and say that encryption is enabled.

What it does provide is protection against hardware attacks like memory bus snooping, or chilling and extracting DRAM modules.


Not necessarily; it depends on what exactly you put in the enclave. It's possible to design a system where the OS lives outside the enclave in untrusted memory. The assurance that your enclave memory is protected is provided by the hardware, and can be confirmed remotely by an attestation protocol. For more information I would recommend checking out a paper by Microsoft Research called "Shielding applications from an untrusted cloud with Haven" from OSDI '14.
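
Roughly the shape of that attestation idea, as a toy Python sketch (all names here are made up, and the real protocol uses an asymmetric EPID group signature produced by a quoting enclave, not a shared HMAC key):

  import hashlib, hmac

  ATTESTATION_KEY = b"derived from the fused device key"  # held by the CPU, not the OS

  def hardware_report(measurement: bytes, user_data: bytes) -> bytes:
      # The hardware MACs the enclave measurement plus caller-supplied data
      # (e.g. a fresh public key), binding that data to this exact enclave.
      return hmac.new(ATTESTATION_KEY, measurement + user_data, hashlib.sha256).digest()

  def remote_verify(measurement, user_data, tag, expected_measurement) -> bool:
      # A compromised OS can't forge the tag without the hardware key, so a
      # valid tag over the expected measurement means the right code is running.
      expected = hmac.new(ATTESTATION_KEY, measurement + user_data, hashlib.sha256).digest()
      return hmac.compare_digest(tag, expected) and measurement == expected_measurement

  m = hashlib.sha256(b"enclave code").digest()
  tag = hardware_report(m, b"fresh session pubkey")
  assert remote_verify(m, b"fresh session pubkey", tag, m)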


Sure, but that's a totally different mechanism than the one discussed in this link. According to the presentation, it uses ephemeral keys that are randomly generated at boot, so it doesn't help with things like remote attestation.


No, the whole point of SGX is that there is a unique key burned into each machine from which other keys are derived that allows you to attest the contents of each enclave's memory.


I thought we were talking about the "Memory Encryption Engine", which is what the link is describing. Slide 7 says the keys are "randomly generated at reset".


Ring -1?


The OS can't lie because it is unable to generate the same keys the hardware uses. Without those keys, the enclave can't decrypt whatever sensitive data it sealed earlier.


I call this the "black boxes survive plane crashes, so why don't we make the whole plane out of that stuff?" argument.

The more code that goes into the box, the more opportunities there are for exploits. That's why it's important to put as little as possible in the box and make carefully defined interfaces to it.

iOS is supposedly cryptographically verified from boot upwards, to inhibit jailbreaking of phones and people sneaking past the platform holder's tax, but it's not entirely hackproof. So it has an enclave within the enclave: an ARM TrustZone secure enclave running the seL4 formally verified microkernel.

Similarly, hypervisor systems try to divide a system into segregated apps that can't interfere with each other, but they are vulnerable to (rare) hypervisor exploits.

It can't be crypto-turtles all the way down, there has to be a root of privilege. And there's always the hardware; have you seen how long the Intel errata list is?


You should check out Genode on i.MX53: http://genode.org/documentation/articles/trustzone


It would be even cooler if Intel itself didn't own the private key to decrypt those enclaves remotely.


    Surely no consumer wants them
In real life almost everyone uses services with DRM and proprietary software, some of which contains "secret sauce" that developers want to keep secret. People who use such services and software have already voted for technologies like this to be developed.


> voted for technologies

Some may have, but few - even in technical crowds - are making fully informed decisions. Often, what is seen as a vote for modern technology is actually resignation* and a feeling of powerlessness in the face of rapid change.

https://www.asc.upenn.edu/news-events/publications/tradeoff-...


99% of them did not even know exactly what they "voted" for...


[ assume any use of the word "you" is "you (HN readers in general)" ]

Intel and Microsoft et al[1] have been trying to lock down the PC for a long time. This is just the latest version of what used to be known as "Trusted Computing". You may have heard about it in the early/mid 2000s by the name "Palladium"[2].

Why? It's partly a very misguided approach to "security" (which is often defined as security for the software vendor, not the owner of the machine, aka DRM), and this crosses over into the attempts at extending copyright into some sort of property-right trump card[3], but this is really best described as the main battlefield in the War On General Purpose Computing.

General purpose ("Turing complete") computers are incredibly powerful, and that power scares a lot of people that are used to selling goods with single/fixed purpose (e.g. an "appliance"), and people that are used to being arbiters of a scarce resource (e.g. pre-digital publishing). The best description of this War is probably Cory Doctorow's incredible talk[4] at 28c3. Contrary to Doctorow's optimism that we would win this war, developments like SGX and Intel ME suggest we are actually losing ground rapidly[5].

It's easy to understand the desire to enforce DRM, but the problem is significantly broader, because as John Deere has shown us, these technologies will soon be in "everything". It's one thing to say you are going to jailbreak a phone or work around DRM on a PC. You may be perfectly fine with boycotting MPAA films. You may be fine with installing Linux or BSD in response to the Windows 10 fiasco.

However, remember how hard it's getting to buy a TV without the "smart" junk. It's going to be a lot harder to avoid these problems when it's your car. Again I suggest watching Doctorow's even more important followup talk, "The Coming Civil War over General Purpose Computing"[6]. These problems need to be solved now, and we need to start establishing proper legal frameworks to protect user rights, because those rights are rapidly being appropriated.

There is a tendency among engineers to avoid political topics and to focus on technical issues. That is no longer an option, because abstaining from this fight is de facto choosing to let the people trying to hide the General Purpose Computer where nobody can use it win. There is no neutral ground when the world is burning[7].

    "It taught us that we have to create the future .. or others will do it for us.
     It showed us that we have care for one another, because if we don't, who will?"
         - Ivanova  (Babylon 5, "Sleeping in Light")
[1] https://en.wikipedia.org/wiki/Trusted_Computing_Group (formerly the TCPA)

[2] https://en.wikipedia.org/wiki/Next-Generation_Secure_Computi...

[3] http://www.wired.com/2015/04/dmca-ownership-john-deere/ (a well-known example)

[4] http://boingboing.net/2012/01/10/lockdown.html https://www.youtube.com/watch?v=HUEvRyemKSg

[5] HN doesn't like to hear this - this is easily the fastest way I've seen to earn downvotes - but another big way we are losing the War On General Purpose Computing is the normalization of the "app store". Apple started this with the original iPhone (and later iOS in general), and now an entire generation of programmers think it's just fine that they have to ask a business for permission to publish their software. (jailbreaking doesn't count)

[6] http://boingboing.net/2012/08/23/civilwar.html https://www.youtube.com/watch?v=nypRYpVKc5Y

[7] https://www.youtube.com/watch?v=DWg2qEEa9CE


I remember when Intel introduced the Pentium III with a serial number[1] --- something that seems almost innocuous today --- and received a very strong "do not want" response which made them remove it in subsequent models. Now they have put in far scarier features, yet receive nowhere near as much opposition from users.

I think the whole "security culture" is to blame for this. Very abstractly, security is about preventing someone from doing something you don't like. That combined with "default deny" turns into "everything which is not explicitly allowed is forbidden", and it naturally promotes locked-down devices where the user has no control. Users are conditioned to accept this because it's "for their security".

In these environments, one way to regain control is to find weaknesses in the security, e.g. jailbreaking/rooting. This is why I'm actually rather afraid of all the developments in "safer" languages and systems. Maybe some insecurity and imperfection, and accepting the occasional hack/leak due to it, is a good thing after all, if it means a path to freedom. I like to keep this quote in mind:

"Freedom is not worth having if it does not include the freedom to make mistakes."

[1] https://en.wikipedia.org/wiki/Pentium_III#Controversy_about_...


> security is about preventing someone from doing something you don't like

Indeed. There's a fight for control of devices. Unfortunately, the antonym of "secure" on the internet isn't "open" but "exploited"; the average consumer really is worse off if their computer is loaded with malware or is a victim of CryptoLocker.

I'm in the process of writing a longer thing about this at https://github.com/pjc50/pjc50.github.io/blob/master/pentagr...


You're looking at it from the wrong perspective. Intel is doing this because it's so far ahead of AMD and they have their customers locked-in, so they think they can do whatever they want to their customers' PCs.

It's a shame Google actively tried to push Intel chips in Chromebooks over ARM, when ChromeOS is virtually completely architecture agnostic, and that Microsoft killed Windows RT, too. If those things hadn't happened, we would see Intel in more of a panic mode by now, as "high-end" ARM chips started taking over sub-$400 notebooks.

Before you say "but Intel's chips are so much more powerful for $400 machines" - no, they aren't. Intel has transitioned almost completely to Atom chips (ARM chips' direct competitors) for sub-$400 machines, and the OEMs went along with it, paying ~3-5x what they would pay for an ARM chip of the same or better performance. All thanks to Intel's monopoly in PCs.


ARM has their own version of this (ARM TrustZone CryptoCell).


This would prevent row hammer attacks and detect memory errors in general, so it ought to be of general interest, even if it currently isn't.


Please correct me if I'm wrong, but this won't protect DRAM from row hammer attacks. You can still access memory cells and cause other bits to flip; it doesn't matter whether the content is encrypted or not.


Bit flips will be detected on the next read via a MAC failure. They can still be obnoxious, but they are prevented in the sense that the CPU will reboot before accepting corrupt memory.
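
Very roughly, the flavor of that check, sketched as a toy encrypt-then-MAC over one cache line (the real MEE does AES counter-mode encryption with Carter-Wegman MACs and an integrity tree over the counters, entirely in hardware; everything below is illustrative):

  import hashlib, hmac, secrets

  MEE_KEY = secrets.token_bytes(32)  # per the slides, randomly generated at reset

  def _pad(addr: int, ctr: int) -> bytes:
      # keystream tied to the line address and a per-write counter
      return hashlib.sha256(MEE_KEY + addr.to_bytes(8, "big") + ctr.to_bytes(8, "big")).digest()

  def write_line(addr: int, plaintext: bytes, ctr: int):
      ct = bytes(p ^ k for p, k in zip(plaintext, _pad(addr, ctr)))
      tag = hmac.new(MEE_KEY, addr.to_bytes(8, "big") + ctr.to_bytes(8, "big") + ct,
                     hashlib.sha256).digest()
      return ct, tag

  def read_line(addr: int, ct: bytes, tag: bytes, ctr: int) -> bytes:
      expected = hmac.new(MEE_KEY, addr.to_bytes(8, "big") + ctr.to_bytes(8, "big") + ct,
                          hashlib.sha256).digest()
      if not hmac.compare_digest(tag, expected):
          raise SystemError("MAC failure: lock down and reboot, never serve corrupt data")
      return bytes(c ^ k for c, k in zip(ct, _pad(addr, ctr)))

  # A row-hammer-style flip in the stored ciphertext trips the check on read:
  ct, tag = write_line(0x1000, b"sixteen byte blk", ctr=1)
  flipped = bytes([ct[0] ^ 0x01]) + ct[1:]
  try:
      read_line(0x1000, flipped, tag, ctr=1)
  except SystemError as e:
      print(e)  # corruption is caught before the CPU accepts the data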


Memory encryption makes cold-boot attacks infeasible. I doubt someone could decap a processor fast enough to mount a cold-boot attack on the keying material stored inside of it. I see this as a natural step in the evolution of Trusted Computing, so Intel's interest in such technology doesn't surprise me.


It also makes it hard to do things like Thunderstrike or whatever the "read secrets in RAM via DMA from attached devices" issue is called.


I have seen strong interest in SGX among some large Intel customers.


Confidential computing in the cloud, for example.


This is only for DRM, right?

Very few people are getting viruses because someone has come into their house, connected the DRAM bus on their PC to a logic analyzer, and injected malicious code directly into memory. Instead, the malicious code comes in through trusted code the user is running (a web browser, a faulty Samba server), which encrypted memory does nothing to protect against.

I believe ARM has had this in the form of TrustZone for a while.


Almost any feature you build to protect a system from a compromised kernel is, in effect, DRM. DRM and system protection are two sides of the same coin.

Nothing magical happens when your machine gets owned up by malware or exploits. The machine naturally does the bidding of the user, not the owner. When you get compromised, the user changes. Want to defend an ostensibly single-user system from an unexpected and unwanted new user? Congratulations: you're building DRM.


So I do see a number of advantages:

1) transforms all RAM into ECC RAM

2) consumers may not care, but large server CPU customers will certainly see it as a bonus, and some will see it as required. Of course, this won't protect against the NSA.

(server CPU customers are Intel's biggest market [1], and it's also the market that's not declining at an alarming rate)

[1] http://www.technologyreview.com/sites/default/files/images/m... (if you know a more recent one, do let me know)


> 1) transforms all RAM into ECC RAM

Not quite; it doesn't correct errors, just detects them.
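
For contrast, a toy illustration of detection versus correction (a hypothetical single-byte example; real ECC DIMMs use SECDED codes over 64-bit words):

  def parity(byte: int) -> int:
      # detection only: tells you *that* a bit flipped, not *which* one
      return bin(byte).count("1") % 2

  stored, p = 0b10110100, parity(0b10110100)
  corrupted = stored ^ 0b00001000      # a single bit flips in DRAM
  print(parity(corrupted) != p)        # True: the error is detected...
  # ...but parity (like the MEE's MAC) can't restore the original byte;
  # a true ECC code adds enough redundancy to locate and correct the flip.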


I guess it might help with the security of certain things like full-disk encryption. I think there are already devices used by law enforcement or hackers that can steal disk encryption keys or other information just by plugging into any port with DMA access, like FireWire or ExpressCard. I would assume this could prevent that.


Devices? It's a GitHub repo: https://github.com/carmaa/inception


TrustZone (by itself) doesn't have the attestation or memory encryption features of SGX. On the other hand, I believe TrustZone can reach outside itself. TrustZone is more comparable to Intel's SMM.


> This is only for DRM, right?

I, for one, intend to put my private keys in there. The ecosystem isn't ready yet, though.


Wow, I did not think Intel would go this far in the general purpose CPU line. There has always been a small set of people with "cost no object" requirements for security (think 3-letter agencies), but if you get this in a "normal" chip I could imagine it being used for copy protection of many forms, or for enterprise security laptops. It does shut down the "but you can snoop memory anyway with the GPU" loophole.


Is it possible to decap a processor without damaging it? (i.e. so it will continue to work) If you could do that, then you could attempt to probe the unencrypted data or the key.


Yes. Probing at 14nm is immensely challenging but seemingly possible (see, e.g., http://dcgsystems.com/products/nanoprobing/nprober-ii/ - which looks terribly expensive!)


It is, indeed, possible: https://youtu.be/XXs0I5kuoX4?t=2m24s

Though I don't think it would allow you to probe the unencrypted data or the key. Not at the connections to the die, at least: the data is still encrypted there.



