Hacker claims to have decrypted Apple's Secure Enclave firmware (techrepublic.com)
181 points by runesoerensen on Aug 17, 2017 | 43 comments



So, security-wise, this doesn't mean much right now. I'm sure that being able to look at the firmware will allow people to attempt even more sophisticated attacks on the SE, but does it really change that much in terms of user security? I don't think so. Whatever is learned from the firmware, the actual encryption still relies on unique keys and unique hardware IDs. Someone would still need physical access to your device for any of that to be useful, and there are far easier ways to get at the data when you have physical access and a boatload of time. Yes, this might lead to a method for decryption, but that method will still, undoubtedly, be much slower than social engineering and other bypass methods.
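To make the "unique keys and hardware IDs" point concrete, here's a minimal Python sketch of the general idea, with hypothetical names and parameters (this is not Apple's actual key-derivation scheme): user keys are derived by tangling the passcode with a hardware secret that never leaves the silicon, so the firmware binary alone reveals neither the secret nor any derived key.

  # Illustrative sketch only, not Apple's actual scheme.
  import hashlib, hmac

  HW_UID = bytes.fromhex("00" * 32)  # hypothetical per-device key, fused in silicon

  def derive_data_key(passcode: bytes, salt: bytes) -> bytes:
      # Stretch the passcode (iteration count is illustrative)...
      stretched = hashlib.pbkdf2_hmac("sha256", passcode, salt, 100_000)
      # ...then mix in the hardware-unique key, so the result can only
      # be computed on this one physical device.
      return hmac.new(HW_UID, stretched, hashlib.sha256).digest()

  key = derive_data_key(b"123456", b"per-device-salt")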


Can't speak for Apple's Secure Enclave, but Intel's Embedded Security threat model explicitly states that the firmware code is not encrypted and not meant to be secret. [0]

[0]: http://www.apress.com/us/book/9781430265719


> Intel Embedded Security threat model explicitly states that the firmware code is not encrypted and not meant to be secret.

Yep. The root of trust for the IME lives in silicon, so unless there's a serious flaw in the IME initialization process, it will only run code whose signature verifies against that in-silicon root of trust.

I don't agree with the motivations behind the IME (DRM foremost), but Intel seems to have built a very secure trusted boot chain. If the IME cannot boot a trusted firmware, it will shut down the rest of the platform and you'll have a fancy brick.

This has been the situation for many years, and it's why you cannot buy an Intel-based computer made after ~2008 (X200/T400) without the IME. Although recently Trammell Hudson seems to have found a way to mostly disable the IME boot process on the X220. [0]

ARM vendors offer a similar feature with their on-silicon root of trust. The BROM (boot ROM) on silicon will only load and execute code signed by a trusted authority. [1]

[0] https://mail.coreboot.org/pipermail/coreboot/2016-November/0...

[1] http://rhombus-tech.net/allwinner_a10/a10_boot_process/
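To make that verify-or-brick behavior concrete, here's a minimal Python sketch of the pattern; the key size, padding scheme, and image format are my illustrative assumptions, and real boot ROMs do this in mask ROM code with a fused public-key hash, not Python:

  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import padding, rsa
  from cryptography.exceptions import InvalidSignature

  # Stand-in for the vendor key pair; on real hardware only the public
  # half (or a hash of it) is burned into the silicon root of trust.
  _vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  ROM_PUBKEY = _vendor_key.public_key()

  def boot_rom_load(firmware: bytes, signature: bytes) -> None:
      try:
          ROM_PUBKEY.verify(signature, firmware, padding.PKCS1v15(), hashes.SHA256())
      except InvalidSignature:
          raise SystemExit("untrusted firmware, refusing to boot (the 'fancy brick' case)")
      print("signature OK, jumping to firmware")

  fw = b"\x00" * 64  # stand-in firmware image
  boot_rom_load(fw, _vendor_key.sign(fw, padding.PKCS1v15(), hashes.SHA256()))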


You are correct. In my opinion, more eyes on this is a good thing anyway.


I think it's a huge deal for Apple: their device remains secure for most users, as you say. Yet it also gives them a credible argument against being forced to build a backdoor for law enforcement.


Intentional Achilles heel?


The story itself is newsworthy (it has to have run somewhere better than this by now, right?) but the article is nonsensical. For instance:

> Decrypting the SEP's firmware is huge for both security analysts and hackers. It could be possible, though xerub says it's very hard, to watch the SEP do its work and reverse engineer its process, gain access to passwords and fingerprint data, and go even further toward rendering any security relying on the SEP completely ineffective.

That's not how it works! The secrets that protect your data aren't embedded in the SEPOS firmware binary; they're embedded in the phone's hardware.


They're trying to say (in layman's terms) a bad person could reverse engineer the firmware now that it is not obfuscated, find a vulnerability, and abuse it. I agree that it's a weird way to phrase the hypothetical though, and they ignore how most researchers would likely submit it to Apple's bug bounty program due to the $100,000 bounty (compared to the zero third-party market value for such a bug).


I disagree that's what they're saying, because of the "it could be possible, though it's very hard" framing. I think they're saying that if you understand how SEPOS works, you can trace data and secrets through it.

(In fact, this is probably a case where the "market value" of the bug greatly exceeds Apple's stated bounty value, because you could probably charge governments a nosebleed per-phone rate to extract secrets from locked phones.)


> I think they're saying that if you understand how SEPOS works, you can trace data and secrets through it.

Fair point actually, that was a bit careless for them to add.

Regarding the SEP bounty: Apple offers $100,000 just for accessing the contents. Selling it as a capability would require additional low-level exploits to actually interface with the SEP, so you could do per-phone deals if you have that type of vulnerability on hand, but I am not so sure many folks do. iBoot and friends (along with everything else pre-lockscreen) are reasonably secure these days.


The paragraph after your quote:

> "Decrypting the firmware itself does not equate to decrypting user data," xerub said. There's a lot of additional work that would need to go into exploiting decrypted firmware—in short, it's probably not going to have a massive impact.

The author seems to be playing it up a bit, gotta get those clicks.


> compared to the zero third-party market value for such a bug

Why would the third-party market value be zero? It seems tremendously valuable to many extremely well-resourced parties.


This class of vulnerability specifically cannot be used independently for anything useful. You need either a low-level firmware vulnerability or a pre-lockscreen vulnerability that you can escalate to kernel in order to actually utilize such a SEP vuln. You're not wrong that well-resourced parties may be able to do this, but for much less work the SEP vuln can fetch $100,000 and immense branding value as a researcher.



From the article: "Ever since Touch ID came out with the iPhone 5S, there has been a tiny coprocessor embedded in the main S-series, and now A-series, processor chip."

Uh, the A-series is the "main" series of processors -- the ones in iPhones and iPads. The S-series is just in the Apple Watch.


Agreed, but it still had the Secure Enclave with Touch ID.


I'm not bothered about the implications for users as they're minimal. I am interested in how they got this key. Anybody got any details?


No details have been released. One potential method would involve getting low-level code execution on the SEP, but seeing as this is the iPhone 5S, it is also possible they have low-level code execution on the main AP and are able to access SEP address space to feed it the encrypted keybag for decryption. (On iOS devices you cannot extract the GID key, so you instead need to access the crypto coprocessor(s) and feed them the content you'd like decrypted.)
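To illustrate that last point with hypothetical names (the real interface is a memory-mapped hardware AES engine, not Python): the GID key is usable as a decryption oracle, but there is no read-out path for the key itself.

  import os
  from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

  class CryptoEngine:
      """Toy model of an AES engine whose key has no read-out path."""
      def __init__(self):
          self.__gid_key = os.urandom(32)  # stands in for the key fused into the SoC

      def decrypt_with_gid(self, iv: bytes, ciphertext: bytes) -> bytes:
          # The only exposed operation: feed data in, get plaintext out.
          d = Cipher(algorithms.AES(self.__gid_key), modes.CBC(iv)).decryptor()
          return d.update(ciphertext) + d.finalize()

  engine = CryptoEngine()
  # With code execution on the device you can use the engine as an
  # oracle against an encrypted blob, but you can never dump the key:
  plaintext = engine.decrypt_with_gid(os.urandom(16), b"\x00" * 32)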


Informative, thank you.


Also note that this seems to be only for the iPhone 5S. The iPhone 6 and newer models use a different architecture for the SEP IIRC.


The hardware is indeed upgraded on the 6s and above; same SEPOS, though.


Couple this with the ability to now brute force iPhone 7 and 7 Plus phones and you've got something.

https://www.youtube.com/watch?v=IXglwbyMydM


Has anybody verified this? Does anybody know whether xerub went through (or attempted) responsible disclosure?


I'm not sure what responsible disclosure would look like in this case? It's only the firmware decryption key that's been released. Also:

"very important: secure enclave has not been hacked. decryption key in this case is for the firmware, allowing more researchers to look at it" https://twitter.com/chronic/status/898217462029791232


I suppose that responsible disclosure would look like a detailed and formal equivalent of "Hey guys, your firmware decryption key is XYZ and I got it in the following manner".


Uhm, I don't think xerub disclosed how he got the key, but Apple will probably look into that now for future SEP devices/firmware updates. I'm not an expert on this, but as far as I can tell that wouldn't really make the disclosure of the decryption key itself more or less "responsible".

Also, people would still be able to decrypt the firmware even if the decryption key (or how to retrieve it) was publicly disclosed at a later time. Apple can't release a fix for this retroactively.


This doesn't bode well for Apple's secure enclave. Apple's enclave environment has benefited from obscurity, though that was always destined to fail eventually.

Every other secure enclave has been hacked because the environments are far too complex. Engineers are recapitulating all the same mistakes that plague existing operating systems and software stacks. It's so bad that now secure enclave environments are adding mitigations like ASLR and stack canaries.

Maybe Apple was smart enough to keep things simple, and doesn't try to implement things like dynamic linking, processes, or multi-tasking in the enclave. Given their production and retail model, they're about the only company capable of doing this. But I doubt it.


I don't think this is true. There was already a blackbox analysis of the SE presented at Blackhat and it concluded it seemed pretty secure: https://news.ycombinator.com/item?id=12231758. They specifically mentioned that there wasn't much code in the SE and so the attack surface was pretty small.


From the Blackhat presentation:

  Runs its own operating system (SEPOS)
  * Includes its own kernel, drivers, services,
    and applications

OTOH, the enclave is a separate processor entirely, unlike (AFAIU) on Android phones. And applications on the processor run outside the TrustZone, so there are really two enclaves: the outer enclave (the security processor) and the inner enclave (the TrustZone on the security processor). The hardware restrictions on IPC (both ways between the main processor and security processor) also seem really nice. And they're using an L4 derivative for the OS.

So, yes, it seems like a better design. Very nice. Still, I hold to my criticism. At the end of the day, complexity is anathema to security. If you build an OS, people will use and abuse it. The presentation also mentions that most Apple drivers have now moved partly to the enclave:

  * Most drivers have a corresponding app in the
    SEP

There's a tension there, like with DRM. The more people rely on the enclave to run sophisticated software, the more pressure there'll be to add more features and more complexity. Again, Apple may be better positioned than most to push back, but being smart will only get you so far.


Is SEPOS based on an open-source or proprietary derivative of L4?


From slide 60:

  SEPOS
  * Based on Darbat/L4-embedded (ARMv7)
    - Custom modifications by Apple


>This doesn't bode well for Apple's secure enclave. Apple's enclave environment has benefited from obscurity, though that was always destined to fail eventually.

This is actually totally irrelevant to the Secure Enclave's security. They just got access to the unencrypted binaries for the SEP firmware, which by itself doesn't give access to anything.


No responsible disclosure needed. The method of gaining access to this key works on-device, not on a server, so patching would only prevent future use, while a device remaining on the firmware that allowed decryption could continue to decrypt the SEP firmware for all future revisions on existing hardware.

I don't see why xerub would release the method for this yet, unless it is now burned/useless.


"responsible disclosure" as a term is a meme started by vendors (and later rebuked by the same vendors) that is highly subjective. It's best that people don't focus on this.


Summary: "in short it's probably not going to have a massive impact"


Misleading title. The firmware can be decrypted. Not the (user) data in the SEP.


Thanks! We've updated the title to clarify.


Not really a "hack" as people are calling it. Heck, there's no reason for the firmware not to be open source in the first place.


Here we go. I'm betting it takes less than a month until someone finds a vulnerability in it, because it was probably designed with security by obscurity in mind. If they don't have the binary, they can't find vulnerabilities, right?!


> it was probably designed with security by obscurity in mind

Anything in particular that makes you think this is probable? There's evidence to the contrary as mentioned in another comment https://news.ycombinator.com/item?id=15040972

> If they don't have the binary, they can't find vulnerabilities in that, right?

No, vulnerabilities can still be found (and they've had the binary all along). This just allows them to decrypt it and inspect the source code, which makes it easier to identify vulnerabilities than a black box.


I think this wouldn't reveal any source code, but it would reveal the decrypted binary (which they did not have all along), so it can be disassembled and poked at for vulnerabilities.
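For example, once decrypted, the image drops straight into a disassembler. A tiny sketch using the Capstone library; the two bytes here are a stand-in instruction, not actual SEP code, and the architecture choice follows the Blackhat slides (SEPOS targets ARMv7):

  from capstone import Cs, CS_ARCH_ARM, CS_MODE_THUMB

  code = bytes.fromhex("70b5")         # stand-in Thumb encoding: push {r4-r6, lr}
  md = Cs(CS_ARCH_ARM, CS_MODE_THUMB)  # ARMv7 per the Blackhat slides
  for insn in md.disasm(code, 0x0):
      print(f"{insn.address:#010x}: {insn.mnemonic} {insn.op_str}")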


You're right, thanks!


> This just allows them to decrypt it and inspect the source code

decrypt the firmware and inspect the binaries



