* Decapping and feature extraction even from simpler devices is error prone; you can destroy the device in the process. You only get one bite at the apple; you can't "image" the hardware and restore it later. Since the government is always targeting one specific phone, this is a real problem.
* There's no one byte you can write to bypass all the security on an iPhone, because (barring some unknown remanence effect) the protections come from crypto keys that are derived from user input.
* The phone is already using a serious KDF to derive keys, so given a strong passphrase, even if you extract the hardware key that's mixed in with the passphrase, recovering the data protection key might still be difficult.
No, the chief protection against PIN code hacking comes from the retry counter. The FBI doesn't need the crypto keys; it just needs the PIN code. So it needs to brute force about 10,000 PIN codes.
Any mechanism that a) prevents the application processor from remembering it incremented the count, b) corrupts the count, or c) patches the logic that handles a retry count of 10 is sufficient to attack the phone.
Somewhere in the application processor, code like this is running:
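(A hypothetical sketch; the helper names and structure here are mine, not Apple's actual code.)

    #include <stdio.h>
    #include <string.h>

    #define MAX_ATTEMPTS 10

    static int failed_attempts = 0;                 /* in reality persisted to storage the AP controls */
    static void persist_count(int n)       { failed_attempts = n; }
    static void wipe_keys(void)            { puts("data protection keys wiped"); }
    static int  key_unlocks(const char *p) { return strcmp(p, "1234") == 0; }  /* stands in for the real KDF + unlock test */

    int try_passcode(const char *pin)
    {
        if (failed_attempts >= MAX_ATTEMPTS) {
            wipe_keys();                        /* the branch an attacker must never reach */
            return -1;
        }
        persist_count(failed_attempts + 1);     /* increment must hit storage before the test */
        if (key_unlocks(pin)) {
            persist_count(0);                   /* success resets the counter */
            return 0;
        }
        return -1;
    }

    int main(void)
    {
        printf("%d\n", try_passcode("0000"));   /* a wrong guess just bumps the counter */
        return 0;
    }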
Now there are two possibilities: either there are redundant checks, or there aren't. If there aren't, all you need to do is corrupt this code path or memory in a way that prevents its execution, even if that means crashing the phone and triggering a reboot. Even with 5 minutes between crash/reboot cycles, they could try all 10,000 PINs in about 34 days.
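To spell out the arithmetic behind the 34 days (using the 5-minute cycle assumed above), a throwaway calculation:

    #include <stdio.h>

    int main(void)
    {
        int pins = 10000;                 /* PINs 0000 through 9999 */
        double minutes_per_try = 5.0;     /* one crash/reboot cycle per guess */
        double total = pins * minutes_per_try;          /* 50,000 minutes */
        printf("%.1f days\n", total / (60.0 * 24.0));   /* prints 34.7 */
        return 0;
    }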
But you could also use more sophisticated attacks if you know where in RAM this state is stored. You wouldn't need to de-cap the chip; you could just use local methods to flip the bits. The iPhone doesn't use ECC RAM, so there are a number of techniques you could use.
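For what a software-induced bit flip even looks like, the classic rowhammer demonstration loop is roughly this (x86 form, purely illustrative; the iPhone is ARM, and the local methods alluded to here are more likely physical fault injection):

    #include <emmintrin.h>   /* _mm_clflush */
    #include <stdint.h>

    /* Repeatedly activate two DRAM rows so that bits in a victim row between
     * them may flip; without ECC such flips go uncorrected. */
    void hammer(volatile uint8_t *row_a, volatile uint8_t *row_b, long iterations)
    {
        for (long i = 0; i < iterations; i++) {
            (void)*row_a;                       /* activate row A */
            (void)*row_b;                       /* activate row B */
            _mm_clflush((const void *)row_a);   /* flush so the next read goes to DRAM */
            _mm_clflush((const void *)row_b);
        }
    }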
You aren't limited to 10,000 possibilities. You can use an alphanumeric passphrase. The passphrase is run through PBKDF2 before being mixed with the device hardware key.
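As a rough sketch of that pipeline, assuming OpenSSL's PBKDF2 and standing in an HMAC for the hardware mixing step (on the phone that step happens inside the UID-keyed AES engine, so this is conceptual only):

    #include <openssl/evp.h>
    #include <openssl/hmac.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *passphrase = "correct horse battery staple";
        const unsigned char salt[16] = {0};        /* per-device salt (placeholder value) */
        const unsigned char device_key[32] = {0};  /* stands in for the fused hardware key */
        unsigned char stretched[32], final_key[32];
        unsigned int out_len = sizeof(final_key);

        /* Stretch the passphrase so every guess costs real time. */
        PKCS5_PBKDF2_HMAC(passphrase, (int)strlen(passphrase),
                          salt, sizeof(salt), 100000, EVP_sha256(),
                          sizeof(stretched), stretched);

        /* Mix in the device key. On the phone this runs inside the UID-keyed
         * AES engine, whose key software cannot read out. */
        HMAC(EVP_sha256(), device_key, sizeof(device_key),
             stretched, sizeof(stretched), final_key, &out_len);

        printf("derived a %u-byte data protection key\n", out_len);
        return 0;
    }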
On phones after the 5C, nothing you can do with the AP helps you here; the 10-strikes rule is enforced by the SE, which is a separate piece of hardware. It's true that if you can flip bits in the SE, you can influence its behavior. But whatever you do to extract or set bits in SE needs to not cause the SE to freak out and wipe keys.
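Roughly, the behaviour you have to avoid triggering looks like this (a hypothetical sketch of an SE-side tamper response, not Apple's code):

    #include <stddef.h>

    /* On any tamper or integrity signal, zeroize key material first,
     * then refuse to keep running. */
    static unsigned char class_keys[4][32];

    static void zeroize(volatile unsigned char *p, size_t n)
    {
        while (n--) *p++ = 0;     /* volatile so the wipe isn't optimized away */
    }

    void on_tamper_detected(void)
    {
        zeroize(&class_keys[0][0], sizeof(class_keys));
        for (;;) ;                /* halt rather than run in an untrusted state */
    }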
We can still imagine a state actor spending the megadollars to research a reliable chip-cloning process, to bring parallel brute-forcing within reach. I wonder if the NSA have been on a SEM/FIB equipment buying spree lately.
The ultimate way to defeat physical or software attacks is to exploit intrinsic properties of the universe, which suggests finding a mathematical and/or quantum structure impervious to both.
Your reply is the kind of comment I come to HN for - we've started off talking about mobile device security and ended up discussing unbreakable quantum encryption.
I'm speaking of the case of the San Bernardino killers. Strong alphanumeric passphrases are anti-usability; the vast majority of people won't use them. Hell, the vast majority of people don't even have strong alphanumeric passwords on desktop services.
So it falls to either 2-factor or biometric to avoid PINs. Biometric of course has its own problems.
Perhaps people should really carry around a Secure Enclave on a ring or something, with a button to self-destruct it in case of emergency (e.g. a pinhole reset).
You only need the strong alphanumeric passphrase on device startup; after that you can use TouchID. I bought an iPhone 6 for exactly this reason (my employer required a strong passphrase, and it was too annoying to type in on the Android device I had at the time).
You seem to make the assumption that corrupting the secure enclave firmware is easy, or that its RAM is exposed off-chip.
The entire point of a secure enclave is to completely enclose all the hardware and software needed to generate encryption keys in a single lump of silicon.
This means that all of its processing requirements (it's a complete co-processor) are on chip, its RAM is on chip (not shared with the main CPU, and probably has ECC), and it uses secure boot to cryptographically verify that its firmware has not been tampered with before it starts executing. Additionally, it may even be possible to update its bootloader in the future to prevent further updates without a passcode.
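A minimal sketch of that secure-boot step, with hypothetical helper names (the real chain uses Apple-signed images and keys burned into the boot ROM):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hash the next firmware stage, verify the vendor signature against a
     * public key baked into boot ROM, and only then jump into it. */
    typedef void (*entry_point_t)(void);

    bool verify_signature(const uint8_t *image, size_t len,
                          const uint8_t *sig, size_t sig_len,
                          const uint8_t *rom_pubkey);     /* e.g. an ECDSA/RSA check */
    void halt(void);

    void boot_next_stage(const uint8_t *image, size_t len,
                         const uint8_t *sig, size_t sig_len,
                         const uint8_t *rom_pubkey)
    {
        if (!verify_signature(image, len, sig, sig_len, rom_pubkey))
            halt();                                       /* never run tampered firmware */
        ((entry_point_t)(uintptr_t)image)();              /* transfer control to verified code */
    }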
The end result is that attacking a secure element is very difficult. There are few, if any, exposure points that would allow you to fiddle with its internal state, and any attempt to do so should result in the secure element wiping stored keys, making further attacks a moot point.
I don't make that assumption; I worked on developing TPM modules myself in the 90s at research labs, and our prototypes had even more anti-tampering measures than anything revealed so far about the Secure Enclave/TrustZone: we had micro-wire-meshes in the packaging to self-destruct on drilling or decapping, and we had anti-ultrasonic and anti-TEMPEST shielding. I'm pretty familiar.
The point is that state actors have vast resources to pull off these attacks. The NSA intercepted hardware in the supply chain to implant attacks as documented by Snowden. Stuxnet was a super-elaborate attack on the physical resources of the Iranian nuclear program, which was obviously carried out with supply chain vendors like Siemens. Apple uses Samsung as a supplier, and the US government has very high level security arrangements with the South Koreans, so how do we know the chips haven't been compromised even before they arrive at Foxconn for assembly?
Perhaps the NSA is savvy enough to know that a heroic effort isn't needed, and that the FBI is mostly looking to set precedent rather than find anything worth the cost and risk of chip-hacking.
Seems to me that when we're at a point where, every time the NSA wants to get at some data, they have to mount a heroic effort attacking low-level hardware, we are in a pretty good state in terms of device security.
>its RAM is on chip (not shared with the main CPU, and probably has ECC)
Apple's security guide would indicate otherwise; look at page 7. The secure enclave encrypts its portion of memory, but that memory isn't built into the secure enclave itself.
Is there anything preventing them from imaging the parts of the device that store data? The data in the image would be encrypted, of course, but wouldn't this give them essentially unlimited (or up to their budget) attempts at getting to the data?
If the encryption key didn't depend on the hardware, this would work. Even the iPhone 5C that the recent court case is about relies on the hardware keeping a key secret, and it doesn't even contain a secure enclave. For an iPhone 5C, the encryption key is derived from the PIN and a unique ID for the phone that the CPU itself can't read. The only thing the application processor can do is perform some crypto instructions using that key; there isn't an operation that would just put the key into memory or a register you can read from. Even if you have root and the phone in front of you with the password, there's nothing you can do short of decapping it to try to identify that key.
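One way to picture that constraint is the shape of the interface the application processor gets, something like this hypothetical header (illustrative only, not Apple's actual API):

    #include <stddef.h>
    #include <stdint.h>

    /* The AP can ask the UID-keyed AES engine to *use* the fused key,
     * but no call exists that returns the key. */
    int uid_aes_encrypt(const uint8_t *in, uint8_t *out, size_t len);
    int uid_aes_decrypt(const uint8_t *in, uint8_t *out, size_t len);

    /* Deliberately absent: uid_read_key(), or any register/DMA path that
     * exposes the key bits. Even with root and the passcode, software can
     * only feed data through the engine, never observe the key itself. */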
Unless there is a weakness in the PRNG/RNG that creates the fused key in the secure enclave itself, which is not out of the question. I am not sure why the FBI didn't ask Apple politely how these keys are generated in the first place.
That seems excessively unlikely to me. The phone itself wouldn't have anything to seed a PRNG with, so the random number would need to come from an embedded hardware generator or a dedicated random number device in the factory, and both of those options would have huge amounts of engineering oversight.