
> Apple doesn't want to do it to avoid setting a precedent

Unfortunately that's exactly what they're going to end up doing with this faux resistance. It seems like in this case there is a master key that only Apple has. If the device's security is broken in this manner, then this is a terrible place to make a stand, as Apple will have no choice but to eventually comply.

Next time, with an actually secure implementation, the stance will be "you protested last time and gave in, do that again". And when USG realizes Apple isn't bluffing that time, its bolstered sense of entitlement will result in the inevitable law forcing Apple back to a backdoored, nearly-as-secure scheme.

To the first order, USG doesn't care about the argument that foreign governments could also compel Apple, since that simply reduces to traditional physical jurisdiction. And governments seem to be more worried about protecting themselves from their own subjects than from other governments.

We can only hope that the resulting legal fallout is implemented in terms of the standard USG commercial proscriptions based on the power of default choices, leaving Free software to continue to be Free.




Apple doesn't have the master key. Apple can't create a master key. Apple can, if they are forced to, create a variant of their software (which doesn't exist today) that will allow the FBI to try all keys and hope that they find one that works.

Apple could be forced to write software that removes the rate limiter and the FBI could still be stuck without access because it's possible the user used a password with too much entropy.
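For a sense of scale, here is a rough back-of-the-envelope sketch in Python. The ~80 ms per guess is an assumption (roughly the key-derivation cost Apple has cited for its hardware-entangled KDF), not anything measured from the device:

    # Worst-case time to exhaust a passcode space once the escalating
    # delays and the wipe-after-10-tries policy are removed, leaving only
    # the per-guess key-derivation cost.
    PER_GUESS_SECONDS = 0.08  # assumed ~80 ms per attempt

    def worst_case_days(keyspace: int) -> float:
        return keyspace * PER_GUESS_SECONDS / 86_400

    print(f"4-digit PIN:         {worst_case_days(10**4):.3f} days")
    print(f"6-digit PIN:         {worst_case_days(10**6):.2f} days")
    print(f"8-char alphanumeric: {worst_case_days(62**8):,.0f} days")

A short numeric passcode falls in minutes to hours; a passphrase with real entropy doesn't fall at all, rate limiter or no rate limiter.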


If this software variant can be installed on a locked device and post-facto modify its security model, then either that device was insecure or there is a master key.


Disabling the brute-force-inhibiting code does not mean it was insecure or that there's a master key.


It certainly means the brute-force inhibiting was insecure.


Explain.

Include in the explanation how removing code makes the prior state insecure.


If an attacker can modify the code, they simply nullify that check (as we're discussing here). The prior state was insecure because the whole point of crypto/trusted hardware is to protect against such attacks, and the prior state should never have allowed a code update on a locked device (if we're talking about trusted hardware).

If we're not talking about trusted hardware, then naive code that calls sleep() is defective for the same reason - the security of the system cannot depend on running "friendly" code. See Linux's LUKS, which has a parameter for the number of hash iterations used when unlocking, setting the work factor for brute forcing.
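To make the distinction concrete, here's a minimal Python sketch contrasting a delay enforced by "friendly" code with a work factor baked into the key derivation itself (the iteration count is illustrative; LUKS benchmarks its own value per machine):

    import hashlib, os, time

    # "Friendly code" rate limiting: the delay lives outside the math, so an
    # attacker running their own code simply deletes the sleep().
    def check_pin_with_sleep(guess: str, stored: bytes, salt: bytes) -> bool:
        time.sleep(5)  # gone the moment the attacker controls the code
        return hashlib.sha256(salt + guess.encode()).digest() == stored

    # Work factor in the key derivation (what LUKS's iteration count does):
    # every guess costs this much compute no matter whose code is running.
    ITERATIONS = 1_000_000  # illustrative value

    def derive_key(passphrase: str, salt: bytes) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, ITERATIONS)

    salt = os.urandom(16)
    t0 = time.time()
    derive_key("correct horse battery staple", salt)
    print(f"one guess costs ~{time.time() - t0:.2f}s of CPU, sleep or no sleep")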

If this still isn't apparent, you need to try thinking adversarially - what would you require to defeat various security properties of specific systems?


An attacker can't modify the code. The code isn't public. Only the sole keeper of the code can modify the code; it's proprietary software. Further, the code is signed by the author's private key, so even if an attacker could modify the compiled code (via a decompiler, for example), they still can't inject that modified code into the hardware without signing it.


> Only the sole keeper of the code can modify the code; it's proprietary software

LOL!

> Further, the code is signed by the author's private key

This is the crux - if Apple is in a privileged position to defeat security measures, and you're analyzing security with Apple/USG as potential adversaries, this counts as a backdoor. It doesn't provide full access, but it does undermine the purported security properties of the system.

It's quite possible to implement a system with similar properties that doesn't give Apple such a privilege. It sounds like they didn't.


> An attacker can't modify the code. The code isn't public. Only the sole keeper of the code can modify the code; it's proprietary software.

This is not correct. Reverse engineering is a thing. Proprietary software just makes it harder. People modify proprietary code all the time.

> Further, the code is signed by the author's private key, so even if an attacker could modify the compiled code (via a decompiler, for example), they still can't inject that modified code into the hardware without signing it.

This is the actual point.
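For anyone unclear on why the signing is the actual point, here's a toy model in Python using the third-party `cryptography` package - a sketch of the general idea of a signed boot chain, not Apple's implementation:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    vendor_key = Ed25519PrivateKey.generate()        # held only by the vendor
    device_trusted_pubkey = vendor_key.public_key()  # baked into the device

    firmware = b"os image v1: enforce passcode retry limits"
    signature = vendor_key.sign(firmware)

    def device_will_boot(image: bytes, sig: bytes) -> bool:
        try:
            device_trusted_pubkey.verify(sig, image)
            return True
        except InvalidSignature:
            return False

    tampered = firmware.replace(b"enforce", b"disable")
    print(device_will_boot(firmware, signature))                  # True
    print(device_will_boot(tampered, signature))                  # False: no key, no boot
    print(device_will_boot(tampered, vendor_key.sign(tampered)))  # True: the vendor can always re-sign

So an outside attacker is stuck, but the key holder - compelled or otherwise - is not, which is exactly the privileged position being argued about upthread.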


There is the most master of all keys - the one they sign the bootloader/OS with.


As long as anything like this exists and can be used to flash a new system image while user data remains intact, Apple claiming to have a system secure against governments is extremely negligent.

An OS signing key is never a replacement for bona fide user-initiated upgrade intent.

In designs with trusted hardware to prevent evil maid attacks, the boot trust chain should use a hash rather than a signature. This hash is updated only when the trusted chip is already unlocked.

To avoid creating useless bricks, said trusted hardware should allow the option to wipe everything at once - but nothing more granular.
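A toy Python model of that policy, with hypothetical names (this reflects no real secure-element firmware):

    import hashlib

    class TrustedChip:
        """Pins a hash of the boot image instead of trusting any vendor signature."""

        def __init__(self, initial_image: bytes):
            self._pinned = hashlib.sha256(initial_image).digest()
            self._unlocked = False
            self._secrets = b"user data encryption keys"

        def unlock(self, passcode_ok: bool) -> None:
            self._unlocked = passcode_ok

        def allow_boot(self, image: bytes) -> bool:
            return hashlib.sha256(image).digest() == self._pinned

        def accept_update(self, new_image: bytes) -> bool:
            if not self._unlocked:        # user-initiated intent required
                return False
            self._pinned = hashlib.sha256(new_image).digest()
            return True

        def factory_wipe(self) -> None:
            # the only operation available while locked: everything goes at once
            self._secrets = b""
            self._unlocked = False

Under this scheme, a vendor signing key buys nothing against a locked device; the only remaining lever is the wipe.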



