
Apple doesn't have the master key. Apple can't create a master key. Apple can, if they are forced to, create a variant of their software (which doesn't exist today) that will allow the FBI to try all keys and hope that they find one that works.

Apple could be forced to write software that removes the rate limiter, and the FBI could still be stuck without access because the user may have chosen a password with too much entropy.
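
(To put rough numbers on the entropy point, here's a back-of-the-envelope sketch in Python. The ~80 ms per-guess cost stands in for the on-device key derivation and is an assumption for illustration, not a measured figure.)

    # Worst-case exhaustive-search time once retry limits are gone, assuming
    # each guess still pays a fixed key-derivation cost on the device.
    GUESS_COST_SECONDS = 0.08  # assumed per-guess cost, purely illustrative

    def worst_case(keyspace: int) -> str:
        seconds = keyspace * GUESS_COST_SECONDS
        if seconds < 86_400:
            return f"~{seconds / 3600:.1f} hours"
        return f"~{seconds / 86_400 / 365:,.0f} years"

    for name, keyspace in {
        "4-digit PIN": 10 ** 4,
        "6-digit PIN": 10 ** 6,
        "8-char lowercase+digits": 36 ** 8,
        "10-char mixed-case+digits": 62 ** 10,
    }.items():
        print(f"{name:26s} {worst_case(keyspace)}")

Under those assumptions a 4-digit PIN falls within the hour, while a decent alphanumeric passphrase doesn't fall on any human timescale, rate limiter or not.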




If this software variant can be installed on a locked device and post-facto modify its security model, then either that device was insecure or there is a master key.


Disabling the brute-force-inhibiting code does not mean the device was insecure or that there's a master key.


It certainly means the brute-force inhibition was insecure.


Explain.

Include in the explanation how removing code makes the prior state insecure.


If an attacker can modify the code, they simply nullify that check (as we're discussing here). The prior state was insecure because the whole point of crypto/trusted hardware is to protect against exactly such attacks, and the prior state should never have allowed a code update on a locked device (if we're talking about trusted hardware).

If we're not talking about trusted hardware, then naive code which calls sleep() is defective for the same reason - the security of the system cannot depend on running "friendly" code. See Linux's LUKS, which has a parameter for the number of hash iterations used when unlocking; that iteration count sets the work factor for brute forcing (sketch below).
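
To make the work-factor point concrete, a minimal sketch with Python's standard hashlib (PBKDF2 stands in for whatever KDF the volume is actually configured with, and the iteration count is illustrative, not a LUKS default). The cost lives inside the key derivation itself, so there is no check an attacker can patch out the way they could a sleep() call:

    import hashlib, os, time

    salt = os.urandom(16)
    iterations = 1_000_000  # illustrative work factor

    # Every unlock attempt has to pay the full iteration count, because the
    # iterations are what turn the passphrase into the key.
    start = time.perf_counter()
    key = hashlib.pbkdf2_hmac("sha256", b"guessed passphrase", salt, iterations)
    print(f"one guess costs {time.perf_counter() - start:.2f}s")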

If this still isn't apparent, you need to try thinking adversarially - what would you require to defeat various security properties of specific systems?


An attacker can't modify the code. The code isn't public. Only the sole keeper of the code can modify the code; it's proprietary software. Further, the code is signed with the author's private key, so even if an attacker could modify the compiled code (via a decompiler, for example), they still can't inject that modified code into the hardware without signing it.


> Only the sole keeper of the code can modify the code; it's proprietary software

LOL!

> Further, the code is signed with the author's private key

This is the crux - if Apple is in a privileged position to defeat security measures and you're analyzing security in terms of Apple/USG, this counts as a backdoor. It doesn't provide full access, but it does undermine the purported security properties of the system.

It's quite possible to implement a system with similar properties that doesn't give Apple such a privilege. It sounds like they didn't.


> An attacker can't modify the code. The code isn't public. Only the sole keeper of the code can modify the code; it's proprietary software.

This is not correct. Reverse engineering is a thing. Proprietary software just makes it harder. People modify proprietary code all the time.

> Further, the code is signed with the author's private key, so even if an attacker could modify the compiled code (via a decompiler, for example), they still can't inject that modified code into the hardware without signing it.

This is the actual point.
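
To spell out why that's the point, a minimal sketch of signed-update checking, using Ed25519 from the third-party cryptography package purely for brevity (no claim that this is Apple's actual scheme). The device only ever holds the public key; without the vendor's private key, a patched image can't produce a signature that verifies, so reverse engineering alone doesn't get modified code onto the hardware:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    vendor_key = Ed25519PrivateKey.generate()   # held only by the vendor
    device_pubkey = vendor_key.public_key()     # baked into the device

    firmware = b"original firmware image"
    signature = vendor_key.sign(firmware)       # produced at the vendor, not on device

    def device_will_install(image: bytes, sig: bytes) -> bool:
        # The device accepts an image only if the signature checks out
        # against its baked-in public key.
        try:
            device_pubkey.verify(sig, image)
            return True
        except InvalidSignature:
            return False

    print(device_will_install(firmware, signature))                          # True
    print(device_will_install(b"patched: rate limiter removed", signature))  # False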



