
If an attacker can modify the code, they simply nullify that check (as we're discussing here). The prior state was insecure because the whole point of crypto/trusted hardware is to protect against exactly such attacks, and the prior state should never have allowed a code update on a locked device (if we're talking about trusted hardware).

If we're not talking about trusted hardware, then naive code that calls sleep() is defective for the same reason: the security of the system cannot depend on running "friendly" code. See Linux's LUKS, which has a parameter for the number of key-derivation iterations used when unlocking; that iteration count sets the work factor for brute forcing.
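
To make the contrast concrete: a sleep() delay vanishes the moment an attacker runs their own cracking code, but an iteration count is a cost paid per guess no matter whose code runs. A minimal sketch of the idea (Python stdlib; PBKDF2 stands in here for LUKS's actual KDF, which defaults to Argon2id in LUKS2):

    # Sketch: iteration count as a brute-force work factor.
    # PBKDF2 illustrates the idea; modern LUKS2 defaults to Argon2id.
    import hashlib, os, time

    salt = os.urandom(16)

    def derive(passphrase: bytes, iterations: int) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", passphrase, salt, iterations)

    for iters in (1_000, 100_000, 1_000_000):
        start = time.perf_counter()
        derive(b"correct horse battery staple", iters)
        print(f"{iters:>9} iterations: {time.perf_counter() - start:.3f}s per guess")

Raising the iteration count scales the per-guess cost linearly for attacker and defender alike, which is why cryptsetup benchmarks it at format time (the --iter-time option) rather than hardcoding a value.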

If this still isn't apparent, try thinking adversarially: what would you need in order to defeat the various security properties of a specific system?




An attacker can't modify the code. The code isn't public. Only the sole keeper of the code can modify the code; it's proprietary software. Further, the code is signed with the author's private key, so even if an attacker could modify compiled code (via a decompiler, for example), they still can't inject that modified code into the hardware without signing.


> Only the sole keeper of the code can modify the code; it's proprietary software

LOL!

> Further the code is signed by author's private key

This is the crux: if Apple is in a privileged position to defeat the security measures, and you're analyzing security with Apple/USG in the threat model, this counts as a backdoor. It doesn't provide full access, but it does undermine the purported security properties of the system.

It's quite possible to implement a system with similar properties that doesn't give Apple such a privilege. It sounds like they didn't.


> An attacker can't modify the code. The code isn't public. Only the sole keeper of the code can modify the code; it's proprietary software.

This is not correct. Reverse engineering is a thing. Proprietary software just makes it harder. People modify proprietary code all the time.

> Further, the code is signed with the author's private key, so even if an attacker could modify compiled code (via a decompiler, for example), they still can't inject that modified code into the hardware without signing.

This is the actual point.
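
Concretely, what protects the device isn't secrecy of the source but signature verification: the device only accepts code whose signature verifies against a public key it already trusts. A minimal sketch (Python with the `cryptography` package; the key handling is illustrative, since a real device has the public key fused into hardware or boot ROM rather than sitting in a variable):

    # Sketch of code-signing verification; assumes `pip install cryptography`.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signing_key = Ed25519PrivateKey.generate()  # stays with the vendor
    verify_key = signing_key.public_key()       # baked into the device

    firmware = b"original firmware image"
    signature = signing_key.sign(firmware)

    # Device-side check: accepts only images the vendor signed.
    def device_accepts(image: bytes, sig: bytes) -> bool:
        try:
            verify_key.verify(sig, image)
            return True
        except InvalidSignature:
            return False

    print(device_accepts(firmware, signature))                   # True
    print(device_accepts(b"patched firmware image", signature))  # False

Modifying the image is trivial; producing a signature the device will accept without the private key is not. The signing key, not the secrecy of the code, is the security boundary, which is exactly why whoever holds that key is in a privileged position.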



