
I think from the context it's pretty clear that "hack" in this case is referring to "being forced to unlock". Yes, they could still deliberately break encryption for future OSes and phones, but the same could be said of any software, open or closed source.

I don't think acting like an open ecosystem is the be-all and end-all of security is productive. Most organizations (let alone individuals) don't have the resources to vet every line in every piece of software they run. Software follows economies of scale, and for hard problems (e.g., TLS, font rendering) there will only ever be one or two major offerings. How hard would it be to introduce another Heartbleed into one of those?




How does a third-party researcher find the next Heartbleed if they can't even decrypt the binaries for analysis?


Binaries can be disassembled and quite often even decompiled back to equivalent C; in any case, bugs are most often found by fuzzing (intentional or not), which does not require source code. The difference between open and closed source is that open code is more often analysed by white hats, who tend to publish vulnerabilities and help fix them, while closed code is more often analysed by black hats, who tend to sell or exploit them in secret.
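
To make that concrete, here is a rough sketch of what black-box fuzzing can look like. The "./target" binary and "seed.bin" file are made-up placeholders, and real fuzzers like AFL or honggfuzz are far smarter about mutation and coverage, but the point stands: none of this needs source code.

    /* Black-box fuzzing sketch: mutate a seed input, run the closed-source
     * binary on it, and flag any crash. "./target" and "seed.bin" are
     * placeholders, not real names. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        FILE *f = fopen("seed.bin", "rb");
        if (!f) { perror("seed.bin"); return 1; }
        unsigned char seed[4096];
        size_t len = fread(seed, 1, sizeof seed, f);
        fclose(f);
        if (len == 0) { fprintf(stderr, "empty seed\n"); return 1; }

        srand(1234);
        for (int iter = 0; iter < 100000; iter++) {
            unsigned char mut[4096];
            memcpy(mut, seed, len);

            /* Flip a few random bits in the input. */
            for (int i = 0; i < 8; i++)
                mut[rand() % len] ^= (unsigned char)(1u << (rand() % 8));

            FILE *out = fopen("testcase.bin", "wb");
            if (!out) { perror("testcase.bin"); return 1; }
            fwrite(mut, 1, len, out);
            fclose(out);

            /* Run the target binary on the mutated input; a crash shows up
             * as termination by signal -- no source code required. */
            pid_t pid = fork();
            if (pid == 0) {
                execl("./target", "./target", "testcase.bin", (char *)NULL);
                _exit(127);
            }
            int status;
            waitpid(pid, &status, 0);
            if (WIFSIGNALED(status)) {
                printf("crash at iteration %d (signal %d), kept testcase.bin\n",
                       iter, WTERMSIG(status));
                return 0;
            }
        }
        puts("no crashes found");
        return 0;
    }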


You misunderstand; if you can't even decrypt the binary, you can't disassemble it, much less run a decompiler over it.

As someone who has done quite a bit of reverse engineering work, I have no idea how I'd identify and isolate a vulnerability found by fuzzing without the ability to even look at the machine code.


If it runs, it has to be decrypted at some point (at the current level of cryptography); at most it is obfuscated and access is blocked by hardware tricks that may be costly to circumvent, but there is nothing fundamental stopping you.


> don't have the resources to vet every line in every piece of software they run

For the same reason, I do not independently vet every line of source code I run, yet I still reasonably trust my system orders of magnitude more than anyone could - and, I argue, nobody can - trust a proprietary system. That is because, while I personally may not take the initiative to inspect my sources, I know many other people will, and if I were ever suspicious of anything I could investigate it myself.

Bugs like Heartbleed just demonstrated... well, several things:

1. Software written in C is often incredibly unsafe and dangerous, even when you think you know what you are doing.

2. Implementing hard problems is not the whole story, because you also need people who comprehend said problems and the code implementing them, and who have a reason to do so in the first place.

Which I guess relates back to C in many ways.
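
To illustrate the first point, here is a stripped-down sketch of the Heartbleed pattern - not the actual OpenSSL code, just the same class of bug: the sender supplies both a payload and a claimed payload length, and the reply is built by trusting the claim.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* "Heartbeat": echo back claimed_len bytes of the received payload. */
    static void handle_heartbeat(const unsigned char *payload, size_t claimed_len) {
        unsigned char *reply = malloc(claimed_len);
        if (!reply) return;
        /* BUG: claimed_len is attacker-controlled and never checked against
         * the real size of the payload, so memcpy reads far past its end and
         * copies whatever neighbouring memory happens to hold (keys,
         * passwords, session data) into the reply. */
        memcpy(reply, payload, claimed_len);
        /* ... send reply back over the wire ... */
        free(reply);
    }

    int main(void) {
        unsigned char packet[4] = { 'p', 'i', 'n', 'g' };  /* 4 real bytes */
        handle_heartbeat(packet, 65535);                   /* claims 64 KB */
        return 0;
    }

The fix amounted to checking the claimed length against the actual record size before copying - exactly the kind of thing a bounds-checked language refuses to let you forget.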

I look forward to crypto implemented in Rust and other memory-, concurrency-, and resource-safe languages. There is always a chance of a mistake that compromises any level of security, but if you move the complexity into the programming language, the burden falls on your compiler. And in the same way that you can only trust auditable, heavily used, production code, nothing is going to be more heavily used and scrutinized, at least by those interested, than the languages themselves.


C is not the problem -- you can write a bug in any language. Even with memory safety and a perfect compiler, a bug may direct the control flow in the wrong direction (bypassing auth, for instance) or leak information via a side channel.
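
For instance, here is a made-up sketch of a pure logic bug that memory safety cannot catch - the check itself is wrong, so the program quite "safely" lets everyone in:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define EXPECTED_TOKEN "s3cr3t-token"

    static bool authorize(const char *token) {
        /* BUG: the comparison length comes from the attacker-supplied token,
         * so an empty token compares zero bytes and "matches" anything.
         * No memory is misused anywhere -- it is purely a logic error. */
        return strncmp(token, EXPECTED_TOKEN, strlen(token)) == 0;
    }

    int main(void) {
        printf("empty token accepted? %s\n", authorize("") ? "yes" : "no");
        return 0;
    }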



