You have an odd definition of "less exploitable". On what planet is compromise of all trusted communications "less exploitable" just because it doesn't immediately lead to code execution?
Once you are MITM'ing, you get passwords. You get personal details. You get everything you need to do far, far more than run code on someone's phone.
All the vulnerabilities you list are bad. Sometimes I wonder if you post your consistently contrarian posts on HN just to stay visible.
It's obviously less exploitable, because a memory corruption flaw in the TLS handshake would be devastating regardless of whether your application used its TLS connection to update your email password or to refresh the available episodes on the Nightvale podcast feed; the certificate validation bug isn't.
I would expect memory corruption bugs to be harder to exploit, because exploiting them might require knowledge of particulars of the environment (maybe we need to know the exact version of the buggy library, so that we know the memory layout and the location of certain symbols; maybe we need to know details of the interaction between the library and the application, such as the lengths of the buffers the application passes to the library). Am I completely wrong, or are such problems easy enough to overcome?
The example provided was an arbitrary code execution in the same TLS library.
Tptacek isn't saying the issue isn't serious (it quite self-evidently is), simply that other issues of equal or greater exploitability appear regularly, and that this being the bug that gave the NSA the keys to the kingdom is unlikely.
Code execution requires active compromise of target hosts. MITM'ing on the NSA scale can be done without risking any noticeable intrusion on the target hosts.
Given that we know from leaked slides that the NSA has a policy of restricting the use of exploits in order to avoid information about them being compromised, and that obtaining the same level of access through code execution would mean leaving code behind that risks being detected, whereas this leaves no trace - which is actually the bigger compromise?
Sandboxing the app gives me a buffer between arbitrary code execution in a single application and full access to all application data plus code execution on the whole phone.
Broken TLS means all my sensitive data from all my apps is exposed.
Which one is worse? With my passwords, you could access our servers, cloud storage, corporate data. You could leverage this into VPN access, and from there, easy server-side code execution.
All without ever having to actually run code on my phone and potentially trigger the notice of observant parties.
I don't know the answer so certainly as you seem to, because it seems much more complicated a question to me.
Hold up. Regardless of whether the media appreciates this point, a memory corruption bug in a relatively restricted (compared to the wealth of JavaScript) protocol like TLS has a decent chance of being very difficult to exploit reliably on modern systems, probably including the NSS bug you mentioned. The NSA might be able to do it, but it's not magic, and failure has some possibility (though low) of being detected; a bug like the current one, an exploit for which is basically 100% reliable if set up correctly and is probably less likely to be detected if not, is going to be a lot more appealing, and sniffing data rather than executing code is probably usually good enough for its purposes. Many other adversaries won't even try difficult exploits, but can easily set up this one now that it's been disclosed.
Admittedly, the above leaves aside the elephant in the room - regular old browser bugs, which are vastly more common, more likely to be reliable, and for the aforementioned other adversaries, relatively easily leveraged with exploit kits. As I assume you know, browser sandboxes provide protection against renderer bugs on MitMed or attacker-controlled origins being used to hijack an HTTPS site (and OS-level sandboxes protect non-browser SSL applications), but sandboxes get popped all the time, and one of the two targets of this bug, iOS Safari, is single-process, so meh.
I agree the coverage of this bug is overblown (...depressingly, since that's just saying that modern computing is drastically insecure); I strongly doubt it was introduced intentionally, or that the NSA, if it knew about it, was "relying on" it in the sense that the disclosure is a huge loss. But it's still a particularly good bug.
I don't disagree with any of this. We're now litigating a thought experiment I proposed upthread. In particular: I agree that straight up browser bugs are more common, more exploitable, and more appealing than NSS parsing bugs; I just liked the symmetry of comparing a TLS logic flaw to a different TLS parsing bug in a codebase people weren't as familiar with.
Sandboxing isn't going to save you here, because an arbitrary code execution bug (via, say, the TLS library) inside the same process that holds sensitive data leaves everything fully exposed.