On the Timing of iOS’s SSL Vulnerability and Apple’s ‘Addition’ to NSA’s PRISM (daringfireball.net)
257 points by dieulot on Feb 22, 2014 | 140 comments



This seems so silly to me, Jon.

The OSX/IOS SecureTransport TLS vulnerability is getting coverage because it's easy for laypeople to understand. Any geek can look at a description of the bug, or even the code itself, and say to themselves, "hey, I could exploit that!". Because more people understand this bug than they would any other bug, it paradoxically seems scarier.

The reality however is that the ease with which a bug can be exploited has little to do with its impact. What matters is the feasibility of exploitation. And in terms of feasibility, this bug is less exploitable than the kinds of bugs that are disclosed in every platform every month; it requires a specific (common, but not universal) set of circumstances to escalate it to code execution.

Matthew Green told a Reuters reporter yesterday that the TLS bug was "as bad as could be imagined". He was also drinking†, which explains how he managed to find a TLS validation bug worse than memory corruption, which is discovered routinely in all platforms, and produces attacks that directly, instantly hijack machines regardless of their configuration and the network they operate on.

Vulnerabilities in TLS code are not all that unusual. We get a new one every couple years or so. There was a vulnerability in the NSS False-Start code a year ago --- it didn't get covered because (a) few people know what NSS is (it's the TLS library for Firefox and Chrome) and (b) nobody knows or cares about False-Start. Here's another example: NSS misparsed PKCS1v15 padding, such that an e=3 RSA certificate could be forged --- anyone on the Internet could run a small Python script to generate a certificate for any site. Certificate chaining has broken within the last ~4 years. Again, no coverage: what's e=3 RSA? How does chaining work?

Simple thought experiment: imagine if, instead of a missing-brace bug that broke all of TLS, it was disclosed that NSS had a memory corruption vulnerability in its PKCS1v15 parsing. Would there be a shitstorm in the media and on Twitter? I doubt it: comparable bugs are found routinely without shitstorms.

I don't know if NSA knew about this bug or didn't. If they did, I'm confident that they exploited it; that's what they do. But ask yourself if you're sure that NSS and SecureTransport and OpenSSL and SCHANNEL are free from the kinds of memory corruption vulnerabilities that would allow NSA (and other organized criminals) to hijack machines directly. You think this is the bug they're relying on?

No, really.


> (it's the TLS library for Firefox and Chrome)

The False Start bug in NSS that you are talking about (CVE-2013-1740) never affected a released version of Firefox; we found the bug during testing of False Start in preparation for enabling it in Firefox 26. We pushed back the enabling of False Start to Firefox 28 (which will be released next month) so that we could fix this bug first.


You have an odd definition of "less exploitable". On what planet is compromise of all trusted communications "less exploitable" just because it doesn't immediately lead to code execution?

Once you are MITM'ing, you get passwords. You get personal details. You get everything you need to do far, far more than run code on someone's phone.

All the vulnerabilities you list are bad. Sometimes I wonder if you post your consistently contrarian posts on HN just to stay visible.


You could achieve a full MITM and much more with the ability to run arbitrary code on a device. It is categorically more exploitable.


This affects every single application using TLS, even when sandboxed. Is that more, or less, exploitable?

How many angels fit on the head of a pin?


It's obviously less exploitable, because a memory corruption flaw that affected the TLS handshake would be devastating regardless of whether your application used a TLS connection to update your email password or refresh the available episodes on the Nightvale podcast feed, and the certificate validation bug isn't.


I would expect memory corruption bugs to be harder to exploit, because exploiting them might require knowledge of some particulars of the environment (maybe we need to know the exact version of the buggy library used, so that we know the memory layout/location of some symbols; maybe we need to know some particulars of the interaction between library and application, like how long the buffers are that the application passes to the library). Am I completely wrong, or are such problems easy enough to overcome?


The example provided was an arbitrary code execution in the same TLS library.

Tptacek isn't saying the issue isn't serious (it quite self-evidently is), simply that other issues that appear regularly are of equal or greater exploitability, and that this being the bug that gave the NSA the keys to the kingdom is unlikely.


Code execution requires active compromise of target hosts. MITM'ing on the NSA scale can be done without risking any noticeable intrusion on the target hosts.

So what's "greater"?


Code execution.


Given that we know from leaked slides that the NSA has a policy of restricting the use of exploits in order to avoid information about them being compromised, and that obtaining the same level of access through code execution would involve leaving code behind that's at risk of being detected whereas this leaves no trace - which is actually the bigger compromise?


My iPhone apps are sandboxed, but thanks for simmering down a complex, nuanced topic into an authoritative statement of contrarian truthiness.


Only on HN is this "contrarian". Exactly what does sandboxing do for you here? Consider the actual vulnerabilities carefully.


Sandboxing the app gives me a buffer between single application-level arbitrary code execution and full access to all application data and full phone code execution.

Broken TLS means all my sensitive data from all my apps is exposed.

Which one is worse? With my passwords, you could access our servers, cloud storage, corporate data. You could leverage this into VPN access, and from there, easy server-side code execution.

All without ever having to actually run code on my phone and potentially trigger the notice of observant parties.

I don't know the answer so certainly as you seem to, because it seems much more complicated a question to me.


You're not reading carefully. Memory corruption in PKCS padding impacts every program that uses TLS.


Hold up. Regardless of whether the media appreciates this point, a memory corruption bug in a relatively restricted (compared to the wealth of JavaScript) protocol like TLS has a decent chance of being very difficult to exploit reliably on modern systems, probably including the NSS bug you mentioned. The NSA might be able to do it, but it's not magic, and failure has some possibility (though low) of being detected; a bug like the current one, an exploit for which is basically 100% reliable if set up correctly and is probably less likely to be detected if not, is going to be a lot more appealing, and sniffing data rather than executing code is probably usually good enough for its purposes. Many other adversaries won't even try difficult exploits, but can easily set up this one now that it's been disclosed.

Admittedly, the above leaves aside the elephant in the room - regular old browser bugs, which are vastly more common, more likely to be reliable, and for the aforementioned other adversaries, relatively easily leveraged with exploit kits. As I assume you know, browser sandboxes provide protection against renderer bugs on MitMed or attacker-controlled origins being used to hijack a HTTPS site (and OS-level sandboxes protect non-browser SSL applications), but sandboxes get popped all the time and one of the two targets of this bug, iOS Safari, is single-process, so meh.

I agree the coverage of this bug is overblown (...depressingly, since that's just saying that modern computing is drastically insecure); I strongly doubt it was introduced intentionally, or that the NSA, if it knew about it, was "relying on" it in the sense that the disclosure is a huge loss. But it's still a particularly good bug.


I don't disagree with any of this. We're now litigating a thought experiment I proposed upthread. In particular: I agree that straight up browser bugs are more common, more exploitable, and more appealing than NSS parsing bugs; I just liked the symmetry of comparing a TLS logic flaw to a different TLS parsing bug in a codebase people weren't as familiar with.


Sandboxing isn't going to save you here, because an arbitrary code execution bug (via, say, the TLS library) inside the same process that has sensitive data leaves everything fully exposed.


> You think this is the bug they're relying on?

Actually... yes. Memory corruption bugs are harder to exploit quietly, because if you get something wrong you'll get a crash instead of code execution. Missing certificate validation is much safer to exploit.


Tptacek, it is scary because of exactly that -- it is so simple. How could such a secure platform built by such a massive large-cap tech company do something so incredibly stupid in its most critical security stack?

If this trivial bug can happen and be pushed into production... good lord, anything can happen.

The foundation of our mobile economy has just been proven to be on very very shaky legs.


It would be nice if massive meant "more secure," but the real relationship is not between an organization's size and its security; rather, it's about how much that organization tries to minimize, or displace, risk. Very large organizations tend to not specifically care that something is provably secure so much as they want contractor X to be liable if something bad happens because of it.

Some organizations--and even though it's easy to mock Apple here, I do believe they're one of them--sufficiently appreciate that improving security minimizes risk, and so spend a decent amount of money on it. You could argue they failed here, but even Adam Langley who essentially "is SSL" at Google admits he's not sure they have a test that simulates an attack on a possible similar implementation error in their own code (though he does argue that such an error would have been caught in Google's code review process.)

In any case, an attention to security is certainly not something that is borne out of an organization's revenue or number of employees.


I'm under the impression that Apple prides itself on having a smaller number of very dedicated employees that are perfectionists.

I can totally see this being "innocent 2AM mis-judgement call by a single employee", judging from what my peers tell me of Apple's corporate culture. I do think that Adam Langley's suggestion that code review would help is plausible, but it merely means that more than one person has to make the same mistake in a judgement call. (It reduces the probability of such an error happening; it doesn't theoretically eliminate it.)


> I can totally see this being "innocent 2AM mis-judgement call by a single employee", judging from what my peers tell me of Apple's corporate culture.

https://support.apple.com/library/APPLE/APPLECARE_ALLGEOS/HT...

We begin therefore where they are determined not to end, with the question whether any form of democratic self-government, anywhere, is consistent with the kind of massive, pervasive, surveillance into which the United States government has led not only us but the world.

This should not actually be a complicated inquiry.

http://snowdenandthefuture.info/events.html


FIPS 140-2 certification isn't remotely an indication of correctness of code, for better or worse.

Take, for example, the implementation of Dual EC DRBG in the FIPS 140-2 certified OpenSSL module -- it was fatally flawed, and has never worked in practice. (It will be removed from the next version of the module in light of developments in the past year.)

https://lwn.net/Articles/578375/


> The foundation of our economy has just been proven to be on very very shaky legs.

Exactly. That sums up my feelings about it better than I could. Something so critical, something trivially easy to catch in a code review, was not caught. And the only scenario in which a code review might not have caught it: no code review. That's no code review for libssl.

If this is incompetence and not malice, it's incompetence of monumental proportions.


No. No. No.

Code reviews only mean your code is going through two sieves instead of one.

Of course it helps. But there is no guarantee of anything unless the reviewer is incapable of making mistakes, in which case you could just ask him to write the code in the first place.


> But there is no guarantee of anything unless the reviewer is incapable of making mistakes, in which case you could just ask him to write the code in the first place.

Did you look at the bug? I'll quote it here:

    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)  
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)    
        goto fail;  
        goto fail;  
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)  
        goto fail;  

Even somebody with basic programming skills can see that's wrong. And remember: this is libssl. Any checkins to that warrant thoroughness if not paranoia.
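
For anyone who hasn't traced it through: the second goto isn't governed by the if above it, so it always executes, the final hash and signature-verification steps below it are skipped, and err is still 0 when control reaches the fail label. A rough sketch of the control flow (illustrative names, not Apple's exact code; the real fail label also frees a couple of buffers first):

    err = hash_update(&ctx, &signedParams);   /* suppose this succeeds: err == 0 */
    if (err != 0)
        goto fail;
        goto fail;                            /* always taken, err still == 0 */
    err = hash_final(&ctx, &hashOut);         /* never reached */
    err = verify_signature(&hashOut, &sig);   /* never reached */
    fail:
        return err;                           /* returns 0 == "verified", though nothing was checked */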


In the CR tool I assume it may have looked more like this:

        if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)  
            goto fail;
        if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)    
            goto fail;
    -   if ((err = SSLHashSHA1.update(&hashCtx, &somethingElse)) != 0)
            goto fail;  
        if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)  
            goto fail;
Which is a little harder to see. Obviously you should always look at it in a side-by-side view (god I wish github would implement this) or at the resulting code, but people are imperfect.


Doesn't look like any (major) modification to surrounding lines.

    if ((err = ReadyHash(&SSLHashSHA1, &hashCtx, ctx)) != 0)
        goto fail;
changes to:

    if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) != 0)
        goto fail;
See: http://www.diffnow.com/?report=ob51k



We only see the diff between released versions, not intermediate commits. For all we know, Apple developers use a single-pane diff tool, where such bugs are easy to miss.


Damn fine point. I really don't think we can conclude this is malice; do all of our own bugs always turn out to be really tricky to spot? Do we never make utterly ridiculous mistakes along these lines?


Except that I don't think this was the original code, considering it appears to be a cut-and-paste of the same code in the same file. I commented on this: https://news.ycombinator.com/item?id=7286582


I seriously don't understand why so many people don't habitually use braces there...

  if (...) {
    goto fail;
  }
  if (...) {
    goto fail;
    goto fail;
  }
  if (...) {
    goto fail;
  }
tada, not actually a bug!


Which is fine, but in my experience flaws like this are often introduced via automated merges that could as easily have resulted in:

  if (...) {
    goto fail;
  }
  if (...) {
    goto fail;
  }
    goto fail;
  if (...) {
    goto fail;
  }
which produces the same bug. (Everyone's been shouting about "braces in single statement if clauses" as though they're an absolute fix; they're not, although they're a good idea in general. And yes, code review, better merge tools, yada yada.)


Agreed. That and blank lines between the if blocks to visually separate the code blocks as well.


Interesting, does this mean if you were using an updated SHA256 hash (as will be required soon for all EV certs) that the exploit would not have occurred?


> do something so incredibly stupid in its most critical security stack.

I disagree. We are human. We are not divine. We do stupid things: we can write a stupid infinite loop or an off-by-one error and the bug will live for a decade. I am sure you have written a program which can be easily exploited, and you know that it came down to just such a trivial bug.


Yes, the programmer is only human. But the VP of engineering should have had a budget and process that would not let this slip through.


Are you saying they should have extra people doing code review -- requiring two reviewers instead of one (I don't know the number, but assume that's the norm)? Or a more rigorous testing technique? I am all for it. Is this something they can easily discover through automated testing?


The VP of engineering should have invested in a good static analysis tool, which would have spotted this section in about five seconds. Failing that, turning on a sensible set of defaults in the compiler (warning about unreachable code) would have also drawn attention to it.

There are many many ways this should have been caught before it even left the building by numerous automated tools. Heck most modern IDEs will flag unreachable code in the editor so there is no real excuse for this. (Adding -Wunreachable-code to a sample project in Xcode immediately flags the next line).
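
If anyone wants to try it, a self-contained file reproducing the pattern is enough -- the following is just a sketch to feed your own compiler (function names made up), assuming clang, which is what Xcode uses:

    /* repro.c -- compile with: clang -Wunreachable-code -c repro.c */
    static int step(void) { return 0; }     /* stand-in for a hash/verify step */
    int verify(void)
    {
        int err;
        if ((err = step()) != 0)
            goto fail;
            goto fail;                      /* the duplicated line */
        if ((err = step()) != 0)            /* unreachable; the warning should land around here */
            goto fail;
        return err;
    fail:
        return err;
    }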

The programmer is human which is why automated verification tools exist.

(AppCode's default inspections will also flag the issue providing you run them)


I'd be really, really surprised if they didn't run this through static analysis. If they didn't, to be honest, then maybe people are right that "it's stupid they made such a stupid mistake."

I am not familiar with running static analysis and reading its output; I've only done very minimal undefined behavior sanitizer detection.


I haven't been keeping up, but a year ago Google wasn't using any static analyzer at all on Chrome. They didn't use a real memory access checker like valgrind even to load a blank page, although they came up with a weaker memcheck that mostly works.

Lax development is par for the course at these mega corps.


We've been using valgrind bots for a long time on the Chromium project - certainly for much longer than a year.

For example, according to the Wayback Machine, our sites page had an entry about it since 2009:

http://web.archive.org/web/20091123132855/http://www.chromiu...



I cannot comment on Chrome itself, but internal C++ development at Google has a pretty comprehensive array of checks, both static and dynamic.


Having extra people doing code review risks something that happens with physical products.

You take the experienced people off the shop floor, and call them inspectors. So now the quality of product coming off the shop floor is worse.

Ann is the first inspector in the chain. When she's busy (and remember, she will be because quality has dropped) she might be tempted to let a few things go because Bob, the second inspector in the chain, is bound to catch them. That's what he's there for.

Bob is the second inspector in the chain. Sometimes Ann really churns the product through. Luckily during those busy times he knows she's already inspected the stuff, so he only needs to give a 10% inspection.

So you have worse product with more errors and leaky inspection.

Ideally you'd have a system with skilled workers and self inspection. That's okay for aerospace (which pays well) but not so great for lower cost product.

Not quite sure how this transfers to coding.


Okay, good point, but I hope that if people take the job seriously they will be very careful. Though that's the ideal...

How about this:

The owner/peer of the module has to sign off, and randomly select two more reviewers. One must be QA and one is another programmer who works on the module. We could also pick someone at a "junior" level, but that's probably not going to work since Apple employs programmers who have some years of experience already. Or we can pick someone who isn't directly working on that module but has some qualification to do review -- mostly just asking "why are there two gotos here, why return -100 here?"

But I see the counterargument: they will just listen to the author if he's senior, or to the owner, who is also senior. Their words carry weight.


That's assuming they are full-time at inspection; another scenario is where one person does it in addition to their current responsibilities. The overall quantity of what's produced is reduced, but the inspection levels are higher.


This is a comment that seems to suggest that it's abnormal for dumb bugs to cause huge security problems. It is not. If the foundation of our economy is C software security, well, hate to break it to you, but...


It is absolutely abnormal for companies like Apple to release core security bugs this shallow that could have been easily discovered by straightforward unit tests and static analysis tools.

This is why it's a big deal.


The other reason why this is no big deal (anymore) is that the Snowden leaks have shown that the NSA has total control over all iPhones.

Why should we even bother talking about bugs like this anymore? Pure distraction.


Which is worse: the NSA having total control over all iPhones (citation? I obviously haven't been paying enough attention), or the NSA and all the (other) bad guys in the world having total control ...? Sure, they're both terrible, but I'd take the former over the latter.


> the Snowden leaks have shown that the NSA has total control over all iPhones

You mean: had total control over the original iPhones that they could get physical access to (back when jailbreaking was extremely simple and common).


Where does the "physical access" part come from? And if that was the case, why would it be impossible now?

One of the sources: http://www.forbes.com/sites/erikkain/2013/12/30/the-nsa-repo...


From the slide on your very link: "The initial release of DROPOUTJEEP will focus on installing the implant via close access methods. A remote installation capability will be pursued in a future release."

Basically how it worked was they jailbroke your iPhone and installed spyware on it. Is it quite likely that today, 7 years later, they have a remote 0day to do the same? Absolutely. But there's no proof that "the NSA has total control over all iPhones".


What strikes me is that the bug looks like a typo, but it implements a logic error; that logic error pretty much negates everything the library represents itself as doing, and what the library represents itself as doing is providing a foundation upon which signals security relies.

Suppose a contractor was hired by the NSA to write an exploit with equivalent function. How could it be crafted more cleverly?


The ease with which a bug can be exploited has everything to do with its impact. Exploiting buffer overflows is messy, requires a lot of effort, and it's detectable. Thus, you are more likely to use it for something special.

Apple's TLS handshake bug was trivially easy to exploit, entirely silently, in a fully automated fashion and with zero chance of detection and without leaving a trace. That's why it's a big deal.

If you are in the business of collecting massive amounts of data, this is exactly the type of bug that you would be using. SSL clients leak a large amount of data in the handshake (supported protocol versions, cipher suites, extensions, etc), allowing you to fingerprint them and detect the vulnerable ones on per connection basis. You then attack, safe in knowing that you won't be detected.

Once you get the victim's password (e.g., if you attack their email client), you can suck out all the data you want. Because you are the MITM, you can do it from their own IP address, too. All fully automated.
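
For a sense of what that fingerprinting works from, here is a conceptual sketch (field names are illustrative, not taken from any particular TLS implementation) of the plaintext ClientHello data an on-path attacker can read before deciding whether a client looks vulnerable:

    #include <stdint.h>
    /* Illustrative only: all of this is sent in the clear before any
       encryption is negotiated, and the combination is often enough to
       guess the client library, OS, and version. */
    struct clienthello_fingerprint {
        uint16_t client_version;          /* e.g. TLS 1.0 vs 1.2 */
        uint16_t cipher_suites[64];       /* which suites, and in what order */
        uint16_t num_cipher_suites;
        uint8_t  compression_methods[4];
        uint16_t extensions[32];          /* e.g. SNI, ALPN, supported curves */
        uint16_t num_extensions;
    };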


I think you're assuming that a platform can detect a buffer overflow. How exactly is an iPhone or iPad going to detect a buffer overflow?

I think we can all agree that owning the entire platform on a device is far worse than a MITM attack on a session, even if it's all the sessions on that platform using TLS/SSL. Executing code on the box means you can do many other things on that box, such as setting up a pivot point and keeping access to the platform as long as you want. If you can root the platform, then all bets are off.


Yes, we can agree that owning the entire platform is worse. But my point is that you can't ever achieve that without someone noticing. Think millions of devices. The malware would have to work flawlessly across a number of versions, interacting with many components installed, and even on jailbroken devices. It is very risky and intrusive, which makes it much less likely. An attack against TLS connection authentication is so _clean_ in comparison, that I can easily imagine various parties jumping to the opportunity.


You seem to be downplaying the seriousness of the bug to the point of making it seem trivial, and while I agree that there are many, much more serious types of bugs that 1. don't require active MITM, and 2. the press don't give proportionate attention, it is only "trivial" in the same way that Yahoo not using SSL for their webmail is "trivial," or the way a website leaking an authentication cookie over HTTP is. Only here it's not just Yahoo's email; it's (almost) all applications. (Not to mention that you can use these "secure" channels to deliver e.g. memory corruption exploits.)

If you don't have authenticity, the "secure" channel isn't worth very much.


If you're going to inexplicably speak directly to the author, and then do so by their first name, at least get the spelling right.


That's a perfectly understandable, and common, style of writing; it's not necessarily literally directed at the author. And surely the spelling is important whether it be the first name, or any other.


You're arguing that memory corruption is a worse vulnerability, which is true, but doesn't contradict the claim that the NSA could exploit this vulnerability to intercept traffic and add Apple devices to PRISM. So ultimately you fail to demonstrate that Gruber's post is "silly".


TLS validation bypass + automatic software update using TLS as authentication mechanism = RCE

I'm not sure why this is being classed differently.


Not quite. The iPhone enforces signature verification on writes to flash. A modern iPhone does not allow arbitrary base flash images to be written. Modern jailbreaks rely on kernel and user mode exploitation. Server auth bypass in SecureTransport TLS cannot lead to RCE via trojanized updates.


I think he's referring to app updates, not iOS updates. Also, desktop Mac OS X.


App installs and updates are still subject to code signing, and unless you jailbroke the device by some other means, any code that would be able to mess with the OS' public keys used to authenticate the signatures would have to be signed itself and presumably would have to come in the form of a signed OS update.


Sparkle Framework.


Though ... if anyone uses Safari on a regular basis and thus trusts their SSL to it, that would be Apple, yes? ;-)


Yep, you're right. PRISM is a process through which NSA obtains legal orders to demand information from companies through a federal law known as FAA 702. I tried to set the record straight on this last summer: http://news.cnet.com/8301-13578_3-57588337-38/

Since then, we've learned that FAA 702 orders add up to a tiny fraction of user accounts. The order of magnitude is 1,000 accounts per company per year. I really can't get too worked up about this; there are bad guys out there and that figure seems not immediately unreasonable. What we should get worked up about is bulk AT&T etc. fiber taps that vacuum up everything without any accountability, but, well, PRISM is a nice sexy name and everyone's attention spans are short and, yikes, isn't this confusing and BULLRUN and EDGEHILL are too hard to remember. Right?

No companies "joined" PRISM. The slides are likely referring to when NSA managed to write the conversion utilities to import FAA 702-obtained records into the PRISM database. But critical thinking is hard and life is short. Right?


Obviously, I trust you to know about these things, and you point to other bad examples which I'm not familiar with. In this context though, code execution isn't necessarily important (though I'm sure that's a dream scenario). The ability to snoop on assumed secure traffic is a pretty sweet starting point.


I take it you're familiar with threat trees? Any vulnerability is serious. It's just another vector that either realises a threat, or brings the attacker closer to realising a threat.


> it requires a specific (common, but not universal) set of circumstances to escalate it to code execution.

A man-in-the-middle attack does not require code execution because the data is already available.

In a sense, this bug is like having part of a keyboard sniffer already installed for you.


So even though this bug is easy to exploit and deployed on hundreds of millions of devices, it's not a big deal because it doesn't give you root?


Except this bug is passively exploitable at scale.


It's the NSA, of course they knew about the bug.

It probably wasn't the only trick in their book, but it was surely one of them (and a quite useful one at that).


I appreciate the technical insight into the impact of this bug versus other commonly found vulnerabilities. I don't disagree, but it seems to me this insight is tangential to the larger story. It's a bit of a blow to laymen's trust in Apple if they can easily understand why a bug occurred without having to dive into the arcane details known to researchers such as yourself.

To me, the interesting question brought up by this story is how soon the vulnerability was known to potential exploiters. Did they just have to have an appropriate set of black box tests rigorously applied to every major software release to spot this? That would, sadly, diminish my trust in the quality of Apple's engineering, though I don't see a reason to attribute it to an inside job without any evidence.


> And in terms of feasibility, this bug is less exploitable than the kinds of bugs that are disclosed in every platform every month; it requires a specific (common, but not universal) set of circumstances to escalate it to code execution.

What I'm afraid of is an attacker stealing my credentials. Code execution bugs are obviously game over for me. But now, with everything in the cloud, an attacker could get all I'm afraid of losing by fucking with my SSL connections. If they get access to everything I have on the web, the fact that my machine executed no malicious code locally is no consolation.


I could be misunderstanding, but are you responding to the point made by the timing of the NSA's "Announcement" of their abilities with Apple products? Jon himself admits his evidence leads him to believe #3 on his list. Is that silly?


Why is it that every time I read a post by you, tptacek, my troll detector goes off?

Kind of makes me want to investigate what work you and your company have done for the NSA.


> He was also drinking

Nice ad hominem, dude.


Let's not ascribe all deviousness to the NSA!

Sneaking this in, and being the 1st (or only) group to know about it, would be valuable to lots of amoral entities, from solo criminals up through state actors. Simply being bold enough to use a bogus signing-key (different than the one in the certificate you just showed) was enough to fool every iOS6/7 device for more than a year!

I'd personally hope that the NSA would be more subtle, and has a longer-term interest in ensuring Apple products seem secure. Other state actors or criminal gangs would be more interested in a short-term benefit, and then wouldn't mind that afterward Apple (and US tech in general) gets egg on its face. That could be icing on the cake.

I hope Apple does a deep-dive on how this happened – not just "5 whys" but "50 paranoid whys" – including looking at the background and personal security practices of the employee who made the change.

What if the person it's traced to does not recall making that change? What if they find evidence that systems designed to catch such a thing were themselves subverted, such as different code shown to a reviewer, or build tools set to suppress certain warnings? It might be a simple slip up... or a very, very deep compromise that takes weeks or months to unwind.

Letting my paranoid imagination roam wild, recall a few years ago when Google suddenly became very, very chilly towards China, after reports of some deep compromises. My hunch is that Google perceived a threat so extreme – like an attempt to hijack their auto-update mechanisms – that it resulted in an all-new level of lockdown and resentful cold war. What if this is a similar existentially-revelatory moment for Apple, with regard to some state actor?


Your conspiracy theory is a bit too complicated. Here is how it works in practice: Bad actor shows up (compromised employee, or vendor with system access, whatever). Bad actor notices that code reviews are not required or a reviewer can be buffaloed with a 10k line commit, and static analysis tools aren't a commit hook, or employees don't lock their computers, or he has access to scp a binary on to a production server, or dozens of possible exploits that everyone knows about and is planning to fix eventually...


Good points. Could it simply be that someone working on this code at Apple had their computer backdoored, and the attacker snuck the double goto into their working source dir, just waiting to be picked up on the next commit?

I remember reading an article some time ago about how Facebook played out this exact scenario, even going to such lengths as acquiring a 0day to make the simulation even more real.


Unless I'm misunderstanding exactly what PRISM is, this bug seems like it would only be useful for OTHER nsa data collection programs, not PRISM.

Unless, of course, we are now using the term PRISM to refer to all NSA activities that we don't agree with.

Gruber should know better.


We are definitely using the term PRISM that way.


You might be, but Gruber wasn't -- the evidence he linked to was a powerpoint slide which was specifically about PRISM. Which means that it is not relevant when trying to determine if NSA introduced this vulnerability or not.


To everyone claiming "it looks like a merge gone bad": That's a bit hard to believe. There are no other changes around the extra goto, so it's pretty weird for a duplicate line to appear like that out of nowhere. It's also incredibly visible in any diff viewer, so how such a checkin could have gone in without being noticed is just unfathomable.


It's hard to believe if you pay no attention to bugs caused by code refactorings, but easy to believe if you, for instance, saw the OpenBSD IPSEC team refactor authentication checking on IPSEC packets out of the kernel by accident, in almost the exact same manner as this bug.


No, because the OpenBSD case was NSA as well (via NETSEC, who worked on the OpenBSD IPSEC stack, and have since admitted to sneaking backdoors in). Exact same MO, 15 years later!

(I don't actually believe this, but it was too convenient and amusing not to call out.)


I thought the NETSEC conspiracy theory was Angelos Keromytis working for the FBI.


Where's the commit that introduced the extra goto? I haven't seen it yet.


As outsiders we can't see commits, but the next best thing is a diff between the public source code releases: https://gist.github.com/alexyakoubian/9151610/revisions


Pretty yak-shavy. Looks like some symbols were renamed and many of the APIs were (probably pointlessly) mutated. In total, unreviewable.

But as you say, this may not have been the internal diff seen by someone at Apple. They may have seen individual, focussed patches that did single things like changing from the 2-argument to the 1-argument SSLFreeBuffer, another patch for changing selectedCipherSpec to selectedCipherSpecParams (hello, yak shaving), renaming noErr to errSecSuccess, and so forth.

The thing is, in any of those changes, the extra "goto fail;" line would have stuck out as irrelevant. In a gigantic 500-line delta the reviewer's eyes may have been glazed over by the time they got to that one.

Assuming anyone reviews code at Apple, of course.


I'm shocked and find that hard to believe; even without separate commits, it's not a huge single diff and it should really not be too much to ask for a proper review.

You'd think one would be a little less sloppy when hacking on the very core of the SSL libraries being deployed on hundreds of millions of devices.


> You'd think one would be a little less sloppy when hacking on the very core of the SSL libraries being deployed on hundreds of millions of devices

Hahahahahaha.


Line 631 in the second column for anyone wondering.


It could have been a merge between public releases.

My money would be on a copy & paste error, but there is no more evidence for that than the merge error theory.


> As outsiders we can't see commits

Exactly. I was going to refute your point but you did it for me.


It's not the first time Apple has had a bug verifying the hostname of a certificate.

In June 2010 I reported that Safari 4 did not check the last letter of the hostname, so a certificate for example.de was accepted when accessing example.dk, and it would accept a cert for example.co.ug when accessing example.co.uk.

The real problem is that Apple did not add unit testing when they fixed the problem in 2010. If they had, the goto bug would have been found.
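
For what it's worth, that 2010 hostname check is exactly the kind of thing a tiny table-driven test nails down. Here's a sketch with purely hypothetical helper names (Apple's real test infrastructure and hostname-matching API aren't public, and real matching also has to handle wildcards, SANs, case folding, etc.):

    #include <assert.h>
    #include <stdbool.h>
    #include <string.h>
    /* Naive stand-in implementation, for illustration only. */
    static bool hostname_matches_cert(const char *hostname, const char *cert_name)
    {
        return strcmp(hostname, cert_name) == 0;
    }
    int main(void)
    {
        assert( hostname_matches_cert("example.de",    "example.de"));
        assert(!hostname_matches_cert("example.dk",    "example.de"));     /* the 2010 last-letter bug */
        assert(!hostname_matches_cert("example.co.uk", "example.co.ug"));
        return 0;
    }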


The "person" who made the "error" in the core code could have made a similar "error" in the unit test.


Yes, that would be quite the "coincidence", wouldn't it?


Why would the NSA put a backdoor in an open source component of iOS and OSX when presumably, it would have been just as easy to put it in the closed source part, where it would be much less likely to be found and even less likely to ever be publicly exposed?


Wrong. The only 3 options are these:

1) The NSA knew about it, and exploited it.

2) NSA itself planted it surreptitiously.

3) Apple, complicit with the NSA, added it.

Such a bug would've never lasted this long without the NSA knowing about it. Even if they didn't find it themselves, which seems unlikely, they would've bought it from the exploit black market long before now. There are people who do nothing all day long but try to find exploits in the iPhone.

And as we know, if NSA can do something, NSA will do it. So they most definitely took advantage of that "bug" and everyone's data that they could get through it.

To me the more likely scenario is still that Apple cooperated with the NSA here. The "adding to PRISM" at the same time this bug appeared is way too coincidental.


Further to 1) if the NSA fulfilled its mission and actually helped secure the country against enemies, they would have immediately told Apple about the security hole. Hence we're down to 2) and a modified 3) Apple, complicit with the NSA, added or kept it.


> I see five levels of paranoia: ... 5. Apple, complicit with the NSA, added it.

While it seems possible that Apple conspired with the NSA to add a security hole in SecureTransport, I doubt it. According to sources in the article, this bug was introduced in iOS6; and I haven't heard a mention of it until yesterday, despite it being open-sourced (http://opensource.apple.com/source/Security/Security-55471/l...).

Since nobody was raging on the internet about this bug, I see it as a good-faith effort by Apple to fix a bug that they've just discovered.


Consider this: it was only at the end of December when Appelbaum showed some documents about iPhones being hacked by the NSA, and it made a pretty big splash in the media. I think it even forced Apple to respond at the time. Especially if this was open sourced, and everyone could see they fixed it, they wouldn't immediately try to plug the bug/backdoor after that piece of news came out, especially with such a weird bug.


Those documents were about the NSA being able to plant malware on an iPhone (1st generation) when given physical access. I would say it has nothing to do with this TLS bug.


I also would go at least as far as #3 (The NSA knew about it, and exploited it.). And it does seem a reasonable explanation for the prism slide update.

I only just heard about this bug a couple of hours ago. I cannot for the life of me fathom how this code was not tested. Surely this is a basic function of the security code. You can test for it with a simple bit of JS. If something as important and easily testable as this is broken, what other subtle issues are down there?

I, for one, no longer trust Apple's software (I know that seems a bit dramatic, but I just don't enjoy using it anymore, on any level). Intentional or not, this has all been handled very badly. Massive security hole, quietly releases iOS update, doesn't offer update for their freaking desktop OS.


In theory yes. But Google Engineer Adam Langley wrote:

> A test case could have caught this, but it's difficult because it's so deep into the handshake. One needs to write a completely separate TLS stack, with lots of options for sending invalid handshakes. In Chromium we have a patched version of TLSLite to do this sort of thing but I cannot recall that we have a test case for exactly this. (Sounds like I know what my Monday morning involves if not.)


> The Prism program collects stored Internet communications based on demands made to Internet companies such as Google Inc. and Apple Inc. under Section 702 of the FISA Amendments Act of 2008 to turn over any data that match court-approved search terms.

https://en.wikipedia.org/wiki/PRISM_(surveillance_program)

"Demands made to the companies". It looks to me like PRISM was a voluntary thing, even if the companies didn't know it was named PRISM.

If that's the case, and Apple's "bug" happened at the same time with their voluntary addition to PRISM, then the bug was actually a backdoor, and Apple knew about it.


Prism is as precisely as "voluntary" as the cops showing up with a search warrant at 5am and asking you nicely to open up "voluntarily", because, well, they have a nice 5.11 MiniRam Breaching Tool 50091 and your door looks very pretty and it would really be a shame to have to knock it off its hinges. Yes, that would be a real mess.

See what I wrote on this last summer: http://news.cnet.com/8301-13578_3-57588337-38/


I thought the date NSA was added to prism simply had to do with Steve Jobs dying. It's the kind of thing I could see him refusing to cooperate with.


The date has to do with when Apple had cloud services that more than a few people outside the US used. There's no point simplifying data requests to a provider that doesn't have any data they want.


Jobs died in October 2011


Where is the patch for OS X Mavericks, Apple? Patches aplenty for iOS, yet it appears Safari is still vulnerable.


Exactly. My MacBook is now for sale. Back to my Lenovo T400.

Trust = gone.

Apple's policy of silence and denial doesn't cut it any more. At least Microsoft uses proper threat modelling, disclosure and mitigation processes, and documented KB articles, and has literally tonnes of QA and test capacity. Not joking: they have tens of thousands of machines in their test labs.


Deafening silence.


The thing that really bites about this is the idea that the NSA could have taken advantage of the vulnerability rather than alerting Apple to it.

The idea that they are as fiercely hostile to the security of millions of Americans as taking advantage of the vuln would require is absolutely terrifying.


I am not sure such a post adds any value to our current state of affairs. If this is not the bug they use to exploit iOS users, there are other bugs. So what is the point of this article? Are we going to do a witch hunt and blame the developer who overlooked the diff, or the developer who happened to be an NSA mule?

I get it. NSA is behind everything. And at the end of the article, OP says "[so] if this bug, now closed, is not what the NSA was exploiting, it means there might exist some other vulnerability that remains open." So why are we writing this article if there are other bugs?

Okay, I will give it credit and think about it: anyone could be an NSA mule. Anyone can contribute to your OSS project and inject some clever code into your codebase. And we usually overlook security and just accept anyone's PR as long as the code makes sense. Yeah, think about someone cleverly injecting a line into your Docker release or your OpenStack release last month.

So, the point is: be alert? Be aware of spies everywhere? This reminds me of a scene in the movie Eden. Bob, the corrupt federal marshal, who was actually a pimp, once said to the other marshals: "So who are you [looking for human trafficking mules] looking for? The answer is everyone."

Hence, the main point is: trust is destroyed.


My own conspiracy theory is that these times match up with when they adopted some form of widespread XMPP -- Google Talk, iMessage (2012), MS Messenger, etc. -- and it aligns with the recent talk of re-doing encryption in federated XMPP: https://github.com/stpeter/manifesto

XMPP isn't just for chat, it's for video, server to server message passing, status updates, etc...


From the article:

> NSA itself planted it surreptitiously.

> Apple, complicit with the NSA, added it.

Neither seems very likely given how visible the goto is. Something just a little more subtle like a semicolon at the end of the if() clause might look better.

Of course, given how glaring it is, it could be a case of plausible deniability: would we do something so stupid, so unsubtle?


"A sneaky bug? Only the NSA could have pulled that off! An obvious bug? Only the NSA would be so brazen in trying to throw us off their track!"

If the witch had led an evil and improper life, she was guilty; if she had led a good and proper life, this too was a proof, for witches dissemble and try to appear especially virtuous. After the woman was put in prison: if she was afraid, this proved her guilt; if she was not afraid, this proved her guilt, for witches characteristically pretend innocence and wear a bold front. Or on hearing of a denunciation of witchcraft against her, she might seek flight or remain; if she ran, that proved her guilt; if she remained, the devil had detained her so she could not get away.[1]

If you argue that an obvious bug is evidence of NSA involvement, then you must also believe a subtle bug would be evidence against NSA involvement. You can't have it both ways.

1. From Conservation of Expected Evidence http://lesswrong.com/lw/ii/conservation_of_expected_evidence...


> If you argue that an obvious bug is evidence of NSA involvement, then you must also believe a subtle bug would be evidence against NSA involvement. You can't have it both ways.

Thanks for taking the argument to a mathematical plane. My reasoning was the following:

P(is-involved) = P(is-involved|subtle)P(subtle) + P(is-involved|subtle)P(~subtle)

Now, we know that 0 < P(is-involved) < 1. Assuming that there have been past instances of their involvement in both subtle and unsubtle bugs, I think my argument, where I attach a probability to their involvement in both cases, was fair. Or do you think I missed something?

Note: I think I got it. Essentially your point is that, given that I am using the unsubtlety of the bug to argue both ways, this variable provides us no useful information and can be removed from the discussion.

Thanks for pointing out my muddy reasoning.


Shouldn't it be

P(is-involved) = P(is-involved|subtle)P(subtle) + P(is-involved|~subtle)P(~subtle)


“Never ascribe to malice that which is adequately explained by incompetence.”

Just for the record and off topic, this is known as Hanlon's razor. It's usually attributed to Robert J. Hanlon, not Napoleon Bonaparte.


While Apple may not have intentionally added this (and I'm a bit skeptical -- you have to shut off unreachable-code warnings and your indentation tooling to let this through), they almost certainly intentionally LEFT this in.

This is such an easy exploit that somebody found it and was using it. There is no way that this was not spotted by Apple in the field.


I highly doubt that this particular bug was due to a shadowy government agency or that type of collusion. It's more likely that someone fucked up; it can happen to anyone. You have to look at it from a game-theoretic perspective. Assuming there is collusion between Apple and NSA, it would at least __look__ intentional.


I'm so sick of all this conspiracy theory nonsense. If PRISM means Apple supplies data directly to the NSA there's no need for a MITM attack. I mean what's the argument? Plausible deniability?

> Never attribute to malice that which is adequately explained by stupidity.

A programmer screwed up. It happens every day.

I'm reminded of Chesterton's madman:

>The madman’s explanation of a thing is always complete, and often in a purely rational sense satisfactory. Or, to speak more strictly, the insane explanation, if not conclusive, is at least unanswerable; this may be observed specially in the two or three commonest kinds of madness. If a man says (for instance) that men have a conspiracy against him, you cannot dispute it except by saying that all the men deny that they are conspirators; which is exactly what conspirators would do. His explanation covers the facts as much as yours. Or if a man says that he is the rightful King of England, it is no complete answer to say that the existing authorities call him mad; for if he were King of England that might be the wisest thing for the existing authorities to do. [...] Nevertheless he is wrong. But if we attempt to trace his error in exact terms, we shall not find it quite so easy as we had supposed. Perhaps the nearest we can get to expressing it is to say this: that his mind moves in a perfect but narrow circle. A small circle is quite as infinite as a large circle; but, though it is quite as infinite, it is not so large. In the same way the insane explanation is quite as complete as the sane one, but it is not so large. [...] If we could express our deepest feelings of protest and appeal against this obsession, I suppose we should say something like this: "Oh, I admit that you have your case and have it by heart, and that many things do fit into other things as you say. I admit that your explanation explains a great deal; but what a great deal it leaves out! Are there no other stories in the world except yours; and are all men busy with your business? Suppose we grant the details; perhaps when the man in the street did not seem to see you it was only his cunning; perhaps when the policeman asked you your name it was only because he knew it already. But how much happier you would be if you only knew that these people cared nothing about you! How much larger your life would be if your self could become smaller in it; if you could really look at other men with common curiosity and pleasure; if you could see them walking as they are in their sunny selfishness and their virile indifference! You would begin to be interested in them, because they were not interested in you. You would break out of this tiny and tawdry theatre in which your own little plot is always being played, and you would find yourself under a freer sky, in a street full of splendid strangers"


So you're still deriding ideas like this as "conspiracy theory nonsense," even after extensive documentation that the NSA is, in fact, surreptitiously introducing security holes in software?

Personally I've adjusted my Bayesian priors a bit.


Extensive documentation does not exist. It's all conjecture and speculation.

We know the NSA spies on foreigners, we know they have relationships with tech companies to make that spying easier and therefore have access to all that information. We don't know the extent of domestic use. We know they collect phone metadata. We know they have infiltrated software abroad; they deny having done it domestically. There's just a whole lot we don't know.

Here's something I do know: the government is not infallible. In fact, just the opposite. Sure Snowden revealed a lot about the NSA spying programs, but he also revealed another salient fact: their background check process was a joke. Like every other government agency they display an incredible degree of incompetence.

Sleeper agents at Apple inserting bugs into code in order to bypass security checks as part of some grand scheme to infiltrate the communications of millions of Americans... it's not even a good idea on the face of it, but even if they tried to pull this off they'd screw it up somewhere along the way. Human beings make mistakes. You guys are giving way too much credit to the NSA.


> extensive documentation that the NSA is, in fact, surreptitiously introducing security holes in software?

I've seen speculation to that, but not "extensive documentation", at least from the perspective of simply breaking all hardware.

Buying descriptions of existing vulnerabilities is not "introducing" them. Nor is haranguing companies into leaving in known vulnerabilities (though that is bad enough).

Even things like asking companies to use Dual EC DRBG are not "introducing security holes" in the way we understand it, as Dual EC DRBG is actually secure against all adversaries except the NSA.

Like, I'm re-reading the Guardian article now and it talks about the NSA "using supercomputers to brute-force encryption" as a strategy... hardly a jumping testament to the massive brokenness of the Web.

Going further to read the actual list of NSA practices helps confirm this a bit too.

For starters if you look at the description of their SIGINT Enabling Project it states that "To the consumer and other adversaries, however, the system security remains intact." (emphasis mine), which seems to be hinting at Dual EC DRBG (or at least, Snowden doesn't seem to have leaked any other NSA technologies that are broken only to NSA but resistant against other adversaries).

The one blurb I could find about deliberately introducing vulnerabilities had a very important caveat which everyone leaves out: "Insert vulnerability into commercial encryption systems, IT systems, networks, and endpoint communications devices used by targets" (again, emphasis mine). The Guardian somehow left that out of their description of that bullet, I'm sure it was just an oversight.

In other words this is not mass introduction of simple exploitable bugs but a seeming formalization of the types of corporate-government partnerships that led to things like the Siberian pipeline sabotage, to be used in specific targeted operations. Indeed the Guardian seems to confirm that in their description of the NSA Commercial Solutions Center.

Even Snowden has spoken up in support of the concept of targeted operations by U.S. intelligence agencies, so I'm not sure why this should be surprising; it's the kind of stuff we expect the U.S. to do to gear going to Iranian nuclear weapons facilities or Syrian C2 bunkers.

So even if we give the NSA credit for surreptitiously breaking crypto around the world, this particular method does not appear to match their style or even their own internally-held methods. It seems like the kind of thing NSA would take advantage of without revealing it, but not the kind of thing they'd intentionally add to a non-targeted iPhone. And, if they did add it, they'd add it to the flashed image, not the source, à la "Reflections on Trusting Trust".


It's a lot more than PRISM. Have you watched "To Protect and Infect Pt 2"? Turns out the conspiracy theorists didn't go far enough.


>If PRISM means Apple supplies data directly to the NSA there's no need for a MITM attack.

Why collect data once when you can do it three times, to verify all your other collections are working?


This just reminds me of the slides from the "NSA operation ORCHESTRA: Annual Status Report" talk at FOSDEM, Brussels, this year -- see the video here: http://t.co/8WSSjOFrLk


No fix for iOS 6.x on devices capable of iOS 7 either. So now I have the choice of no SSL or destroying my phone's UX by upgrading to iOS 7.x :-(


Another explanation is that it was an act of vandalism / code graffiti by a script kiddy or perhaps a real hacker with more skill than maturity.


Could you explain how that's a plausible explanation? I don't see this being the kind of thing a vandal would do.

Is external (hacker/SK) influence even possible? I know Apple releases the source, but do they actually take patches from the community?


So is there evidence yet of this out in the wild? And was it discovered by Apple or an independent researcher?


And this is why you should always put curly braces around your if statement body.




