NSA Said to Exploit Heartbleed Bug for Intelligence for Years (bloomberg.com)
323 points by taylorbuley on April 11, 2014 | 174 comments



Bloomberg really puts its bias on display:

> The Heartbleed flaw, introduced in early 2012 in a minor adjustment to the OpenSSL protocol, highlights one of the failings of open source software development.

And its discovery and resolution highlights one of the advantages of open-source software development.


> And its discovery and resolution highlights one of the advantages of open-source software development.

I wouldn't say that its discovery (two years later) says anything good about open source development.


I might be completely wrong but I don't see much difference between open source and closed source in this case. If I were biased I might say that it might have taken substantially longer to discover such a bug in closed source software, but that seems about as sensible as saying that open source has fewer security bugs because more people look at the same code.

In the end software is written by people, and people make mistakes. I'm pretty sure there is a lot of software, be it open or closed source, that has had absolutely terrible security bugs. Judging open source as a whole by looking at a single project sounds a little bit like overgeneralization. That said, they are obviously in need of help; I hear a lot of complaints from people about the OpenSSL code.


And I also wouldn't say that the existence of a bug was caused by the license that was used. It's not like me keeping all the code to myself would make me a better programmer.


[deleted]


>> Well, if you were keeping all the code to yourself you presumably wouldn't be accepting poorly reviewed patches from random people.

How does this follow? You can just as easily hire "random people" to make mistakes as you can accept mistakes from people you don't pay. The problem is poorly reviewed code either way, not the license.


[deleted]


The same amount of time: it was apparently found with a fuzzer.


I didn't know about fuzzers before this whole imbroglio -- not denoted as such, at least.

If you know `crashme` you already know one fuzzer, which is "intended to test the robustness of Unix and Unix-like operating systems by executing random machine instructions." See https://en.wikipedia.org/wiki/Fuzz_testing
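The core idea is small enough to sketch. Here's a toy random fuzzer in C -- my own illustration, not anyone's real tool; parse_record() is a made-up target with a planted magic-byte crash just so the loop has something to find:

    #include <stdlib.h>
    #include <time.h>

    /* Made-up target: "crashes" on one rare two-byte prefix -- the kind
       of needle random testing eventually stumbles into. */
    static void parse_record(const unsigned char *buf, size_t len) {
        if (len >= 2 && buf[0] == 0x18 && buf[1] == 0x03) {
            volatile int *p = 0;
            *p = 1;  /* planted bug; a real target just has real ones */
        }
    }

    int main(void) {
        unsigned char buf[256];
        srand((unsigned)time(NULL));
        for (;;) {  /* run under a debugger or ASan; a crash is a finding */
            size_t len = (size_t)(rand() % sizeof buf);
            for (size_t i = 0; i < len; i++)
                buf[i] = (unsigned char)rand();
            parse_record(buf, len);
        }
    }

Real fuzzers add mutation of valid inputs, coverage feedback, and crash triage on top, but the loop is the same.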

As a good app-sec'er you seem to need to be deeply steeped in fuzzer lore. Matasano: "We'll have you write a fuzzer. Everyone here writes fuzzers." http://www.matasano.com/careers/

(I'm obviously not replying to tptacek, just highlighting a bit. ... And basking in the good glow, yes.)


What about the relative ease and speed with which the bug was fixed? Fuzz testing could certainly find the bug in closed source software, but patching it is a different story, especially if the person or group that controls the source code is slow, uncooperative, or extinct.


It was definitely easier to fix because it was open.


However, the software being open source also aided those who may have rushed out to take advantage of still-vulnerable systems. It's a mixed bag both ways; let's not put blinders on for ideological reasons.


Ideology is the most important reason to choose open source and free software -- the ability to inspect and learn from the code is paramount, regardless of whether the software is superior or inferior to some other closed product.


I'm not so sure about the fuzzer. We have several good symbolic fuzzers around, like Fuzzgrind, but not so many open symbolic bug finders, like BAP, MAYHEM, EXE, forensic, CBMC. It could also be that they just added symbolic annotations as with Frama-C (as done with PolarSSL) and found bugs that way. Neither openssl nor gnutls is in shape to add something like this by themselves.

It's much easier to come up with workable exploits in decent time with symbolic input and the STP or Z3 solver than with simple fuzzing.
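To give a flavor of what "symbolic input plus a solver" buys you, here's a toy I put together against the Z3 C API -- the constraint is a hand-translated version of the heartbeat overread branch, not the output of any real symbolic executor:

    #include <stdio.h>
    #include <z3.h>

    int main(void) {
        Z3_config cfg = Z3_mk_config();
        Z3_context ctx = Z3_mk_context(cfg);
        Z3_solver s = Z3_mk_solver(ctx);
        Z3_solver_inc_ref(ctx, s);

        /* 16-bit symbolic payload length, as in the heartbeat message */
        Z3_sort bv16 = Z3_mk_bv_sort(ctx, 16);
        Z3_ast payload = Z3_mk_const(ctx,
            Z3_mk_string_symbol(ctx, "payload_len"), bv16);
        /* concrete size of the record actually received, say 64 bytes */
        Z3_ast rec_len = Z3_mk_unsigned_int(ctx, 64, bv16);

        /* the bad branch: payload_len > rec_len - 2 means an overread */
        Z3_solver_assert(ctx, s, Z3_mk_bvugt(ctx, payload,
            Z3_mk_bvsub(ctx, rec_len, Z3_mk_unsigned_int(ctx, 2, bv16))));

        /* sat: the solver hands back a concrete length that triggers it */
        if (Z3_solver_check(ctx, s) == Z3_L_TRUE)
            printf("%s\n", Z3_model_to_string(ctx,
                   Z3_solver_get_model(ctx, s)));

        Z3_solver_dec_ref(ctx, s);
        Z3_del_context(ctx);
        Z3_del_config(cfg);
        return 0;
    }

A symbolic executor does the hard part -- walking the program and collecting those branch conditions automatically -- but the solving step looks like this.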


One of the claimed advantages of open source is to produce higher quality, more secure software.

"As a result, the open source model builds higher-quality, more secure, more easily integrated software. And it does it at a vastly accelerated pace, often at a lower cost." - http://www.redhat.com/about/whoisredhat/opensource.html for example.

As an ideology, comparing this OpenSSL happening to the stated goals of strong proponents of open source, it is a complete failure of that ideology.

That it was then fixable is an advantage, but that's a separate thing. It doesn't offset the way a really common error in a change to a really common and important library sat unnoticed for two years ... until a security researcher paid by a private company found it.

You could argue that the bug was found and fixed and the software is now more secure :: the system works, this is how it works.

But this is not how open source proponents present it as working.

NB. This doesn't mean closed source is better. People don't push closed source as an ideology in the same way at all. But do compare open source to the claims made by open source advocates, as well as comparing it to the ease of fixing and finding bugs in closed source software.


This might seem like a nitpick, but I think you're confusing two distinct camps here: let's call them "Open Source" and "Free Software".

Open Source advocates -- such as Eric S. Raymond -- believe that it is superior on technical grounds, the "many eyes make all bugs shallow" theory. They tend to disregard ideology and instead believe OS is the rational decision of those who want technically better software. In my opinion this is not always true, as Heartbleed shows (but then again, who knows how many undisclosed vulnerabilities there are in existing proprietary software! At least now we know about Heartbleed!)

Free Software advocates -- such as Richard Stallman and the FSF -- believe it's a matter of ideology. This has little to do with technical quality. They say "sure, if the software is better that's a plus, but freedom is a matter of principle to us".

I'm not a zealot but I tend to side with the Free Software camp, and I don't see how the OpenSSL fiasco undermines their ideology.

PS: I also take issue with the "paid security researcher" remark. Absolutely nothing in either Free or Open Source excludes paid personnel or private companies from the equation. Hobbyist programmers are not the only ones accepted. I don't understand why you see this as extraordinary.


> PS: I also take issue with the "paid security researcher" remark. Absolutely nothing in either Free or Open Source excludes paid personnel or private companies from the equation. Hobbyist programmers are not the only ones accepted. I don't understand why you see this as extraordinary.

I never meant to imply paid researchers should not work on it. What I meant is:

The whole world can see every line of code in Linux. This is one of the reasons Linux is more secure than other operating systems and why open-source software overall is safer than closed software. The transparency of the code ensures it’s secure. - Linux Foundation executive director Jim Zemlin

http://venturebeat.com/2013/11/26/linux-chief-open-source-is...

What happened in the case of Heartbleed? The security flaw was found by paying someone to work on the security.

I meant to mock this: "The transparency of the code ensures it’s secure." Nobody cared to look for or fix that bug because OpenSSL was important, because it was widely used, because it was interesting, because it was open source, because it was a puzzle, or just for something to do one rainy day. Only when someone was paid to do it did it get done. Therefore the "open source is more secure" claim is nonsense.

It's more secure because someone was paid to work on it. The claim that "open source did it" is snake oil.


In this case, you're right. I don't know that I would make a general rule out of it, though. Maybe in general open source helps, but in this case (for several reasons, including that OpenSSL seems to be a barely understandable mess, or "written by monkeys" as some put it) it didn't.

I agree that thinking "open source magically makes software better and more secure" is absurd. I also agree that Jim Zemlin's statements (in general, in that article) are more of a PR thing than accurate statements.


I do see the distinction between open source and free software, but I'm not sure it applies here. This bit of your comment particularly:

> They tend to disregard ideology and instead believe OS is the rational decision of those who want technically better software.

If ESR is writing software in the way he thinks results in better software ... how come I know who he is? Because he's not doing that, he's doing more than that.

I don't know if he prefers working in the mornings or evenings. I don't know how he backs up his work. I don't know whether he prefers a laptop screen or an external display or who he trusts to contribute to it - presumably he made rational decisions there for the benefit of his software, and didn't feel the need to tell the world all about it.

Yet when it comes to open source, he does more than "choose the best option for his software and use it", he also: spreads the word, advocates for it, tries to convince others. Wikipedia says "Raymond was for a number of years frequently quoted as an unofficial spokesman for the open source movement."

By contrast, there is no comparable "closed source movement" which organizes conferences and runs websites and talks to journalists and advocates in favour of closed-source development because it makes software better. There's no popular closed-source unofficial spokesperson I can point to whom you recognise.

I argue that people pushing "open source produces better code" are making that an ideology of its own, separate from anything to do with Stallman and 'free as in speech'.

And it's that ideology of 'Open Source leads to technically better software' which Heartbleed is showing up as weak and oversold. You agree that it's "not always true". My point is that Open Source proponents make it seem like it should be "always true", like there's a very strong case for it. And I say that Heartbleed shows there isn't.

'Everyone' knows about bounds checking in C. Everyone knows about not trusting input from a remote machine without verifying it. Everyone knows about being extra tip-toe careful around cryptography software because it's of high importance and brittle. What did OpenSSL do about it? Nothing.
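For reference, the whole class of mistake looks like this -- a simplified sketch of the pattern in my own words, not the actual OpenSSL code:

    #include <string.h>

    /* The peer sends a record and claims a payload length inside it. */
    void heartbeat_vulnerable(const unsigned char *rec, size_t rec_len,
                              unsigned char *out) {
        size_t payload_len = (size_t)(rec[0] << 8 | rec[1]); /* attacker-controlled */
        memcpy(out, rec + 2, payload_len);  /* overreads rec whenever
                                               payload_len > rec_len - 2 */
    }

    void heartbeat_fixed(const unsigned char *rec, size_t rec_len,
                         unsigned char *out) {
        if (rec_len < 2)
            return;                         /* drop malformed records */
        size_t payload_len = (size_t)(rec[0] << 8 | rec[1]);
        if (payload_len > rec_len - 2)
            return;                         /* the missing bounds check */
        memcpy(out, rec + 2, payload_len);
    }

One comparison against the received length is the entire fix.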

Almost as if Open Source made no difference at all, and what matters about developing trustworthy software is people, welcoming communities, a thousand cultural decisions setting and holding patterns of procedures to systematically catch common errors in C, common errors in network code, common errors in security, common errors in memory management; add regression tests, encourage documentation, add compliance tests, etc. etc.

Open source does not automatically lead to better software.

And many people strongly imply that it does, automagically, lead to better software.

(The fact that OpenSSL still became popular and widely used despite being a mess is if anything a win for 'free as in speech' - anyone can do any due diligence they want on the OpenSSL code, make their decision to use it for any project, fork and patch and modify it as they go. On that front it's a massive win for that ideology).


I agree with you that open source does not automatically lead to better software.

Now that you've clarified you weren't talking about RMS's "free as in speech" ideology, I retract my nitpick.

I wouldn't say open source doesn't matter in regard to quality and security, though, but I agree with you on the importance of the other factors you mention. I do wonder what would have happened with a similar bug/exploit in a piece of commercial software. Who knows? Maybe it's already there and we simply don't know about it, and there are fewer people looking at it.


Security researchers do find and report buffer overflows in closed source software, which then get fixed by the manufacturer. It happens.

But yes, I also wonder about the balance of bugs in similar open/closed source software.


This is a Washington-based reporter, meaning he's most likely more policy-oriented than technology-oriented (he got his start chasing stories on the Hill, not in Silicon Valley).

It doesn't excuse a misunderstanding of his subject matter, but an accusation of bias is likely an over-analysis.


In fairness, the "public good" nature of an Open Source security library does create an incentive problem that's well known in the literature, and is probably related in some way to the failure here.

You might have seen the story about how it gets $2000 in donations a year, which is supposed to be enough to marshal the expertise to prevent this kind of thing.

https://news.ycombinator.com/item?id=7575210


Here we observe a side effect of the NSA/GCHQ operating in a manner which always gives offensive capability precedence over the defense of civilian systems.

In case you haven't made the time yet -- ACLU's interview of Snowden at SXSW was excellent and dives into the implications of this: https://www.youtube.com/watch?v=UIhS9aB-qgU

On another (ironic) note this PSA from the US government is about 2 years late: http://www.bbc.com/news/technology-26985818


I don't know if Heartbleed could reach this point, but I think probably the only possibility for getting average citizens up in arms about this kind of thing is for them to start seeing major personal detrimental effects (like oops, all my email has been stolen and deleted and my bank account's empty), and then learn that the NSA could have easily prevented it if they weren't having so much fun being super-hackers instead.


I don't think average people (so to speak) really care about their email.


I don't know about that. Yesterday, my sister (a person who has no interest in tech or government, and is downright afraid of both) asked me about Heartbleed and stated it was the topic of discussion during breaks at work. I was pleasantly surprised.


You know, even my bank has a special warning up about Heartbleed. I also saw it as #1 on BBC News at some point, fed to my Android phone's news, etc. It's a surprisingly big deal.


They don't until either they lose all of their email, or they lose their email account.


You are right, I could have been more specific, they do often care about maintaining control of their account. I don't think they care very much about their message history though, and there is at least some segment of the population that considers email accounts entirely disposable.


You'd be surprised about the number of emails I've received from people who apparently don't care about their email/website asking if it was vulnerable to the heartbleed/heartbug/heartbeetle/heartblood malware/virus/bug/issue/exploit in the past week.

Though I must say I find it funny, all the different names and descriptions of what exactly it is that I've seen.


You are very wrong.


Even n00bs understand that if their email gets jacked, that can be used to reset all their other passwords and jack those accounts.


I had a fifteen minute conversation with a relative about this yesterday.

No, they emphatically do not, at least until you explain that to them. And even then, it was "well, I don't really do anything important online anyway..."


I'm not so sure, I know plenty of people who don't realize that until it's pointed out to them. Even then, many don't even care ("I don't have anything important anyways...").


Should be possible to convince them with "Hacker makes incriminating but false post on your Facebook" -> "Employer checks your Facebook" -> "Employer shows you the door."

"Why would anybody do that?" For the lulz ("random acts of malice"), sadly.


This is how NSA "protects America" and its infrastructure from "cybercrime" - by allowing a bug like this to exist for years without telling anyone about it.

I hope it's now clear to everyone what NSA's vision about "cybersecurity" is. They think having vulnerabilities like this in the Internet's infrastructure is a good thing, because then they get to attack their "targets", to "protect us". It has nothing to do with actual security. Weakness is strength. Vulnerability is security.


This looks like another case where the actions of the NSA are the opposite of what's in the best interest of US Citizens.


Are there any cases where the actions of the NSA are in any way beneficial to US citizens? Can they show that they have ever done anything positive at all? Have they saved a single life? Stopped a single threat? Or are they too busy jerking it to sexting pics and playing WoW (seriously? Come on, guys) to actually do anything useful with the BILLIONS of dollars of money that they get to play with?


The NSA has done a lot of beneficial things, such as helping design more secure crypto primitives (e.g. strengthening DES against differential cryptanalysis, fixing SHA-0 to produce SHA-1) and helping build secure software (e.g. SE Linux).

I assume what you really mean is whether their dragnet surveillance in particular is ever beneficial to US citizens, and that certainly seems to be a "no".


Why should we assume that SELinux is secure? I would suspect that if there was NSA involvement then it is necessarily less secure because they would want to be able to access it at will.


You can log in to Russell's SELinux Debian box and see for yourself. He gives the hostname and the root password here:

http://www.coker.com.au/selinux/play.html


> Are there any cases where the actions of the NSA are in any way beneficial to US citizens? Can they show that they have ever done anything positive at all?

Yes, and quite easily.

If that's your only testing criteria for whether a government agency is useful then NSA will pass with flying colors.


It is posts like this that make me think twice about reading the comments on NSA stories. For starters, how many intelligence agencies publicize their successes? Do you think the other Four Eyes purchase full-page ads detailing the highlights of successful intelligence operations?

Anyway, let's skip the banal "intelligence agencies' failures are public and their successes are private" and get to actual examples:

DES SBoxes

SELinux

NSA Academic Centers of Excellence

VENONA


Exactly my point. An organization like the NSA can say "Oh, yeah. We've had great success. We can't tell you what it was, of course. Just take our word for it. What's that you're typing there? Don't want to tell us? It doesn't matter, because we already know."


And yet off the top of my head I was able to provide you with four programs/ways NSA has benefited the lives of US citizens. Why did you ask for examples if you were going to completely ignore them?


I don't trust a cryptography tool that the NSA says is secure when they openly and proudly spy on US citizens. Centers of Academic Excellence? They are probably also involved in the School of the Americas. Just calling something a school doesn't make it a good thing.


Lots of education: conferences, teacher training, student scholarships. Also, SELinux.


Was it though? The NSA's job is to spy on behalf of the country. While keeping the bug a secret put people at risk, there is an argument to be made that it was a useful tool. Law enforcement regularly makes the decision to allow low level criminals to continue to commit crimes in order to catch their leaders even though doing so puts people at risk. There are always tradeoffs.


Their job is not to spy on behalf of the country.

Their job is to keep us safe.

Letting us all run around with humungous holes in our security for years was a risk to our national security. How do you think the Chinese were able to clone our weapons systems so well? Shit like this.


POSIWID.


Yeah NSA's job is to fix open source bugs, whatever.

NSA is a spy agency, expecting them not to use vulnerabilities they find is like sending them into a gunfight with a pocketful of rocks. Ask the Palestinians how that works out in the long run.

I think much of the NSA's surveillance is unconstitutional and should be rolled back by at least 2 orders of magnitude. That doesn't have to entail turning the world over to Russian and Chinese hackers.


>That doesn't have to entail turning the world over to Russian and Chinese hackers.

That's exactly what they do when they allow bugs like this to continue to exist, rather than working to fix holes, not exploit them.


>>That doesn't have to entail turning the world over to Russian and Chinese hackers.

Boogeyman FUD. I'm not worried about any hackers from [Insert_foreign_country_elites_want_you_to_hate]. The USgov, NSA and corrupt law enforcement are the only terrorists I'm worried about.


Pretty interesting statement.

Why? Nations have track records of not killing/spying on each other?


Nations certainly have a record of killing large numbers of their citizens - particularly in the past century.

It's reasonable to be suspicious of government.


Right, because putting THE ENTIRE NATION AND ITS CITIZENS at risk EVERY SINGLE DAY is comparable to allowing low-level criminals to operate....

Why do we let awful people like this be in charge of our well-being? Is power so entrenched that no matter what happens, we can't do anything about it?


"The NSA's job is to spy on behalf of the country."

Spying on the country and spying on behalf of the country are not really the same thing.


I think you missed the part where the NSA claimed it prioritized protecting the data of Americans and American companies. Nobody denies that exploiting the bug would be useful for the NSA.


It would require the assumption that, for example, the NSA knew about it but other foreign authorities - who no doubt attempt to spy on various Americans and US corporations - did not know about it. I don't like those odds.


Where do we draw the line? When millions die and billions of dollars in irrecoverable damage is done?

Who gets to decide whether the risk is acceptable? To whom do we turn when it's found that their risk assessment was flawed, and we require compensation for their recklessness and negligence?


One could make the argument that given the depth of the NSA's capabilities they were in a unique position to know who, if anyone, also knew of the bug.


So we should just blindly trust an agency that has repeatedly been shown to have abused that very trust for self-serving and hypocritical ends?


All governments are hypocritical. We have nukes, but you can't have them; we can spy on you, but don't spy on us; etc. It's the nature of self-interest.


Reworded slightly: "Evil is evil. You can't change it. Get over it." I don't buy it.


I doubt such subtlety entered into the decision. The NSA plays offense, and this was a win from that point of view.

The only way to get a different result is to re-allocate resources away from NSA and build an agency from the ground up that is geared toward securing systems.


We've all been waiting for that argument for a while. When are you going to finally reveal it to us?


Evidence? And if so, it's pretty much what we expected, and exactly why this behaviour is terrible.


>> The U.S. National Security Agency knew for at least two years about a flaw in the way that many websites send sensitive information, now dubbed the Heartbleed bug, and regularly used it to gather critical intelligence, two people familiar with the matter said.

(emphasis mine)

It's pretty weak IMHO but I don't really doubt it.


The bug only existed in the wild for two years and less than a month. I’m not sure what the “at least two years” means in that context. The NSA can’t have known this bug for a lot longer and “at least two years” implies to me “at least 24 months and possibly many more”, not “at least 24 months, at most 25”.


The trick is to put yourself in the context of a potential leaker. Is this person technical? Would they have the skill to distinguish between heartbleed and another equally powerful exploit? What "at least two years" means to me is not that they knew specifically about heartbleed shortly after it was introduced, but that there may be another equally damaging bug the NSA exploits that a non-programmer could easily confuse. After all, I'm sure they don't have this exploit labelled "heartbleed" in their database.


Furthermore, if a piece of software is responsible for protecting a huge percentage of the internet and is known to be a mess, you can be absolutely certain that there are not just one or two researchers but possibly several teams probing that code base each and every day looking for exploits.

I would be surprised if every single commit to OpenSSL doesn't get dozens to hundreds of man-hours of attention from people very good at breaking secure systems.

With that in mind, you can also be sure that the NSA isn't the only government agency that is putting tons of money into exploiting OpenSSL and other critical software. I'd be surprised if it took them more than 1-3 months to find this exploit after it was introduced. If they found it in that time, you would expect that other government agencies found it nearly as quickly and have been exploiting it as well.

When you have million and billion dollar budgets used to find and exploit bugs in software, you can be certain that the average person is losing out big time.


P(story is true | Bloomberg reports it) = P(the sources are right) × P(Bloomberg isn't lying about having sources) ≈ 80%.
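(Illustrative numbers, mine: if the sources are, say, 85% likely to be right and Bloomberg is 95% likely to really have them, you get 0.85 × 0.95 ≈ 0.81 -- hence the ~80%.)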

The sources could be lying for many reasons. As a prank, to discredit Bloomberg when they report on other NSA stories, because they're embarrassed the NSA didn't know earlier, etc. But Bloomberg knows this and presumably required some evidence to satisfy themselves before reporting. So the deciding factor is really Bloomberg's reliability.


Don't forget the priors.

As soon as I thought about Heartbleed and the NSA (well before this story), I figured there was about a 99% chance that the NSA had found it and had been actively exploiting it for a decent portion of the time it was in the wild.

Stacked against that, Bloomberg's reliability doesn't really matter at all. If the sources are good, great! If the story is crap, it's still probably right by accident.


Why do you think Bloomberg was any more thorough in its Heartbleed investigation than Newsweek was in outing Dorian Nakamoto as the author of Bitcoin?


Newsweek was purchased by some shady people and hasn't been famous for investigation for... ever? Bloomberg is one of the leading financial periodicals, a major part of the Bloomberg empire, and hooked into all sorts of circles. Would you be so skeptical if it were being reported on nytimes.com?


I've heard of the NSA, and I've read that xkcd comic on how Heartbleed works. Can I be quoted as a person "familiar with the matter"?


[deleted]


Lol, exactly. If this was on The Intercept [0] I'd feel differently but "two people familiar with the matter" doesn't inspire much confidence.

[0] https://firstlook.org/theintercept/


I've been seeing a lot of comments recently along the lines that we "need" more evidence before we assume that the NSA took advantage of heartbleed. I don't get that at all.

I'd love to have harder evidence of what the NSA has been up to. I get that. But here are some things we know: the NSA believes its mission is to collect 100% of the world's data, with the possible exception of data that definitely belongs to US citizens. The NSA has boasted internally of cracking SSL implementations as part of its work. The NSA employs more people who are qualified for and tasked with finding this kind of exploit than anyone else. The NSA's leadership is willing to lie under oath to Congress -- let alone to anyone else -- about its activities. The NSA's secrets are about as heavily defended as secrets can be -- actually providing the kind of evidence requested here is widely considered treason against the United States. And now an investigative reporter with a serious reputation says that he has two sources who can confirm that the NSA knew about heartbleed shortly after it was created.

So let's assume you might behave differently in some way -- in any way -- if the NSA knew about and exploited heartbleed. You have imperfect information and you have to make a call. What else could you "need" before you decide to behave as though this article is accurate?

I think we "need" to assume that the NSA took advantage of heartbleed starting shortly after it was introduced. We'd just "like" to have a little more confirmation about what the hell they've been up to.


No fucking way. This is disastrous PR stuff, second only to the Snowden revelations.

It should be clear by now that the NSA does not restrict themselves from anything... and should be disbanded.


I don't know how "disastrous" this really is.

NSA knows approximately 1 zillion vulnerabilities we don't know about and won't know about. They range from RCE's in Windows and Apache to flaws in cryptographic hash functions.

It's NSA's charter to stockpile these things, and, yeah, to use them against foreign adversaries.

It's bad though, because this one was so easily exploitable. It's the kind of thing a reasonable organization finds out about and wants fixed ASAP.


> It's NSA's charter to stockpile these things, and, yeah, to use them against foreign adversaries.

I don't see how leaving American companies vulnerable fulfills the NSA's charter.


American companies are vulnerable to literally hundreds of vulnerabilities NSA knows about; that's something that was widely known (public, in fact) almost a decade before Snowden.

I agree that this bug is different, but that might have been a subtle case to make inside the organization.


The worst problem with the NSA knowing about Heartbleed is the total lack of accountability.

If I were any US-based company CEO whose customers got hacked by Heartbleed exploits, I'd drag their corpses to the court if necessary.

Sidenote: People have asked "Why are you doing JS-based cryptography on passwords if you have HTTPS?" - here we have the ideal answer. Encrypting the passwords using public-key crypto in addition to HTTPS and doing the decryption in RoR/PHP/nodejs would at least have spared the users from the need to change their passwords.


There seems to be a lot of confusion about what the job of the NSA is, with this most recent Heartbleed incident being the most recent example.

The NSA's primary function is performing signals intelligence. To perform that function, they've spent the past ~60 years building up cryptanalytic capability (pretty much unmatched by any other single organization either governmental or private).

Because of this, they have a secondary function, which is to serve as subject-matter-experts for other government agencies. They provide advice, mainly in the form of influencing NIST standards (overtly by providing recommendations, and as we've come to learn, covertly by fucking with standards). This is a side-effect of their primary function however.

Asimov's first law of the NSA is to intercept and process signals intelligence. Any other function is secondary, and certainly will not take precedence over their first function.

What the Snowden revelations have shown is that there's a conflict of interest between their primary function and being tasked with providing advice. I think there's a reasonable argument to be had that they probably should get out of the business of providing guidance to other agencies, as now all that advice is tainted.

The security of "US-based companies" is so far down the list of priorities that I hesitate to suggest it exists at all. Reporting vulnerabilities to vendors is at best orthogonal to their primary function, and at worst, counter to it. If you want to argue that someone in the government should be responsible for helping companies fix security issues, that's also a good argument. But it certainly shouldn't be the NSA (and definitely not now that we know they have no compulsion about misleading everyone).

I'm going to ignore your side-note about JS browser-based crypto. Unlike the people on here who diligently try to explain the fundamental issues with doing JS-crypto, I'm now of the opinion that you can't reason with these people.


It wasn't traffic that was revealed, it was server memory. Which could just have easily contained the decrypted passwords as the encrypted ones.


Heartbleed only leaks SSL-related memory - not program memory!


Consider: what if the NSA's sensors are so extensive that they know the exact moment anyone other than them tries to exploit certain bugs?

That changes the risks/rewards of early-patching quite a bit. They can be confident it's their own trump card for quite a while, and learn about (or strategically mislead) any teams that arrive later to the same knowledge. When it's really "burnt", and in use by the NSA's enemies, then they can help US companies patch... and possibly even assure them exactly how much damage (if any) occurred.

(In the extreme, with say a big friend-of-NSA telecom or defense contractor, that could even be: "Hi, American BigCo. In the 48 hours between the beginning of enemy exploitation and your patching, we saw about 13,000 suspicious heartbeats directed at your servers. If you don't have raw traffic logs to do your own audit of exactly what server memory was lost, we can share our copy with you. It's a pleasure doing business with you.")

In fact, perhaps the reason for the synchronized reveal from US and non-US discoverers just now is that the first non-NSA probing (by either malicious actors or researchers) was just recently detected, starting the race to patch.


I don't see how that protects the people whose data was stolen.

"Here's the license plate number and home address of the guy who just ran over your grandma. Sorry for your loss."


It limits the corporate risks: they know exactly which passwords to change, accounts to lock, and other data loss to ameliorate.

And if the time window of exploitation is kept small, the exact same magnitude of data loss could have happened in a rapid-disclosure and patch scenario. (Two years ago, were practices for rapid response better or worse than now? Would the time window of public-knowledge-but-incomplete-protection have been any smaller - or maybe larger?)

So why not let it break later (and maybe never), rather than earlier? It's like any kind of "technical debt" analysis... oftentimes it makes sense to defer fixes, because by the time the issue becomes critical, it may have already been rendered moot, by larger changes.


That doesn't give me, as a user, much comfort. But I can see your point from a corporate standpoint.

This whole things just sucks.


Eventually, the bad guys will find all of these bugs. And do massive amounts of damage (in this case, potentially billions of people who should change their password, and hundreds of thousands of administrators having to swap certificates).

And all the time, the NSA had the capability and knowledge to prevent this damage. What a great service they did to their country, indeed.


> Eventually, the bad guys will find all of these bugs

That's not really true. The NSA has incredible resources that other bad guys do not.

I think a critical step in this logic chain is "once we find all the bugs, we will be safe." Given that step, you would obviously want the NSA to tell the vendors about every single bug they find.

But it's really not the case that we will ever "find all the bugs." Even if the vast resources required were spent to find all the bugs in version 3.1415, there would be new bugs in 3.1416.

I can kinda-sorta buy the full-disclosure argument that "if an independent researcher can find X then so can the bad guys." That argument doesn't apply to the NSA. They have a much better reason to believe "we found this exploit, and it will take a long time for someone else to find it out."


Problem is that "the bad guys" also includes Chinese and Israeli intelligence agencies. The Israelis are known to have massive cyber ops going on (Stuxnet is said to have had a large Israeli contribution), and the Chinese cyber-ops units exposed (iirc) a year ago only learned from being uncovered.

I would not be surprised at all if Israel and China also knew about Heartbleed.


Exactly, I don't mind them having the capability for this kind of thing. What bothers me is the lack of due process and rule of law!


There's plenty of both. NSA is wrapped with layers upon layers of process and oversight both, which is something re-confirmed in the wake of Snowden's revelations.

What people are shocked about is that they didn't understand what the law permitted, or how quickly mixing the law of induction with datacenters full of computers can lead to global-level surveillance.

With all that said, I would mind if it's true NSA knew of this bug and left it alone. It's a powerful weapon for SIGINT to be sure, but it's too easy to find by other state spy agencies doing code review; NSA would have had to assume other nations knew about it as well and were putting US government and private-sector comms at risk.


The DoJ decided to ignore the law. https://www.eff.org/foia/section-215-usa-patriot-act The NSA secretly collects the private data of people they know are innocent, which is the opposite of due process.


The FBI collects the private data of thousands of people they know are innocent every time they use a mass-intercept warrant on a particular cell phone tower.

An attorney general collects the business records of thousands of people they know are innocent of wrongdoing in the course of routine investigations into fraud by large businesses.

Both of those are still considered "due process" because in both cases they are following a set process. Due process doesn't mean the government won't look at you if you're innocent, which seems to be a big misconception. It essentially means that the government should handle cases with similar particulars in similar ways.

So in the case of the NSA, if that collection happened under the same type of legal authority, after going through the same FISC review (if needed), was held to the same standards of reasonable and articulable suspicion (or whatever standards were required to be met for §215) then the collection that would happen from there would have been given due process.

As for your link, while its accusations are damning enough, they don't appear to support your first point or your last one. It's hard to argue DoJ is "ignoring" the law when the EFF themselves make quite clear that "The language of Section 215 allows for secret court orders to collect 'tangible things' that could be relevant to a government investigation – a far lower threshold and more expansive reach than a warrant based on probable cause. The list of possible 'tangible things' the government can obtain is seemingly limitless, and could include everything from driver’s license records to Internet browsing patterns."

So don't take my word for it, take EFF's.


It is? It would be shocking if the NSA didn't use Heartbleed. This is basically the equivalent of doing a news story about an alcoholic who drinks the beer they have in their refrigerator. It's exactly what you'd expect them to do.


The NSA protected us by not disclosing to us a serious security vulnerability in our software. It is hard for me to wrap my brain around the reasoning of the intelligence agencies.


To be fair to the NSA, it's not just them. Many other government agencies operate under the assumption that society is better off if people are protected from themselves. It's why we have FCC censorship and the war on drugs. Questioning this core assumption is verboten in these organizations because it is equivalent to questioning their reason for existing. So when they take criticism from the public they naturally retreat to their core assumptions and values, even if they have to dress it up with doublespeak.


> two people familiar with the matter said

As much as Snowden has shown us the amount of effort NSA puts into this kind of stuff, I think we need more evidence than this article is giving.


Yeah, that's not good.


[deleted]


The real crazy bit about Heartbleed was that it was worse than a man-in-the-middle attack. It's a "give an unrelated third party on the side your plaintext" attack, rendering your SSL connection less secure than an unencrypted connection.


HN, I'm sorry to have deleted my comment before noticing this reply. For the record it said something about 1) being put at ease by the Cloudflare challenge, suggesting to me no MITM attack was possible, 2) and then bemoaning the fact that the NSA "is the man in the middle"


Cloudflare's challenge is specific to nginx's use of OpenSSL. They hypothesize that stealing keys from Apache is unlikely, but possible.

http://blog.cloudflare.com/answering-the-critical-question-c...


Thanks for the extra info. While I don't find it encouraging, I appreciate having a better understanding of the issue at hand.


This is according to "two people familiar with the matter". While nobody would be surprised that the NSA had exploited heartbleed, this article gives no compelling proof.

I wish newspaper articles had a bit of metadata that indicated whether the sources are verifiable. Then we wouldn't have to waste any time reading them when they aren't.


“It flies in the face of the agency’s comments that defense comes first”

The NSA needs to be dissolved. It is a costly liability whose actions work against the nation's interests as a whole.


Good luck trying to wiggle out of this one, NSA.


The NSA is denying this report.

> Statement: NSA was not aware of the recently identified Heartbleed vulnerability until it was made public.

https://twitter.com/NSA_PAO/status/454720059156754434


Full statement: http://twitter.com/ajamlive/status/454724369429045248/photo/...

The NSA has a good history recently of lying through their teeth, and the bit at the end "Unless there is a clear national security or law enforcement need..." is a pretty damned large asterisk.


So is there a single shred of evidence beyond the unquoted claims of "two people familiar with the matter"?

Because if not, this is just straight-up link bait.


It's going to be pretty hard to say you're playing "defense" with a straight face after this one.


It certainly seems believable, but do we have anything more concrete to go on than "two people familiar with the matter?" Is that even two people with top-secret clearance at the NSA?


Well, consider what that would look like. Given the way that the US Government has pursued Snowden and other whistleblowers for embarrassing them, if you had privileged information indicating that the government was deliberately leaving nearly all Americans' online information exposed, would you want your name attached to it?


I think lauradhamilton's point is that "familiar with the matter" is too subjective.


Oh, I totally get that, and agree. I'm just saying that even if these are bulletproof sources with deep insider knowledge, it would be incredibly risky for them to associate any credentials other than "familiar with the matter" with the story.

It's wiggle-room a mile wide, but I'm not sure if there's a better alternative given the current climate.


While there's no evidence (yet) that the NSA knew about or exploited this bug, I would not be the least bit surprised if they did. Honestly, my first thought when reading about Heartbleed was "I wonder how much the NSA paid the contributor. Or did they just threaten his family?" It seems there have been a lot of "oops" errors being found in critical security systems these days, and every single one of them is directly beneficial to the NSA and its mission to "h4ck the plan37!"


A reasonable policy upon discovering this type of bug is to allow the agency a fixed period of time to exploit the bug, and then require that they provide support in fixing it for as many major US companies and institutions as possible, as quickly as possible.

If they are given carte blanche to use the exploit indefinitely, they will keep it forever and let the world discover and exploit it as well. If they have a finite time period like 1-3 months, they will prioritize exploiting those systems that are actually valuable for national security. While they are doing so, they should keep an auditable log of all the systems they use the exploit against so that oversight may be performed in hindsight. Furthermore, they should absolutely be barred from using any exploit against a target with a US-based IP, or possibly even any IP address in allied nations.

It is far less likely that the agency will have the opportunity to abuse exploits if they are forced to prioritize targets due to a fixed deadline on disclosure.

During the deadline period, they should also be working on a plan that minimizes the amount of damages once disclosure is forced. i.e. there should be a list of people and companies that get the information first and everyone on the list should be people in charge of protecting computer systems (i.e. no one involved in offensive activities is on the list). Companies like Google, Facebook, Akamai, Apple and the package maintainers for all the major *nix distros should be on that shortlist of those that get priority notification.


> The agency found the Heartbleed glitch shortly after its introduction, according to one of the people familiar with the matter,

Presumably if the anonymous sources here were discovered, they'd be in big criminal trouble, right? I am curious how far the government goes to try and discover them.

And I think there is no way these anonymous sources would have contacted the journalists without Snowden going first, to establish the context and interest. Snowden's actions continue to benefit us all, cascading.


Not necessarily, this might actually be an approved ass-covering leak.

You may think it's awful that the NSA knew for 2 years and didn't push fixes... but their funders and overseers, in Congress and the DoD, would more likely be angry if, given the NSA's massive budget and mission, the NSA didn't know about this bug right away via code audit/analysis. Knowing vulnerabilities first is the whole job of the "Cyber Command".

And, the best defense isn't necessarily a panicked fire-drill of preemptive patching ASAP, if you're sure you're the only ones who know. It could make tactical and economic sense to simply prepare contingency plans, and wait for the first evidence of a 2nd-discoverer.


A possible smoking gun in this case would be to check whether most of the NSA's systems, and the DoD systems over which it has oversight, were not vulnerable this entire time while the private sector remained vulnerable.

If they truly didn't know about this, you'd expect that a lot of the US government infrastructure has also been vulnerable this entire time.

I've seen the Alexa top 1000 list showing who was vulnerable. I wonder if there is a similar list for the top 1000 computer systems and networks over which the NSA has security oversight.


Whether it's true or not, I think the correct thing for the NSA to do would be to say that they knew about it for years and exploited it. That is their job, after all.


NSA has two jobs:

1. Gain signals intelligence on specified foreign targets.

2. Protect U.S. signals from having the same done to the U.S. by other states.

The second responsibility is why there are things like SELinux, NSA "Suite B" cryptography, etc. It has also led to security bugfixes to open source code (such as X.org) used by NSA or within government.

The reason that both duties are held in NSA is because the best way to defend against the best SIGINT hackers in the world is to have the expertise of the best SIGINT hackers in the world. It's why NFL teams have their offense practice against their own defense and vice versa.

In this case the flaw is so completely egregious (and relatively easy for other states' spy agencies reviewing commit diffs to spot) that the duty of NSA would quite clearly have been to get OpenSSL fixed, if only because so much government IT could be affected by it.

The bug was introduced pre-Snowden as well, so it's not like NSA didn't have other cyber weapons to use to achieve the effects they need. So if it's true that NSA knew about the bug and let it remain open to protect "methods and sources" then they definitely chose wrong and someone needs to explain how they came to that choice...


Your friends tell you about your flaws and shortcomings. The people who keep quiet or even exploit your flaws? They are not your friends.

So, what's to keep some organization that runs a package repo from publishing OpenSSL packages that claim to be OpenSSL 1.0.1g but actually still exhibit the heartbleed bug? I also ask myself, would the NSA seek to implement such a thing? They would, though that is an entirely different question from whether they have.


This is why you can, on Debian/*buntu, always run an apt-build from source and verify the patch is present in the code. Or use Gentoo, where it's built from source anyway.

(Or just use Debian in the Gentoo flavour from the beginning)


I hesitate to believe that in 2 years time, the agency hasn't found another backdoor to the web. OpenSSL might be patched, but what else is still vulnerable?


I'm wondering if any State Attorneys General are tech-savvy, don't like the current administration, and want publicity[1] enough to start an investigation. I would imagine a subpoena asking for the financial records of the OpenSSL contributors would be a first step (to find government payments). I can see a very scary witch hunt.

1) that part might be a little rhetorical, every AG likes good publicity.


"Financial records of the OpenSSL contributors"? How extremely silly. Knowing about Heartbleed? Easy to see. Not sharing Heartbleed? Easy to see. Deliberately introducing a vulnerability that every single US adversary could trivially find? Beyond unlikely.


My personal opinion is that I doubt the NSA introduced it, but if it allowed other countries to exploit US citizens and the NSA knew about it, then that is indefensible behavior.

That being said, it would be a pretty standard tactic for an AG to look at who paid the people who did the work. It is a pattern in a lot of different types of investigations and familiar to an AG. Will they find anything? Doubtful. Will that really matter? Doubtful.

I should at this point say that I am thinking of a scenario that would occur to an AG (the pattern happens a lot). I am not advocating such behavior. It would have a huge chilling effect on public source code of any type and probably generate some seriously evil legislation (certifications or liability insurance). Such concerns have never figured into most politicians' thinking.


I think it's hilarious to see people who are ostensibly zealous advocates of privacy lobbying to get prosecutors to subpoena the financial records of the people who invest their free time in building privacy-protecting software.

I am at the same time comfortable filing this under "things that will never happen".


Hold up, do you think I actually want the scenario I stated to happen?


Presumably, any State Attorney General will have gone to law school, and will thus know that the Federal Government is immune to suits from the states.


They are not actually immune, states sue the federal government (or at least departments) all the time. Look at the ACA cases for an example. They can also go after the individual people involved as long as they are not serving in the government.


The states can presumably go to court to keep from being compelled to comply with an unconstitutional law. They cannot sue the federal government for damages.


The states can go to court for a variety of reasons when they feel the federal government is overstepping its bounds or has committed a constitutional violation. They can open investigations into federal behavior.

I never said anything about damages.


Who said anything about suing the USG, unless you are already assuming someone contributing to OpenSSL was hired by the USG in some relevant capacity? I don't see what would shield contributors to OpenSSL from a criminal investigation.


[deleted]


Again: they can challenge compulsion to adhere to unconstitutional laws. Note that MA didn't sue the USG for damages.


> The SSL protocol has a history of security problems, Lewis said, and is not the primary form of protection governments and others use to transmit highly sensitive information.

> “I knew hackers who could break it nearly 15 years ago,” Lewis said of the SSL protocol.

Anyone know wtf he's talking about?


TLS has been subject to a lot of attacks:

http://en.wikipedia.org/wiki/Transport_Layer_Security#Attack...

Plus, since the whole thing is based on CAs, if you can get an intermediate cert (and does anyone think the NSA can't?) and you have the means to MITM someone, that's as good as breaking it, too.

15 years ago, we were using keys with much lower entropy, as well, which may have simply been outpaced by computing power.


The US government seems intent on destroying the viability of the Internet as a commerce platform.


FWIW, we asked the NSA and NSC; both deny:

http://www.nbcnews.com/tech/security/nsa-denies-it-used-hear...


> highlights one of the failings of open source software development.

Sorry? Paid programmers writing closed code with probably less review and auditing have been shown to create fewer bugs? What are they trying to say?


What data did you use to estimate the probability of code review? Would be interesting to see if anyone has studied this.


I've never understood theories about NSA capability. Everyone complains that Government officials are barely competent, if at all, yet when it comes to NSA, those same people think NSA staff is at least ten times as brilliant as the general population.

Everything I've seen NSA do is largely based on the same techniques Google uses, except 10 years later, much more expensively, and with much uglier PowerPoint presentations. The only thing NSA has that private organizations don't is the compelled cooperation of telecom companies.


What upsets me the most is that they knew this existed, that a lot of the US economy relies on our tech companies, and that they did nothing to inform those companies about the security flaw.


Now would be the time to start looking up the backgrounds of the people who implemented heartbeat support. For instance, the same guy responsible for the Heartbeat spec was the author of the OpenSSL implementation.

While we do not want to make this into a witch hunt, now that the NSA is implicated in Heartbleed we should definitely rule out malice by checking for direct ties to the contributors of known flawed/malicious code related to the implementation of Heartbeat.


We absolutely do not want this. We rely on the goodwill of a lot of really smart people to produce the open source security software we depend on. Even if the person who introduced this bug did in fact conspire with the NSA, we would do far more damage by going on a witch hunt. The majority of people working on this software are well-intentioned, and if contributing to open source means they risk being subject to a witch hunt for their contributions, they will be far less likely to partake.

The only reasonable course of action is to apply oversight of the NSA. If people in open-source are moles, the only reasonable way to discover this is via oversight of the agency. If there are moles, there are records within the agency showing this to be the case. Getting ahold of those records is how you prove this and you get those records by getting congress to do their job and provide oversight.


Sounds like a great plan. Perhaps you could provide an example from history where your idea works? I'm having difficulty finding one.


100% agree. Here's a related question: How many people do you think the NSA employs to work on open source software and blend into the community, establish credibility, etc. Is the answer zero people? Given what we know about the NSA now, zero seems unlikely.


> The U.S. National Security Agency knew for at least two years about a flaw in the way that many websites send sensitive information, now dubbed the Heartbleed bug [...]

Interesting that they say "at least" two years. The bug is two years old, so they could have also chosen to say "at most" two years or "up to" two years. Least biased would be to just say "since the bug was introduced, two years ago".


s/Flawed Protocol/Flawed Implementation/


Well this was flagged fast.


Sounds plausible to me. I would think the NSA, and other spy agencies, pore over every release of a security package to see if any exploitable errors were made.


Do we have anything that leads us to believe the NSA was aware of heartbleed at all before we found out, other than speculation because of their resources?


We have "two people familiar with the matter" which is to say sources that Bloomberg thought were credible enough to lead a story with.


Who is "familiar with the matter"? NSA insiders? People who've read the Snowden docs?

I wonder if Bruce "probability is close to one that every target has had its private keys extracted by multiple intelligence agencies" Schneier is one of them.


Well, Bloomberg appears to have two sources.


Why would you give them the benefit of the doubt? If anything, the base assumption should be the reverse.


You can assume that any bug in open source software that could have been found using systematic and automated analysis has already been found by the NSA.


My first thought: if this is the case, then why did they try so hard (and get "trolled" in the progress) to get the SSL keys from Lavabit?


Because NSA didn't try to get Lavabit's keys at all; DOJ did. Two very different organizations. Not as incestuously linked as people think they are.

Also, worth mentioning: it's not particularly easy to get private keys out of servers with the bug.


That makes sense, I guess. I'm not American so I don't know much about the inner workings of these institutions.

Is it unlikely for FBI to ask NSA's help?


It is probably illegal for FBI to ask for NSA's help in gaining evidence for a US criminal case, although pointing that out is sure to start a raucous subthread about "parallel construction".


If it's "probably illegal", surely you can cite the law?

Notably, the FISA cell phone record orders passed information to the NSA through the FBI, so no hint of a "chinese wall" there. Secret bulk collection for 'national security', everyone bathing in the same pool of data.

I do suspect it's policy not to casually request, or become overly dependent on, such NSA-to-domestic sharing. And – as we've seen in the "parallel construction" revelations you allude to – it's definitely policy to try to obscure any such sharing when it happens.

But to reassure people that it's "probably illegal" is flimsy hand-waving when you can't name the law, and there's public evidence that it happens to the contrary.


One of the only things that should make you feel optimistic is that there are a bunch of intelligence/law-enforcement agencies and they all hate each other and hate the idea of working together or sharing information with each other.


Get the proof via nefarious ways and then construct a "legitimate" way of obtaining the information you already have.


As an example from history, see the story of the Zimmermann Telegram http://en.wikipedia.org/wiki/Zimmermann_Telegram and the British interception and decryption. They faked a theft to hide the fact that they were reading the traffic with the help of Room 40.


Yeah. Whether that happened here or not, I have no idea, but it makes perfect sense if you want to create plausible deniability about what you have the means to accomplish.


Maybe because their plan all along was to shut Lavabit down?


[deleted]


It didn't? That seems unlikely.


Legal versus illegal means?


This is flamebait. Sad it's getting so many upvotes.


I agree; in the first 3 paragraphs I shook my head 3 times.

"two people familiar with the matter said." <- so you made it up

"threatens to renew the rancorous debate over the role of the government’s top computer experts." <- you made this up as well

"Heartbleed appears to be one of the biggest glitches in the Internet’s history" <- Not even close.

"as many as two-thirds of the world’s websites" <- less than 17% of SSL based webservers, certainly not two-thirds of the entire freakin internet.

"it’s possible that cybercriminals missed the potential in the same way security professionals did, suggested Tal Klein, vice president of marketing at Adallom"

"this article is shite, suggested ikt, junior vice president of Compuglobalhypermeganet"


The point about it being VERY serious, I can accept. The rest... yep, pure bull. Also "btw Open Source sucks, proprietary good".

The article is rubbish.



