Attacking Tor: How the NSA targets users' online anonymity (theguardian.com)
654 points by brkcmd on Oct 4, 2013 | 176 comments



One heartening aspect of the Snowden revelations as a whole is that they have pretty much confirmed that the things we thought were strong (public crypto research, Tor) are in fact strong, and the things we thought were iffy are in fact iffy (Certificate Authorities, Unvetted Crypto, Cloud Services, The Wires, Implementations). This bodes well for the prospect of navigating out of this whole mess successfully, since on the whole we seem to have good instincts about what is trustworthy and what is untrustworthy. I think it has actually tended to clarify thinking about security, so that fewer and fewer engineers are able to delude themselves into trusting something they know deep down is really untrustworthy.


One iffy part I would like to add is government itself. It was generally thought that government would not keep security vulnerabilities hidden, prioritizing the protection of citizens over gaining a minor advantage in hacking.

Together with the earlier leaks regarding sabotaged security standards, the US government is the most damaging entity to computer security today. Anything they do needs to be viewed with the understanding that the NSA's primary priority is to be able to hack other people's computers. Be it an encryption algorithm or a kernel module, the NSA's priority is 100% clear.

That used to be a tin-foil hat idea just a few months ago, and we know better now. If the NSA comes carrying gifts, it warrants being very careful about accepting them from a party with such hostile priorities.


> That used to be a tin-foil hat idea just a few months ago, and we know better now. If the NSA comes carrying gifts, it warrants being very careful about accepting them from a party with such hostile priorities.

Well, not really.

The "tinfoil" idea is that NSA is breaking into crypto so that they can blackmail politicians, black-bag innocent citizens, etc.

But it was never widely assumed that NSA wasn't trying to break every bit of encryption they could. Besides the fact that such activities are literally their job, it's one of the few things they'd just as likely tell you directly if you asked them.

"Q: Are you trying to break cipher/cryptosystem FOO?" "A: Yes, we're trying to break all of them, to protect our SIGINT capability".

NSA has spent literally decades analyzing and breaking the military-grade ciphers of other nations. So I don't know where people got the idea that just because civilians obtained access to military-grade encryption, that NSA would suddenly stop with cryptanalysis efforts. But it has nothing to do with civilians per se; the military and national security opponents are using our civilian crypto too!

Is that inconvenient for civilian cryptography? Sure. But let's not act like people are having something they've always had chipped away and taken from them.

Before RSA and DH there were essentially no widely known safe cryptosystems that we could use. You used DES, or perhaps you could make up your own Vigenère implementation (have fun with key exchange!).

And that's just discussing computer communications. Your phones were all tappable, international telegrams easily read if it suited NSA, and good luck if you used one of those new-fangled cell phones.

The claimed threat is that computers make NSA more capable of surveilling the people at large, but the evidence shows that systems like Tor are putting up an exceptional fight, and even cryptosystems like TLS with many known weaknesses mostly work against global passive surveillance.

You would have to get on NSA's specific shitlist to have to really worry, but being on that shitlist 20 years ago meant anything you said would be picked up... and now, even that is not so certain.


There is a difference between trying to break cryptography, and prioritizing breaking cryptography over protecting civilians.

This is true for almost everything in the world. I want the police to try to stop criminals, for example, but I do not want them going around with miniguns, spraying the street with bullets. I want the police to prioritize the safety of civilians.

Same goes for the NSA. They are perfectly free to try to break hostile entities' encryption, but they should not sabotage US civilians' security while doing so. When they sabotage standards, or keep vulnerabilities secret so they and criminals can break into people's computers, then the NSA is not prioritizing protecting civilians.


> When they sabotage standards, or keep vulnerabilities secret so they and criminals can break into people's computers, then the NSA is not prioritizing protecting civilians.

Even in the standards that they have been shown to sabotage (Lotus Notes, Clipper, Dual_EC_DRBG), they did so in a way that should have reduced the security of the system only against the NSA, not in general. I'll note that I disagree with this concept (I'm not a mathematician, but it seems to me that it is difficult to prove theoretically that the NSA private key could never be derived when you know the plaintext and ciphertext). However, even in these cases the NSA was trying to maintain the security of the cryptosystem itself; it's not as if they introduced a deliberate backdoor where the thing falls apart if you guess the right 8-letter password.

I see your point about knowing about software vulnerabilities and not acting on them. But the problem is that software will always have vulnerabilities, and the citizenry at large isn't exactly good at keeping software up to date. So if the NSA divulges every 0-day they know, they don't help the public that much, but they do help the enemies of the public protect their software that much better.

You could almost argue that the NSA "buying up 0-days" is directly beneficial to the citizens, by ensuring that at least those vulns don't end up in the hands of someone who'd actually do something rotten with them.


> something rotten with them.

Like spying on us?


They're doing it to spy on the rest of the world, which is something that they've done for their entire existence. It's one of the two major reasons they exist at all.

It happens that now the rest of the world is using the same crypto we're using, but that's not the NSA's fault. Nor is it a major degradation of the status quo; the government has usually been able to "spy on us", and it's only been a short time, comparatively speaking, that it was even possible for the average citizen to completely encipher their communications. Telegrams, for instance, were copied and read as a matter of course if they crossed international boundaries.


The NSA shouldn't just be an attacker; it should also provide defence. If one of their many contractors can leak details to the press for ideological ends, it's pretty safe to assume that much worse secrets have already been leaked to other nation states (China, Russia, etc.) for financial gain.

I think it's entirely reasonable to assume that a lot of exploits the NSA has discovered and not revealed (because it thinks they are "secret") have actually been sold to other governments by its own contractors. By not revealing these exploits to citizens, they are actually leaving them open to attack by foreign governments. Large companies trying to defend against industrial espionage are probably most at risk.


> The NSA shouldn't just be an attacker; it should also provide defence.

Uh, it actually does exactly that. The NSA's second major mission objective is to ensure that the USA's own communications are secure. For example, the SHA-1 hash standard that underpins many of our cryptosystems was developed wholly by the NSA as an alternative to MD5 (which was apparently thought to be weak at NSA even at the time).

However, there's a difference between ensuring that the theoretical underpinnings of COMSEC are adequate and releasing 0-days. There will always be exploits in the web browsers people use, so the NSA is not "helping the citizens" by secretly releasing each and every one of those to browser developers. They can effectively only hamstring their own mission goals by doing that.


If one of their many contractors can leak details to the press for ideological ends, it's pretty safe to assume that much worse secrets have already been leaked to other nation states (China, Russia, etc.) for financial gain.

Especially as the agency in question appears to have no compartments or levels of access. I've been wondering how a comparatively junior contract worker could access so much information...


They're very compartmented, as it turns out.

But Snowden was a sysadmin and successfully managed to digitally impersonate persons actually in the right compartments, among other things, in order to get access to the data he wanted.

I suppose it's better to say that NSA is too reliant on contracted systems administrators to handle what should be inherently governmental functions, and that they don't properly compartment sysadmin functions. But then again, is it even possible to completely protect a computer network against an insider sysadmin threat?


Unfortunately it's politicians in 6 countries who try to dismantle the now totalitarian levels of surveillance who end up on this shit list too. Then soon it will be you and me.


The NSA gives a lot of advice to civilian cryptographers. It used to be tinfoil to assume the advice was deliberately bad. Now we know (some of) it is.

The NSA also has been found to give good advice sometimes, so just doing the opposite of what they say doesn't work either.


> It was generally thought that government would not keep security vulnerabilities hidden

Was that what people thought? Were there vulnerability reports in open-source software that were coming from the NSA or thought to be coming from the NSA? Surely everyone knew that the NSA was capable of finding exploits in software, and I would think that it would be hard to keep secret whether or not they're being reported.

> That used to be a tin-foil hat idea just a few months ago, and we know better now.

It's well-known that the NSA pushed to have DES limited to 56-bit keys. There were suspicions about Dual_EC_DRBG long before there were any leaks from Snowden. In the 90s, they pushed the Clipper chip, in which they'd engineered a back door. I think that everyone understood that the NSA had somewhat of an interest in weaker cryptography. That's why the cryptographic standardization processes happened in the open and when constants were needed, they were taken from the digits of pi or some such sequence.


There's a video from the RSA conference in 2011 with Dickie George, who was the director for Information Assurance at NSA when DES was being reviewed. He claims that the agreement between NIST and NSA was that: 1, NSA would only change things if they could find a specific problem with the cipher, and 2, NSA promised that DES would have security equal to its key size. The implication is then that they decided that 56 bits was how secure it was, and then picked that as the key size.

You can believe him or not, but I don't see any particular reason not to.

link to the video, the relevant bit is ~8min in: http://www.youtube.com/watch?v=0NlZpyk3PKI


Thanks for the video. I ended up watching the whole thing. My interpretation is a little different from yours: I think George is saying that there's no point in having a key longer than 56 bits given that the goal is 56 bits of security, but he's vague about where the requirement for 56 bits of security came from. In any case, the video certainly supports my larger point that the idea that the NSA would sabotage a crypto standard was mainstream within crypto circles, even in the '70s.


> It was generally thought that government would not keep security vulnerabilities hidden

It depends on which system they find it on, according to this talk: https://www.youtube.com/watch?v=E4Zx5rQFk4U . If vulnerabilities are found on secure systems they are immediately classified. To be able to report them, they have to re-find and document the vulnerability on a non-secure system.


DES was about speed. DES was in an age when computers were slow, DES was slow, and the NSA was already helping defeat a cryptanalysis attack on it.

Limiting the keys to a sensible number means it can be used in a practical sense.


At a DefCon (15, I think?) I got to ask a panel of FBI/CIA/NSA bigwigs a question at an open Q/A panel. I asked how they made the decision of which exploits they'd keep for themselves and which they'd help the project patch.

The response was 100% boilerplate. "We have a system for evaluating it," was the basic answer, in more words than that. I didn't really expect anything more, but it was worth a shot.

I've never believed that their "system" was in any way primarily for the public interest. I can't point to any specific evidence, it just never felt like the type of thing they would do. Good exploits just seemed far too useful to be worth giving up.


I never thought that. I always assumed all cyber-war capable governments had hidden caches of 0-day vulnerabilities.


Prior to this, one should not have assumed (and arguably still should not assume) that Tor is safe against the NSA.

Tor was explicitly not designed to protect against a global passive adversary. That's the price it pays for low latency. With the amount of network data the NSA has, they probably constitute such an adversary.

It is actually rather surprising that Tor gives them this much trouble.


> It is actually rather surprising that Tor gives them this much trouble.

I am not really convinced that what we have seen demonstrates conclusively that it does. There is the possibility that we are looking at parallel construction, or that these attacks are genuine but they are sitting on more dramatic capabilities for targets they think are worth it (perhaps because the Chinese continuing to trust and use Tor is a better situation for the NSA to be in than the Chinese doing everything the old fashioned way with microfilm and dead-drops).

The best way to go forward is to continue to assume that Tor does not present any significant difficulty to the NSA.


It's a question of opportunity cost. The NSA has extensive resources, but it's unlikely that they can employ overwhelming resources (such as would be theoretically necessary to break tor) for every situation where overwhelming resources specifically directed are a theoretical weakness. At the moment, implementations are a much easier target, and so I don't necessarily think that it's surprising that they do have trouble with strong but imperfect systems like tor.

Perhaps once all implementation issues are removed from the security equation (I'll hold my breath while I wait...) it will be necessary to think up better systems. But right now, what's hard for us is hard for the NSA, and so that should be the guiding principle for strengthening current systems and developing new ones. I find that an empowering idea.


Yes - exactly. Opportunity cost is something that is not discussed enough. Conceivably, any of a "target's" communications is vulnerable at the right price point. From technology solutions (provided by the NSA) to in-field solutions (provided by the CIA), we shouldn't believe that we can be totally "safe" from unwanted eavesdroppers.

It's not "if" Tor (and friends) are vulnerable. We should assume and operate like they are, but with some level of acceptable tradeoff. It's like a safe or ATM - neither of these guarantees perfect security; they just provide enough security for the expected loss of their contents.

The problem: it's just very hard to evaluate the opportunity cost, since we don't really know how widespread or "easy" it is for privacy to be breached. These types of revelations help establish the "market price" which we can use as a basis for evaluating our options for communications (including traditional man-to-man transport).

I personally don't have any communication which I consider privileged enough to warrant the extra hassle of running Tor, etc. I consider a TLS connection with my bank secure enough for my concerns and I don't have the desire to pull otherwise questionable content from any type of onion router. Therefore, I enter the market with a different expectation of features and cost I'm willing to pay.


The weak points, as usual, are the endpoints. The attack vector described in these documents is JavaScript via some library called E4X. Makes me wonder why the Tor bundle doesn't come with NoScript enabled by default.


There is an answer about this in their FAQ that basically states that having NoScript on by default breaks too much of the web.


Utopistically, how nice would it be if the whole web provided no-JavaScript versions of sites? In 90% of cases JavaScript is used just to do fancy things, while the actual functionality could be achieved with much less pain (and vulnerability).


I think this would have been true a few years back, but more and more websites are using JavaScript in irreplaceable ways. I suspect JavaScript will become increasingly necessary as frameworks like AngularJS become more popular. That said, if you are just interested in buying a pizza, or reading a blog post, then maybe JavaScript will never be really necessary.


I agree with your post generally, but has Snowden said anything about CAs? I did expect to hear that at least one has signed anything the NSA put in front of them, but I don't recall Snowden providing "proof"* of this.

* I'm in no position to verify anything Snowden leaks.


We didn't need these revelations to know CAs are not generally trustworthy. We already had proof. http://en.wikipedia.org/wiki/Certificate_authority#CA_compro...


The main thing is that CAs are centralized proxies for trust combined with the revelations that confirm that the NSA directly targets such central entities. There was a lot of general uneasiness about the reliance on CAs before the Snowden revelations, and I think the fact that NSA documents show that it leans on such central entities confirms the wisdom of that unease.


I don't know about the NSA, but I've personally negotiated a deal with a CA to add whatever domains we wanted to a certificate without validation. They just "trusted us."


Not to be a downer, but I do feel these systems and exploits are designed by us, the hackers we so much want to believe are good. But it looks like most hackers have a price and probably derive joy from designing these systems for the government.

We know what is trustworthy; we know how to build and do the right thing. Yet look: there are tens of thousands of brilliant minds working for the NSA against everybody else.


The disheartening thing is, though, we don't really have novel technologies (quantum crypto?) to guarantee security anymore, and the existing ones will soon be exploitable on a mass scale. This is bad for internet commerce, and for the internet itself as a medium. In the eyes of the layman, the internet is untrustworthy. I won't be surprised if in the future we see closed, privately owned physical networks that guarantee security to their customers.


I think Facebook proves definitively that the layman doesn't care about "trustworthiness of the internet".


They do for anything that requires money and beyond. That's why banks etc. try hard to persuade people they have high security standards.


That will be a problem once you can buy a quantum computer from the local IT shop, not while there are 5 of them in the world and you need a team of physicists to operate them.


Quantum crypto is going to be available to governments first and to civilians later, if at all, ever. We should be making plans for how to protect ourselves from the cracking powers of quantum computers with our traditional computers instead: http://www.pqcrypto.org/


This accompanying article has useful context: http://www.theguardian.com/world/2013/oct/04/nsa-gchq-attack...

> But the documents suggest that the fundamental security of the Tor service remains intact. One top-secret presentation, titled 'Tor Stinks', states: "We will never be able to de-anonymize all Tor users all the time." It continues: "With manual analysis we can de-anonymize a very small fraction of Tor users," and says the agency has had "no success de-anonymizing a user in response" to a specific request.

So only with "manual analysis" can intel agencies have any success, and that appears to be with a small subset of users who have other vulnerabilities. But when targeting a specific user, the NSA appears to have had no success in de-anonymizing them.


This needs to be higher. I think this was the best scenario anyone who knows Tor could hope for. The attacks against Tor, when it is used correctly, are well understood. And, assuming this presentation is accurate, the capabilities of adversarial semi-global attackers aren't much different from what we were expecting.

I would love to see if they have similar slide-decks for I2P, which is often compared with Tor for Hidden Service/eepsite usage.


On page 5 of the 'Tor Stinks' full document is a clipart picture of a terrorist.

So... somewhere in the bowels of the NSA is a graphic artist who slaps beards and guns onto stock clip-art. Fun job.

http://www.theguardian.com/world/interactive/2013/oct/04/tor...


The more we learn about the NSA's capabilities, the more it seems like the Manhattan Project. They are developing the "cyberwarfare" equivalents of weapons of mass destruction. This exploit delivery network goes so far beyond any legitimate purpose it might serve that it belongs in the same moral category as hydrogen bombs.

EDIT: The above is somewhat hyperbolic and unclear. The NSA's capabilities may have legitimate uses. Similarly, there may be legitimate military uses for nuclear weapons. But building nuclear weapons creates the risk of worldwide nuclear destruction. Similarly, building this kind of highly efficient exploit system creates the risk of destroying all Internet security. The potential destruction far outweighs whatever good the weapons might accomplish. That is why I said they belong in the same category.


I think that's a pretty serious exaggeration. Designing tools to let you spy on Tor traffic has to be in a separate category from designing bombs that could kill millions.

Besides, are there no ends that could justify these means? I think the means are altogether reasonable given the ends. Put aside whether you think the NSA is genuinely pursuing its national security mission: If it were, wouldn't it make perfect sense to figure out how to attack Tor?


The Stasi and the Gestapo were genuinely pursuing a national security mission. They also did more self-inflicted harm to Germany than the A-bomb did to Japan from the outside. He's not exaggerating the amount of damage an intelligence agency can do.


I feel like you've just invoked Godwin's Law, and yet in this case the comparison actually seems apt...


It is important to realize that all useful comparisons to the Holocaust will be made against situations that are not as dire as the Holocaust.

If there is a situation as dire as the Holocaust, then rhetoric about things being as bad as the Holocaust is no longer useful. Useful points made in a situation that horrifically dire are made with machine guns and bombs, not rhetoric. The proper time for rhetoric is well before the situation ever evolves that far.

We should therefore consider carefully whether a comparison to the Holocaust is out of line or not. Blanket judgements about such comparisons (such as the standard interpretation of "Godwin's Law") are not useful.


Yeah, that's my point. That there's a widespread convention that a thread is over once a comparison to Nazis is made because, well, where do you go from there? - and yet in this case, the comparison is factually very similar to where the Nazis were in the early 1930s, before guns and bombs became necessary. And yet we got the Holocaust and WW2 because nobody intervened back when it was "just" a surveillance state and a bunch of economically disenfranchised people looking for a scapegoat.


To think about that a bit:

Actually, there have been several pretty brutal genocidal events in history that have points of comparison to the Holocaust. In no particular order, it is instructive to look at the Holodomor in Ukraine, Pol Pot, Stalin's purges, and the Hutu-Tutsi conflict.

There's no sense in calling forum moderators nazis in general, which is why Godwin came about. But when considering large-scale genocide and surveillance societies, comparisons to Nazi Germany do become relevant.


There have absolutely been genocides that can be compared to the Holocaust. I probably mis-emphasized my above post.

What I mean is that statements comparing incidents to the Holocaust lack utility if the situation has escalated to the level of brutal genocide. Any sort of statement delivered with words is useless at that point, that isn't the sort of situation that you can talk yourself or somebody else out of. If you want words to have an effect, you need to use them before the situation ever escalates that far.

A house-fire can be prevented with a stern lesson about deep-frying turkeys indoors, but once that actually starts happening, your lecture is of no use. At that point, you need to call in the fire fighters.

Talking about genocides can conceivably prevent a genocide, but talk about genocides can never stop a genocide.


How is it apt? If the genuine national security aims pursued by the NSA can be aptly compared to those of the Gestapo, we're well past the point where it makes any difference exactly what techniques they're using to achieve those aims. If their aims are more reasonable, then, again, what's wrong with a spy agency trying to spy?

In other words, the U.S.A.'s national security interests bear little resemblance to those of Nazi Germany (I can't believe I have to type that).


I think a lot of that depends on exactly who's being targeted, and for what reasons. Those revelations haven't made it out yet. The only information we have is the NSA director answering "Not intentionally, no" when asked if the NSA ever spied on American citizens, along with Snowden's allegations that there are no checks and balances if an employee of the NSA believed that they did, and Russ Tice's claim that the NSA spied on Obama.

It's often forgotten that there were many people in Nazi Germany or Stalinist Russia or Communist China or North Korea who believed those countries were completely justified in their actions as well. The reason they are widely reviled is because they lost: people outside of their culture came in, beat them down either economically or militarily, and said "That's not okay what you do to your own citizens."

If I look at the facts of what the Nazis did in the 1930s (before the gas chambers and concentration camps), it was that they turned a large portion of their state security apparatus inward and devoted it to controlling a portion of their population that the ruling class deemed undesirable. I don't know whether the same thing goes on at the NSA; I hope it doesn't. I do know that there should be checks & balances to make sure that it doesn't, because it can become an awfully slippery slope.


A lot of those questions simply tie back into the age-old debate over capability.

You can talk about checks and balances all you want, but if you hand a "trusted soldier" a rifle he may yet kill many of the wrong people before he can be stopped. Yet people don't typically lie awake at night staring at the ceiling worrying about the local National Guard violating Posse Comitatus and imposing martial law.

But on the other hand the capability still exists, so we do make at least cursory efforts at mitigating this risk. We perform nominal screening of recruits into the military, we keep most weapons locked up in the armory when the soldiers are at garrison, we train soldiers at all levels of where their allegiance lies, what Posse Comitatus means, etc.

So it is with the NSA. Let's say they determine that the NSA needs the ability to perform surveillance, but that Snowden's revelations have demonstrated that better checks-and-balances are needed, even though there's no evidence of "Stasi-like activity", just to be safe. They go and add these required checks-and-balances.... do you still feel ultra-threatened by NSA?


There are plenty of checks and balances against a local national guard violating posse comitatus, namely that the federal government would then send in the rest of the army - all of whom are sworn to protect and defend America and its citizens - to right the situation.

The problem with the NSA is that if they are abusing their power, nobody will ever know about it, because everybody they deal with is sworn to secrecy. Abuses are far more likely to happen in this situation, not because of any inherent maliciousness, but because whenever you have a large organization that never has to face an opposing viewpoint you end up with groupthink and a large possibility of ill-considered decisions.

I'd be satisfied with detailed congressional oversight. Unfortunately, the last time the NSA director was called before a congressional subcommittee, his statements don't seem to match up with the actual operations of his agency, as they've been leaked since. I hear some congressional reps are pretty mad about that.


In other words, the U.S.A.'s national security interests bear little resemblance to those of Nazi Germany (I can't believe I have to type that).

Fine, I'll concede the singular moral uniqueness of the government of Germany from 1933 to 1945.

How about the Stasi? The KGB? The COINTELPRO-era FBI? The Star Chamber? These were all arms of governments as legitimate as mine or any other, and their aims were exactly that of every other internal security agency. The harm they did was not some sort of moral corruption, it can't be cured by being the good guys or on the side of the good guys.

These were evil organizations consisting of evil people because of what they did, not why they did it. An East German government without the Stasi is just yet another poorly run postwar client state. The Soviet Union without the KGB (and a few other atrocities) is just a large developing country with some ill-considered economic policies.

Post Church Commission America is just a better version of America. When the children of ex-NSA employees lie to their friends that their father left when they were young because the truth would be embarrassing, it will be a better America still.


> When the children of ex-NSA employees lie to their friends that their father left when they were young because the truth would be embarrassing, it will be a better America still.

What the FUCK, man. Is this seriously how you think? Are all of your moral questions so easily placed into neat little bins?


Attacking Tor by passive analysis is one thing. Installing spyware, creating a botnet, and making the infection process quick and easy is another. There might be some justification for the former. The latter is too risky.


It's not a "Manhattan Project" if it's within the capabilities of any decent-sized organized crime syndicate. People here have short memories. In the 1990s, teenaged hackers owned up the backbone.


I called them analogous because of their potential effects and their development in secret by governments. I don't think a crime syndicate could do it so effectively; when the NSA "owns up the backbone", even if the operator discovers the intrusion, it stays owned.


The Manhattan Project involved gobs of never-before-done engineering and new understanding of physics. Owning up the backbones is simply a matter of scale and access.

I don't think there is really an analogy here.


Heh, I wonder what places they work now.


Actually, I'd say there is more justification for the latter. Passive analysis nails everyone. It's a scary capability to have.

Malware is a fundamentally targeted endeavor since known exploits get patched.

The question of course, is are the targets legitimate?


Sucks you are being downvoted for not agreeing with the hyperbole, but I think you are correct.

The NSA's job is to spy on things. TOR represents a place where illegal things occur, so it is a perfectly reasonable thing that they would be tasked with trying to stop such illegal things there.


Hardly. The NSA's techniques, as described thus far, appear to be your basic computer security fodder. The same techniques that any modest black hat could do.

The difference is in the scale and dedication.


Maybe the NSA cyberwar effort did not produce new earth-shaking insights; the Manhattan Project did that, of course. But both efforts may be compared in terms of their price tag: both cost billions of tax dollars to implement.

Interesting that instead of reaching out for the stars we turn inwards - snooping as the new frontier that is pushing technology forward, now here is a great prospect ...


Metacommentary:

I've taken a jaundiced view of "liberation tech" efforts in the past and this is as good an illustration as any of why. Among "amateur" libtech projects, Tor is about as good as you get --- an active community, extremely widespread use, technical people with their heads screwed on right and as much humility as you can reasonably expect of people whose projects are (candidly) intended to thwart world governments.

If Tor can't provide meaningful assurances (here, there's a subtext that Tor actually made NSA's job easier), you'd need an awfully convincing reason for how you're going to do better than they are before "liberating" the Chinese internet, especially given that it is your users who assume the real risks.


I didn't read it that way at all, in fact, it sounds like Tor is sufficiently robust that a good number of NSA employees were tasked with finding exploits. In terms of the exploits found, it looks like all were against the browser.

  Tor is a well-designed and robust anonymity tool, and 
  successfully attacking it is difficult. The NSA attacks we 
  found individually target Tor users by exploiting 
  vulnerabilities in their Firefox browsers, and not the Tor 
  application directly.


Tor enabled them to filter down Internet traffic to a subset, and then they simply violated the security premise behind real-world Tor usage (that the rest of the stack was secure) to pierce the veil completely.

I'm not indicting Tor. The opposite. But in Iran, China, or Belarus, you don't get to call a foul ball when your libtech stack breaks somewhere you weren't working on.

And again, my concern isn't Tor, but the (far more amateurish) things people come up with as new Tor alternatives to e.g. "circumvent the great firewall".

The principle I'm trying to communicate is that there's a degree of chauvinism implicit in amateur libtech --- that despite the billions of dollars any real country can leverage against Internet privacy, indie developers have a fighting chance against Iran, because after all they're just a tinpot dictatorship.

The other more general principle I try to communicate is that it doesn't matter how nice, or even how necessary, any given bit of security technology is. What matters is the engineering: will it work as deployed. Not having a better answer doesn't change the engineering fact of whether the best current solution is viable.


tptacek, I'm not sure I understand: do these new revelations really indicate indie developers don't have a fighting chance against Iran?

U.S. - absolutely, no fighting chance.

China - chances look slim.

Iran, Belarus - are you familiar with the technical achievements of their NSA equivalents, and so came to the conclusion they're likely as good as the NSA? Or maybe what the NSA did is just generally easy to do in your opinion?


Iran spends ~10bn/yr for the "on the books" part of their military. How much vulnerability research do you think $500MM buys? Answer: a lot.


Sure, they still got hacked by Russian USB sticks, though. What I'm seeing (the slides, Mr Alexander and so forth) is a huge list of incompetent dinosaurs in key positions. Sure, there are skilled - very skilled - people all over the place: NSA, Iran, India, China, Australia, Cyprus (you name it). Sure, the NSA employs more mathematicians than anybody else.

They have visions of hackers with AK rifles on their backs, wearing masks? Logos with a planet and a huge eye spying on it?

The flops these agencies make might surpass the successes by far. The thing is that you need to dig in order to find out the real story. Hollywood even makes movies advertising epic failures as wins (i.e. Argo, seriously???).


Sure, they still got hacked by Russian USB sticks, though.

It seems unlikely that similar systems in any country would have remained unpenetrated in the face of that attack.

Just because one entity in a country got hacked, it doesn't mean other entities in the same country can't hack others. At the moment attack seems much easier than defence.

We don't know if the NSA has been penetrated, but given that Google's law enforcement search system was penetrated by unknown parties originating from China it would surprise me if the NSA has remained free from breaches.


Yes, but I'm not sure that justifies the end conclusion:

1. Do we actually know Iran spends $500MM on vulnerability research? What about Belarus?

2. Suppose they do. So they have zero-day exploits, sure. IIUC, you need MITM capabilities to execute an attack on tor like the NSA did. This sounds costly, and I'm not sure it can be outsourced like buying zero-days. It also requires, ummm, "being on good terms" with telcos, backbone providers etc., which I'm not sure Iran is.

So I'm not saying it's inconceivable that Iran can attack tor users, but the opposite also sounds plausible.


Would you bet your life on the answer?


No, I certainly wouldn't. That doesn't make the answer clear, though.

If your point is that Iranian and Belarusian dissidents need to be aware of the risk, I agree.


I don't think your fears are justified by the article. The first thing you read is a pull-out quote that says:

>Tor is a well-designed and robust anonymity tool, and successfully attacking it is difficult.

Maybe you're referring to:

>The very feature that makes Tor a powerful anonymity service, and the fact that all Tor users look alike on the internet, makes it easy to differentiate Tor users from other web users.

Your kind words about the Tor project are accurate, but they have never claimed that it's possible to reliably hide Tor use.

The next sentence is:

>On the other hand, the anonymity provided by Tor makes it impossible for the NSA to know who the user is, or whether or not the user is in the US.

There is no Tor exploit or new information here. The NSA has enough resources to recognize Tor users in the USA en masse, as well as to single out individual connections. From this point on, FoxAcid works the same whether you're using Tor or not.


>(here, there's a subtext that Tor actually made NSA's job easier)

I'm not sure how you reached that conclusion.

The slides mention that Tor is:

* Very difficult to identify on the network-level, since Tor-tls traffic is indistinguishable from Apache-tls traffic as of 2011

* Impossible to fully deanonymize

* Only exploitable via a handful of browser exploits.

Further, later in the "Tor is the King" slide deck, there's this rather glowing endorsement of the TAILS livecd:

"Tails... adds severe misery to CNE equation."

...which is what you'd expect, given that TAILS is entirely ephemeral, and so all of their callbacks and APT-style attacks are useless against it.

I had previously considered TAILS a rather "amateur" system myself, because of the glut of livecds bundling Tor. But it turns out they're actually adding severe(!) misery to the NSA's exploitation team! I'm downloading the TAILS cd now so I can switch over to using it in a VM rather than running Tor Browser Bundle on my own machine.


Will tails still only use ram and no disk within a vm? If not, you'll just have a slightly better tor browser bundle (plus other features) right? I always thought the "ram only" portion of tails was one of the biggest anonymity wins.


If the VM doesn't have a disk, then yes...


Even TAILS worries me slightly. Why? Homogeneity. The same thing that makes a freshly booted TAILS "clean" and exactly the same as any other freshly booted TAILS also means that it's a "known quantity" to an attacker.

A lot of obscure vulnerabilities that would normally require a "perfect storm" to be used together to compromise a system are much easier to construct once you know a lot about the target system. And it would be well worth the time for an attacker to develop an exploit that would work against all TAILS users.

In the same way the Firefox heap spraying attack was specifically targeted against users of the Tor Browser Bundle. There, homogeneity was a large part of victims' downfall. TAILS is arguably many times more homogeneous.

ASLR and related technologies are a (very very basic) start but we may not have better answers to things like this until we have the likes of binary diversity as described in http://lwn.net/Articles/565113/ being usable (Even then, a final binary compilation stage would need to be taken by an application user before use).

Edit: and yes, you don't need to point out that the TBB vulnerability did heap spraying in Firefox's JIT and so binary diversity would probably have been minimally effective.


TAILS will detect it is running inside a VM and warn you not to do it.

I know quite a few folks who are sitting on escapes for popular VM products. They are not at all uncommon.

I would be absolutely shocked if the NSA's little toolkit didn't detect virtualization, pop out, and backdoor the host OS.


And if you run it on your main machine, it could exploit it and mount the hard drive. You need another diskless computer just for this...


Not just diskless, but somehow incapable of flashing the BIOS, rewriting the CPU microcode, and loading new firmware into the NICs and other peripherals.


CPU microcode is volatile. BIOS flash used to be jumper-protected on old PCs, and on many NICs you could remove the flash if you didn't want them to be bootable.

Used to have some machines with zip flash sockets you could remove while the machine was running (useful for flashing linuxbios aka coreboot in the old days).

Not really your modern laptop though.


Are there any small, cheap laptops with optical drives?

Another alternative is having enough 1gb usb sticks to be able to throw them away on a regular basis. Sort of like burner phones.

(Personally I'm not sure I want to go so far as to buy a separate computer for private use now, but I might as well know how to do it.)


You can get various flash types with read-only switches, though whether these can actually stop writes I don't know. Optical drives are harder to get now, but old computers are widely available I suppose, and less traceable.


...or run with trustworthy full disk encryption.



I wouldn't really consider Tor an amateur libtech product since the basics of the underlying technology (onion routing) was developed by the US Navy.

Here's the original patent from 1998:

http://patft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=...


here, there's a subtext that Tor actually made NSA's job easier

Are you reading anything from that subtext beyond, "Tor has a high concentration of the kind of users we're interested in, so let's keep it a juicy target rather than squeezing too hard?"


As I understand it, which admittedly isn't well, it made surveillance jobs easier when its users mistook anonymity for privacy. That is, sending something through the Tor network means that it's more likely your traffic is going through a node belonging to a group that records everything than if your traffic randomly found a point-to-point route across the internet.

I don't see how using the Tor network could make you less anonymous, unless, as you point out, its use suggests a user's greater likelihood of sending and receiving interesting information.

It hurts the system that exit nodes have been targeted for content that other users were responsible for, but from what I have read, Tor can provide people meaningful anonymity that is difficult to breach.

As an aside: What is the effect of such parenthetical statements? I think they just create a vague idea of uncertainty and fear. If there is a vulnerability, there has to be a mechanism, not just a sense of omnipotent government surveillance.

Maybe that mechanism is the probabilistic likelihood that an organization controlling a large portion of the Tor nodes can identify users. Maybe it's a flaw that has been surreptitiously put into the source code. I'm pretty sure most people who know would suspect the former is far more likely than the latter. It's easier to address the questions when you know what the parenthetical utterance was referring to in the first place.


Side note: no intermediate node in the Tor network can see your traffic in the clear. They just see encrypted data. The only node that could see what you are sending is the exit node, but that can be solved easily by using HTTPS.
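For intuition, here is a toy sketch of that layering idea (an illustration only, not Tor's actual protocol or cell format; it assumes the Python `cryptography` package is installed). Each relay can strip exactly one layer, so everything an intermediate node handles is still ciphertext, and only the exit ends up with the innermost payload:

  # Toy illustration of onion layering -- NOT Tor's real protocol.
  # Requires the `cryptography` package (pip install cryptography).
  from cryptography.fernet import Fernet

  # One symmetric key per relay on the circuit: entry, middle, exit.
  relay_keys = [Fernet.generate_key() for _ in range(3)]

  def wrap(payload, keys):
      # Encrypt for the exit first, then wrap outward, so the entry
      # node's layer ends up outermost.
      for key in reversed(keys):
          payload = Fernet(key).encrypt(payload)
      return payload

  onion = wrap(b"GET / HTTP/1.1", relay_keys)

  # Each relay strips exactly one layer. Until the exit node has
  # removed its layer, what remains is still ciphertext.
  for i, key in enumerate(relay_keys):
      onion = Fernet(key).decrypt(onion)
      print("relay", i, "sees plaintext:", i == len(relay_keys) - 1)

  print(onion)  # only the exit node ends up with the actual request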


HTTPS is not to be trusted against an NSA-level adversary, since the CA system is too weak.


I don't know that it really is subtext here, but I suspect that Tor has many people communicating electronically where they would otherwise refuse to do so at all. In other words, "Normally I would refuse to talk about this on the internet and would meet you in the back of the bar instead, but I trust Tor so let's discuss this now."

It therefore puts communications that previously would not have been available to the NSA into the realm of things that the NSA can access.


Wouldn't those people have used encrypted communications over Usenet[1] or burner cell phones? Or sometimes even anonymous remailers?

And all those are just as tricky. De-anonymizing alt.anonymous.messages (http://ritter.vg/blog-deanonymizing_amm.html)


Certainly, many would. Would as many though?


What chance does a ragtag team of people from various backgrounds, working part time on an open source project, have against a determined enemy with billions in funding and hordes of PhDs working full time with the single goal of violating the privacy of netizens?


A better chance than one would think from that phrasing, as there are a lot of highly motivated and intelligent people working on privacy tech.


It doesn't matter. Protecting privacy is too big a goal.

In the end, why does the NSA do what it does? To get specific, actionable intelligence. Everyone in the world's privacy is just collateral damage.

But, turn that around. Protecting your privacy (never mind anyone else's) is too big a goal. It's too easily breached. Instead, think of what specific information you feel you need to keep private, and how you might do that.

Of course, the goal of keeping specific, actionable information secret leads to the question of why you'd want to do that. A general desire for privacy doesn't mark you as one of the "bad guys". Having something specific to hide leads to questions of why? To what end? Should we be worried?


That's why we all should keep as much privacy as possible - if not because we feel a need for it ourselves, then to help out those who do. It is easier for them to hide among a mass of encrypted traffic and privacy-aware activity than to stand out there on their own, shining like a light bulb to the NSA and other adversaries.

Boycott surveillance, encrypt everything.


"Having something specific to hide leads to questions of why? To what end? Should we be worried?"

Why do you close the doors when you go to the bathroom? What are you hiding? Should we be worried?


Because it's polite and it limits the diffusion of bad smells. If you really want to see my dick all you have to do is go on chatroulette. No need to follow me to the toilet. Unless you're interested in more than just looking.


Point is not that you have an altruistic reason for wanting privacy for certain parts of your life (which I'm sure you do). The point is that once the very act of wanting privacy starts becoming suspicious, we've moved away from a free society towards the totalitarian end of the spectrum.


And some of us are quite happy on that side of the spectrum.


That's entirely not the point. Even with 1000x the amount of funding, TOR would still be theoretically insecure. Only physically secure hardware (including cables and routers) stands a chance against side-channel traffic analysis.


Speaking of which, now is probably a great time to mention that the Berkman Center is doing an open call for fellowship applications:

http://cyber.law.harvard.edu/getinvolved/fellowships

And I'm sure the EFF is looking too.


This is one way the NSA can attack Tor. If they just want to de-anonymize a connection, not get access to the content (e.g. to locate the Silk Road server), in theory they can just analyze all their passively collected data from major fiber backbones to identify and locate the user.

Tor, including hidden services, was never designed to protect against someone who could observe all or almost all traffic in the Tor network. Given that data, it's rather easy to correlate timing information. Indeed, Tor fundamentally allows this since it aims to be a low latency network.

Given the NSA's extensive tapping of key fiber lines, we should assume they can actually observe the necessary traffic. From the original paper announcing Tor: "A global passive adversary is the most commonly assumed threat when analyzing theoretical anonymity designs. But like all practical low-latency systems, Tor does not protect against such a strong adversary." --- Tor: The Second-Generation Onion Router [0]

[0] https://svn.torproject.org/svn/projects/design-paper/tor-des...
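For intuition, here is a minimal sketch of the kind of timing correlation a global passive observer could run between traffic entering the network and traffic leaving an exit. The flows, bin sizes, and thresholds are made up for illustration; real traffic-confirmation attacks are far more sophisticated:

  # Toy timing-correlation sketch for a global passive adversary.
  # The flows below are synthetic; real attacks are far more refined.
  import numpy as np

  def signature(timestamps, bin_size=0.5, window=60.0):
      """Bucket packet timestamps into a fixed-length packet-rate vector."""
      t = np.asarray(timestamps)
      bins = np.arange(0.0, window + bin_size, bin_size)
      counts, _ = np.histogram(t - t[0], bins=bins)
      return counts.astype(float)

  def correlate(entry_flow, exit_flow):
      """Pearson correlation between two packet-rate signatures."""
      a, b = signature(entry_flow), signature(exit_flow)
      if a.std() == 0 or b.std() == 0:
          return 0.0
      return float(np.corrcoef(a, b)[0, 1])

  # Hypothetical observations: packets from two clients entering the
  # network, and one flow seen leaving an exit node toward a server.
  rng = np.random.default_rng(0)
  client_a = np.cumsum(rng.exponential(0.4, 100))
  client_b = np.cumsum(rng.exponential(0.4, 100))
  exit_flow = client_a + rng.normal(0.05, 0.01, 100)  # A's traffic plus network delay

  for name, flow in (("client A", client_a), ("client B", client_b)):
      print(name, round(correlate(flow, exit_flow), 3))
  # The exit flow correlates strongly with client A and weakly with B,
  # which is exactly the weakness low-latency designs concede.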


Is nobody slightly concerned that the date shown in the PDF file which sparked this commentary ( http://www.theguardian.com/world/interactive/2013/oct/04/tor... ) shows the PDF as being created in 2007?

It looks like they had some trouble picking out users 5 years ago... lord only knows how easy it must be for them now.


I think this depends vastly on the number of rogue Tor nodes. However, picture this: the NSA isn't the only organization going after Tor, right? Probably there are others. So if you are China, Iran, Syria, Russia, etc., what do you do? You set up your 'own' poisonous Tor relays. What you end up doing is disrupting and diminishing the potential of a single agency, or a group of agencies, to control a big % of Tor traffic.

So all in all, it might be a good thing, and way more difficult than it was 7 years earlier. Not to mention that at the time we were browsing through Tor at 50 kb/s while now we browse at 400 kb/s.


Sounds like, if you're going to do something very sensitive on tor, you need to:

- always have an up-to-date version of the Tor bundle!

- compile the bundle yourself from source

- run it virtually, and always roll back to a clean snapshot (taken before installing Tor) when done

- if possible, use it from a network that is not your own (open wifi, public wifi, etc.)

- spoof your MAC address (a rough sketch of that step follows below)

- do not run JS, Java applets, etc.!

I know this seems extreme, but from what I read, it's the best you can do to protect yourself.
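As a concrete sketch of the MAC-spoofing item on that list, here is roughly what randomizing a MAC address looks like on Linux. This assumes root privileges and the iproute2 `ip` tool; "wlan0" is a placeholder interface name, and in practice a tool like macchanger handles this for you:

  # Minimal sketch: set a random, locally administered MAC on Linux.
  # Assumes root privileges and the iproute2 `ip` tool; "wlan0" is a placeholder.
  import random
  import subprocess

  def random_mac():
      # Set the locally-administered bit and clear the multicast bit
      # in the first octet so the address is valid for a NIC.
      first = (random.randint(0, 255) | 0x02) & 0xFE
      octets = [first] + [random.randint(0, 255) for _ in range(5)]
      return ":".join("{:02x}".format(o) for o in octets)

  def spoof(interface="wlan0"):
      mac = random_mac()
      subprocess.run(["ip", "link", "set", interface, "down"], check=True)
      subprocess.run(["ip", "link", "set", interface, "address", mac], check=True)
      subprocess.run(["ip", "link", "set", interface, "up"], check=True)
      print(interface, "now uses", mac)

  if __name__ == "__main__":
      spoof()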


If you are doing something that would make the NSA interested in you (and I would highly highly discourage that), you'd need to focus more on tradecraft. Get the laptop from a source that can't be traced to you, like a thrift store in a city where you don't live or normally frequent. Disguise yourself, pay in cash, and either make sure there are no security cameras or wait a good year before you do whatever you are going to do (nobody keeps camera data longer than that). When you do whatever you are doing, use a Live CD like tails. Disguise yourself. Wear gloves. Go to a city you don't live in or frequent regularly, and only use cash during the trip. Park a long distance from your wifi source where there are no cameras and walk to where you will access the wifi. Use a cantenna to hit an open wifi some distance away, preferably a public connection like a busy coffee shop. Do whatever you are going to do. Walk back to your car, drive to a nearby town, smash the laptop and dispose of in a dumpster. Drive home.


That isn't sufficient.

The NSA might be able to query their databases for anyone who recently visited the city where the wifi involved is located, and you might match that if there were license plate scanners on the way, even if you paid for gas in cash. If that information isn't collected by the NSA today, it probably will be tomorrow.

The NSA might be able to query their databases for anyone who "went off the grid" for a day or two around the event they're interested in. That's not good enough to id a suspect, but it narrows the pool. If you stopped making google searches from your normal internet connection within a day of the event in the other city, and you normally use your computer every day, or if your phone was off within a day of the event, that's suspicious. Enough of those kinds of data points and you become a suspect.

Even simpler, and a staple of crime fiction, stuff happens that you have no control over that can place you in the vicinity at the time of the event. If you have bad luck and get a ticket or get in a car accident in the city in question, for instance...

Far from suggesting that you simply need to be more careful, my view is that you can't take sufficient precautions to get risk down to a tolerable level if whatever you're doing brings you to the attention of the NSA.


What if you ran scripts on your phone and computer so that it would appear as if you were browsing the internet and using your computer during your regular usage times?
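Something like this, maybe (a rough sketch; the URLs and hours are placeholders, it assumes the `requests` package, and whether crude cover traffic like this would actually fool pattern-of-life analysis is another question):

  # Rough sketch of cover traffic during normal usage hours.
  # URLs and hours are placeholders; requires `pip install requests`.
  import datetime
  import random
  import time

  import requests

  BENIGN_URLS = [
      "https://news.ycombinator.com",
      "https://en.wikipedia.org/wiki/Special:Random",
      "https://www.weather.gov",
  ]
  ACTIVE_HOURS = range(8, 23)  # hours you'd normally be online

  def browse_forever():
      while True:
          if datetime.datetime.now().hour in ACTIVE_HOURS:
              try:
                  requests.get(random.choice(BENIGN_URLS), timeout=10)
              except requests.RequestException:
                  pass  # a network hiccup shouldn't kill the cover traffic
          # Wait a human-ish, randomized interval before the next request.
          time.sleep(random.uniform(60, 15 * 60))

  if __name__ == "__main__":
      browse_forever()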

Also using public transportation (and paying for it in cash) will help mitigate the first issue you brought up.


Personally I had the idea a while back for a sort of time-release dead drop. Stuff a Raspberry Pi into a fake power strip, put your seekrit information onto the SD card, and go plug it in somewhere in a city you 'happen' to be passing through, near to a public wifi spot.

Then a year later it wakes up and uploads the data publicly via Tor and self-wipes. Even if it's traced back to the Pi, they'll have to trace the Pi back to you (you bought it untraceably, right?).
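A hypothetical sketch of the time-release part, assuming the Pi runs a local Tor daemon with its SOCKS port on 9050 and has `requests[socks]` installed; the release date, upload URL, and payload path are placeholders, and properly wiping flash media is harder than the `shred` call below suggests:

  # Hypothetical time-release upload over Tor from a hidden Raspberry Pi.
  # Assumes a local Tor daemon (SOCKS on 127.0.0.1:9050) and `pip install requests[socks]`.
  # RELEASE_DATE, DROP_URL, and PAYLOAD are placeholders.
  import datetime
  import subprocess
  import time

  import requests

  RELEASE_DATE = datetime.datetime(2014, 10, 4)
  DROP_URL = "http://example-drop.onion/upload"   # placeholder
  PAYLOAD = "/home/pi/payload.txt"                # placeholder
  TOR_PROXY = {"http": "socks5h://127.0.0.1:9050",
               "https": "socks5h://127.0.0.1:9050"}

  def main():
      # Sleep in long chunks until the release date arrives.
      while datetime.datetime.now() < RELEASE_DATE:
          time.sleep(3600)

      # Upload the payload through Tor's SOCKS proxy.
      with open(PAYLOAD, "rb") as f:
          requests.post(DROP_URL, files={"file": f}, proxies=TOR_PROXY, timeout=300)

      # Crude self-wipe: overwrite the payload, then power off.
      subprocess.run(["shred", "-u", PAYLOAD], check=False)
      subprocess.run(["poweroff"], check=False)

  if __name__ == "__main__":
      main()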


How can you buy a Pi untraceably? Last time I checked you could buy them from e-stores using credit cards..


Pay a stranger like $300 to buy you a $25 Raspberry Pi?


Until CCTV is combined with facial recognition!



I think that what you are saying is true, but there is always a level of risk. It's more about mitigating it than eliminating it - you can't really do that.

Again, this is all hypothetical, don't go and do anything naughty.


Yes, that has always stopped me from doing some things I would like to do covertly that aren't exactly OK. But there is no safe way to do it. How do you make sure that you won't get into a traffic accident when going on a mission or returning from it? It would be really nice to hear how you make roads 100% safe.

I'm also very security-oriented and have been monitoring this field for over 15 years. So I know how hard it is to be absolutely anonymous. I also know that my Finnish and English aren't exactly textbook examples, so I can easily be profiled out even if I were technically 100% anonymous.

I always surf the web from a virtual container which is fully reset after each session. I also never process email, IM, web, archives or whatever on the host system. I also have a completely separate (hardware), similarly safe configuration for handling PGP/GPG encrypted messages, which is connected only via a serial link so I can view the ASCII-armored payload before sending it for processing. Anything other than an ASCII-armored payload is never sent over that 7-bit link.

It's also obvious that I have prepaid dumb phones, one for each identity, which are rotated on a random schedule. I only use those phones at a single location (without other tracking devices), because moving with them would allow linking my (moving) position with my other phones, making it easy to correlate them. Yes, I know this is a non-optimal solution if you're expecting someone to hunt you down. But it's good for generic privacy as long as you don't expect anyone to be there waiting for you.

Getting rid of habits is also very hard and requires a huge effort. That single thing (service, program, password, etc.), word, or phrase you just used will single you out from the larger group.


This is such random advice. What threats are you defending against here?

"Wear gloves": Why? Are you thinking someone will pierce the veil of all these other precautions but then be stymied when they find a smashed laptop with no fingerprints on it?

"Sir, we followed him for a year, watched him buy a laptop and use it in a park, but when we recovered the laptop from the dumpster, there were no fingerprints on it!"

"Curses, our plan is foiled!"


They find your smashed lappy in the dumpster. "Oh, look, his fingerprints are all over it". There is no national DNA database outside of the penal system (yet) so your dandruff won't do you in. But a print will, if you've ever done something that got you into NCIC.

The idea is not to foil people who suspect you, it's to keep them from suspecting you in the first place. If you are an NSA-targeted suspect and you did something naughty, you've already lost the game.


Who is "they", why do "they" even suspect the dumpster? "They" find fingerprints on it, and no hard drive (you used Tails, remember?)

So after all that work, you link a guy to touching a smashed laptop. Not very helpful.


I'm glad I have no reason to be that anonymous. Sounds stressful. :)

Still, I completely agree with you.


Yes, you do have a reason: the practice and preparation to be anonymous takes months/years. I don't care if you're Bruce Schneier and Richard Stallman rolled into one, you will slip up the first time you try. 100% guaranteed.

The standard for needing this kind of anonymity is anticipation of a use case. By the time the use case is at your door, it is too late and if you are unprepared your best bet is to bend over / run away (as the situation permits).


"Park a long distance from your wifi source where there are no cameras"

This implies that you've been driving round (in your disguise of course) in a car. With a registration plate.


Have you tested this approach?


Hah, of course not. I don't do naughty things, I just like thought experiments. And anyone who does do such things would be pretty stupid to draw attention to themselves on Hacker News by posting a proposed method for avoiding surveillance.


But that's exactly what someone would say and do to draw attention away from the fact that they may be doing something sketchy :)


Yeah, I was wondering if a virtual machine is safe from malicious attacks, though. Can anyone comment on the feasibility of this method as a fail-safe?


Ideally you'd want to be running Tor with transparent proxying of all traffic on a physically separate (and locked down) host. I believe there are guides out there on how to do all that on a Raspberry Pi.

On your primary browsing/whatever machine, I believe (but have not exhaustively researched) that it would still make sense to run inside a VM/container, because that would provide a much more 'generic' set of system characteristics (MAC address, clock jitter stats, CPUinfo, etc) than your actual hardware. It does provide a greater attack surface, so you'd have to weigh up the value of potentially masking physical identity vs likelihood of gaining root due to VM exploits.

There's also the risk of overconfidence because of these measures, which might lead you to overlook important details in the host OS, or in your communication habits.
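
One cheap sanity check for such a setup, as a sketch rather than a real leak audit: from inside the VM, with no proxy configured, fetch the Tor Project's check page. If the separate host's transparent proxying works, the page should still report Tor; if it doesn't work, ideally nothing gets out at all. The "Congratulations" string match is an assumption about that page's wording:

    # Sketch: confirm a transparently-proxied VM really exits through Tor.
    # Assumption: check.torproject.org's page text contains "Congratulations"
    # when the request arrived via a Tor exit.
    import urllib.request

    try:
        html = urllib.request.urlopen("https://check.torproject.org/", timeout=60).read()
    except OSError as e:
        print("no route out at all (also acceptable for a locked-down VM):", e)
    else:
        if b"Congratulations" in html:
            print("traffic appears to exit via Tor")
        else:
            print("WARNING: reached the check page, but NOT via Tor")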


Another option is to run an amnesiac OS from media that is not re-writable (a CD-R). Note this would replace the VM, not the separate Tor machine.


There are plenty of ways to break out of a VM. What if the VM has a filesystem that is read-only to the host?

Drive by download, cookie fs drop, etc. Attack the indexing server, file previews, etc.

You really want to run the VM on an external host like a Raspberry Pi, and the VM should be different from the host running Tor.

Tor should really be rewritten as a Coq-proven Haskell program.


It's probably the best you can do, but it still doesn't prevent your anonymity from being compromised. As soon as the malware is installed, it can phone home, even if you end up wiping it after you are done.


The malware would have to escape the virtual machine. The VM needs to be firewalled off from the host and NOT have the host <=> guest tools installed.


OK. So what you're talking about is a VM that is only able to route to the internet via Tor, so it would be impossible for it to make a non-Tor connection (which would compromise anonymity).

- Even if the host <=> guest tools aren't installed on the guest, it may be possible for the malware to install them itself.

- If the host <=> guest tools can't be enabled/disabled on a per-VM basis, then that could be an issue, as you would probably have VMs that you wish to use in a less covert capacity.

- The malware would have access to your browser for the duration of that session. Presumably any information that you accessed during that session is compromised. If they are consistently able to compromise you during every session, then any slip-up with PII during any session could compromise you.


Here's an OpenBSD VM with Tor and a bunch of web browsers preinstalled. There are packet filter rules so that even if the vagrant user gets owned, it cannot transmit traffic on the outbound network interface. https://github.com/WIZARDISHUNGRY/openbsd-hiddenfortress


I meant specifically that the VM should have low privileges with respect to the host: it shouldn't be able to port scan, map drives, or discover the host's MAC via ARP. Just thinking about what happens when the NSA p0wns the OS running the Tor browser.

   (vm-tor-net
     (vm-tor-browser))
Even if the GUI VM that is used for running Tor has been compromised, it should still be impossible to determine where the Tor client node is running. That is the goal, right?


Something like Qubes [1] and its concept of security domains might help here.

[1] http://qubes-os.org/trac


> Once the computer is successfully attacked, it secretly calls back to a FoxAcid server, which then performs additional attacks on the target computer to ensure that it remains compromised long-term

It would be nice if somebody could honeypot them to find out the vulns and malware types they are using.


How do I get on the list of most interesting persons so I can set up my honeypots? Do I have to be Jacob Appelbaum or Assange?

What freaked me out is that they deliver sensible exploits for techie people. God damn it.


edit: removing meta discussion about flagging. the story should get the attention. apologies for the distraction.


Maybe it's time to make flagging public, Quora-style?


I don't think this is being flagged. I flag soap opera NSA stuff for instance, but wouldn't flag this.


It was in the middle of the second page with 10 upvotes after 35 minutes when I originally made this post.


There's another article from the Guardian on the NSA regarding Tor in the #6 slot on HN. This is in the #3 position.

I think you're overestimating the impact.

Edit: This article is now in the #1 position. Flagging isn't hurting these articles any.


1) It's currently first on the front page. 2) Complaining about voting is really tedious to read about.


Almost as tedious as reading complaints about complaining about voting :)


So how does Tails[1] stack up? It seems to thwart most of those attacks.

It blocks non-anonymized traffic and makes permanent changes difficult. OTOH, privilege escalation bugs happen frequently on Linux.

https://tails.boum.org/


According to the article,

> "Tails... adds severe misery to CNE equation."


Wait, so simply by using Tor the government will install malware on your computer. How is that legal?


My interpretation of the article was that they identify prior to attacking.

I suppose they could use a "spray and pray" attack on anyone using Tor, but that would be easily detected.


At least according to the slides, Tor appears to be safe for the most part. Which is good.


[deleted]


Actually, the slide deck states that the Tor Browser Bundle defeats some of the attacks they use that plain Tor+Vidalia is vulnerable to.

And if you're referring to the previous Freedom Hosting attack, that only affected users of TBB on Windows who had ignored "Security Update Available" messages for over a month.


I've been playing with vagrant and ansible to create a new server in a snap. Here is a good weekend project:

Instead of having just a Tor/browser bundle, build a Vagrant machine specification that installs the Tor bundle. This virtual machine would be destroyed and recreated from time to time. Now put the machine specification on GitHub and let anyone use it.


That's a great idea! Please let us know how that goes.


So how does one determine which sites are being intercepted through Tor and served malformed code? Start doing curl fetches from within Tor and outside of it and comparing hashes?
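
Something along those lines; a sketch assuming Tor's SOCKS proxy on 127.0.0.1:9050 and requests with its SOCKS extra. Keep in mind that legitimately dynamic content will also differ between fetches, so identical hashes are meaningful but a mismatch is only a starting point:

    # Sketch: fetch the same URL directly and via Tor, compare SHA-256 hashes.
    # Assumptions: tor running locally (SOCKS on 9050), requests[socks] installed.
    import hashlib
    import sys

    import requests

    TOR_PROXY = {"http": "socks5h://127.0.0.1:9050",
                 "https": "socks5h://127.0.0.1:9050"}

    def digest(url, proxies=None):
        body = requests.get(url, proxies=proxies, timeout=60).content
        return hashlib.sha256(body).hexdigest()

    url = sys.argv[1]
    direct, via_tor = digest(url), digest(url, proxies=TOR_PROXY)
    print("direct :", direct)
    print("via tor:", via_tor)
    print("match" if direct == via_tor else "MISMATCH (tampering, or just dynamic content)")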


If someone makes disposable Raspberry Pi Tor exit and non-exit nodes sealed in hard plastic resin, we could all buy them and drop them off in random places throughout the world on open networks. If enough people the world over do this, we would make it a lot harder for a global passive attacker to succeed.

Tor's biggest vulnerability is that the risk associated with operating exit nodes keeps their number relatively low, at ~1000 worldwide. If hundreds of thousands of exit nodes started popping up all over the globe, it would be very hard to counter.

I'm also curious whether enough governments unhappy with what is happening could go as far as hosting many Tor nodes outside the control of the NSA. Is the global passive adversary threat still valid if there are many such adversaries that are non-cooperative with one another (i.e. China can't monitor US and Russian Tor nodes, Russia can't monitor US and Chinese nodes, and the US can't monitor Chinese and Russian nodes)? My intuition tells me that a global passive adversary would have to be able to monitor most of the nodes, but if others came on the scene doing the same, they would dilute the percentage of nodes that any single global passive adversary could monitor.


Can one use something like Lynx with Tor? I doubt there are very many exploits for it.


Sure these folks are smart and have all sorts of powerful weapons; what are the odds that someone out there could successfully repurpose some of these weapons? What is the likelihood that vulnerabilities exist in the NSA's systems? We can never know since it's all secret. If someone does take over these systems we wouldn't know that either.


Historically, different nations' intelligence agencies have often infiltrated each other. I'm sure someone will eventually gain access to the NSA's weapons, but I think they would be more likely to steal details to add to their own systems than "repurpose" the NSA's.


I am loving every minute of this NSA-gate or Snow-gate. Nothing like holding a government accountable for decisions it makes behind closed doors, decisions that had an impact on the whole world, not just US citizens.

It's also great to see all the technical details being released about how the intel agencies collect data. It's all fascinating.


The NSA is like Tor's pentesters, except Tor doesn't get to see the results.


Given that the US government is Tor's main funder, the first part may be more accurate than the second part.


Foxacid sounds like an NSA version of BeEF (http://beefproject.com/), which hooks browsers that would then be monitored from the Lockheed-Martin-style SOC (https://www.youtube.com/watch?v=x1tCJfy_iZ4 :-).

However, for those with more limited resources, Ryan Barnett is working on an open-source monitoring system for BeEF (https://vimeo.com/54087884).


It appears that the NSA has been able to target only Tor users who are using the Tor+Firefox bundle. So if you are using Chrome or some other browser configured to use Tor, you would be safe from these exploits. Wouldn't most sophisticated hackers, or other high-value targets most likely to be of interest to the NSA, already be doing that, rather than using the Firefox+Tor bundle?


Unless you put a lot of effort into the integration, I'd advise against doing that -- the Firefox included in the bundle is specifically set up to avoid leaking information, while a standard Firefox or standard Chrome will phone home or do something else (like make a DNS request over the public network) that will quickly compromise any security you thought you had.
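
The DNS point in particular is easy to get wrong when wiring a stock browser or script to Tor by hand: with the common SOCKS libraries, whether hostnames are resolved locally (leaking every lookup to your ISP) or inside Tor comes down to a single setting. A sketch with Python's requests and PySocks, where the scheme strings are the relevant detail:

    # Sketch: the difference between leaking DNS and not when manually pointing
    # software at Tor's SOCKS port. Assumes the requests[socks] extra (PySocks).
    import requests

    LEAKY = {"https": "socks5://127.0.0.1:9050"}   # hostname resolved locally: DNS leak
    SAFER = {"https": "socks5h://127.0.0.1:9050"}  # 'h': hostname resolved via the proxy

    # Both requests exit through Tor, but only the second keeps the DNS
    # lookup inside the tunnel as well.
    requests.get("https://check.torproject.org/", proxies=LEAKY, timeout=60)
    requests.get("https://check.torproject.org/", proxies=SAFER, timeout=60)

This is exactly the class of detail the bundled browser is pre-configured to get right for you.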


In the slide titled "Exploitation: Shaping" the status says "Can stain user agents working on shaping."

How do they manage to make Tor use NSA/GCHQ nodes? If they achieved this five years ago (the PDF is from 2007), would it then be reasonable to assume that since then they have managed to modify the Tor source code, in a way nobody noticed, to do exactly this?


This kind of news should encourage people to create and use better tools to find and fix vulnerabilities in software.


> FoxAcid tags are designed to look innocuous, so that anyone who sees them would not be suspicious. An example of one such tag [LINK REMOVED] is given in another top-secret training presentation provided by Snowden.

Does anyone know what these tags look like?


Someone should really make a packaged VM-in-VM, fail-secure TBB equivalent. Nothing really works from a usability standpoint while giving reasonable protection against this kind of endpoint attack.


What about the nonsense about the Quantum system? I think the reporter left some key info out.

Why is speed a factor in MITM attacks? The slide shows a proper MITM diagram... or is this Quantum thing exploiting a packet arriving before the honest response? And why would they need to do that if they are in a position to do a proper MITM attack, rather than exposing themselves to anyone who monitors for man-on-the-side attacks?


I remember somebody from Mozilla thinking out loud "we should integrate Tor in Firefox". Glad that didn't get done.


I'm more glad that they didn't do it the other way around - considering how confident the NSA is about being able to keep finding new vulnerabilities in Firefox.


Why? Because it seems that Tor actually does what it says it does. One of the biggest issues with it is that using it singles you out; if we could get more people using it then it would be less useful as a differentiator.


Apparently, John Grisham works for the NSA, naming its programs.


Don't forget that Tor publishes its exit nodes; the list is freely available to anyone. So a simple membership test of a client IP against that list of exit-node IPs identifies that client as either having come through Tor via the onion router or being an exit node itself.
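
A sketch of that membership test; the bulk exit list URL is an assumption (the Tor Project has moved that endpoint around over the years), and the list only covers exits, not relays in general:

    # Sketch: is a given client IP a published Tor exit node?
    # Assumption: the bulk exit list lives at this URL, one IP per line.
    import sys
    import urllib.request

    EXIT_LIST_URL = "https://check.torproject.org/torbulkexitlist"

    raw = urllib.request.urlopen(EXIT_LIST_URL, timeout=60).read().decode()
    exit_ips = {line.strip() for line in raw.splitlines()
                if line.strip() and not line.startswith("#")}

    client_ip = sys.argv[1]
    print("via Tor exit" if client_ip in exit_ips else "not a known Tor exit")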



