Tracking One Year of Malicious Tor Exit Relay Activities (Part II) (nusenu.medium.com)
270 points by hacka22 on May 10, 2021 | 147 comments



I'm a long-time exit node operator. I currently operate X, all in Asia, where they're most needed imho.

I would not be opposed to having some sort of operator validation of exit nodes, where you can actually validate who runs an exit node and get a person behind it. And perhaps rate those higher than others.


> I operate X currently and all are in Asia where they're most needed imho.

Interesting/Awesome. Just curious, what day/event specifically motivated you to get started with this?

To be honest, my impression -- which could be wrong -- is most exit node operators do so for nefarious reasons, Pr0n (hence your username INT-Penis), or are Fed.

(to be clear, appreciate what you are doing regardless)


I just think anonymity should be a right. I don't even remember when I was introduced to tor first, probably sometime in the early 2000s.

I'm not stupid, with anonymity comes abuse. But I don't think that's a reason to get rid of the option to be anonymous.

I'd say your impression is wrong about tor operators, I've met a few of them at various events. (Not including the tor operators who try to subvert anonymity of course, whoever they are.)

But the tor network is absolutely mostly used for illegal activity. I can't be dishonest about this, that would mean denying human nature. Give humans a way to be anonymous and they will absolutely abuse that.

But I've also met one reporter in person who thanked me for the tor network, that's enough for me.


I'm curious how you reason about something that you say is used mostly for crime. Especially when there are other options for anonymity? Is there anything which can move the scale the other way, or is the fact that it's a human right in your opinion always going to win?


First of all anything that is widely adopted will be used for crime. That's just human nature. Tor is widely adopted right now but there have been alternatives like Freenet for example.

So the choice is either anonymity and crime, or no anonymity at all. And crime won't go away, it just won't be able to use an anonymous channel for communications.

And it's not like criminals are getting away with it by being anonymous either. If you go on tor to commit crimes then you will be hunted down by the international police. And if you commit heinous crimes against children then you will find no refuge. That's not a good life to live. Odds are you will end up in prison sooner or later.


What are the other options for anonymity that don't provide the same cover for criminal activity?


One option that I have theorized in the past is to simply run the tor network at a very low bandwidth per connection.

As in, 28.8K or 56K or so ...

This would force tor into being a text-only communications channel, which would rule out many of the most egregious illegal use-cases.

You'd lose nothing with regard to freedom of speech and thought and communication. For instance, you can publish, and consume, a site like HN without any issues on 56K.


I like that idea. It's so simple but it would totally get the job done. Of course this would have to be throttled per connection so 100 HTTPS API calls don't congest a relay node.

But I don't see it working in reality because there is the argument that you might need to share leaks anonymously and that would take days.
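For what it's worth, Tor's relay configuration already has per-connection rate limiting knobs that point in this direction. A minimal torrc sketch (option names are from the tor manual; the values are purely illustrative and may be below what tor accepts in practice):

  # torrc (relay side): cap each client connection instead of the relay as a whole.
  # Roughly dial-up speed; purely illustrative values.
  PerConnBWRate 8 KBytes
  PerConnBWBurst 16 KBytes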


Crime is subjective, hence every country on earth having different laws. I'm guessing but I imagine the vast majority of the crime is just drug sales which is becoming less and less criminal as society awakens to the realities of the underlying problems that cause drug use and the facts about its relative danger.


I think the far more serious crime in tor is child pornography.


Well, thank you. I wish you the best of luck and appreciate you taking a stance. Not enough people do, and I agree nothing is perfect. My comment was never intended to knock you, and I enjoyed reading your insight here.


Considering that I see a large number of Tor nodes running from the same addresses as many pool.ntp.org nodes, I think your view is a bit uncharitable. Some people believe in Internet freedom and privacy, and see Tor as a way to help bring that vision to the world. In my opinion, it's just people contributing another piece of infrastructure run as a public service.


I ran an exit node for several years because it's, you know, a good thing to do in life. Like donating to charity or publishing GPL code. Freedom is sometimes underrated, but if you live in an authoritarian state you may understand.

Helping people makes me feel better.


> Pr0n (hence your username INT-Penis)

Wait I can't tell if you're joking, do you really think their username is a reliable indication that they run Tor nodes for pornography, and not a stupid internet pun? Because if you're joking I lol'd, but if you're not I ... I'm worried about you, I guess?


As a username with a stupid pun/portmanteau with a dick joke in it I appreciate this.

I wanted to use the name NaturalsticPhallacy, because it's a prevalent fallacy I see people fall for, and humans have found dicks funny for all of recorded history, but it's too long for HN so I had to shorten it to this one.

I do not build or operate any sort of porn or even porn adjacent software or service and never have. Not that I wouldn't if the right job came along, but I never have and currently don't.

But I digress. Usernames are generally best ignored. The content of their writing is what matters.


The username probably refers to INTP-enis; INTP is a personality type (Myers-Briggs).



I feel obligated to note that operating a Tor exit node is very much true to type.


Myers-Briggs is bunk.


It's certainly lacking in both rigor and utility, but the "Jungian" cognitive functions that it builds upon are fairly well-established to my knowledge (not a neuroscientist).


Yes, I am wondering why the person with the username containing the word “Penis” runs Tor exit nodes. It is curious, though I agree it could easily be totally unrelated.


It doesn't have to be a reliable indication to be an unsurprising correlation.


> Pr0n (hence your username INT-Penis), or are Fed.

This part of your comment is completely unnecessary and unwarranted.

You already asked your question, just wait for an answer instead of jumping to uncharitable conclusions.


How do you deal with people abusing Tor for illegal activities? I assume that operators would receive a lot of attention from the police.


Trying to figure out what makes MITM'd exit nodes valuable.

Sure, as an attacker it's interesting, but the cost versus how interesting it is isn't clear. The law enforcement case for specific investigations makes some sense, as does the general counterintelligence value of keeping track of which websites attract people who take precautions; maybe states maintain a general list of suspected dissidents?


Part 1 says that they use SSL-stripping attacks to replace cryptocurrency addresses with their own address, allowing them to capture e.g. transfers to a crypto mixer.

https://nusenu.medium.com/how-malicious-tor-relays-are-explo...


The thing that confuses me about that is: if you have not installed the malicious MITM's root cert in your browser, isn't that going to fail?

Or are these MITMs somehow signing stuff with well-known root certs? That seems like it would be a much bigger story. Or are Tor users really accepting self-signed certs when passing around their bitcoin addresses?

Maybe there are bitcoin clients that don't validate the chain when doing TLS? Given the sorry security posture of so many exchanges this is somewhat more plausible.


They MITM connections that aren't encrypted and prevent them from becoming so.

Many bitcoin mixers are not HSTS preloaded. And to avoid creating a trail, Tor Browser doesn't save frequently visited sites, history for autocomplete, cached redirects, or cached HSTS headers between sessions.

And as Tor users prize secrecy, many don't bookmark their bitcoin mixer. Instead they key in the address manually - and sometimes they're used to doing so without the https://www. prefix. And by convention, browsers use http when you do that.

The exit node then removes the http-to-https redirect, and presents the bitcoin mixer over http, with the bitcoin addresses replaced.

The result looks like this: https://imgur.com/otaBerJ

No MITM of encrypted connections needed.

It's almost impossible for the Tor project to detect this, as the attackers only target a small whitelist of sites - so the Tor project can only detect attackers by guessing the sites on the attack whitelist.


I’d say the first step could be switching http-https around: attempt to connect over https and fall back to http if the user agrees to being less secure.


HSTS is this, but without the fallback.


Most sites redirect all http traffic to https to make sure the traffic is encrypted.

Here's an example with HN (notice the protocol in the req/res):

  $ curl -v http://news.ycombinator.com
  [...]
  < HTTP/1.1 301 Moved Permanently
  < Location: https://news.ycombinator.com/
However, the first request is over http, before it gets redirected and encrypted. This is where the malicious relay node would intercept and change the response.


This is actually what's going on. It's what HSTS and HSTS preloading protects you against, it's why Chrome is moving to just assuming HTTPS when you type domain names without specifying, and it's why Firefox now has "HTTPS only mode" where it goes further and just rewrites all HTTP as HTTPS (even in random links you follow) and gives you an interstitial caution page to decide if you really want to try HTTP when HTTPS fails.

People have all these fancy high-tech Hollywood-style theories about how they imagine things being attacked, but the reality is almost always far more boring.


Yeah. And for anyone unaware, this technique, SSL stripping, was made well-known (and perhaps pioneered?) by Moxie Marlinspike of Signal with his tool sslstrip back in 2009: https://github.com/moxie0/sslstrip. I believe that's what he was most famous for before Signal.

It's unfortunate that this very simple attack remains extremely successful even a decade later. I'm surprised Tor Browser didn't enforce HTTPS Everywhere for all domains by default years ago. HTTPS Everywhere was released in 2010, only a year after sslstrip. HSTS and HSTS preloading help, but individual site owners still have to explicitly submit their site to be added to the preload list.
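For reference, getting a site onto the preload list means serving an HSTS header over HTTPS roughly like the one below (plus redirecting all HTTP to HTTPS); this is a sketch of the commonly cited hstspreload.org requirements, so check the current rules before relying on it:

  Strict-Transport-Security: max-age=63072000; includeSubDomains; preload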


HSTS preloading is hierarchical, so it's not necessary for individual site owners to submit: if the domain above yours opted its entire hierarchy in, you're in.

So if you own example.foo or example.dev you don't need to do anything and indeed can't choose, because Google (owners of the foo and dev top level domains) preloaded the entire TLD.

http://some.example.dev/ can still exist, but you can't go there in a typical modern web browser, it will take you to https://some.example.dev/ regardless. So software that knows it actually wants the plaintext protocol can use it, but your ordinary users can't get SSL stripped.


Ah, thanks, I wasn't aware of this. I might put future projects under a preloaded TLD.


> and perhaps pioneered?

i highly doubt that. in fact i knew about ssl stripping before i knew moxie or even sslstrip, and this attack was probably already well known when someone came up with a separate url scheme for https...


Yeah, "pioneered" was too strong of a word. I'm sure there's no way he could've been the first person to come up with the idea. He was just the one who widely popularized the attack and released a convenient tool for it.

For anyone who remembers it, "Firesheep" also had a big impact, too. It didn't do anything special or novel whatsoever, but it was a really easy-to-use tool that drove home to the average person just how dangerous plaintext HTTP was. Lots of people immediately started using it in school classes and logging into everyone else's Facebook and Twitter accounts. I'm not sure if it was the direct cause, but I know not long after that, all the big services began switching to HTTPS for everything rather than just login and payment pages.

There's probably some startup lesson buried in there...


SSL stripping usually means replacing https links with http (when the page is already being served over http) and blocking TLS so users retry with http.

Moral of the story: if you are a site operator, use HSTS. And if you're on tor, you should maybe consider configuring things so you only use TLS.
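On recent Firefox builds (and Tor Browser once it is on a new enough ESR), the built-in HTTPS-Only Mode is the easy way to do that. As a sketch, it corresponds to this about:config pref, which can also be set via user.js:

  // user.js: refuse plain http unless you explicitly click through a warning
  user_pref("dom.security.https_only_mode", true);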


That makes sense. I know the MITMproxy they mentioned re-signs the traffic, but it will not work unless you install its self-generated cert so I thought it was weird that the malicious exit nodes were using it.

Also, if someone is running a bitcoin exchange that has port 80 open for anything more than a redirect I would not do business with them.


You can just tell people to install the cert.

Verizon puts an MITM proxy from McAfee on people's routers (with their consent) that does this.


Why would installing an SSL cert inside a router change anything? The only way is if you could install the cert inside the OS or the browser...


They put software on the router that terminates people's SSL connections and tells them to install a cert on their computer.


> you have not installed the malicious MITM's root cert in your browser

PROTIP: Your browser already comes with all the malicious root certs a Five-Eyes-aligned attacker would ever need. The Tor Browser uses Mozilla's Root Store if you want to go see what's in it. To pick a random example look at VeriSign's root, the company that runs dot-com and dot-net, and manages dot-gov. Do you think they might be Best Fwends with the DoD/NSA/etc? https://www.ntia.doc.gov/page/verisign-cooperative-agreement

I also think it's a pretty safe bet that many many other roots are compromised many times over even if nobody ever willingly cooperated with anything.


This invokes a really stupid conspiracy theory, to achieve a very marginal goal, in a space where it would be easy to produce evidence if it was real and yet of course no effort is made to even look for such evidence...

> To pick a random example look at VeriSign's root

But why though? Verisign is not in fact operating a trusted CA, so that makes as much sense for an example as looking at some root you just minted on your laptop.

Most likely, as so often with conspiracy theorists, you didn't stop to see if the facts line up with your beliefs, after all "VeriSign" is named right there in a certificate Mozilla trusts, surely that's a smoking gun right?

Er, no. DigiCert owns the business behind that, collecting rights to names for a whole bunch of long obsolete CAs. The "smoking gun" CA that has the "VeriSign" branding is only trusted by Mozilla to sign S/MIME email certificates, something you likely couldn't care less about and certainly won't be using in the Tor Browser.

This all reminds me of what ekr said about this years ago: the most likely explanation for why we do not see practical attacks on security protocols like TLS is that it's almost always easier to find a weaker link elsewhere; see the parts of this thread explaining much simpler tricks that we know work.


I am not any sort of conspiracy theorist and am very offended at your insult. Why do you go online if you aren’t going to be nice to others? Intent is obviously unknowable, but here you are doing exactly that to me.


What you have proposed is a conspiracy theory: there is no practical evidence, and tales of Five Eyes subversion of the root trust stores (in a world of certificate transparency) are just FUD. I'm sorry if you were offended by the parent's reply, but I support their position, and this is a virtual house of science, math and reason — I think you earned serious pushback for pushing a wild, easily detectable and debunkable myth, when the practical explanation of the attack does not require such fantasies.


IMO you are a fool if you think every USA-based CA has not been NSLed for their private keys. I am saying it is impossible to know, so good OpSec demands we assume the worst. Feel free not to act accordingly, but I will act like all TLS is broken all the time.


And now you are being offensive — this is a wild claim, and the use of HSMs would prevent simply handing keys over in such cases. The most popular CA is Let's Encrypt; it is offensive to claim that they are compromised, or that they have not taken steps to build a system that is difficult to compromise. One could argue they could be compelled to sign an arbitrary CSR, but this would be detected by the CT infrastructure and would be a big deal when discovered. You are free to act like everything is in the clear of course, but don't cast negative FUD on good projects that actively protect the world. Trying to convince the world that the TLS trust system is hopelessly broken is a very dangerous conspiracy theory to be peddling.


I support your right to discern your own risk level and act accordingly, and I hope you will allow me the same. After all, the fool is both the highest and lowest-value card :)


The concern here isn't state actors; just lowly exit node operators looking to skeeze a buck. Check the other comments for how it's actually done.

More importantly, I think your fear about state actors abusing trusted root certificates is unfounded. As soon as a malicious cert is found, the issuing root cert will be nuked from orbit by all the major browser vendors. It's not a viable option for state actors, especially when they have much better options (like the NSA tapping Google's internal networks, for example).


> To pick a random example look at VeriSign's root, the company that runs dot-com and dot-net, and manages dot-gov. Do you think they might be Best Fwends with the DoD/NSA/etc?

Even if that's true, why would the NSA exploit it for such a stupid and shotgun application as MITM'ing Tor exit nodes? It would basically be leaving a calling-card that results in a ton of pointless friendly-fire damage, and I don't think spies like to do stuff like that (you know, the whole cloak part of cloak and dagger).


Well, one theory is that Tor exit nodes aren't the only place such an actor is "tapping" into network connections and can MITM the traffic. I.e., they might be tapping into the network at the ISP level.


There's Certificate Transparency; it's required for all certificates, so if any root issues a fake certificate, you can catch and report it. So I'm not sure that's a pretty safe bet.


Logging (for Certificate Transparency) isn't a policy requirement. In fact last time I looked, there are (special purpose, typically in industrial settings so their clients aren't web browsers) Intermediates under some roots that just aren't outfitted to be capable of logging at all. Their existence is not a policy violation.

Clients (most particularly, popular browsers such as Chrome) can and do require SCTs (effectively proof the certificate was logged) to accept a certificate, but that just means if you issue a certificate under a trusted root without logging it, it just won't work in such browsers until somebody logs it.

You can even do this intentionally, if you're Google for example you get yourself (unlogged) certificates for shiny-new-product.google.example and shiny-new-product.example on Monday, and you don't need to worry that some eagle-eyed journalist spots that in the logs before your official product launch on Thursday evening, live in front of millions of people. You can log the certificate yourself minutes before launch, then attach the SCTs and it'll work.

[Google even got this wrong once, mistakenly using a certificate they didn't have enough SCTs for due to a bug. Chrome rejected these certificates and so, for a brief period until they fixed the problem, Google's own sites didn't work in Chrome]

Now, that last part is technically not trivial to do correctly (chances are your existing web dev tooling can't do SCT stapling, or at least you'd need to go read a bunch of instructions that you aren't going to bother with) and so when you get a Let's Encrypt cert, or you buy something cheap from a reseller, it is already logged for you, the SCTs are baked inside the certificate you get -- but that's just because there isn't a big market for unlogged certificates, not because such certificates can't or mustn't exist.
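If you're curious what the logging looks like in practice: most publicly trusted leaf certificates carry their SCTs embedded in an X.509 extension, which you can eyeball with plain openssl. A rough sketch (the exact extension text varies by OpenSSL version):

  $ echo | openssl s_client -connect news.ycombinator.com:443 2>/dev/null \
      | openssl x509 -noout -text | grep -A 3 "CT Precertificate SCTs"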


Thanks, I didn’t know it’s not a requirement. Hopefully it’ll be a requirement for every CA in the future.


The NSA is probably not stealing your bitcoin.


Hopefully the Tor operators have improved their process for handling this. A few years ago, the e-mail address for reporting malicious exit nodes went to someone whose email provider bounced any emails containing the names of some of the main targets of these attacks, and who didn't seem able to understand the attack once you did somehow get through to them...


These days I mainly use tor for hidden services. It's hard to use it for normal surfing anyways


I use it for so many different purposes:

1) When I want to make sure a site doesn't get saved to my network/client profile on search engines and content sites.

2) When I need to verify that something is up/down compared to what I or a customer is seeing.

3) When I need to force IPv4 (tor is ipv4 only)

4) Hidden services.

5) Hotel/Airport wifi.


> 5) Hotel/Airport wifi.

Remember that Tor only routes TCP. It's not a substitute for a VPN in many circumstances.


Why don't you just use a VPN for this (self hosted or 3rd party like NordVPN)? Especially given the additional risk of tor users being attacked, which the author refers to in the opening paragraphs of the post.


Stop and break that down... "Why don't you just send your browsing history to NordVPN instead of risking using a compromised exit node....."


An exit node does not know your source IP and will only see your connections for about 10 minutes. NordVPN knows your source IP and will see your entire connection history.
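(The 10 minutes comes from Tor's default circuit rotation. As a sketch, the relevant client-side torrc knob at its default value is below; it's an approximation, since long-lived streams can keep using an older circuit.)

  # torrc (client side), sketch of the default: don't attach new streams
  # to a circuit that is more than 10 minutes old.
  MaxCircuitDirtiness 600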


For the first use case, the #1 problem in privacy/security is that databases get leaked at some point in the future. Some VPNs have been caught logging way too much, and then either had to disclose it or had it leaked. Three hops, no logs tied to my name or banking information, and only a guard node that sees my IP address is fine enough for privacy-sensitive visits to regular (legal) websites.

For 2), Tor Browser is a single executable that I can just start and run on any computer, even over a remote-control session if I want to verify the network from a customer's own computer. No credentials, no payments, no waiting.

Don't know enough about NordVPN for 3).

4) Hidden services are tor-only.

5) NordVPN would work fine for that.

Different security threats need different security measures. The biggest risk to my own security is not that someone MITMs my tor connection, because I do not use tor for services I have an account with, and I would never do banking over a tor connection. My bank can more or less figure out what my network is anyway by looking at my transactions and which of those go to an ISP. Leaks from companies, however, seem so common that one gets posted here on HN every month, and haveibeenpwned feels more relevant today than antivirus.


Nord VPN is incredibly bad for a multitude of reasons. Look for a reputable VPN, and ignore the shills.


Just to say, I'm not a shill for NordVPN - no affiliation with them. I wanted to point out a hosted option vs. self-install, and it was the first one that came to mind. Noted that they are not good!


Why are they bad?


Last time they had a breach, they took 6 months to notify the public and did everything in their power to blame anyone else. [0]

The breach was limited - but it doesn't inspire confidence.

[0] https://www.techradar.com/news/whats-the-truth-about-the-nor...


I get your point here, but it's been years since that happened and they've kept clean since then as far as I know. That server didn't store any user data, just as none of their servers do; I've also read their audits and no evidence of logging was ever found. Even with that breach, it was not directly their fault, but a data center that left a backdoor. Since then they cut ties with them and nothing similar has happened again. What I'm trying to say is that no one is 100% safe from a breach, as the tech world changes daily and new exploits grow just as fast. A company can stay breach-free for a decade and then get one. All that such companies can do is work to constantly improve and keep such problems under control.


No one is immune to a breach. You're absolutely right. Which is why the response to the breach is what is so important.

NordVPN left the backdoor open themselves - they left a remote admin console enabled. Then they proceeded to hold their silence for _six months_ before informing their customers... and took no responsibility. They struggled to even admit they got their dates wrong.

That kind of behaviour, and lack of transparency, is the problem. Not that a breach occurred.


Sleazy marketing promises make me dismiss them outright.


yeah, captchas are so user hostile


hostile to some users, but to most bots too, so they're widely used.


The almost willful lack of tradecraft, scale of deployments, small time-frames, and "loudness" of action the highlighted entity displays, combined with the technical knowledge required to take part in this narrow space, suggests that someone is tolerance-checking the system rather than actually seeking to inhabit it.

Or they really are just shitty and impatient Russians, I could go either way.


You're suffering from the Toupée Fallacy. You assume that these people must be intentionally making themselves noticeable, because there's no way the average malicious Tor node operator could be this dumb.

But there's no rule saying that these are average malicious exit node operators. They could just be particularly stupid ones. We don't know about the competent ones.


I'm not assuming this actor must be doing this intentionally *because* these are a lot of stupid things, nor that it is indicative of the average malicious Tor node operator. I'm arguing the opposite.

Not only did an actor commit a string of seemingly sloppy and unrelated "mistakes", where they had correctly executed those same things n times before for x amount of time, but they then brought their own existence to the attention of a technically empowered group to see how many of those seemingly unrelated and sloppy mistakes the system tolerates.

I'm not sure how this is an example of the "toupée fallacy", as I'm just positing as to why this toupée would look so intentionally bad: to figure out the tolerance for a bad toupée and discover what about it made the toupée "bad".


The attacks they want to carry out are inherently loud. They're attacking visits to well-known websites at scale in ways that require changing the response returned when visiting those websites in a way that's trivial to detect (the websites should immediately redirect to the SSL version but the attack has to remove that redirect in order to modify the contents in malicious ways, regardless of the exact details of the modification or how sneaky they make it). There's no real way around that. It's easy to just scan the entire exit node list for nodes launching this attack if you know what sites they're targeting.
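As a rough sketch of that kind of scan -- not the Tor project's actual tooling (real scanners such as exitmap build circuits directly); this simplified version just pins ExitNodes over the control port and checks that a plain-http fetch still redirects to https. It assumes a local tor with ControlPort 9051 and SocksPort 9050, the stem and requests[socks] packages, and example.com standing in for whatever site you suspect is on the target list:

  import time
  import requests
  from stem import Signal
  from stem.control import Controller

  TARGET = "http://example.com/"  # hypothetical stand-in for a suspected target site

  def exit_strips_https(fingerprint):
      """Return True if this exit serves TARGET without the expected https redirect."""
      with Controller.from_port(port=9051) as ctl:
          ctl.authenticate()                      # cookie auth assumed
          ctl.set_conf("ExitNodes", fingerprint)  # force new circuits through this exit
          ctl.signal(Signal.NEWNYM)
          time.sleep(10)                          # crude: wait for fresh circuits
          proxies = {"http": "socks5h://127.0.0.1:9050",
                     "https": "socks5h://127.0.0.1:9050"}
          r = requests.get(TARGET, proxies=proxies, allow_redirects=False, timeout=60)
          ok = r.is_redirect and r.headers.get("Location", "").startswith("https://")
          ctl.reset_conf("ExitNodes")             # clean up
          return not ok

Loop that over the current exit list from the consensus and flag any exit where it returns True.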


Why does tor even allow plain http by default? The internet has changed, most sites support https now; seems like a better default is in order.


As the article notes, Firefox has an HTTPS-only mode now and Tor Browser is based on Firefox ESR, so there's a chance they might add that feature in the next major version update:

> When Tor Browser migrates to Firefox 91esr we will look at enabling https-only mode for everyone, but there remains a significant concern that there are many sites that do not support HTTPS (especially more region specific sites) and the question of what messaging Tor Browser should use in that case.

Source: https://lists.torproject.org/pipermail/tor-relays/2021-April...


If you stay within the Tor network and don't exit, you don't really need the cert for encryption -- the traffic is encrypted end-to-end and decrypted on the hosting server already. Most onion sites are http. For these sites, proving identity to get a trusted cert is the barrier. If Let's Encrypt had an onion service, that could solve some of this.

Edit: clarified


I meant for clearnet sites. There is essentially no benefit for an onion site to use https. Maybe if you want an extra layer of security, potentially using different ciphers than the rest of the tor network. I suppose EV certs could prevent fake sites, although I think it's debatable how well that works, and it's inherently not practical in many use cases.


Out of curiosity, is there much of a (legal, sensible) community on i2p nowadays? I think its crypto is stronger than tor's, but unfortunately when I looked (many, many years ago) it was an absolute cesspool of humanity, of the "oh god, I am uninstalling this now" variety.


FWIW, the security researchers I talked to about I2P -- such as the authors of the paper I link after this paragraph, which is an example that comes to mind readily -- mostly felt that there was no reason to write papers about it anymore: there had been so many attacks on it already, and none of them had been taken seriously (unlike with the Tor people, who care deeply and fix things quickly), so it wasn't fun or pointful.

https://sites.cs.ucsb.edu/~vigna/publications/2013_RAID_i2p....


As a believer in the Tor mission -- how do I run a non-evil exit node?


AFAIK

1. Live in a country in which law enforcement follows the law and the law does not prohibit running tor, as noted in a response.

2. Hire a lawyer competent on cybercrime, intellectual property and freedom of speech.

3. Set up a non-profit or other legal entity with the explicit purpose of running tor exits/relays (stated in the articles of incorporation or similar founding documents, depending on the country and type of legal entity). Make sure its address is not your home address.

4. Purchase or rent the necessary hardware through the legal entity (don't ever do anything unrelated to the tor exits from this entity). Make sure you co-lo it in a datacenter; do not run any exits in your office and especially not in your home. Avoid keeping any hardware you can't afford to have seized in close (physical -- same rack, or logical -- e.g. same network) proximity. Explain to your host that you'll be running tor exits. Clearly label your systems as tor exits in any possible way you can manage, including physically on the cases/bezels. Run a web server on their public IPs with a page explaining that this is a tor exit node run by such and such legal entity, and set WHOIS data with the same info if possible. Set up reverse DNS with hostnames that clearly state this is a tor exit node. A minimal torrc sketch for this kind of setup is at the end of this comment.

5. Be ready for trips to the PD in order to explain what tor is and why what you're doing is legal and that it's not you that sent that phishing e-mail, etc. It is a matter of when an illegal activity will be traced back to y̵o̵u̵r̵ the legal entity's exit and no amount of labelling will deter law enforcement from summoning you as a representative of the entity. Reasons being incompetence, desire/requirement to investigate thoroughly, or plainly using inconvenience as a way to discourage you from running the nodes (in the end, tor both creates more work for law enforcement and is a big obstacle to them so they'd rather not have to deal with it if possible).

This is the gist of it. The details need to be discussed with a lawyer. And again all of this relies on the law enforcement and justice systems to follow the law and the law to not prohibit tor. Don't do this in a country in which there's risk of you being black-bagged or held legally responsible for running tor or not keeping traffic logs.

Source: my poor understanding of my country's and the EU's laws. IANAL.
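As mentioned in step 4, here is a minimal torrc sketch for a clearly labelled exit (option names are from the tor manual; the nickname, contact address, ports, notice-page path and reduced exit policy are all just illustrative):

  # torrc for a dedicated exit host run by the legal entity (illustrative values)
  Nickname ExampleOrgExit1
  ContactInfo abuse@tor-exit.example.org
  # Relay only, no local client
  ExitRelay 1
  SocksPort 0
  ORPort 443
  # Serve the exit-notice page over plain http on the DirPort
  DirPort 80
  DirPortFrontPage /etc/tor/tor-exit-notice.html
  # Reduced exit policy: web only, reject everything else
  ExitPolicy accept *:80
  ExitPolicy accept *:443
  ExitPolicy reject *:*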


> Be ready for trips to the PD in order to explain what tor is and why what you're doing is legal

(I am not your lawyer) AFAIK this is still up in the air for U.S. persons - many states make it a crime to help criminals, even when you were not involved in committing the crime itself, so you might be considered an accomplice to said crimes by running the exit node (or even just a relay). This isn't exactly a hot issue nor a clear-cut one, so I doubt D.A.s are interested in bringing you to court after having the situation explained to them a few times.


I can't think of a case where this was addressed, so I agree that one might think of it as up in the air, but are you aware of any prosecutor who maintains or expresses this position?

Closely related to this, why wouldn't this position create criminal liability for running an open wifi network, if it turned out to have been used by a criminal? How about for a public library that allows unidentified members of the public to use public computer terminals? How about for running a commercial ISP?

Is the likely argument some kind of common-law imputed duty to not provide too much more privacy to network users than the average ISP does?


I think enough lawyers see this as an area to 'CYA' in, given the number of hotels and other public wifis with captive portals that only require you to accept their T&C before you get access to 0.0.0.0/0, e.g. https://myhotels.com/guest-wireless_terms-conditions/ and https://mcdonalds.com.au/wi-fry/terms-and-conditions .

But you're right in that, in reality, police departments aren't going to blame a library or a fast food joint for letting illegal activity happen, given they didn't know about it and that Wi-Fi is usually not used for such actions. I just think the law gives enough leeway for an extraordinary event to occur where [for example] some U.S. actor gets prosecuted simply for running nodes, likely as the only way for a state actor to take down some criminal enterprise when there is not enough evidence to convict of a major crime.

I also did some case searches regarding public wifi and there are not many results in general. https://scholar.google.com/scholar?hl=en&as_sdt=80006&q=%22p...


Addendum to 1: Tor also has to be legal in that country.


Given the sizable investment of time and money required to run an independent exit node, it might be worth considering throwing your support behind one of the existing non-profits providing exit nodes:

https://blog.torproject.org/support-tor-network-donate-exit-...

https://2019.www.torproject.org/docs/faq#RelayDonations


running an exit node is a really bad idea. someone is going to do something dumb on Tor and the local PD isn't going to know anything about Tor and will come knock on your door. i used to run a relay and even that became too much of a hassle. first my bank blocked me (they block all tor traffic, even from relays) and then my company's IT did an audit, saw traffic "coming from tor" to them, and politely asked me to stop using Tor. that was the last straw for me, and i took it down.


I've had a similar experience just connecting to my school's WiFi network. Someone left a threatening message on the now dead 'anonymous' chat app (YikYak) using the campus WiFi. Campus PD checked the IP address, saw one of my devices now had that IP, and gave me a call. I spent 5 mins trying to understand what this app was that they were even talking about, and another 10 mins explaining to them how IP addresses work.


But why would you do that from your home IP and not rent a server somewhere?


Makes sense in hindsight. But I never thought I would have any issues running a relay. Also just got a raspberry pi v1 and was looking for a project.


It's that dude who can't figure out how to use ssl?


Quick question: if you use Tor to send and receive crypto are you at risk of MITM?


In most cases, it should be okay, it's a specific scenario where MITM is possible. The issue arises if you're using Tor to access a website which gives you an address to send crypto to, and you trust that address is correct.

If it's a hidden service you're connecting to, it's fine; there's no way for a malicious exit node to alter what's sent to you. If it's a normal website (i.e. not .onion) that you're getting the address from, then the exit node could perform SSL stripping [0], an attack in which a website which would normally be served over HTTPS is served to you via HTTP, and so the malicious exit node could alter the content. In this case, the attacker could change any cryptocurrency addresses present in the website to convince you to send currency to the wrong address. It would be visible in your browser that the website is being served over HTTP, not HTTPS.

It should be noted, this scenario is getting rarer with the introduction of HSTS [1], especially in conjunction with HSTS preloading, which prevents your browser from accessing the website over plain HTTP. Tools like HTTPS Everywhere [2] can help ensure that you never access websites over plain HTTP also.

Also, this isn't a vulnerability in Tor per se; the exact same attack is possible without Tor. It's just that when you connect to a website via Tor, you're deliberately introducing extra hops between you and your destination which wouldn't normally be there.

So, things that would need to come together for this attack to work: First, you're not connecting to a hidden service. Second, the website you're connecting to doesn't use HSTS, or you've not connected to them before & they're not in the preload list. Third, you aren't using a tool like HTTPS everywhere and you don't notice the website is coming to you over HTTP. Fourth, you don't verify that the address you've been given is correct independent of the website before sending a payment. This seems to me to be a fairly rare set of circumstances on the modern internet.

0: https://security.stackexchange.com/questions/41988/how-does-...

1: https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security

2: https://www.eff.org/https-everywhere
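A quick way to check the HSTS part of that for a given site is to look for the header yourself (a rough sketch; mixer.example is a placeholder, and preload status can be checked at hstspreload.org):

  $ curl -sI https://mixer.example/ | grep -i strict-transport-security
  # what you hope to see (illustrative):
  # strict-transport-security: max-age=31536000; includeSubDomains; preload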


If you aren't in the habit of answering yes to big browser warnings about self-signed certs it seems like it shouldn't be an issue.

If the MITM operators have stolen a well known root cert then we have a much bigger problem.


SSL stripping allows attackers to avoid the big browser warnings, yet view and tamper with your data.

https://blog.cloudflare.com/performing-preventing-ssl-stripp...


HTTP is marked as "Not Secure". It's not big, but it's noticeable if you're paying attention, and you definitely should pay attention for financial operations.


To be honest I just take it for granted that all exit relays are either run by Feds or at least compromised by Feds. If you use Tor for anything you wouldn't want Five Eyes to know about, you're an idiot.


Isn’t Tor designed to ensure anonymity despite a snooping exit relay? I thought that even if you compromise an exit you can’t do much without compromising the in-between relays.


It's designed so that a snooping exit node can't identify you, but it can see all traffic.

Which is why you should generally only use https when using tor. The last leg may be snooped on, so you need to use encryption during it. (http is fine with hidden services though)

It's important to keep in mind that anonymity and data integrity are separate properties. You can have one without the other.


Hmm, just trying to make sure then. If they can see all traffic, would using a tool to hide this in combination with Tor help?


Using https would help. If you totally disable http (or only use http on .onion sites) the described attack won't work. Similarly, if the site in question enabled HSTS the attack would be prevented.

Think of tor like the open wifi network of dubious origin at a black hat hacker convention. You are probably fine if using https, but plain http is a bad idea.

Using a vpn is more questionable. Generally a paid vpn already knows who you are, so hiding your origin ip with tor would be pointless. Also, sometimes combining vpn technologies can cause traffic congestion algorithms to interact poorly and make things really slow, but that will depend on which technologies are in use.


My colleagues and I made a chart about this issue when I was working at EFF:

https://www.eff.org/pages/tor-and-https


Designed by the US Navy to ensure anonymity of US agents abroad from other countries via the DoD-birthed Internet, sure.


DARPA also brought us the Total Information Awareness project. Those projects ran concurrently and they had diametrically opposed goals.

But it doesn't mean much, as DARPA seems to fund a lot of projects with seemingly opposite goals.


What would you use instead?


Purchase some hardware with cash and distribute it around the world to tunnel through. Then expose them as public proxy servers (or even Tor nodes) so that a fair amount of normie traffic passes through them.

If you seriously feel paranoid about being watched then you'll want to own the hardware you're actually passing through. And I assume that any large organizations that demand this level of invisibility (cartels etc...) have essentially done this - likely locating some of those servers behind armed guards that will protect the physical device.

That said, I think it's unlikely that Tor has been majority compromised at this point, but as it fades from the minds of folks and becomes more and more niche the probability will escalate.


I would have paid some homeless guys to get me a bunch of SIM cards, use em once, and proxy via some hacked webcams, after cleaning the rest of the malware off em…

I mean. That’s what I uhh, would do if I was doing something dodgy on the internet…

Edit: with a second-hand android bought from a pawn shop running nethunter as an AP, ofc…


> what I uhh

Send me your address and I uhhh


>Purchase some hardware with cash and distribute it around the world to tunnel through.

how do you keep the hardware physically secure? What prevents a gov actor from replacing it with their own mitm proxy?


Given the immense barriers to setting up an exit node, I would find it rather surprising if the majority of exit nodes are not already controlled by state actors, either directly or by proxy. My personal opinion is that if anonymity on Tor is to continue, it will be the result of competition for control of the network between opposing states and not altruistic non-profits.


Use a VPN and don't do anything that would get you on a terrorist/cybercrime/pedo list in the first place?

There is no "safe" when it comes to determined state actors.


> don't do anything that would get you on a terrorist/cybercrime/pedo list in the first place?

This is going to be difficult: <https://arstechnica.com/information-technology/2014/07/the-n...>


That says nothing.


>Use a VPN

Pretty sure that gets you on a list?


You're pretty sure? It should be easy to find a source for that claim then.


Use an open Wifi hotspot with a spoofed MAC.
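On Linux the MAC part is typically just the following (wlan0 is an assumption; NetworkManager can also randomize the MAC per connection):

  # bring the interface down, randomize the MAC, bring it back up
  $ sudo ip link set dev wlan0 down
  $ sudo macchanger -r wlan0
  $ sudo ip link set dev wlan0 up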


This didn't protect Ross Ulbricht.


Neither did linking his dark web identity to his real life identity via multiple forum posts or his other 1,000 opsec failures.


I suspect those “opsec failures” are just parallel construction. The FBI almost certainly used a zero day on him and then waited to see how they could construct a feasible explanation for having identified him from there.


> almost certainly used a zero day on him

I "like" this explanation, but are you going with your gut on that or do you have any concrete signs that point in that direction?


Parallel construction is not new for US intelligence when it comes to solving high-profile crime. We know US intelligence both hoards and uses zero days, especially on users of Tor. As such, we can be reasonably certain that parallel construction is used to capture cybercriminals in high-profile cases, since it immensely simplifies solving the crime to a matter of using the exploit and merely observing for gaps in opsec.

Furthermore, using a zero-day on Ulbricht would be optimal as he is no security researcher. You are unlikely to “burn” a zero-day unless you are using it in a dragnet sort of fashion while a vigilant security researcher is watching.

By definition, it’s hard to find proof of parallel construction. However, former intelligence officials have confirmed its use as a “bedrock technique” for catching criminals [1].

[1]: https://en.wikipedia.org/wiki/Parallel_construction


This is really interesting, and sad. Thanks.

> simplifies solving the crime to a matter of using the exploit and merely observing for gaps in opsec

By this logic, could one get away with a "crime" indefinitely given good enough (perfect?) opsec?


Perhaps, but no one is perfect. Keep in mind that perfect opsec also encompasses physical security and surveillance, where intelligence agencies are much better than criminals.

People say that part of Ulbricht’s shitty opsec was that he left his laptop unlocked, but think of this - the FBI was already ready to grab his laptop the very moment he left it alone. Clearly, they knew he was the criminal well beforehand, and were just lying in wait for him to slip up just one single time.

All in all, this is really cool work. I wonder what it would be like to work for the FBI or NSA solving high profile cybercrime. I imagine it would definitely feel more impactful than my current FAANG position, even if the compensation would be lower.


> People say that part of Ulbricht’s shitty opsec was that he left his laptop unlocked, but think of this - the FBI was already ready to grab his laptop the very moment he left it alone. Clearly, they knew he was the criminal well beforehand, and were just lying in wait for him to slip up just one single time.

Is there another laptop of his that they physically accessed somehow prior to distracting and arresting him? (I don't understand how someone could think from that story that the laptop seizure played any part in initially identifying him, since it was done by FBI agents in the course of arresting him pursuant to a warrant.)


No, the theory is that a zero day was used on Ulbricht and they knew he was guilty for a long time. Things like seizing the laptop was just theater to construct a parallel trail of evidence for the courts.


I understand that theory, but I don't understand what leaving his laptop unlocked has to do with it. As the FBI already had a warrant to arrest him when they encountered him in the library, they had already made a probable cause showing to a judge by that point. The probable cause showing isn't the same standard as the "beyond a reasonable doubt" needed for a criminal conviction, but clearly the FBI already believed he was guilty before they seized his laptop, whether or not they accurately told the judge about all of the evidence and evidence-gathering methods that led them to that conclusion.

It's unfortunately entirely possible that they didn't tell the judge about all of it, but it's still not as though seizing his laptop was the event that convinced the FBI that he was guilty, or even that they claimed to be particularly unsure about their suspicions before that.


I agree with this. They just wait to find a small breadcrumb trail and then use that to construct a case. Identifying the suspect is done through hidden means.


Even if that's true, Ross certainly made it easy for them


Try public WiFi + spoofed MAC + directional antenna.

What if you live 3 blocks away from a public library but a few floors higher? With direct line of sight and some wireless networking gear?

Would they really try to triangulate the client packets? It is a large leap past "oh he is in the library, let's go find him". You aren't triangulating the AP, you need to logically isolate the packets from the client, calculate their dB and somehow triangulate on just that.


>Would they really try to triangulate the client packets? It is a large leap past "oh he is in the library, let's go find him".

This is smart, and a good idea. But it really just adds a step. Once they go to the library and don't find him, they'll start looking for something 'smart'. And doing 'smart' things like this really get the hackles of the feds up because they start thinking exciting things like 'state actor', and "I'll get a promotion out of this".

The best place to hide something is right out in the open. Preferably behind a SEP field.

Not hating on your idea, just exploring it further.


Arguably it did protect him, but Ulbricht compromised himself by making several major opsec blunders including linking his personal Gmail address to his pseudonyms.


Because he walked away from his computer and left it unlocked. Wear a hidden bluetooth device or something to lock your computer and use USBGuard if you're that worried.
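A crude sketch of the bluetooth idea on a Linux desktop (the device MAC and the 5-second poll are assumptions; bluetoothctl ships with BlueZ and loginctl with systemd):

  #!/bin/sh
  # Lock the session whenever the trusted bluetooth device goes out of range.
  DEVICE="AA:BB:CC:DD:EE:FF"  # hypothetical MAC of the wearable
  while true; do
      if ! bluetoothctl info "$DEVICE" | grep -q "Connected: yes"; then
          loginctl lock-session
      fi
      sleep 5
  done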


I feel that, at the point where the FBI is trying to distract you by staging a scene so they can steal your laptop, it's already too late and you are very screwed.

Maybe a bluetooth autolocking thing could have delayed the inevitable, but it would just be a delay.


Not really. With proper encryption and a USB safe list, once the computer is locked there isn't much they can do.


They can watch you for the rest of your life, interrogate you, etc.

Presumably they acted the way they did because they had a reasonable belief that their plan would work. If Ross had behaved differently, I assume they would have had a different plan of action.


IIRC he was still sitting at the computer. They just distracted him to turn around and then they swiped it.


They literally snuck up behind him and swiped it out of his hands. He was seated with his back to the door, one of his many opsec failures.


Regardless, USBGuard and a hidden bluetooth device to automatically lock when it leaves a certain radius would have likely prevented any issues.


Ulbricht was arrested in 2013. USBGuard and usbkill were released in 2015.

Tough to do encryptluks2-approved opsec if you have to use tools that don't exist.


USBGuard is a newer tool, however the functionality has existed through udev or other integrations for some time.


I don't really do anything worth hiding from state-level attackers, but if I did I wouldn't do it over the internet at all.


so you'll do it in meatspace where there are witnesses and facial recognition/ALPR cameras everywhere?


There's one kind of tech that's good enough to protect your privacy from corporations that want to profile your behavior or keep you safe from malicious hackers who want to steal your data by luring you into digital spider nets.

Then there's another kind of tech (and tactics and practices) that could hope to keep you safe when you are targeted by state-level actors in both digital space and meat space.

Tor barely belongs in the former category.



