What I think really needs to happen is for the Tor group to make setting up hidden services much simpler.
Maybe I'm just stupid, but there didn't seem like an easy "type a command and we will set this all up for you" kind of way to do it.
Getting it set up, getting it to run as a daemon, and getting the service to work on multiple ports (so you can serve :80 and :22 for web and SSH) seemed like a nightmare to me.
It's sad, because I'm very interested in hosting a Tor relay/service to make sure I can get to my important documents, even if I need to travel to another country that blocks services like Dropbox and Google Docs.
Hm, the problem with this kind of tool is that if you're not willing to read the documentation and get a good understanding of what you're doing, you can end up thinking you're secure instead of actually being secure, which is the worst-case scenario.
Because normally, the context here is that you're trying to set up an existing internet service over the Tor network. Your web server, for example, typically doesn't know anything about Tor, and will happily serve up pages to normal internet users unless you configure it not to.
Services designed for Tor don't have this issue and can be secure by default. Ricochet[1], for example, advertises itself as a hidden service automatically and doesn't communicate outside the Tor network.
Sure, but you could always have something like a script that takes a Vagrant VM or a docker container and turns it into a hidden service on Tor. The script would take care of making sure the only access to the VM is through Tor and that the VM learns nothing (under normal operations, I am not even thinking about patching side-channel attacks and escalation-to-host attacks here) about the host's identity or location. I am thinking something like:
vagrant up --provider=tor my-service
Where my-service is any Vagrant node (a config file for setting up a generic VM with whatever software / conf you specify) and the vagrant command outputs the Tor hidden service address in the last line, after loading the VM locally on top of VirtualBox or similar.
The Tor control port protocol lets applications set up a hidden service automatically; Bitcoin Core recently released support for this, automatically using hidden services for incoming connections to your Bitcoin Core node.
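As a sketch of what that looks like on the wire: the application connects to the control port and issues an ADD_ONION command. The helper below only formats the command line per the control-port syntax; actually opening the socket and doing the AUTHENTICATE step first is left out of this sketch.

```python
def add_onion_command(ports, key="NEW:BEST"):
    """Format an ADD_ONION request for Tor's control port.

    `ports` maps virtual onion-service ports to local targets,
    e.g. {80: "127.0.0.1:8080"}. "NEW:BEST" asks Tor to generate
    a fresh key of its preferred type.
    """
    port_args = " ".join(
        f"Port={vport},{target}" for vport, target in sorted(ports.items())
    )
    return f"ADD_ONION {key} {port_args}"

# Expose web and ssh through one onion service:
print(add_onion_command({80: "127.0.0.1:8080", 22: "127.0.0.1:22"}))
# ADD_ONION NEW:BEST Port=22,127.0.0.1:22 Port=80,127.0.0.1:8080
```

Tor replies with the generated .onion ServiceID, so the application never has to touch torrc at all.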
The real problem is that you _shouldn't_ be running bare Tor in front of a hidden service, at least not if you really want to be private. You need something like Whonix[1] to protect you from all kinds of server information leaks.
It would be useful if someone wrote a wizard that could install a personal disk server without the user needing to know what software is involved or how to install it. A single click that, for example, installs ownCloud, links it to a Tor hidden service address, hands that address to the user, and/or prepares a USB stick with Tor Browser and a bookmark to the service. It would be outside the scope of the Tor project, and more in line with a useful native Debian package.
By spending 5 seconds on google, or alternatively by using your intuition.
Googling "hidden service multiple ports" instantly answers every single one of your questions, and the answers are rather obvious.
Who would've thought that adding a new port is as easy as just adding another port definition?
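To make that concrete, here is a minimal torrc sketch for the :80/:22 case asked about upthread (the directory path is illustrative; on Debian something under /var/lib/tor is typical):

```
# torrc -- one hidden service, two ports
HiddenServiceDir /var/lib/tor/my_service/
HiddenServicePort 80 127.0.0.1:80    # web
HiddenServicePort 22 127.0.0.1:22    # ssh
```

Each extra HiddenServicePort line maps another virtual port onto a local target; after a restart, Tor writes the generated .onion address into the `hostname` file inside that directory.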
Honestly, if this is a problem, you shouldn't be trying to host hidden services by yourself anyway. Even if Tor took literally one click to set up, the other software will still fuck you, like Apache's mod_status.
What you are saying is "it takes 5 minutes for somebody who knows what he is doing", which is precisely what OP would like to change. The point is to let less tech-savvy people create those nodes too.
No, it's all very well and clearly explained in the official Tor Project documentation, in the section on hidden services. That link was already posted in this thread, yet the original poster still complains about insufficient documentation. His subsequent question about multiple ports was also clearly answered in the same place.
I think the problem here is simply that some people refuse to read documentation, even after they are provided with a direct link to it. Sounds about right from my personal experience with online tech communities.
If that doesn't tell you how to set up multiple hidden services and multiple ports, you should seriously get someone else to set up the hidden service for you. Some people just aren't competent enough to do it, just as not all of us are heart surgeons.
If you can't configure Tor, you certainly won't be able to sufficiently harden the applications that you're trying to hide.
That is rather hostile, but in any event that is only one of the things I asked. There are two other questions.
How do I get multiple .onion domains and how do I back up my keys so I can keep my domain?
Still, there are a lot of things that aren't very evident. It's just a bad experience in general.
We can have obscure documentation accessible only on the Tor website, or installing Tor could give you a nice command interface like the one UFW presents.
One will lead to better configurations, the other will lead to mistakes and loss of data.
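For what it's worth, both of the earlier questions come down to the HiddenServiceDir directory: each HiddenServiceDir block in torrc yields its own .onion address, and the private_key file inside that directory effectively *is* the domain, so backing up the directory preserves the address. A hedged shell sketch (/var/lib/tor is the usual Debian layout; the demo below uses a throwaway directory with a stand-in key file so it can run anywhere):

```shell
# Multiple .onion domains = multiple HiddenServiceDir blocks in torrc:
#   HiddenServiceDir /var/lib/tor/site_one/
#   HiddenServicePort 80 127.0.0.1:8080
#   HiddenServiceDir /var/lib/tor/site_two/
#   HiddenServicePort 80 127.0.0.1:8081

# Backing up a service directory (really just private_key + hostname)
# preserves the .onion domain. Demo on a throwaway dir:
HS_DIR=/tmp/demo_hidden_service
mkdir -p "$HS_DIR"
printf 'key material would live here\n' > "$HS_DIR/private_key"
tar czf /tmp/hs-backup.tgz -C "$(dirname "$HS_DIR")" "$(basename "$HS_DIR")"
tar tzf /tmp/hs-backup.tgz
```

Restoring that private_key into a fresh HiddenServiceDir on another machine brings the same .onion address back up.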
I was curious to see if it is possible to donate funds towards the operation of "safe" (e.g., non-government-controlled) exit/bridge nodes. According to the donation FAQ for the Tor Project[1], it appears that funds are not used for infrastructure.
If there were a way to fund exit nodes without running one myself I would definitely be interested in participating. If not, this might be a great idea for a crowdfunding campaign.
[1] The Tor Project spends about $2.5 million annually. About 80% of the Tor Project's spending goes on staffing, mostly software engineers. About 10% goes towards administrative costs such as accounting and legal costs and bank fees. The remaining 10% is spent on travel, meetings and conferences, which are important for Tor because the Tor community is global.
In addition to NoiseTor that @garrettr_ mentioned there is torservers.net [0]. Both are mentioned [1] as ways to support infrastructure by the Tor Project.
I think the Tor project would agree with me in saying that donations are all well and good, but the best way to contribute is to operate a high-capacity node.
This is not about moral philosophy, but practical matters. Tor's anonymity depends on diverse ownership of the running relays. As it stands, the organizations accepting donations to run Tor relays (torservers, noisetor, etc) already control a sizeable chunk of the total relays, and that's why the Tor project would rather encourage people to run their own.
Of course, many people can't or don't want to run an exit node. In that case, it's much better to donate to those organizations than to do nothing. But the Tor exit relays are not soup kitchens, and increased security for the Tor network due to more diversified operator group is not easily convertible to a dollar value.
From the perspective of government-level actors, the provisioned nodes would still be running on easily bugged machines in large datacenters.
Yes, the Tor Project effectively did this for years, since no organization or organizational structure existed to take your sanctimonious "unit of caring" and turn it into geographically disparate non-colluding exit bandwidth. Sometimes the real world, or "territory," is more complicated than the "map" you find over at Less Wrong. Take a minute to ponder this in between the daily Neoreactionary Discussion Group and the hourly Why Aren't More Women Rationalists/Rationalist Pickup Artistry thread.
I almost didn't want to dignify this with a response because of the incredibly unnecessary tone it was written in. (Perhaps my original post came off as more matter of fact and arrogant than it was meant to. It's not a rhetorical question, I'm surprised to hear that nobody has found a way to turn money into diverse exit nodes.)
>Yes, the Tor Project effectively did this for years, since no organization or organizational structure existed to take your sanctimonious "unit of caring" and turn it into geographically disparate non-colluding exit bandwidth.
So this seems like a solvable problem, in one way or another. Some ideas that immediately come to mind:
- Perhaps people could be incentivized to run exit nodes for a small amount of money every month? (Estimated likelihood: not that great, but it's worth a try.)
- I suspect that many technically savvy people would like to run an exit node, but are afraid of being the first person who has to take the things that happen on their exit node to court. Perhaps anyone who inquires about donating money for exit nodes could be redirected to a legal fund, set up in advance, for anybody who gets sued in a precedent-setting case over their exit node. A quick Google search shows this doesn't exist, and I'm sure it would calm some nerves if it had accumulated a sizable sum over the years. (Estimated likelihood: honestly, I think at first it wouldn't do much and might even do damage, because it wouldn't be very much money. But over time, depending on how willing people are to donate, it could significantly help with somebody's hypothetical legal fees.)
----
From the Tor FAQ:
"Will EFF represent me if I get in trouble for running a Tor relay?
Maybe. While EFF cannot promise legal representation for all Tor relay operators, it will assist relay operators in assessing the situation and will try to locate qualified legal counsel when necessary. Inquiries to EFF for the purpose of securing legal representation or referrals should be directed to our intake coordinator by sending an email to info@eff.org . Such inquiries will be kept confidential subject to the limits of the attorney/client privilege. Note that although EFF cannot practice law outside of the United States, it will still try to assist non-U.S. relay operators in finding local representation."
So as a practical matter the EFF would probably step in for a precedent-setting case, but it would be much better if there were a legal fund dedicated to exactly this that promised to do so.
----
>Sometimes the real world, or "territory," is more complicated than the "map" you find over at Less Wrong.
Well, yeah. Duh. Speaking of reality being more complicated than you've imagined...
>Take a minute to ponder this in between the daily Neoreactionary Discussion Group and the hourly Why Aren't More Women Rationalists/Rationalist Pickup Artistry thread.
LessWrong Political Opinions By Affiliation And Sample Sizes On The 2016 Survey:
Moreover, I wasn't linking to LessWrong's opinion on charity; it was Eliezer Yudkowsky's opinion on charity. I'm particularly annoyed about him getting slapped with the Neoreactionary stick when his stated public opinion is that he thinks Neoreaction is stupid, and that if he were still moderating the main LessWrong site he'd ban them all as part of cleanup:
(Eliezer Yudkowsky can be pretty uncharitable with his critics, I don't endorse that.)
How can/does Tor propose to handle government-level subversion (which must surely be happening, and will continue with ever-increasing depth), where "sponsored" computers begin to form a majority of worldwide exit and relay nodes, running modified Tor that actively looks for attacks and leaks of information?
Current evidence suggests it's doing OK for now. The slides from the Snowden leaks showed the NSA was unable to compromise the core infrastructure by controlling relay and exit nodes, excepting a few cases. However, there are attacks a government-level entity can mount that Tor explicitly does not protect against, such as large scale passive scanning for traffic confirmation. It is not believed to be possible to beat such monitoring without compromising latency.
Is there a certain time period after which we can say it's not FUD? Like if we go 10 years without any additional clarification after the Snowden slides, is it still automatically FUD?
No claim of present NSA ability was stated. Noting the age of the data in question is a fact. Stating that an absence of new data creates a state of unknowing is a fact.
Fear of an oppressive government hacking the connection and arresting you.
Uncertainty of agencies' capabilities given that they operate under top-secret levels with unlimited budgets.
Doubt that they would stop working on or advancing the state of the art at defeating the security.
It seems a little unreasonable how much you are being downvoted; the parent comment uses uncertainty to cast doubt on the idea that Tor is safe from the NSA. That can easily invoke fear.
Unfortunately that is largely what we have to go off with the NSA.
Maybe the NSA was unable to compromise the core infrastructure, but the FBI was able to compromise enough to hack into a web forum and log the location data of its users, then impersonate the administration of said forum for two weeks or so, and subsequently arrest nearly 1,000 individuals.
They did that by attacking the forum software/the server itself, not Tor and the underlying transport layer in general.
It's another layer entirely and gives no information about their capabilities against Tor itself (except maybe that, if they had to resort to that, Tor is probably still relatively secure when used properly).
Layer 5 would be HTTP, layer 6 would be the content (JPEG, MP4, etc.), and layer 7 would be the application serving or sending the HTTP requests/content.
The combination of watering-hole attacks and internet-scale packet timing collection is a pretty big problem for the security of Tor users.
Fortunately, internet-wide timing attacks are mostly a Five Eyes and domestic Chinese capability. Chaff, padding, etc. can help here.
Compromising the servers of target services and using that as a platform to distribute anonymity-stripping malware is also a problem. The Firefox codebase that TBB is based on isn't awesome from a security point of view. Hopefully the Firefox codebase can catch up from a security perspective and give them something better to work with.
Internet-scale packet timing collection only works when the traffic is not messed with while routing through Tor. It's quite a headache when {proxy}.appspot.com is used, or technical documents are exclusively routed through Google Cache, or even CoralCDN for that matter.
There are even users who configure BitTorrent to use TCP instead of UDP so that it's very difficult to write DPI rules to parse out the Tor traffic. Couple this with meek, VPNs, and traffic-shaping tools and it's quite bothersome for them.
Timing correlation attacks would be much more difficult if Tor users were more willing to tolerate higher latency. But they're not. Everyone wants to have their cake and eat it too.
If they're going to use random numbers to enhance security, they should make sure that at worst, if the numbers are predictable and controlled by an attacker, it's no worse than the current security.
The randomness will be used to defend against knowing in advance what nodes are responsible for the HSDir entries in the hashring (allowing DoS and statistics gathering). If an attacker knew the next numbers, then this protection would be broken (but none of the other important protections would be broken).
Right, and this answers the parent's question. Currently, the layout of the HSDir is deterministic and therefore predictable by anyone, which allows for a number of potential attacks. See "Non-Hidden Hidden Services Considered Harmful" for context [0].
In the context of using the distributed randomness protocol to randomize the DHT layout, if the protocol were somehow broken then the worst case outcome would be that the DHT layout would again be predictable, which is no worse than the status quo today.
Disclaimer: IIRC there are some proposals to use the distributed randomness protocol for other things besides randomizing the DHT; I cannot speak to how those proposals might be affected if the distributed randomness protocol is flawed.
Tor's design documents also warn that a nation-state-level adversary who can view the whole network defeats any guarantees of anonymity. There's no way to know GCHQ/NSA aren't running 90%+ of bridges and exit relays, in addition to having a total network overview at the backbone level.
Tor was designed for strong anonymity guarantees against nations that aren't in the Five Eyes alliance, i.e. China, Russia, etc.
> There's no way to know GCHQ/NSA aren't running 90%+ of bridges and exit relays
Yes there is. There are currently 857 exit nodes. The Tor Project only has to personally know who runs 86 of them to ensure that 90% of the exits are not run by the NSA.
In fact, since ~90% of traffic exits through the top ~260 relays, they'd only need to know 27 of the people who run those.
This guarantee doesn't scale very well considering the combined 5 Eyes intel budget is ~60 billion USD and throwing just 0.1% of their budget at negating Tor would completely overwhelm the network with hundreds of thousands of stooge relays, plus they have the added benefit of global backbone cable traps. Tor can't give any kind of guarantee against a global passive adversary (5 Eyes) which is why they specifically warn against believing otherwise.
>> "RSA BSAFE is a FIPS 140-2 validated cryptography library offered by RSA Security. From 2004 to 2013 the default random number generator in the library contained an alleged kleptographic backdoor from the American National Security Agency (NSA), as part of its secret Bullrun program."
Disclaimer: My knowledge of the Tor architecture is very rudimentary
It would be nice to see some new TCP/IP protocols that handle point-to-point and cross-network communication more flexibly. Take a p2p router (let's say Gnutella2), but pared down to do only addressing and routing of traffic. Then another protocol on top to handle name resolution, secrets, and tunnels. Then maybe TCP on top of that, just to make tunneling arbitrary applications easy. Everything written with IPv6/ICMPv6 in mind as the parent protocol, to be more future-proof. In this way, we get both a reusable framework for p2p networks (the first layer) and a repurposable protocol for name, auth, and secret management/tunneling.
I believe the second thing is already handled by tor, but I don't know if separating the secrecy from the routing exists currently. Those different layers could be reused for different purposes, while also being written with a "new Tor" use-case in mind.
I wonder if seif project's ideas could be helpful here: https://github.com/paypal/seifnode. I remember Crockford talking about using microphone and camera noise to generate random numbers.
My understanding of distributed commit/reveal RNGs is that they need some sort of incentive mechanism. Otherwise, it's trivial for an attacker to flood the network with lots of commits and only reveal the ones that give him a useful outcome.
As far as I understand, the distributed randomness will only be distributed on the 11 trusted directory servers (where you get your node manifest from). So you don't need to worry about malicious nodes killing the randomness.
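To make the commit/reveal shape concrete, here is a minimal Python sketch of the scheme. The real Tor proposal differs in detail (fixed commit and reveal phases among the directory authorities, rules for handling parties who commit but never reveal), but the core mechanics look like this:

```python
import hashlib
import os

def commit(value: bytes) -> bytes:
    # Commit phase: each party publishes only a hash of its secret value.
    return hashlib.sha256(value).digest()

def verify(commitment: bytes, value: bytes) -> bool:
    # Reveal phase: anyone can check a revealed value against its commitment.
    return hashlib.sha256(value).digest() == commitment

def combine(values):
    # XOR all revealed values: the result is unpredictable as long as at
    # least one contributor chose randomly -- provided everyone must reveal.
    out = bytes(32)
    for v in values:
        out = bytes(a ^ b for a, b in zip(out, v))
    return out

# Each authority picks a secret and publishes its commitment first.
secrets = [os.urandom(32) for _ in range(3)]
commitments = [commit(s) for s in secrets]
assert all(verify(c, s) for c, s in zip(commitments, secrets))
shared_random = combine(secrets)
```

The parent's attack is visible here: a party who sees everyone else's reveals before deciding whether to reveal its own can bias `shared_random`, which is why withholding a reveal has to carry a penalty.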
I can't access the website because it's using HSTS and my browser says their certificate is invalid. There is no option to bypass the browser security warning. I'm at a public library. Anyone know what's going on?
Cisco Umbrella seems to be some type of security product for networks. Are you using a computer belonging to your employer or with employer software installed? They could be MITMing you. It seems odd that the Tor project would be using a Cisco product like that.
OpenDNS/Cisco Umbrella is basically a DNS-level security service that analyzes your DNS queries, blocks known malware domains, etc.
For some high-risk domains - depending on some settings - it will also switch to MitM'ing the connection to take a closer look at the traffic and block it on that level if necessary. It might also just be necessary to show the "This domain is blocked" page when you're requesting a site via https. Usually, your employer would pre-install their CA certificate, which would bypass the HSTS warning, but I suppose this might be a BYOD setting (or they just forgot/didn't like the idea of Cisco being able to MitM all the things).
Are you using Firefox? I had an issue recently where my max TLS version had been set to 1. The min version was also 1; the max is supposed to be 3. Check about:config.
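If I remember the pref names right, these are the ones to check (at the time, the values map roughly to 1 = TLS 1.0 and 3 = TLS 1.2):

```
security.tls.version.min = 1
security.tls.version.max = 3
```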
Running a Tor node should be a form of payment. A user with no talent, requesting help from an open source community, could "donate" his bandwidth and machine in return. And this form of contract should come with ease of use.
I still really don't understand why people keep developing Tor over I2P - I2P is clearly the better protocol, offering complete untraceable anonymity and a chance to escape the stigma of Tor...
Tor is a solution for both anonymity & privacy and censorship evasion. I2P is oriented primarily towards anonymity and privacy.
I2P has an attractive anonymous service design and can run applications like BitTorrent over it. But it's also developed by basically three people in New Zealand.
Tor has more funding because its censorship-evasion features are attractive to funders. Successes in the anonymity feature set, like SecureDrop. A vibrant academic community with conferences, etc. Lots and lots of review from the external crypto and security community. A deep well of technical talent.
Respectfully, no tool - be it I2P's garlic routing, Tor's onion routing or anything else - could ever provide "complete untraceable anonymity"; that is a huge (and potentially very harmful) misunderstanding of what these techniques can do. I strongly encourage you to learn more about them to correct that misconception.
Both projects have designs which have inspired each other and have relative advantages and disadvantages. Technically, I like I2P, but I accept I may be somewhat biased there. Practically speaking, Tor has a much larger anonymity set because it is far more widely used and receives more support, with very well-established volunteer outproxies. I would never criticise anyone for contributing to either: Tor in particular has the widest practical impact of any tool in this space.
This distributed random idea is a very impressive achievement; I'm glad to see it work in the wild! Congratulations.
I'm not sure what you mean about "stigma". Any reasonably effective solution in such a politically-charged space as the anonymity and privacy of human communication is likely to become controversial to some degree.
I think Tor has more marketing and mindshare than I2P, and that's why you see Tor more often. I would like to see a more in-depth comparison of the two; do you know of a good one?
Isn't I2P still "peer-to-peer" by default? That is, the fact your IP is connected to I2P is broadcast to everyone. That makes every disconnection an opportunity to trace you, directly and by elimination. It's especially bad with torrents, which are probably the most popular use of I2P.
I don't understand why these Tor guys can't rent 10-20 cheap VPSs all around the world and do their testing there. They are describing getting 11 nodes like some sort of struggle.
VPSs are truly cheap now, you can get one for $3.52 per year:
Didn't read that way to me. Read like they normally use VMs on their computers to have a "testing Tor net", but decided to set up actual distributed nodes for testing this. More like "Hey look, this is nifty", rather than "Ugh, it was so hard to set this up".
Yep, in the earlier tor days it was obscure enough to be doable, but I haven't run an exit node in so long I have no idea who would actually host one now. On top of that, the bandwidth used can eat budgets super fast!
>On top of that, the bandwidth used can eat budgets super fast!
Bandwidth isn't expensive though, unless you need "premium" bandwidth.
Case in point, I've used petabytes of bandwidth for scanning this year and probably spent less than $2k total on both the scanning hardware and the bandwidth. Realistically I've only spent a few hundred dollars on the BW itself.
And good luck even maxing a gigabit line with Tor, it's not easy.
> They are describing getting 11 nodes like some sort of struggle.
Where are they doing that? I see nothing like that in the article. Only that this was the first time they did a test at that scale, not that anything prevented them from doing it earlier.
Many of the cheap (read: sub $10/year) OpenVZ VPS offerings prohibit ANY type of Tor traffic, even use of a client (such as torsocks) - I've used many of them, and they are quick to detect and suspend based on traffic analysis.
The security of the Tor network depends on diversity of relays and exit nodes. If the Tor project ran all nodes, then that is low sysadmin diversity (but high network and jurisdiction diversity) and thus lower security.
In addition to what other replies have said, using their own computers has the advantage of testing on a variety of systems, environments, and connections. VPS would be fairly monolithic.