Improving on Tor .onion Address Usability (torproject.org)
124 points by EdgarAllanPwn on April 4, 2017 | 39 comments



The way DNS works in I2P[0] is pretty neat[1]. Nothing in this post sounds quite like it. It provides a great "default" user experience while allowing for finer-grained control and tighter security if a user chooses. To summarize:

- Users have a local "Address Book" which maps friendly names (e.g. forum.i2p) to I2P destination keys.

- There are well-known I2P hidden services providing address book subscriptions. The default install includes a subscription that is maintained (and signed!) by the project maintainers.

- The address book makes it clear where names are coming from. So if you decide to un-trust one of your existing subscriptions, you can still keep addresses you added yourself, etc.

- New sites submit their name & key to popular address book services, but there is an additional trick you can use. Pass people a link like http://mysite.i2p/?i2paddresshelper=<key>. This is known as a "jump" link. Your local HTTP proxy will take you to a page asking if you would like to add the name to your address book, or if you'd simply like to keep the name for this session but not save it.
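The lookup order described above can be sketched roughly like this (a minimal, dict-backed sketch with made-up destination keys; the real I2P router implements this in Java, and a real jump link would trigger a confirmation page rather than a silent lookup):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical sketch of the resolution order: an ?i2paddresshelper=
# override first (the "jump" link, which the proxy asks the user about),
# then the user's own address book, then subscribed address books.

local_book = {"forum.i2p": "dest-key-aaa"}        # names the user added
subscribed_book = {"planet.i2p": "dest-key-bbb"}  # from signed subscriptions

def resolve(url: str):
    parsed = urlparse(url)
    helper = parse_qs(parsed.query).get("i2paddresshelper")
    if helper:
        # A "jump" link: the proxy would ask whether to save this mapping.
        return helper[0]
    return local_book.get(parsed.hostname) or subscribed_book.get(parsed.hostname)

print(resolve("http://forum.i2p/"))                           # dest-key-aaa
print(resolve("http://mysite.i2p/?i2paddresshelper=key-ccc")) # key-ccc
```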

The whole system is easy to use while staying flexible, secure, and in the user's control. I'm curious if the Tor team has tried out I2P or considered a system like this.

If it's not making sense to you from my summary, I highly encourage you to get I2P and try it out!

[0] https://geti2p.net/en/ [1] https://geti2p.net/en/docs/naming


It's similar to ideas 2.1 and 2.2 in the blog post - a local directory that is maintained and authenticated centrally, then distributed to browsers, which perform lookups against it.

The downside is that it's too centralized - it isn't difficult to imagine a government agency wanting to sinkhole silkroad.tor from the default registry.

With an alternate registry, you have to balance knowing enough about the directory provider to trust them against not knowing so much about them that they're open to legal recourse.

I.e., I'd trust a registry from Riseup or DuckDuckGo, but that same registry is likely to be the target of legal and hacking attempts. Likewise, any provider sufficiently protected from those threats likely isn't well-known enough to be trustworthy.

One of the benefits of the existing names is that they also authenticate the site (assuming you check them correctly, usually out of band via a trusted source like a directory or search engine). This part could be replaced with certificates and an issuance model identical to what Let's Encrypt does.

In terms of hosting the directory, that almost has to be decentralized over a p2p network, similar to Namecoin. Namecoin also solves the problems of distributing names and typosquatting, and it could be adapted to auction names.


> It's similar to ideas 2.1 and 2.2 in the blog post - a local directory that is maintained and authenticated centrally, then distributed to browsers, which perform lookups against it. The downside is that it's too centralized...

It is only centralized for users with the default install, who never go into their address book.

I think the real achievement of I2P's name system is that _they have made it easy for users to understand_, and the tight integration in the UX is the main differentiator I see between I2P's approach and any of the approaches in this blog post.

While I think Namecoin sounds cool and all, I really hope Tor considers a simpler approach. I think it's a mistake for us to make this into a technical problem, when it's a UX problem. We're never going to get 100% secure names in a trustless environment, so why not focus on making the default pretty secure, and making the system understandable and useable?


The problem of attaining trust from within anonymity is an interesting one, but the simple solution seems to be to have the directory servers form a consensus on which registry should be used by default. The Tor network already depends on the directory servers, and if the registry ever gets compromised, the directory operators can always change the consensus. If a single directory operator runs into legal problems, the other operators' consensus will override any recourse that may happen.


I hate to be tinfoil-hat about this, but I2P seems sane in every way that Tor seems all-but-intentionally insecure in the face of sophisticated adversaries.

This seems a good example of "fruitful surface area" for security vulns. Another unrelated design decision is Tor's reuse of a single route, which can be identified with relatively high accuracy via traffic analysis.


I2P benefits from routers by default also being nodes (not saying Tor should do that, though), meaning there's more "spread" in where the nodes are. Most Tor nodes, even if run honestly, are generally hosted by the same few data center companies: DigitalOcean, FlokiNET, etc.

Also, it's important to note that I2P's addresses (usually seen as something.i2p) can also be used directly in their 'hash' form, like Tor's, if you don't trust the jump/address book services.


Does that mean when a site links to "forum.i2p", then it's up to your own local address book where that actually goes? (Assuming people don't use the ?i2paddresshelper thing for literally every link, unless that is what is done.)


That is correct. And no, people don't use jump links everywhere--only if their site is new and they are passing the links to folks on IRC, etc.

Another thing I forgot to mention: if a name is unknown, your local HTTP proxy will ask you if you want to look up the name via one of several popular "jump services", which are the same services that provide address book subscriptions. This is where jump links come from most of the time.


I don't know how I2P works, but isn't that pretty much the way the "normal" net works? If you click a link to "news.ycombinator.com" in your web browser, it's mostly up to your local or your ISP's DNS resolver where that takes you, no?


True, but your ISP is expected to resolve domains to the same thing as everyone else, based on the distribution set up by ICANN/IANA/whoever. I2P gives you complete control over what a domain will resolve to, so names can be chosen for their utility to the user. There's also the factor of ease of use: even though everyone could edit their hosts file to make typing domains easier, few people know about the hosts file or would think to edit it. Putting the option front and center, with an easy-to-use GUI, makes a difference.
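For comparison, the hosts-file trick mentioned here is just a line like this (a made-up shortcut and a documentation-range IP):

```
# /etc/hosts -- typing "hn" in the browser now resolves to this address
203.0.113.7    hn
```

The point stands: the mechanism is trivial, but nothing in a typical OS or browser surfaces it to users.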


> The default install includes a subscription that is maintained (and signed!) by the project maintainers.

This sounds like a legal risk the maintainers need to be careful with.


This is a great step forward; however, the nature of all "domains" is centralised. DNS is a great idea that spreads the centralisation across many parties, but it is still central at many points: registrars, ICANN, DNS servers (though those can be local). Tor's unique .onion addresses are the perfect way to fix both the centralisation and the security, but that comes at the cost of human readability.


I really liked the idea behind peercoin, i.e. a currency that also enabled a distributed registrar (at least I believe so).


You mean namecoin.


It's best to piggyback on DNS rather than inventing a new scheme (address books, Namecoin, ...), since a new scheme would surely result in a petty holy war and would confuse users. Then again, it would be a nice way to break ICANN's monopoly over domains.

The .onion protocol should behave exactly as HTTPS does relative to HTTP: it's just a protocol upgrade. Instead of a green lock, you would get a purple lock indicating privacy.

To do the protocol upgrade, the SSL certificate field annotation seems like the most robust way, but I would go with `Alt-Svc` in the meantime since it's easier to implement (server- and client-side). To mitigate the privacy issue, Tor could download a list of popular 'domain -> .onion domain' mappings (exactly like DNS caching).
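As a sketch, the `Alt-Svc` upgrade is a single response header per RFC 7838 (the onion address below is made up):

```
HTTP/1.1 200 OK
Alt-Svc: h2="examplexyz1234abcd.onion:443"; ma=86400; persist=1
```

A Tor-aware browser seeing this could transparently retry the same origin over the advertised .onion alternative, which is what makes it cheap to deploy server-side.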


Why don't they explore such an obvious approach to improving usability as graphical representations of onion addresses? It works for security verification codes in WhatsApp, Telegram, etc.; it should also work for onion addresses.


Why not use Namecoin? Seems like a good fit.


Larry from Blockstack.

We used to use Namecoin and migrated to Bitcoin when we discovered that one miner controlled more than 51% of the mining power, which is a security problem in a proof-of-work blockchain.

If you're interested in learning more, there's a peer-reviewed paper on it here: https://blockstack.org/blockstack.pdf

Section 3: "Lessons from Namecoin Deployment" may be of interest to you.

There's also an (old) thread discussing the problems encountered with Namecoin here: https://forum.blockstack.org/t/why-is-namecoin-being-ignored...



Bitcoin mining is also controlled by a small number of people. Over 50% of the blocks mined over the last 4 days came from the same 5 pools. [1]

The difficulty mechanism has been an abject failure and utterly destroyed bitcoin's promise of decentralization. Under that mechanism, commodity mining hardware automatically defeats itself.

We now have a small number of big players who've built custom, super-secret hardware that can't be distributed or discussed without their investment being immediately destroyed, and normal people are unable to contribute in any meaningful way (not even with GPUs anymore).

[1] https://blockchain.info/pools?timespan=4days


50% of the hashrate coming from 5 pools doesn't sound like centralization to me. AFAIK many pools don't even have miners themselves.


I don't want to be pedantic, and I do agree that the scenario isn't centralization in a sense that strongly threatens the network, but I think there's a point here worth clarifying. Pools don't need their own physical miners to have power over the block generation process. AFAIK, the connected miners are "dumb clients", delegating their block generation capability to the pool in order to share rewards and thereby reduce income variability.

In short: the pool still defines the blocks that the connected miners will mine. They centralize all the collective power of all connected miners.


I haven't really kept up with it, but last I heard most of the big pools are based in the People's Republic of China.

It hasn't been a problem so far, but having so much of the network under the same jurisdiction is just inviting trouble.


Pools delegate small pieces of work to individual miners, who join at will, and assemble the results of their work in order to find blocks. When a block is found, the pool divides the reward amongst the miners that contributed hash power toward finding it.
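That reward split is simple arithmetic; a sketch with made-up numbers (this is the simplified "proportional" scheme; real pools use PPS, PPLNS, and other variants):

```python
# Divide a block reward among miners by submitted share count.
BLOCK_REWARD = 12.5  # BTC per block as of 2017, when this thread was written

def split_reward(shares: dict, reward: float = BLOCK_REWARD) -> dict:
    """Each miner's payout is proportional to the shares they submitted."""
    total = sum(shares.values())
    return {miner: reward * n / total for miner, n in shares.items()}

payouts = split_reward({"alice": 700, "bob": 200, "carol": 100})
# alice gets 70% of the reward, bob 20%, carol 10%
```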

That's all handled internally. The Bitcoin network sees the pool as any other node; there is a hard stop there. It doesn't know that the answer was found by compounding the hash power of many servers.

As another commenter said, that alone is a risk, because you have (presumably) thousands of servers masked behind 5 central brokers. They are, in effect, the centralized banking cartel, and the pool participants are like their branches, ancillary components that are used to help perform the work, but which ultimately are at the mercy of those in the central office.

Consider that in late 2015, at the Scaling Bitcoin Hong Kong conference, the managers of all of these pools met together to attempt to devise a strategy for the blocksize problem. The controllers of what at the time was over 90% of btc's hash power were in the same room. [1]

Since miners stand to benefit from a limited blocksize and normal consumers stand to suffer, you can guess where their interests aligned. How is this different from the executive teams of the large banks getting together and agreeing to collaborate on other efforts?

Less likely, but much more worrying, is the question of what would've happened if the Chinese government had coincidentally decided to declare running a bitcoin pool illegal that day (I know that HK has an independent-for-now government, but Beijing has made incursions before, and is getting increasingly aggressive as the 50-year integration timeline narrows). These people could've been arrested and their resources could've been confiscated.

Beyond the concerns of a pool operator getting compromised, we also have to worry about the general matter of pool transparency. There's no way to see whose hash power is being contributed into the pool (afaik). We don't know how much of that hash power is from independently-operating nodes. Guaranteed very little of this is occurring on commodity hardware; most of it is in data centers in China, running custom mining hardware.

Initially, people attempted to produce and distribute consumer-level miners, but that has more-or-less burned out because the ROI is immediately decimated. There is still a small amount of interest in it for the novelty, but no one expects to make money from it.

As soon as the network gets an appreciable increase in hash power, like that which would be precipitated by the wide distribution of fast miners, the miner becomes worthless at the next difficulty evaluation. That's how the protocol works, and it means that consumer-level bitcoin hardware is a pipe dream.

The incentive is clearly to develop the fastest hardware possible for oneself, and then to prevent as many others as possible from getting something similar. This means proprietary hardware. Custom ASICs are extremely expensive to develop, especially at small scale, and only the biggest players are going to be able to do this. btc long ago blew away CPUs, FPGAs, and GPUs.

I don't think that was the intention, since it obviously leads to centralization and secrecy, exactly the problem set btc was trying to solve.

Since we can't see into the pools, we have no way to know where that hash power actually lies. It'd be very dumb to fire up uber-secret hardware with unprecedented hashing performance and not pretend that there's a pool in front of it. It's possible, I would even say likely, that a huge portion of the hash power in these pools comes from a small number of big contributors, potentially even the pool's operators themselves.

btc was originally envisioned as something everyone would run on their home computer, and it would be as decentralized as the internet was. Due to multiple design flaws, it's failed horrendously in that, and is now very centralized (and not just in mining).

I know there are a lot of nuances to btc and I don't claim to be a btc expert, so after you downvote, please correct me. ;)

[1] https://news.bitcoin.com/scaling-bitcoin-workshop-hong-kong-...


That's what they're doing:

>During the past years, many research groups have experimented and designed various secure name systems (e.g. GNS, Namecoin, Blockstack). Each of these systems has its own strengths and weaknesses, as well as different user models and total user experience. We are not sure which one works best for the onion space, so ideally we'd like to try them all and let the community and the sands of time decide for us. We believe that by integrating these experimental systems into Tor, we can greatly strengthen and improve the whole scientific field by exposing name systems to the real world and an active and demanding userbase.


Missed that! Thanks for correcting me.


Pleasure!


ew. gross.

CAs : Web of Trust :: domains : what Tor should do


They're not using a CA-like central authority model. They're using Namecoin (or something like it), which is a decentralized authority, verifiable via crypto.


I never said they should. I'm saying the kind of complexity presented by a blockchain is an insane way to work in low-trust environments with competent adversaries.

They should be working on reducing the complexity in the trust model. I trust the person who told me about a service. Why can't I just get the keys from them and verify that all of my other friends agree?


> why can't I just get the keys from them, and verify that all of my other friends agree?

You can; that's what a .onion address is. But that isn't too great for regular users; .tor addresses would make it easier to work with. There's an observation here that people already trust Google and onion directories to non-maliciously give them the right onion address; this is trying to spread that trust out to a blockchain.

The nice thing about a blockchain-based model is that while you need to trust the network to be sane, even if someone uses a lot of computational power to break past this, targeted attacks are still not possible: the attack (redirecting a name) will be visible to the entire network.


Not with a programmatic standard, I can't. I'm saying they should make the equivalent of a key server for GPG -- something simple.

The problems with relying on a blockchain to validate domains against sophisticated adversaries range from obvious to unknown. Not good.


The problems related to blockchains are very well known. Blockchains have been used extensively at this point, for highly critical applications. They're the most censorship resistant platforms known to exist.


Care to name a point where a blockchain stood between the MSS and Chinese dissidents? The NSA and Islamic radicalizers? The SVR and someone Putin wants dead?

I'm stoked about blockchains as much as anyone else (heck, I quit a job at Google and spent a year playing with them when Bitcoin first came on the scene). But to say that they are a good thing to build on top of when facing adversaries that have 7 figure USD budgets and capabilities to perform active attacks on non-trivial chunks of the internet strikes me as just a bit naive.


By placing pretty wallpaper over strong encryption, we are making it more susceptible to damp.


I realize that recording your favorite .onion names is a small expense of keeping your traffic private, but why is "taking notes in a text file" considered ad hoc now? That wasn't the case in 1990, when most computer users had a library of floppy disks with their personal documents, notes, and records. Has the world truly forgotten that you can store data in textual form on your computer's filesystem? With how iOS is designed to hide the filesystem, it seems so. I hear so many app ideas now that could be solved with a single text file and an editor.
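The text-file address book really is this simple (hypothetical file name and made-up onion addresses):

```python
# onion-book.txt holds one "name address" pair per line, e.g.:
#   hn-mirror examplehnmirror1234.onion
#   search exampleonionsearch5678.onion

def lookup(name: str, path: str = "onion-book.txt"):
    """Return the address recorded for a name, or None if absent."""
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2 and parts[0] == name:
                return parts[1]
    return None
```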


Good point about the filesystem, but this sounds like a really annoying and slow task to manage manually. If someone writes an app that spares me the drudgery of copying and pasting out of a text file every time I want to visit a website, I will use that app. Not to mention the reduced cognitive load from having fewer open windows to manage, and one less thing whose location I have to remember.


Am I missing something obvious, or isn't this something browser bookmarks/favorites would handle? I.e., storing a more human-friendly title/description/tags for complicated URLs & paths.

This doesn't help with the potential complexity of sharing or remembering (on a new device) the sites of course.


But then the user has to create a file, in their filesystem. Hard work! They have to remember the name of it, and type things and everything. They have to press enter at the end of lines and keep it all organised. It's so hard.

It's up to you to determine whether the above is satire.



