Tor is not as secure as it is often thought. It needs to be redesigned and/or adopt some modifications from I2P's "garlic routing".
I suggest a complete end-to-end rethink of how anonymous Internet services are done. Instead of a tunnel for generic traffic, think of a specialised private protocol for input/output of information. Throw away the highly complex, insecure HTML5 that most hidden services are interacted through. Instead of slow, bloated HTTP requests, think of sockets that stay alive, so traffic can be high bandwidth and low latency. We could even throw in some modern cryptography like Ed25519, which would give some performance benefit. I had an idea for 'heartbeat groups' too, which would prevent timing attacks at the ISP/wider Internet level.
If it weren't for the fact that I have a full time job and don't want the Five Eyes on me, I'd build it myself.
And what exactly qualifies you to make those statements? Especially since what follows IMO reads rather bizarre.
> I had an idea for 'heartbeat groups' too which would prevent timing attacks...
Great! What is this supposed to be? Is it some kind of mixing scheme? How does this prevent timing attacks? How did you model this to verify those claims you make? Do you have a publication on that somewhere?
> If it weren't for the fact that I have a full time job and don't want the Five Eyes on me, I'd build it myself.
In other words, you know better than the people who did the existing implementation, but really can't be bothered right now?
You got me a bit curious, but since you have no description or website link in your profile and a Google search for your nick only leads me back here, I started browsing through your comments, trying to find some details on your background in IT-security. Besides a bunch of strong opinions on practically every topic thrown at you, I didn't manage to dig up anything interesting so far.
Somewhere in your early comments, you said your work entails managing bare-metal machines. Somewhere else you said your job is basically skim-reading docs. There are some other references to systems administration topics as well. Are you by any chance some kind of data center sysadmin?
I'm sorry and I'll gladly stand corrected, but for now there are simply too many red flags here for my taste.
What qualifies me? It's my opinion; I don't need qualification to have an opinion, and you don't need to care about it. But since you asked, I'm a privacy enthusiast with a good understanding of cryptography (not PhD level, and I would never claim it to be). I don't see why my job is relevant.
I've wanted to reinvent hidden services for years and you're wrong about it being a matter of motivation. It really is a matter of both time and not wanting to bring attention to myself.
You were not attacked, your response was simply taken apart and questioned. If having someone be skeptical of you is an attack, that would make talking about subjects where everyone does not already agree very hard.
> What qualifies me?
> a good understanding of cryptography
> not PhD level
> I don't see why my job is relevant.
> It really is a matter of [...] time
Well, I think it is reasonable to ask why you feel you know which changes would make something more secure. A reader now has to assume that you:
- Have not studied the subject in an academic way
- Do not have work experience to make up for that lack of study
- Do not have time to devote to the subject (for example, for self-study)
This would lead a naive reader to the conclusion that you are not academically suited, not professionally suited and not suited through self-study to comment on this subject in a capacity that means anyone should take your suggestions to heart (perhaps incorporating them into the software under discussion). The same would go for people that you would submit changes to (had you had any time for this). That leads me to believe the only point of your commentary is the commentary itself.
I guess it is fair to say I don't understand this. Perhaps GP was getting at that, perhaps not :)
Edit: To be clear, I wish to understand if there is an understanding to be had beyond "I just felt like it".
Not sure what you mean by heartbeat groups, but if I'm guessing correctly this sounds fairly similar to chaffing and winnowing https://en.wikipedia.org/wiki/Chaffing_and_winnowing which lets information from multiple senders (or even bogus, non-existent senders) be clumped together and broadcast in such a way that every destination can pick out only the messages intended for them and cannot distinguish messages not meant for them from uniformly random garbage.
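For the curious, Rivest's scheme can be sketched in a few lines. This toy version (function names, sizes, and serial encoding are my own choices, not from the paper) shows the core idea: the receiver winnows wheat from chaff purely by MAC verification under a shared key.

```python
import hashlib
import hmac
import os

KEY = os.urandom(32)  # shared secret between sender and intended receiver

def mac(key: bytes, serial: int, payload: bytes) -> bytes:
    return hmac.new(key, serial.to_bytes(4, "big") + payload, hashlib.sha256).digest()

def chaff_stream(key, wheat):
    """Interleave real (serial, payload, MAC) triples with random chaff."""
    stream = []
    for serial, payload in enumerate(wheat):
        stream.append((serial, payload, mac(key, serial, payload)))        # wheat
        stream.append((serial, os.urandom(len(payload)), os.urandom(32)))  # chaff
    return stream

def winnow(key, stream):
    """Keep only packets whose MAC verifies under our key."""
    return [p for s, p, m in stream if hmac.compare_digest(mac(key, s, p), m)]

msgs = [b"hello", b"world"]
assert winnow(KEY, chaff_stream(KEY, msgs)) == msgs
```

Anyone without the key sees only (serial, payload, MAC) triples, and a random 32-byte MAC is indistinguishable from a real one.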
I have an acquaintance who's a mid-to-high level FBI agent. In casual conversation, he's alluded on more than one occasion to the government being able to track people using Tor. I believe his exact words were "we've gotten pretty good at it." At this point, I just assume the government knows about everything I do online.
I suspect they are exaggerating, or that the claims were misunderstood. Yes, the FBI can find criminals who use Tor, but with extremely few notable exceptions from many years ago now, that isn't done by breaking any of Tor's protections, but instead using more traditional police investigation methods (e.g., "we got a tip this person is up to something, and this account only ever posts when that person is at home..."). We know this because details of investigations are public once they go to court in the US. Yes, parallel construction is a thing, but the idea that they regularly make use of exploits and this information never gets leaked in any sort of verifiable way (even with FBI agents who are apparently willing to speak of such secret programs with mere acquaintances ;)) means we can be relatively confident that, if such exploits do exist, they are rare enough to be too expensive to waste on anything but the most extreme circumstances. In other words, no, the FBI does not know what you're doing on Tor (assuming you're not at the far end of a bell curve, and ignoring everything other than what Tor protects).
Regular surfing via Tor is so painful (due to captchas from Cloudflare and others) that the proportion of "suspicious" traffic must be quite high in the network.
There have been studies that show this isn't true -- any particular session from a Tor exit relay is about as likely to be malicious as a session from a random IP, and the most popular sites accessed are things like Amazon and Facebook (the latter of which makes up approximately 50% of onion service traffic, as of ~4 years ago). I, for one, use Tor for most web browsing that doesn't involve logging in. (As a hint for people who dislike captchas: I don't actually see them as often any more, but for static content in particular, you can fall back to archive sites to circumvent them.)
It's really surprising to me that Tor is considered de facto the most secure by many, but whenever I review I2P my impression is always that it's better designed than Tor.
That's a reasonable default for 99% of Tor users. For that last 1%, there is the standard/safer/safest slider that is a single click away.
JavaScript also has nothing to do with these attacks. Even if you turn off JavaScript, KAX17 can still attempt end-to-end correlation attacks. This is much scarier than a JavaScript browser exploit.
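A rough sketch of what end-to-end correlation means in practice: an adversary who sees traffic at both the guard and the exit compares traffic-volume time series and needs no browser exploit at all. All the numbers below are invented for illustration.

```python
from math import sqrt
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# cells-per-second observed at a malicious guard for one client...
guard_pattern = [12, 0, 7, 30, 2, 18, 0, 25]

# ...and at a malicious exit, for several candidate streams
exit_streams = {
    "stream_a": [3, 4, 3, 4, 3, 4, 3, 4],
    "stream_b": [11, 1, 8, 29, 1, 17, 1, 24],  # same flow, slightly jittered
    "stream_c": [0, 20, 0, 1, 22, 0, 19, 2],
}

# pick the exit-side stream whose shape best matches the guard-side pattern
best = max(exit_streams, key=lambda k: pearson(guard_pattern, exit_streams[k]))
```

With enough observation windows, the matching stream stands out sharply even under jitter, which is why volume and timing leak through encryption.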
"... how do they know that these various nodes are correlated with one another ..."
The OP alludes to this:
"... and the fact that someone runs such a large network fraction of relays “doing things” that ordinary relays can not do (intentionally vague), is enough to ring all kinds of alarm bells."
... and the OP is "intentionally vague".
I, also, am very interested to know how they correlated them and what the interesting behavior was that they exhibited ...
> Some of KAX17's relays initially had used that email address in their ContactInfo but soon after these relays were setup the email address got removed from their configuration.
They don't need correlation. Operating large numbers of nodes gives them a high enough probability that all three hops may be under their control, letting them observe the entire route.
Instead of messing with your path selection a better strategy would be just run your own guard nodes that you trust (a guard node is the first node that you connect to in a Tor circuit) and to stick with them. Remember, de-anonymization attacks require the attacker to control both the guard node and the exit node at the same time.
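As a back-of-the-envelope model of why long-lived guards help, assume relay selection is roughly proportional to bandwidth fraction (the fractions and circuit counts below are illustrative, not measured):

```python
def p_circuit_compromised(f_guard: float, f_exit: float) -> float:
    """One circuit is deanonymizable iff both its guard and exit are adversarial."""
    return f_guard * f_exit

def p_any_compromised(f_guard: float, f_exit: float, n_circuits: int) -> float:
    """Without guard pinning: every circuit independently rolls the dice."""
    return 1 - (1 - f_guard * f_exit) ** n_circuits

def p_any_with_pinned_guard(f_guard: float, f_exit: float, n_circuits: int) -> float:
    """With one long-lived guard: either you picked a bad guard once (prob
    f_guard) and each circuit then needs only a bad exit, or you are safe
    forever no matter how many circuits you build."""
    return f_guard * (1 - (1 - f_exit) ** n_circuits)

# an adversary with 10% of guard and 10% of exit bandwidth, 1000 circuits:
# rotating guards -> compromise is near-certain; pinning caps it near f_guard
rotating = p_any_compromised(0.10, 0.10, 1000)
pinned = p_any_with_pinned_guard(0.10, 0.10, 1000)
```

This is exactly the intuition behind Tor's design: rotating guards lets an adversary sample you repeatedly, while pinning turns it into a one-time gamble.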
If you want your guard node to be helpful in anonymizing your traffic, you should really make sure it's public and used by some % of the global user base (so that your traffic blends in the noise).
Once you do that, though, you will always have to trust that node a little less than you could if it were walled off to serve only you, simply because it is another machine serving connections on the internet that will likely be targeted by adversaries who would benefit from turning many of the guard nodes into part of their Tor de-anonymization service.
If I had endless resources and was truly paranoid, what I'd do is build my set of public guard nodes, make sure they're serving Tor traffic, etc.
But then, I'd "borrow" those IPs occasionally for trusted nodes which will only accept connections from me (ideally both sets of machines will be live and routing traffic simultaneously).
In theory, you could apply the same tricks with similar success to exit nodes, of course (though, as usual, running an exit node is generally a slightly riskier and harder thing to do).
Which relays are guard relays is public knowledge (by necessity, it's how clients know which hops to select). If you operate a middle relay, and you see traffic coming from a relay not known to be a guard, you know it's from a non-standard Tor config, since you should only see circuits being built through guards (ignoring onion service traffic, which is small enough that we can do so in the limit). If you mean "get your relay listed as a guard, but don't accept traffic from anyone else", 1. you're going to lose your guard flag pretty quickly trying that, and 2. you ultimately have the same problem, since the adversarial middle will see that this guard is relaying suspiciously little traffic (relays only get flagged by auths as guards if they can handle above a certain amount of traffic, since they are designed to change infrequently, so a bad one will give you a bad Tor experience for a long time).
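The check a middle relay can perform might be sketched like this (the fingerprints are hypothetical, and real Tor consults the signed consensus rather than a hard-coded set; this just illustrates the membership logic):

```python
# Guard flags are in the public consensus, so any middle relay can check
# whether the previous hop "should" be originating circuits toward it.
known_guards = {"fpr_guard_1", "fpr_guard_2"}                # hypothetical
known_relays = known_guards | {"fpr_middle_1", "fpr_exit_1"}  # hypothetical

def looks_nonstandard(prev_hop_fpr: str) -> bool:
    """A circuit extended to us from a relay that lacks the Guard flag
    implies a client running a non-default path-selection config."""
    return prev_hop_fpr in known_relays and prev_hop_fpr not in known_guards

assert not looks_nonstandard("fpr_guard_1")   # normal circuit via a guard
assert looks_nonstandard("fpr_middle_1")      # a relay, but not a guard
assert not looks_nonstandard("203.0.113.5")   # a non-relay: we are the guard
```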
> That only works if the attacker doesn't know the guard node is you.
That's not how Tor nodes work. Once you set up a guard node (and it has earned enough reputation), you won't be the only person using it. Also, de-anonymization attacks require knowing the traffic coming to the guard node, and if you run a trustworthy one yourself and you're not dealing with a global passive adversary, there's no way the attacker will be able to see the incoming traffic to it.
Running your own node and "sticking with it" is not a great idea, especially if you're the only one using it. You will be spotted and identified pretty much instantly.
I guess this approach works fine for an individual, but if everyone has to run their own guard node to be safe, why would anyone connect to your guard node (given it would be risky from their perspective, since they aren't running it themselves)?
In other words, if you accept you can’t trust anyone else, why would anyone else trust your node too?
(Edit: Sorry I’m wording it poorly but I hope you get the idea)
Most people don't need to. Most people use Tor when it's a) completely unnecessary, b) not interesting to the sort of well-funded attacker that could run this many nodes, or c) only needed to avoid identifying them to the service they're requesting from.
>I guess this approach works fine for an individual, but if everyone has to run their own guard node to be safe, why would anyone connect to your guard node
By the time a few percent of Tor users are running a guard node, KAX17 would be too diluted to be a real threat.
Does an attacker only need to control the guard and exit nodes, or the middle relay node(s) as well?
If the latter, can you configure Tor to use more than one middle relay node, depending on your threat model?
Could Tor do something like overlay a fixed-throughput circuit-switched network on top of the packet-switched network to prevent correlation attacks? Obviously at the expense of efficiency.
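An overlay like that is sometimes described as constant-rate padding: the link emits one fixed-size cell per tick no matter what, so an observer counting bytes learns nothing about the real load. A minimal simulation of the idea (not actual Tor code; the cell size is arbitrary):

```python
from collections import deque

CELL = 512  # bytes per frame, fixed on the wire

class ConstantRateLink:
    """Emit exactly one CELL-sized frame per tick: queued data if any,
    otherwise padding. The observable byte rate is constant either way."""

    def __init__(self):
        self.queue = deque()

    def send(self, data: bytes) -> None:
        # split application data into fixed-size, zero-padded cells
        for i in range(0, len(data), CELL):
            self.queue.append(data[i:i + CELL].ljust(CELL, b"\x00"))

    def tick(self) -> bytes:
        if self.queue:
            return self.queue.popleft()
        return b"\x00" * CELL  # padding cell (would be encrypted in reality)

link = ConstantRateLink()
link.send(b"real payload")
frames = [link.tick() for _ in range(5)]
assert all(len(f) == CELL for f in frames)  # constant observable rate
```

The cost is exactly the inefficiency the parent mentions: bandwidth is consumed at the fixed rate even when nothing is being sent.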
Also: If KAX17 is running nodes on 50+ ASes "including non-cheap cloud hosters like Microsoft", shouldn't it just take one insider at one of those hosts to leak the identity of one or more of these node operators? Come on, guys...
> Does an attacker only need to control the guard and exit nodes, or the middle relay node(s) as well?
Controlling only the guard and exit nodes is sufficient; the middle relay is not needed.
> If the latter, can you configure Tor to use more than one middle relay node, depending on your threat model?
Tor makes dozens of circuits in typical use. You never stick to a single circuit. In the Tor Browser you have first-party stream isolation, so you get a different circuit (and hence different middle and exit nodes) for each first-party domain that you visit.
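The isolation policy itself is simple to model. This toy sketch (my own illustration, not Tor Browser's implementation) just maps each first-party domain to its own circuit ID, so third-party requests embedded in a page ride the circuit of the page that loaded them:

```python
import itertools

class IsolationPolicy:
    """Toy model of first-party stream isolation: every distinct
    first-party domain gets its own circuit, and repeat visits reuse it."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._by_site = {}

    def circuit_for(self, first_party_domain: str) -> int:
        if first_party_domain not in self._by_site:
            self._by_site[first_party_domain] = next(self._ids)
        return self._by_site[first_party_domain]

p = IsolationPolicy()
assert p.circuit_for("news.example") == p.circuit_for("news.example")
assert p.circuit_for("news.example") != p.circuit_for("shop.example")
```

Tor's SOCKS port exposes a related mechanism (IsolateSOCKSAuth), where supplying different SOCKS credentials forces streams onto different circuits, which is how applications can implement policies like this one.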
Do you mean that the service (YouTube) will block the operators of Invidious instances? If so, this already happens, thus the constantly-updated open source projects. If you meant something else, ignore me, haha
KAX17 also employed middle relays. It's not out of the question that they are attacking onion services in some way, like the 2014 "relay early" attack.
For example, if you run a relay chosen as a guard for an onion service, there are a number of clever tricks (sending heavy traffic to the onion service + observing which IP suddenly has lots of activity + eliminating other relay IPs) that can be done to find the IP of an onion service. There are also "sniper" DDoS attacks that can force an onion service to use your node as a guard. I would imagine that many or most onion operators haven't bothered to enable the Vanguard addon that makes this attack a little more expensive.
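The elimination step of such a guard-discovery attack can be sketched as a set intersection across induced-traffic rounds: in each round the attacker hammers the onion service and records which IPs showed a matching activity spike, then intersects the rounds and discards IPs known to be relays. All addresses below are documentation addresses, invented for illustration.

```python
# IPs of relays published in the consensus (attacker can rule these out
# as the service's real address)
known_relays = {"198.51.100.7", "203.0.113.9"}

# per-round sets of IPs whose traffic spiked when we flooded the service
rounds = [
    {"198.51.100.7", "192.0.2.44", "203.0.113.9", "192.0.2.80"},
    {"192.0.2.44", "203.0.113.9", "192.0.2.13"},
    {"192.0.2.44", "198.51.100.7", "192.0.2.99"},
]

# survivors of every round, minus known relays, are candidate service IPs
candidates = set.intersection(*rounds) - known_relays
```

Each additional round shrinks the candidate set, which is why defenses like Vanguards aim to make it expensive for an attacker to sit in the observation position at all.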
I really wonder about the feasibility of keeping onion service IPs globally secret. It may not be possible. I would certainly not stake my freedom on it. End user traffic to onion services is considerably harder to compromise, though.
> I really wonder about the feasibility of keeping onion service IPs globally secret. It may not be possible. I would certainly not stake my freedom on it.
Me either. KAX17 is visible. There are exceptionally well-resourced organizations with I think the motivation to unmask a seemingly target-rich collection of users, and the money and skill to simply infiltrate most relays, probably without being noticed.
I also wouldn't stake my freedom on it because it is simple to detect that you are using Tor, and that puts you in a tiny group, one that is (again) seemingly target-rich.
Given that running a Tor relay (God forbid, an exit node) is risky for a regular citizen, I suspect that the majority of Tor is run these days by those immune to such risks, i.e., law enforcement (and those made to collaborate). In such a situation, correlation should be pretty easy.
While running exit nodes definitely seems undesirable due to the volume of abusive traffic that is likely to go through the exit node, I bet other kinds of relays would be a lot less cumbersome. Obviously a very well-funded attacker will always have the upper hand, and obviously, if you were to always use your own node as the guard node, it would lose some of the properties of Tor, but I bet it wouldn't hurt for more of us to run nodes. I am interested.
Hetzner is German, and Germany actually has strong privacy laws. In terms of coordination it is easier, but actually obtaining that cooperation is much more difficult than with any of the alternatives you are likely to be using.
No, Tor and a VPN are not comparable services. The Apple VPN is a closed network run on servers run by Apple, so you have to put 100% of your trust in Apple, as opposed to the open network of Tor, where trust is (ideally) distributed among many different actors.
That's not how iCloud Private Relay works, and it's not a VPN. The ingress nodes are currently run by Apple; the egress nodes are run by other providers, mostly Cloudflare, Akamai and Fastly.
The ingress nodes know who you are[1], but they can't see who you're connecting to; the egress nodes see the traffic, but they don't know who you are.
You still have to trust Apple, but not because iCloud Private Relay is run by Apple; rather, because the client is proprietary software running on a proprietary operating system that Apple controls. But this is true if you choose to use Macs in general; using iCloud Private Relay doesn't add new requirements to the level of trust.
As for correlation attacks from a third party, they can be both easier and harder compared to Tor, depending on the attacker. For a global attacker they are harder, because the circuit is geographically limited in scope, the attacker would have to have access to your particular network, and of course the attacker can't install new exit nodes. However, for a local attacker that for whatever reason has visibility into the network hosting the circuit, they are easier, since there are only two nodes in the circuit. And of course, for a government, iCloud Private Relay would pose no problem whatsoever, since all operators are commercial entities and you need a paid account[1] to use iCloud Private Relay... But even in that case, it requires the collusion of both the exit node provider and Apple to unmask you.
[1] Certain types of traffic are always sent unauthenticated through the relay, even when iCloud Private Relay is not enabled.
Of course, it's Medium.com policy never to imply ownership in the event of a hashtag. We have to use the indefinite article, "a hashtag," never ... your hashtag.
Twitter users started using hashtags well before the Twitter platform automatically turned them into links. The important part of a hashtag is the syntactic # and not the HTML <a> tag.
FWIW, as much as people love to rag on cryptocurrencies, I feel like this is where they shine. (Of course--as someone like me always shows up--I work on Orchid, a cryptocurrency market for bandwidth that is intended to support various use cases including those similar to Tor.)
The issue is that no one entity should be able to just sit around and dominate the directory of nodes by claiming "I'm a million nodes, wheee!!!", and so that needs to be "expensive". However, for it to be expensive, they also need a way to make money running the nodes.
In our case, people have to lock some money up in a shared pile in order to gain control over percentages of the directory, and what you get in return is that you randomly will get people using you to relay their traffic, for which they will pay you fees (likely close to cost).
(Essentially, instead of Tor's mechanism where, when you want to find a relay node, you randomly select between all entries with equal weight, you would select between market participants linearly weighted by the amount of money they have locked up in the directory.)
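Stake-weighted selection of this kind is nearly a one-liner. A hedged sketch (the provider names and stake amounts are invented) showing that selection frequency tracks locked stake:

```python
import random

# hypothetical directory: provider -> stake locked up (in some currency unit)
directory = {"alice": 100.0, "bob": 300.0, "carol": 600.0}

def pick_relay(rng: random.Random) -> str:
    """Select a provider with probability proportional to its locked stake."""
    names = list(directory)
    weights = [directory[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded for reproducibility
picks = [pick_relay(rng) for _ in range(10_000)]

# carol holds 60% of the total stake, so she should win roughly 60% of draws
carol_share = picks.count("carol") / len(picks)
assert 0.55 < carol_share < 0.65
```

Tor's actual selection is also weighted (by measured bandwidth, with per-position adjustments) rather than strictly uniform, but the contrast the parent is drawing, stake versus donated capacity, holds either way.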
And that's really what most of these cryptocurrencies are doing in this generation of new decentralized protocols (which I've seen people hate on): dealing with the reality that there are bad actors and not everyone is going to run the software without evil changes.
This is also the core of Ethereum: you want a decentralized database capable of transactions? OK, well, the order of operations matters (as transactions can preclude the ability for later incompatible ones) and so we need to limit the influence that any one operator has...
...so you build a system where nodes have to spend something--either doing some silly math constantly (proof of work) or locking some money up in a pile (proof of stake)--to gain control over percentages of the directory, and in return they get to charge fees to commit transactions.
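For readers unfamiliar with proof of work, the "silly math" amounts to something like this hash-grinding sketch (the difficulty target and encoding are arbitrary choices for illustration):

```python
import hashlib

def mine(data: bytes, difficulty: int) -> int:
    """Find a nonce whose SHA-256 hash has `difficulty` leading zero hex
    digits. Expected cost grows 16x with each extra digit; verification
    stays a single hash."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine(b"block", 4)  # cheap demo difficulty; real networks use far more
proof = hashlib.sha256(b"block" + nonce.to_bytes(8, "big")).hexdigest()
assert proof.startswith("0000")
```

The asymmetry is the point: producing the proof is expensive, checking it is trivial, so influence over the directory costs real resources.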
(There is additionally often an inflation-based "rewards" component, in the case where not enough people are yet paying fees. I'm honestly not for these and have a way-too-complex-for-here argument for why they might even be "evil" in most cases, but that's kind of irrelevant anyway.)
The reality is that, if you want to build a decentralized system, and you want to figure out how to make it hard for bad people to do bad things, you have to raise the cost of doing said bad things; but, to do that will require programmable money... leading you to crypto.
The adversaries mentioned in the article are highly sophisticated and seem to have access to a great amount of resources. They may be, and some would probably say they likely are, working for nation states.
Given a situation where your adversary is a nation state, how does crypto fix anything? How is crypto not in the end just “proof-of-resources”? Nation state wins.
The article touches on trust models and personally I think it would be a better solution to introduce some kind of manual trust into the routing. If there’s one thing Tor does not need, it’s crypto-“currencies”.
I just-about-concurrently to you asking this question answered a similar question someone else had posed already (using the NSA as their specific example) here: https://news.ycombinator.com/item?id=29466855 .
I just question the value of using 0.5% of the world's electricity to keep the NSA out if there are other options. For example, as the article suggests, if I could decide which nodes I trust as entry guards and so forth, malicious actors could throw as much money as they wanted at the network without getting anywhere.
Yeah: as noted there, I agree that proof of work is probably immoral, but it happens to be a network that has already gotten big so we can start to analyze its ability to defend against "nation-state actors"; instead of that, what if it were 0.5% of the world's economy "locked up"? That's "proof of stake" instead of "proof of work", and is where all of these systems are going (including Ethereum for their next major version they might someday finish, or existing networks that I'm personally way more excited about, such as Avalanche).
If you needed 0.5% of the world's economy to run a node, nobody in the world could afford it. The amount needs to be a lot lower.
Let's make things more concrete and say we have two actors: (1) the NSA, and (2) Cryptopunk Joe. Let's assume you need to lock up $1000 to run a node. That's already pretty expensive, so cryptopunks will be deterred, yet the NSA could easily spend a few million and run thousands of nodes. i.e., there is no dollar amount that attracts the Cryptopunk Joes of the world, but is out of reach for the NSA. Would you disagree? How do you solve that problem?
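To make that concrete, here is the trivial arithmetic under a pure stake-weighted model (all numbers hypothetical, matching the parent's example):

```python
def attacker_stake_for_majority(stake_per_node: float, honest_nodes: int) -> float:
    """Stake an attacker must lock up to control half of a stake-weighted
    directory: attacker / (attacker + honest) >= 0.5 requires matching the
    entire honest stake."""
    return stake_per_node * honest_nodes

# 100,000 honest participants each staking $1,000: matching them costs $100M,
# prohibitive for Cryptopunk Joe but pocket change for a nation state.
cost = attacker_stake_for_majority(1_000, 100_000)
assert cost == 100_000_000
```

The linearity is exactly the parent's objection: the attacker's cost scales only with honest participation, so no fixed stake level deters a well-funded agency while remaining affordable to individuals.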
(Aside—sorry you're getting downvoted. I'm generally a crypto skeptic, but I love to see people tackling hard problems in novel ways, and I try to stay open-minded when I hear about a new project.)
> If you needed 0.5% of the world's economy to run a node...
Sorry, I meant "to run 50% of the nodes" (the same as the Bitcoin case: it isn't that you needed 0.5% of the electricity to run one node, but to run as many nodes as already exist combined).
> That's already pretty expensive, so cryptopunks will be deterred, yet the NSA could easily spend a few million and run thousands of nodes.
Sure, but they are competing with not one "cryptopunk"--which is a problematic description here, as even Bitcoin miners aren't "cryptopunks": they are probably mostly "serial entrepreneurs" who found a cheap source of electricity--they are competing with (one would presume) at least 10-100,000.
Like, the premise in the "if you want to beat the NSA you have to go big" version of the argument is that you actually "went big": Tor is fundamentally at odds with itself because it simultaneously needs to be big for intrinsic reasons (to ensure that algorithms like what XKeyscore was using--did you access Tor? ok, you are on the watchlist--aren't sufficient: you need actual cloaking cover from real traffic) and yet you are considered an asshole if you use a ton of bandwidth from it (to watch movies or whatever) as it is all donated service that should be reserved for important use cases.
Instead, imagine a world where every time someone wanted a VPN or a TURN server or an HTTP cache or whatever, they considered using this shared decentralized market for bandwidth to build their client. Sure, you could use a service like AWS, but they ream you on bandwidth costs! A world where all of these use cases and all of these normal people doing normal things were all using such a market--which I am not at all claiming would happen in the near future, as it took a long time for Bitcoin, but the premise is that it isn't insane... and also, again, as I said before: the NSA is the least of Tor's worries currently--would be a world in which it might be large enough to be non-trivial for even the NSA to buy in and take control.
> Aside—sorry you're getting downvoted.
Yeah, well, it is way too cool to be knee-jerk anti-crypto at this point, as it is entirely fair to say that the majority of the ecosystem is filled with scam artists, and even some of the largest funds in the space--such as now even a16z--are peddling awkward BS constantly as part of their portfolio. I have entire diatribes about how a bunch of the stuff in this ecosystem is inane or even evil, but the reality is that decentralized systems without some form of monetary incentives are going to be subject to all of the same stuff we see in real-world bazaars without some kind of strong government influence: cryptocurrencies are doing something important for the space of distributed systems.
Ok, but if we're stopping the NSA by forcing them to put down 0.5% of the world's economy to control half of the Tor network... then presumably the rest of the Tor network needs to put down more than 0.5% of the world's economy to hold majority?
And if we're shrinking the fractions, don't we just come back to the same point again: making the ownership of the network a "resource democracy" means that whoever has the most resources will own the network: and that will be nation states.
> the NSA is the least of Tor's worries currently
What are you basing this on exactly? And even if we assume it's true, why aren't nation state surveillance operations the biggest worry? Namedropping NSA is for the most part just a placeholder for $state_surveillance.
> it is way too cool to be knee-jerk anti-crypto at this point
I don't think it's "knee-jerk anti-crypto" to question the sanity of introducing a crypto economy into the Tor network.
I am personally skeptical of essentially any crypto currency in existence today, but I am not by default skeptical against the technology itself. I still think there's a tendency, as there is with any new technology and not just crypto, to shoe-horn it into use cases where it may not be sane to use it.
All I'm saying is that from what I've understood so far, introducing a crypto-economy into Tor doesn't sound like anything else than basically giving access to whoever pays the most money -- and sure, you could say that's true today! Buying nodes gives access, that's what the entire article is about, but in that case maybe we should rather be looking at solutions where that is never the case, rather than just making the price higher.
>>"The issue is that no one entity should be able to just sit around and dominate the directory of nodes by claiming "I'm a million nodes, wheee!!!", and so that needs to be "expensive". However, for it to be expensive, they also need a way to make money running the nodes. "
I always feel like either I'm stupid or I'm missing something when it comes to crypto, because I understand the words and I feel I understand the concepts, but I don't understand the linkages others take for granted, as they pertain to the problem at hand of anonymous communication.
If nodes are expensive, doesn't the NSA have more money than any given privacy-focused individual or organization?
If they make money, how is it expensive in a useful way?
I genuinely don't understand how that solution solves a problem, unless the problem is "we need another way to transfer money (ideally to ourselves and early adopters from late adopters)."
> If nodes are expensive, doesn't the NSA have more money than any given privacy-focused individual or organization?
So, two things are going on in this question.
The first is that I doubt KAX17 is the NSA... they might not even be a government! Most systems people build that rely on altruism are so easily attacked that a grad student can take control of them using resources at a University. I appreciate that the bar to prevent the NSA (or the CCP, or whatever other government surveillance network you are most concerned about) taking over your network is high, but the bar right now is just so low for most of these protocols that it should be embarrassing. If the NSA were really the only problem for Tor, I'd call that "a wild success".
The second, though, is that the goal should be to get big enough that the NSA would actually have a hard time dominating the resources of the pool. This requires being pretty large, but isn't insanely impossible. Let's look at Bitcoin for a moment: I am one of the first people to agree that "proof of work is probably an immoral way to solve this problem" (due to the externalized environmental effects of electricity usage and the such), but damn if it isn't effective, right? Bitcoin uses 0.5% of the world's electricity. That means to take control of half the Bitcoin network, the NSA would have to build out the infrastructure for buying (and using) an equivalent amount of the world's electricity... I bet that's hard, even for them.
Take a look at https://www.crypto51.app/ - you basically need to be top 5 to be too expensive for NSA to attack.
And if your network is not in the top 20, it will cost single dollars per hour to attack, putting it within a regular team's reach.
FWIW, my personal belief is that when the dust settles in another decade there will only be a metaphorical oligopoly of distributed systems, each of which having generalized in their purposes in order to have "shared security" across a number of different use cases. Regardless, I maintain that Tor doesn't protect against the NSA today (and for even more reasons than come from this article today: the NSA can do attacks that don't even require running nodes, involving stuff like timing analysis attacks that Tor does nothing to prevent), so focusing on the NSA seems like a pointless exercise.
> > If nodes are expensive, doesn't the NSA have more money than any given privacy-focused individual or organization?
> So, two things are going on in this question.
> The first is that I doubt KAX17 is the NSA... they might not even be a government!
The author suspects differently: he does suspect a state actor behind KAX17. Do you have better information?
> Most systems people build that rely on altruism are so easily attacked that a grad student can take control of them using resources at a University.
KAX17 at times had bandwidth capacity of 150 Gbit/s; that is getting awfully close to the full bandwidth of a university and would get noticed and shut down pretty quickly. You seem to ignore the fact that running nodes with these capabilities requires significant resources.
>I appreciate that the bar to prevent the NSA (or the CCP, or whatever other government surveillance network you are most concerned about) taking over your network is high, but the bar right now is just so low for most of these protocols that it should be embarrassing. If the NSA were really the only problem for Tor, I'd call that "a wild success".
> The second, though, is that the goal should be to get big enough that the NSA would actually have a hard time dominating the resources of the pool. This requires being pretty large, but isn't insanely impossible.
So your solution is "we just need to be big enough"? In that case, are you not solving a non-problem? If the Tor network was big enough, all this wouldn't be an issue, no need for any cryptocurrency constructs.
>Let's look at Bitcoin for a moment: I am one of the first people to agree that "proof of work is probably an immoral way to solve this problem" (due to the externalized environmental effects of electricity usage and the such), but damn if it isn't effective, right?
Effective at what? Keeping the NSA out? I suspect the NSA (as well as many others) has a good idea about a significant portion of Bitcoin ownership. Also, mining is highly concentrated, and there isn't even really an incentive for state agencies to try to control it.
> Bitcoin uses 0.5% of the world's electricity. That means to take control of half the Bitcoin network, the NSA would have to build out the infrastructure for buying (and using) an equivalent amount of the world's electricity... I bet that's hard, even for them.
> The author suspects differently: he does suspect a state actor behind KAX17. Do you have better information?
FWIW, I also expect them to be a government... but not the NSA. They might not even be a government, though.
> KAX17 at times had bandwidth capabilities of 150 Gbit/s; that is getting awfully close to the full bandwidth of a university and would get noticed and shut down pretty quickly.
You can do this with a botnet of a mere few thousand cable modems, which means that people like me (look into who I am if you haven't) can pull this in our spare time... it just requires being a bit evil, which these people almost certainly are.
> You seem to ignore the fact that running nodes with these capabilities requires significant resources.
No: I just have a realistic model of how large Tor is, which is "ridiculously small".
> If the Tor network was big enough, all this wouldn't be an issue, no need for any cryptocurrency constructs.
So, I did cover the things Tor has right now to get this kind of scaling, and while they aren't insane they are clearly precarious: they rely on the scarcity of IPv4 addresses and manual curation, running on a handful of hopefully-secure machines, by a very small cabal of people whom I would not trust to protect anyone if actually threatened.
If you correctly discount the weight of these precarious solutions, the size of Tor doesn't matter. If you want to build a system where it costs a lot to register a node, you don't do "proof of IPv4"--which might not make sense at all in the future and is easy to forge--you just do something as close as possible to "proof of a giant pile of cash", as that will always have value and always be difficult to obtain in quantity.
> Effective at what? Keeping the NSA out? I suspect the NSA (as well as many others) have a good idea about a significant portion of bitcoin ownership.
I don't understand this... maybe you are trying to draw some haphazard analogy mapping to anonymity networks? What Bitcoin provides is a distributed ledger, and having a sizable control of it isn't about deanonymization but about delinearization: if you own more than 50% of Bitcoin you can reorder recent transactions (which lets you buy something expensive and then undo the transaction). You have to analyze these systems by what they intend to provide based on maintaining their trusted subset.
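To make the reordering threat concrete: the 50% threshold comes from a race between the honest chain and an attacker's private chain. Here is a sketch of the attacker-catch-up probability following the analysis in the Bitcoin whitepaper (my own transcription, so treat the exact numbers as illustrative):

```python
import math

def attacker_success(q: float, z: int) -> float:
    """Probability an attacker with fraction q of the hash power ever
    catches up from z blocks behind (Nakamoto whitepaper, section 11)."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker always catches up eventually
    lam = z * (q / p)
    s = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        s -= poisson * (1.0 - (q / p) ** (z - k))
    return s

print(attacker_success(0.10, 6))  # well under 1%: 6 confirmations suffice
print(attacker_success(0.51, 6))  # 1.0: majority control means guaranteed reorgs
```

This is why "sizable control" matters for a ledger: below half the hash power the attack probability decays exponentially with confirmations, while above half it is certain.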
Sorry if I come across as too confrontational, but I really don't understand how your proposal offers a solution, and I'm trying to understand it.
> > The author suspects differently: he does suspect a state actor behind KAX17. Do you have better information?
> FWIW, I also expect them to be a government... but not the NSA. They might not even be a government, though.
> > KAX17 at times had bandwidth capabilities of 150 Gbit/s; that is getting awfully close to the full bandwidth of a university and would get noticed and shut down pretty quickly.
> You can do this with a botnet of a mere few thousand cable modems, which means that people like me (look into who I am if you haven't) can pull this in our spare time... it just requires being a bit evil, which these people almost certainly are.
I would argue that the size of the Tor network actually would be an advantage in this case. You would likely need to double the number of nodes in the network to provide that sort of bandwidth using cable modems, which would likely be noticed quite quickly. However, I agree with the general argument that the Tor network is way too small to be considered "safe".
> > You seem to ignore the fact that running nodes with these capabilities requires significant resources.
> No: I just have a realistic model of how large Tor is, which is "ridiculously small".
> > If the tor network was big enough all this wouldn't be an issue, no need for any crypto currency constructs.
> So, I did cover the things Tor has right now to get this kind of scaling, and while they aren't insane they are clearly precarious: they rely on the scarcity of IPv4 addresses and manual curation, running on a handful of hopefully-secure machines, by a very small cabal of people whom I would not trust to protect anyone if actually threatened.
> If you correctly discount the weight of these precarious solutions, the size of Tor doesn't matter. If you want to build a system where it costs a lot to register a node, you don't do "proof of IPv4"--which might not make sense at all in the future and is easy to forge--you just do something as close as possible to "proof of a giant pile of cash", as that will always have value and always be difficult to obtain in quantity.
I still don't understand: if you require "proof of a giant pile of cash", you're reducing the number of nodes, so how does that help?
> > Effective at what? Keeping the NSA out? I suspect the NSA (as well as many others) have a good idea about a significant portion of bitcoin ownership.
> I don't understand this... maybe you are trying to draw some haphazard analogy mapping to anonymity networks? What Bitcoin provides is a distributed ledger, and having a sizable control of it isn't about deanonymization but about delinearization: if you own more than 50% of Bitcoin you can reorder recent transactions (which lets you buy something expensive and then undo the transaction). You have to analyze these systems by what they intend to provide based on maintaining their trusted subset.
I understand bitcoin, but I don't understand what you claim it is effective for and how it is relevant for this discussion.
> The reality is that, if you want to build a decentralized system, and you want to figure out how to make it hard for bad people to do bad things, you have to raise the cost of doing said bad things; but, to do that will require programmable money... leading you to crypto.
So to create a decentralized system like crypto, you need crypto (as programmable money)?
Also, can you elaborate on locking money in a pile? Do people lock money as the currency? How does it demotivate people (whose money is locked) to do bad things?
> So to create a decentralized system like crypto, you need crypto (as programmable money)?
I was saying to create a decentralized system like Ethereum (a database) or Orchid (a bandwidth market) you need crypto. FWIW, yes: to build crypto you also need crypto, but I would imagine programmers like us are used to bootstrapping issues as we deal with them daily.
> Also, can you elaborate on locking money in a pile? Do people lock money as the currency? How does it demotivate people (whose money is locked) to do bad things?
FWIW, in my descriptions here, I wasn't looking exactly at "demotivation" (which is extremely complex and works better in some places than in others... in the case of the decentralized databases it is often "you lose all of that money if anyone ever manages to catch you telling two people you voted on the order in which to commit transactions in different ways") but just the "it needs to be expensive to be bad" aspect (which involves defending against what people often call a "sybil attack", after the subject of a famous case study of multiple personality disorder): if you want to run a Tor node, maybe it should require tens of thousands of dollars per node locked up somewhere (which is proof of stake), or maybe it should require you to constantly burn electricity doing some arguably-"useless" math (aka, proof of work).
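The stake-based version of "expensive to be bad" is just arithmetic. A toy sketch with entirely hypothetical numbers (the relay count, budget, and stake figures are made up for illustration):

```python
def attacker_share(budget: float, honest_nodes: int, stake_per_node: float) -> float:
    """Fraction of the relay pool controlled by an attacker who spends
    `budget` registering sybil nodes, when each node must lock up
    `stake_per_node` of capital."""
    attacker_nodes = int(budget // stake_per_node)
    return attacker_nodes / (attacker_nodes + honest_nodes)

# Against ~7,000 honest relays, a $10M budget at a $10,000-per-node stake
# buys 1,000 relays -- only 12.5% of the pool -- whereas free registration
# lets the same attacker claim "a million nodes" for nothing.
print(attacker_share(10_000_000, 7_000, 10_000))  # 0.125
```

The point is not the specific constants but the shape: any nonzero per-node cost turns "I suddenly have a million nodes" into a capital requirement that scales linearly with the attack.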
The issue is that it shouldn't be easy for me to just claim "I suddenly have a million nodes". The main defenses Tor has for this are "it is hard to get IP addresses"--which is honestly totally fair, and we may as well call it "proof of IPv4" ;P--and "maybe, if we get lucky enough, the cabal that runs their directory will manually notice somehow that you are evil (which I will note the article here claims is increasingly unlikely) and remove you from the list". This already does not seem to be sufficient even in the current world of IPv4, and implies that Tor will kind of need to continue to require all nodes to be IPv4 (they can let you access IPv6 content, of course, but the idea is that to talk to them would require IPv4, to make the IP addresses scarce) even in a potential future world where IPv6 becomes dominant (which would probably crash the market for IPv4 addresses, breaking this mechanism anyway!!).
(A note on that second one, btw: Tor has like 9 people who run the directory servers. Your client talks to them and attempts to get consensus on the state of the directory. If you were to go to 5 of those people with a crowbar and demand that they redirect all traffic to you, you win... and, when I asked Roger Dingledine what he'd do in such a situation when someone threatened his family over it, it was amazing that he somehow had never really considered the scenario. Meanwhile, those are just computers running somewhere which can be targeted for physical and remote intrusion, and there have been serious scares of such in the past! I'd argue that, instead of people whose job is to run the directory servers that you have to trust, the directory should be code running on a shared distributed database protocol--aka, Ethereum--which, of course, is what we are working on with Orchid ;P.)
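The coercion threat above is just majority voting. A toy model (not Tor's actual dir-spec protocol, which involves signed votes and more nuance) of why 5 of 9 authorities is the magic number:

```python
from collections import Counter

AUTHORITIES = 9
MAJORITY = AUTHORITIES // 2 + 1  # 5 of 9

def accept_consensus(votes):
    """Accept a directory digest only if a strict majority of the
    authorities vouch for the same one; otherwise reject (None)."""
    digest, count = Counter(votes).most_common(1)[0]
    return digest if count >= MAJORITY else None

# Coerce 5 of the 9 operators and the "consensus" is whatever you want:
print(accept_consensus(["good"] * 4 + ["evil"] * 5))  # evil
```

This is the sense in which the directory is a trusted subset: clients have no recourse once a majority of the authority set is compromised, whether by crowbar or by intrusion.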
> I was saying to create a decentralized system like Ethereum (a database) or Orchid (a bandwidth market) you need crypto.
Git uses crypto?
Alternatively: Git has an opportunity to improve its implementation via crypto?
Alternatively^2: Is system being used loosely here? What are characteristics that can help distinguish decentralized-system-type-A, which does need crypto, from decentralized-system-type-B that doesn't need crypto?
I mean, git is, at best, federated: I host my content, and while you can host a copy of my content (and it can be verified to be accurate), there is no way to find that content. This is why it sucks so much that GitHub refuses to support custom hostname mapping, as it means that people get "trapped" in their hosting as these not-very-decentralized hyperlinks end up scattered around hard-coding the name of the hosting provider not only in peoples' websites and working copies but build scripts and .gitmodules files.
You take the premise of git, though, and you add discovery of copies, and you essentially get IPFS. They use a distributed hash table to let you look up "where can I find a file with this hash?"... but distributed hash tables are protocols designed around altruistic nodes collaborating according to protocol, and they require complex anti-spam mechanisms that end up failing constantly, leading to sybil and eclipse attacks.
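A sketch of why DHT lookups are so attackable, using Kademlia-style XOR distance (simplified: real implementations route iteratively through buckets, but the responsibility rule is the same):

```python
import hashlib

def node_id(name: str) -> int:
    """Derive a 160-bit ID by hashing, Kademlia-style."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def closest_nodes(content_hash: int, node_ids, k: int = 3):
    """The k nodes whose IDs are XOR-closest to a content hash are the
    ones everyone asks about it. Since IDs are free to generate, an
    attacker can grind IDs near a target hash and eclipse the content."""
    return sorted(node_ids, key=lambda n: n ^ content_hash)[:k]

nodes = [node_id("node-%d" % i) for i in range(100)]
print(closest_nodes(node_id("some-file"), nodes))
```

The eclipse attack falls straight out of this: whoever controls the k closest IDs controls all answers to "who hosts this hash?", and nothing in the protocol makes acquiring those IDs cost anything.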
To protect against all of these attacks on our decentralized "help me find a place that is hosting code with this hash" service, I contend you will end up with consensus techniques and incentive structures that force you down the path of cryptocurrencies. This is, in truth, why Bitcoin is so cool: "Nakamoto consensus" is a probabilistic technique to build a shared database where you don't have to assume a ton of nodes are honest: the nodes are being monetarily incentivized to be honest.
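The mechanic behind that incentive is worth seeing in miniature. A minimal proof-of-work sketch (made-up header and difficulty): finding a valid nonce is expensive, verifying it is one hash, and rewriting history means redoing the search for every subsequent block.

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Search for a nonce whose SHA-256 digest falls below the target;
    the real-world cost of this search is what backs Nakamoto consensus."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# ~65,536 hashes on average at 16 difficulty bits; Bitcoin's real
# difficulty is astronomically higher, which is the whole point.
nonce = mine(b"block-header", 16)
```

"Probabilistic" is literal here: any node might find the next block, but expected throughput is proportional to hash power spent, so honesty is what the money rewards.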
> build a shared database where you don't have to assume a ton of nodes are honest: the nodes are being monetarily incentivized to be honest.
Monetary incentives are certainly one way to do that, but I don’t see any evidence that they’re the only way.
In particular, you should only need to assume that some nodes will be honest, and have a cheap way to verify honesty. Long-lived reputation and cryptographic “proofs of dishonesty” seem promising. Make it hard to build reputation, but easy to lose it, and easy to detect dishonest actions.
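One way to read that "hard to build, easy to lose" rule is an asymmetric update rule; here is a toy sketch (the constants and the function are made up purely for illustration, not a concrete proposal):

```python
def update_reputation(rep: float, behaved_honestly: bool) -> float:
    """Asymmetric update: honest rounds earn a small, capped increment,
    while one provable dishonest act resets reputation to zero."""
    return min(rep + 0.01, 1.0) if behaved_honestly else 0.0

# An attacker must behave for on the order of 100 rounds to approach full
# reputation, and loses all of it the moment a "proof of dishonesty" lands.
rep = 0.0
for _ in range(100):
    rep = update_reputation(rep, True)
print(update_reputation(rep, False))  # 0.0
```

The economics rhyme with proof of stake: accumulated reputation is the locked-up capital, it is just denominated in time and verified behavior rather than money.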