Theseus DHT Protocol (2018) (wootfish.github.io)
205 points by killittosaveit on June 19, 2023 | 39 comments



Oooh interesting! I've been reading up on DHTs for the first time recently (I'm very late to the party).

It looks like the original inspiration was an anti-censorship network for sharing/providing scientific papers: https://eli.sohl.com/2017/02/17/theseus-robust-system-for-pr...

But as another poster noted, the project seems to be defunct due to it being a solo effort: https://eli.sohl.com/2017/02/17/theseus-robust-system-for-pr...

I'm curious why the OP is interested in this project, or if anyone else has any interesting DHT tools they want to share?


bittorrent-dht: https://github.com/webtorrent/bittorrent-dht (JavaScript implementation used by WebTorrent)



At the bottom is the link to the more technical specification: https://github.com/libp2p/specs/blob/master/kad-dht/README.m...



This seems to be ~4 years old. Have any cryptographers reviewed / audited this protocol in the interim? A quick Google search comes up empty [1], and Google Scholar doesn't show anything [2].

[1]: https://www.google.com/search?&q=Theseus+dht+protocol+%22aud...

[2]: https://scholar.google.com/scholar?q=theseus+dht+protocol


IANAC† but I think the use of Noise Protocol is completely bogus. Regardless of your protocol of choice, you cannot encrypt things end-to-end if you cannot authenticate who is at the other end that you're encrypting for—hence the use of "safety numbers"[0] in Signal, "certificate authorities"[1] in TLS, "web of trust"[2] in PGP, and the list goes on and on...

This is at best an obfuscation attempt to make fingerprinting more expensive, just like BitTorrent protocol encryption[3].

† I Am Not A Cryptographer

[0] https://support.signal.org/hc/en-us/articles/360007060632-Wh...

[1] https://en.wikipedia.org/wiki/Certificate_authority

[2] https://en.wikipedia.org/wiki/Web_of_trust

[3] https://en.wikipedia.org/wiki/BitTorrent_protocol_encryption


Hi, I wrote this post. Let me use your comment as a chance to talk about what I was going for with the use of Noise here.

You're correct that part of the assumption inherent in end-to-end encryption is that you've got the other end right; otherwise, MitM attacks become possible. To prevent this, social apps like Signal (also WhatsApp and many others) give you, roughly speaking, a way of confirming that both endpoints have negotiated the same secure channel. Less-social web apps use PKI. In both these cases, the design makes sense because you care who the other endpoint is. Their identity is meaningful to you outside of the context of the protocol.

In the peer-to-peer context, though, this identity information is mostly meaningless. I think, as long as there are no "network admins" or similarly privileged identities worth claiming, it's less important to certify identities up-front (what would even be a reasonable standard for accepting them?) and it's more important to be able to persist them. You want to make sure you're still talking to the same person you were talking to last time, whoever they are. This is what the PSKs are meant to accomplish (though they are not the only possible solution here).

This is technically still MitM'able, but only if you catch two parties on their very first connection; after that, they have a shared secret, and you should be locked out. Technically this falls slightly short of the ultimate goal of end-to-end encryption, but I still think it's a good fit for this context.
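
For illustration, here is a minimal sketch of that "persist the identity" bookkeeping in Python (not code from Theseus; the peer ID and storage format are made up): pin a fingerprint of whatever secret you negotiate on first contact, and refuse to proceed on later connections if it has changed.

    import hashlib
    import hmac
    import json
    import os

    PIN_FILE = "peer_pins.json"  # hypothetical on-disk store of pinned fingerprints

    def _load_pins():
        if os.path.exists(PIN_FILE):
            with open(PIN_FILE) as f:
                return json.load(f)
        return {}

    def remember_or_verify(peer_id: str, shared_secret: bytes) -> bool:
        """First contact: pin a fingerprint of the negotiated secret (TOFU).
        Later contacts: require the same secret, locking out a MitM who
        wasn't present for the very first handshake."""
        pins = _load_pins()
        fingerprint = hashlib.sha256(shared_secret).hexdigest()
        if peer_id not in pins:
            pins[peer_id] = fingerprint
            with open(PIN_FILE, "w") as f:
                json.dump(pins, f)
            return True
        return hmac.compare_digest(pins[peer_id], fingerprint)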

I went into this in (much) more detail in this old blog post: https://eli.sohl.com/2017/06/09/message-encryption-in-theseu...


> This is technically still MitM'able, but only if you catch two parties on their very first connection

Ah, I hadn't realised it was Trust On First Use (TOFU). Neat!


Cool project, which led me to this post: https://eli.sohl.com/2020/06/10/sybil-defense.html

I'm in this space right now (Sybil-resistant DHTs) for an identity project and finding things like this is absolute gold for me. Is there a good place to find more articles on modern sybil protection mechanisms (other than the proof-of-whatever blockchain bs)?


Hi, I wrote this post, and I just wanted to say thanks for the kind words - this research was a lot of fun to do and I'm glad it has an audience.

The "best" resources I know of are the ones I cited in that post, though do bear in mind it's a few years old. That said, I did want to make sure you didn't miss this companion piece: https://eli.sohl.com/2020/06/05/dht-size-estimation.html which is applicable to detecting (and - conjecturally - responding to) Sybil attacks.
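
As a rough illustration of the size-estimation idea (a simplified sketch, not the exact method from the post): if node IDs are uniform in the keyspace, the i-th closest node to a random target sits at a normalized distance of roughly i/n, so you can fit n from the distances an ordinary lookup returns. The 160-bit keyspace default is just an example.

    def estimate_network_size(distances, keyspace_bits=160):
        """distances: XOR distances (ints) of the k closest nodes found
        for a random target ID, sorted ascending."""
        norm = [d / float(2 ** keyspace_bits) for d in distances]
        ranks = range(1, len(norm) + 1)
        # Least-squares fit of d_i ~ i/n through the origin:
        # slope = sum(i * d_i) / sum(i^2), and n ~ 1/slope.
        slope = sum(i * d for i, d in zip(ranks, norm)) / sum(i * i for i in ranks)
        return 1.0 / slope if slope > 0 else float("inf")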


Just wanted to say thank you, again. Your project gave me a lot to think about.



See my sibling comment. I am looking for partners to help implement this "holy grail" stuff together.


Anyone here an expert on DHTs, or maybe an enthusiast?

Back in 2018 when I was designing the Intercoin Protocol (which we now call the Intercloud) I met with Petar Maymounkov, who invented Kademlia in the early 2000s. Dude used to teach at NYU (my old school) so I emailed him and took him to Olive Tree Cafe — wound up asking him a bunch of questions and learning a lot.

Anyway, I have a question about all this. Consensus is an expensive operation, and the reason blockchains are slow is that they run full network consensus for every block. In Intercloud, we run network-wide consensus only occasionally, and mostly to update a provably random seed (by combining input from many different nodes). This serves as a seed for a random oracle for the network.

Now, that oracle is used during "shuffling". I think we may have originally got the idea from the SAFE project. What happens is that, every so often, swarms of nodes are shuffled and nodes are reassigned to other swarms. So although they may pick their original swarm, they have no control over what future swarms they will be in, because it is effectively impossible for them to predict, let alone control, the random oracle and hashes of activity from neighboring swarms. Being off by a single bit throws off any of their well-laid plans.

Shuffling is necessary to break up swarms that have been corrupted, overrun by dishonest nodes via Sybil attacks, or are simply unable to reach consensus due to unavailability.
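
Not the Intercloud code, just a toy sketch of the general shape of that shuffling step: once a network-wide random seed exists, swarm membership can be derived deterministically from it, so no node can choose or predict its next swarm. The function names and the swarm-count parameter are illustrative.

    import hashlib

    def assign_swarm(seed: bytes, node_id: bytes, num_swarms: int) -> int:
        """Deterministic but unpredictable-in-advance assignment: flipping a
        single bit of the seed changes essentially every assignment."""
        digest = hashlib.sha256(seed + node_id).digest()
        return int.from_bytes(digest[:8], "big") % num_swarms

    def reshuffle(seed: bytes, node_ids, num_swarms: int):
        """Group all nodes into swarms for the current epoch's seed."""
        swarms = {i: [] for i in range(num_swarms)}
        for node_id in node_ids:
            swarms[assign_swarm(seed, node_id, num_swarms)].append(node_id)
        return swarms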

So most of these DHT issues go away. However, encryption is still a very good idea.

Another GREAT idea is one SAFE pioneered - removing the IP addresses from responses after the first hop, because otherwise it is trivial to DDoS a network or a subset of it.

PS: last year I interviewed Ian Clarke (creator of Freenet) about this stuff https://m.youtube.com/watch?v=JWrRqUkJpMQ


> To a passive observer, all Theseus DHT protocol traffic is indistinguishable from random noise.

This.

And it should have been a design requirement for every protocol on the internet since day one.


It's a cool property for sure, but in reality it's pretty difficult to implement, especially if you want a protocol that's extensible, interoperable, or allows open communication. At minimum, you're probably going to have to assume any participants trying to speak the protocol to each other have pre-shared cryptographic keys and/or protocol parameters.

Even a protocol like WireGuard, for example, which does hardcode protocol parameters and requires the client to know the server's public key in advance, is not designed to be indistinguishable from random noise. You can certainly do it, but you lose a lot of other properties that are arguably more important to protocol design than indistinguishability.


Can you explain this? How can it be indistinguishable from random noise? What does that even mean? And why is it even important/useful? Surely even if it looks like noise anyone can see that you are communicating with the DHT network?


The idea is that from the point of view of an observer able to see every byte of data your protocol puts on the network, there is no way for them to tell whether you are actually speaking the protocol or just exchanging random bytes. Basically this is an extension of the idea that encrypted data should look completely random to someone who doesn't know the encryption key, just applied to an entire network protocol. Achieving this means every single byte your protocol puts on the wire needs to be either encrypted with a key known only to the participants or entirely random. As you can imagine, it's pretty hard to actually do, which is why most protocols don't work this way.
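
To make that concrete, here is the kind of crude check a passive observer could run on captured packets (a toy sketch, not a real DPI tool; the 7.0 threshold is an arbitrary heuristic): a plaintext protocol header gives itself away immediately, while a stream that is uniformly random yields no such signal.

    import math
    from collections import Counter

    def byte_entropy(data: bytes) -> float:
        """Shannon entropy in bits per byte; uniform random data scores ~8.0."""
        if not data:
            return 0.0
        total = len(data)
        counts = Counter(data)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def looks_fingerprintable(packet: bytes) -> bool:
        # e.g. an unencrypted BitTorrent handshake starts with "\x13BitTorrent protocol"
        has_known_header = packet.startswith(b"\x13BitTorrent protocol")
        return has_known_header or byte_entropy(packet) < 7.0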

While this is not a universally useful property, it can be valuable in situations where even being able to detect that a device is using a particular protocol is a problem. A good example is using an anonymization network like Tor in a repressive country like China. Even if your data is protected by Tor, you probably don't want the authorities to know you're using Tor at all.

As you said, one of the biggest problems with this is that even if the protocol itself is perfect, it's not worth much if the network participants are known and communicating with them is itself evidence you're using the protocol. The solution to that would either be making the participants non-public and hard to discover (so an observer doesn't know you're talking to a network participant) or having participants do a lot of things other than participate in the network you're trying to hide. Tor for example takes the former approach with non-advertised "bridges" that you have to know about via some out of band method (e.g. someone emails one to you).

In practice, this indistinguishability property is becoming less useful even when it works, given the ubiquity of "normal" encrypted protocols like SSL/TLS. Arguably just using TLS is far better than trying to look like random noise even if you're trying to hide, since random noise on a network is much less common than TLS and probably more of a red flag these days.


This is a great explanation, and the point about TLS is well taken as well. If you want to go for that level of misdirection, then depending on your threat model you might consider e.g. using a remote proxy and wrapping your session to it in TLS.

The tricky thing about baking TLS in at the protocol level is that it brings its baggage with it. This is not necessarily a bad thing, but it makes the design more complex to reason about. In particular, it is arguably overkill when you're not planning on using certificates, as is the case here. Just compare the number of steps here: https://tls13.xargs.org/ to any of the patterns here: http://www.noiseprotocol.org/noise.html#interactive-handshak...
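
For a concrete sense of the difference in moving parts, here is the Noise XX interactive pattern as the spec writes it, next to a rough list of what a TLS 1.3 handshake carries (shown as an annotated Python snippet for readability; XX is one of the linked patterns, not necessarily the exact one Theseus uses):

    # Noise XX: three handshake messages, each a short list of tokens.
    NOISE_XX = [
        "-> e",             # initiator sends an ephemeral key
        "<- e, ee, s, es",  # responder: ephemeral, DH, encrypted static, DH
        "-> s, se",         # initiator: encrypted static, DH
    ]
    # TLS 1.3 by comparison: ClientHello, ServerHello, EncryptedExtensions,
    # Certificate, CertificateVerify, server Finished, client Finished,
    # plus extension parsing and cipher-suite negotiation inside each flight.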

Another thing worth mentioning regarding indistinguishability from randomness is the impact of metadata. Even if the bytes you send on the wire look meaningless, there's still the size of the message, the spacing between messages, the time of day, etc. Any of these channels can carry signal, and it is very hard, if not impossible, to get rid of those signals completely.

That said, I still think the original goal of indistinguishability is worthwhile, because if you can force the passive adversary to move from perfectly accurate methods (e.g. fingerprinting message contents) to imperfect ones (e.g. guessing the protocol from message timing), that seems like a win to me.


I would rather have every protocol supply an endless amount of ice cream.


So, identifying this protocol must be trivial: it's the one that looks like random noise.


I learned about Elligator quite recently and was shocked by how rarely it's actually been deployed (or even implemented), very happy to see it being used here.

> Release date: 4/20/2018

> Revision date: 10/8/2018

Would be nice to get a (2018) in the title.


Elligator is both very cool and less useful than one might expect. It does what it promises, but there aren't all that many situations where it's really important to conceal the fact that you're exchanging Curve25519 keys.

The most common encrypted protocols advertise the cipher suite they're using, so there is little value in concealing the format of the keys being exchanged. And protocols trying to hide their existence entirely are pretty uncommon.


That’s also only the specification release/revision dates.

Digging into the GitHub repo: “The implementation is not yet complete.”

https://github.com/wootfish/theseus.dht


I found a 2020 blog post from the author, announcing a pause of the project https://eli.sohl.com/2020/04/26/pressing-pause-on-theseus.ht...


Year added. Thanks!


Does it allow someone with knowledge of a key to find the IP addresses of other people interested in that key? Because if yes, the other security guarantees are not all that interesting.

The fact that Tor is mentioned would suggest that you have to add that to be secure.


Hi, I wrote this post, and yes, you're right that IPs would be disclosed, because it is necessary to learn a remote peer's IP before you can make a TCP connection to them. Whether or not this is "secure" depends on who you're trying to be secure against. If you're sharing Linux ISOs, you really don't care who sees you do it; if you're sharing pirated movies, you just care that Disney doesn't find out it was you, so you can use a VPN; if you're sharing government secrets, you might have a more rigorous threat model.

For users in extreme circumstances, Tor gets them a lot, but it also adds a lot of overhead; you'd want to avoid it if you don't need it, not just for your own sake but also for the sake of Tor's finite bandwidth. There is a lot to be said for having a feature like Tor be supported-but-optional, opt-in, and interoperable with users who are not on Tor.

I did consider the idea of allowing Tor hidden service addresses as an alternative identifier to IPs, though that idea comes with its own compromises: you'd only be able to talk to other Tor peers, for instance, or, alternately, every peer would have to be on Tor.

The latter case is obviously a non-option. The former case would kind of work, but the Tor-hidden-service peers' effective network size would be much smaller (since they can only talk to Tor peers) and thus easier to compromise. This is obviously bad - users in need of greater security would end up in a less secure version of the network - and if this subsection gets owned, that could have a second-order impact on the network's overall resilience.


I just had a brief read of the protocol, but my understanding is yes: there is nothing like onion routing or similar that could hide from the peer serving the data that the requester is indeed the one interested in it.


If I can still get fined or disconnected by my ISP for what I do on the network, the whole "indistinguishable from random noise" and "forward secrecy" really mean nothing as an end user.


Maybe this is also why they underline in the preamble that it is suitable for running over Tor.


I like the ideas in this, and that it's got changes that allow it to work over stream-based protocols, which means WebRTC and Tor. However, DHTs (Kademlia at least) aren't efficient at large scale if they're not built on packet-oriented protocols. I don't know what the solution is; maybe allowing TCP is just a compromise you have to make sometimes.


Hi, I wrote this post, and this is something I was wondering about the whole time. It seemed like TCP was a necessary compromise, but I wasn't sure how painful the overhead would be. If there are any notable examples you know of where this went wrong (or right) I'd be interested.


If it helps: you may find that when you scale your DHT to 10,000+ nodes, you start to have issues with building on TCP. Of course, many DHTs never get to that scale. IPFS is an example of one that sits just above that mark and can't figure out why things don't work well.

For comparison, the BitTorrent DHT has 20 MILLION nodes and works fantastically.


> If there are any notable examples you know of where this went wrong

IPFS

Stop using sessionful protocols for DHTs. It. does. not. work.


This is true. Last time I looked at the DHT in IPFS it was full of kludges and workarounds for performance issues due to the fact that it uses TCP connections underneath.


A more recent attempt at a DHT by the Holochain team: https://developer.holochain.org/concepts/4_dht/


I once combined libp2p/ipfs's DHT impl with Tor many moons ago: https://github.com/cretz/tor-dht-poc



