Globally Distributed Elixir over Tailscale (richardtaylor.dev)
121 points by moomerman on March 8, 2023 | 20 comments



Very cool concept.

Reminds me of the intended use case of Nebula, which seems very similar to this. If you're interested in a bare-bones and totally self-hosted option, it could be a good choice here. https://github.com/slackhq/nebula


There’s a reimplementation of the Tailscale control plane called Headscale, if you want to self-host.

I’d try to use that first, because sadly Nebula and ZeroTier don’t have a relay/TCP/HTTPS fallback option and still “have no plans of implementing one” as of Mar 2023, which leaves you out of luck as soon as you encounter NATs or try to access your cluster from an airport/hotel wifi.

Maybe you know of other mesh/p2p VPNs that do support TCP fallback though? Would be great to see some alternatives in this area


Nebula implemented UDP(?) relay support (1) last year, although it is marked experimental.

1. https://github.com/slackhq/nebula/pull/678


I've been using the relay for months. It is very stable, but it sometimes takes minutes before peers realize they can also talk to each other on the local network.


Pretty sure ZeroTier supports relaying (I remember some of their earlier blog posts mentioning something to that effect). In practice, I've found you just have to turn off UPnP in Settings to use it.

Edit: Yep, just found a reference to it: https://docs.zerotier.com/zerotier/troubleshooting/ (Sorry, no direct link, so you'll need to Ctrl-F and look for Relay)



We're focused on this very thing - https://bowtie.works


Does Bowtie aim to provide the same functionality as Tailscale?


A lot of the concepts are similar, yes. A few key differences exist, specifically as it relates to architecture and user experience.


They actually have a (semi) hosted solution these days.

I evaluated it for one of my projects, but it didn't fit the bill for reasons outside of their control: https://www.defined.net/


I would be very curious to see how well this actually works in different data centers. My understanding of epmd is that it basically expects all nodes to be on the same switch. I don't believe there is any default handling for spikes in latency or netsplits, and epmd is known to be pretty chatty.

The common wisdom seems to be to do federation of clusters rather than clustering multiple data centers together.
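(For context, the kind of setup the article describes can be sketched with libcluster's Epmd strategy pointed at Tailscale hostnames. This is only an illustration, not the article's actual code; the app name and MagicDNS hostnames below are made up.)

    # config/runtime.exs (hypothetical)
    import Config

    config :libcluster,
      topologies: [
        tailnet: [
          strategy: Cluster.Strategy.Epmd,
          config: [
            hosts: [
              :"myapp@lon.example-tailnet.ts.net",
              :"myapp@nyc.example-tailnet.ts.net",
              :"myapp@syd.example-tailnet.ts.net"
            ]
          ]
        ]
      ]

Even with a config like that, dist still forms a full mesh between the listed nodes by default, which is exactly the cross-datacenter behaviour being questioned above.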


By epmd do you really mean all of dist?

The common wisdom on dist is drastically wrong. I was at WhatsApp and going to Erlang conferences, and people would say don't run dist clusters with more than N hosts (N ≈ 100), and we were running dist clusters with 10N.

You probably want to adjust socket buffers and dist buffers.
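(For concreteness, that tuning typically lives in vm.args / the kernel app environment. A minimal sketch; the numbers are placeholders, not recommendations:)

    ## vm.args: raise the distribution buffer busy limit (+zdbbl is in KB, default 1024)
    +zdbbl 131072

    ## larger TCP buffers for the distribution sockets
    -kernel inet_dist_listen_options [{sndbuf,4194304},{recbuf,4194304}]
    -kernel inet_dist_connect_options [{sndbuf,4194304},{recbuf,4194304}]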

WhatsApp did use something they called wandist to provide cross-cluster connections and, more specifically, to determine which clusters communicated with which other clusters. That's useful if the node counts are high enough and memory per node low enough that socket buffers are significant and you have a lot of nodes that don't communicate with each other; dist wants to make a full mesh, but you might be able to avoid it other ways?

Wandist was also useful because pg2:join had scaling challenges because of contention on cluster-wide locks from the global module, but the new pg (originally contributed by WhatsApp) addresses that. Wandist also added regional affinity for the cross-cluster equivalent of process groups and, as mentioned below, reliable messaging layered on top of multiple connections.
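(The replacement :pg API is small; a rough Elixir-flavoured sketch, with an arbitrary group name and the scope started inline rather than under a supervisor as you normally would:)

    {:ok, _} = :pg.start_link()               # starts the default scope
    :ok = :pg.join(:chat_workers, self())     # join this process to a group
    members = :pg.get_members(:chat_workers)  # membership read without global locks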

Dist doesn't hide any imperfections of your network, though. If you routinely have TCP connections that can't make progress, you will have dist reconnects and have to deal with the unknowns there (wandist layered multiple connections and reliable messaging on top of dist; if a connection stopped functioning, messages would be sent on a second connection, at the cost of using more memory). If your network gets throughput limited, you can see net_adm:ping times of tens of minutes, which is exciting too.

All that said, you really do want a reliable network when running distributed systems of any flavor, and building mostly reliable networks is possible, but it's a choice. You'll still need to deal with occasional issues even with a mostly reliable network; WhatsApp used to routinely catch network problems at our hosting provider before they did, and we had to build a lot of blackbox monitoring and diagnostics for their network.
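(One cheap way to at least notice those dist reconnects is to subscribe to node up/down events; a minimal sketch that only prints, with everything else left out:)

    :ok = :net_kernel.monitor_nodes(true)

    # in some long-lived process:
    receive do
      {:nodedown, node} -> IO.puts("dist connection to #{node} went down")
      {:nodeup, node} -> IO.puts("dist connection to #{node} (re)established")
    end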


I would love to read books or post-mortems of running Erlang/Elixir at scale. Do you have any recommendations? Have you written any memoirs of your experience at WhatsApp anywhere?


It's been a while since I read it, but I think https://erlang-in-anger.com/ addresses some of this subject matter; I'm not sure if it's still current, since many things that used to be terrible are now fine. ListA -- ListB with big lists used to be a good way to stall your node (to the point we were considering modifying Erlang to remove that operator), but now it does something sensible, if not particularly fast. I have not written anything in an organized fashion, but I write way too many comments on here. I haven't seen much of anything from my colleagues; they must be better than I am at keeping their mouths shut and just getting things done. :)

There are several talks from Erlang conferences; Rick Reed did several overview-type presentations, and we had a couple of other presenters on different bits and pieces. But post-2014 we didn't present nearly as much (we got kind of bogged down in post-acquisition stuff).


Doesn't seem to be a problem for all the Elixir apps running as clusters on fly.io.


I think there might be some gotchas with some of the Erlang clustering primitives. I worked on a product that Erlang Solutions helped design, and they didn't use Erlang clustering for RPC but rather reimplemented it themselves over dedicated TCP connections. If you do some googling, you'll find people complaining about issues with Erlang RPC caused by multiplexing RPC messages and cluster meta messages over the same TCP connection, creating head-of-line blocking. Though maybe some of this has been fixed since:

https://www.erlang.org/blog/otp-22-highlights/#fragmented-di...

There are also apparently other bottlenecks, like funnelling everything through a single gen_server (https://erlang.org/pipermail/erlang-questions/2016-February/...). Again, this may have been fixed since then.
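(If the single gen_server in question is the rex process that dispatches :rpc.call requests, OTP 23+ added :erpc, which runs each request in its own process on the target node instead. A quick sketch; the node name is a placeholder:)

    # classic API, historically dispatched through the target node's rex server
    :rpc.call(:"app@other-host", :erlang, :node, [])

    # OTP 23+ alternative: per-request process, explicit 5s timeout
    :erpc.call(:"app@other-host", :erlang, :node, [], 5_000)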


It's probably not a problem during normal operation, but I'd be worried about what happens if the network misbehaves.


How is Tailscale for connecting production servers? I will need to do something like that very soon, to have many geo-distributed workers for my Elixir app, and I was thinking of using raw WireGuard, Ansible scripts, and a fair amount of manual effort.

I use it at home, and I'm not sure I need yet another third-party product to do some average networking setup, but I have to admit it's pretty decent at what it does.

So... is anyone using Tailscale on their servers?


I use it in a home-lab setup with my personal development and local deployment machines (mostly Raspberry Pis and a QNAP), plus a few EC2 instances as part of the mesh, along with my laptop and gaming PC.

Tailscale makes the EC2 instances feel just as local as the LAN machines. I love having easy SSH connectivity to my laptop and EC2s from my Windows machine. There's no tunneling or bastioning or key management or any of that nonsense. And without extra network hops (nodes direct-connect whenever possible), I don't have extra nodes in the route to steal bandwidth, time, or cost.

But maybe small-scale isn't what you're after: I haven't done the work to figure out the at-scale bootstrapping process (I still install the systemd or whatever units and Google-auth each machine manually).


I use tailscale with subnet routers at work. It's pretty good and mess-free with what it does there.




