Did Tailscale introduce any security measures to their Coordination Server? Last I checked the coordination server is basically controlled by them and they could easily just insert any of their own pubkeys and infiltrate your network.
Even supporting WireGuard's pre-shared key feature would be a start.
As long as that's not addressed, not a chance in hell I'm going to deploy this.
For a FOSS solution (now that ZeroTier no longer is one), there’s Nebula, but you’ll have to operate your own central node. That’s nothing compared to maintaining the hell that is IPsec or even OpenVPN, though.
Even bare Wireguard is a joy to set up, to my endless surprise, if you’re fine administering things as a traditional small-scale LAN, manual routing and all.
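For anyone who hasn't tried it, the "manual" part amounts to roughly this on each box (keys, addresses, and the peer endpoint below are placeholders, and you repeat the peer step per machine):

    # generate a keypair
    $ wg genkey | tee privatekey | wg pubkey > publickey

    # create the interface and give it a tunnel address (subnet is arbitrary)
    $ sudo ip link add dev wg0 type wireguard
    $ sudo ip addr add 10.0.0.1/24 dev wg0
    $ sudo wg set wg0 private-key ./privatekey listen-port 51820

    # add the other machine as a peer (its public key and endpoint are placeholders)
    $ sudo wg set wg0 peer <PEER_PUBLIC_KEY> endpoint peer.example.com:51820 allowed-ips 10.0.0.2/32
    $ sudo ip link set wg0 up

After that it really is small-scale LAN administration: allowed-ips doubles as WireGuard's cryptokey routing table, which is most of the "manual routing" in practice.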
We want ZeroTier to be FOSS. We just need to find a way to do that without allowing someone to take and rebrand it and write us out.
ZT is vulnerable to this because we actually try to make everything decentralized and self hostable. We are not making a cloud silo play where we try to lock everyone into our hub to do their networking, so we have to be able to monetize by licensing as well as SaaS. Our SaaS is optional.
This wasn’t an issue a decade or two ago when existing FOSS licenses were created. It’s an issue now, and is why other people working on decentralized easy to set up systems like CockroachDB use the same license we do. If CockroachDB were liberal FOSS someone would fork it and slap their name on it and scoop them, since by not having to do the hard work of actually developing it they could focus exclusively on marketing.
(You can self host ZT controllers, and roots become easier to self host in the next major release. Once we do that the protocol could theoretically outlive the company with no loss of functionality including its unified namespace.)
It’s not my intention to accuse or condemn you, in that while I do dislike the direction you’ve chosen, I don’t think you are being intentionally or unintentionally malicious. It is also hard not to love the technical work you do and the way you’ve not only struck out in the corporate networking badlands but also taken up the mantle of Hamachi, which has lain unclaimed for way too long.
You’re absolutely brilliant (like Cloudflare), your goal does not seem evil (similar to that of Cloudflare), neither do the consequences of achieving it (somewhat less like Cloudflare), and you haven’t done outright evil things like patent clever but fundamentally simple tricks (have I mentioned I’m conflicted about Cloudflare?). And when all is said and done, you do have to pay your bills, and only you can say what works and what does not in that respect.
So I applaud you when you say, loudly and honestly, that the goal of your licensing model is to pay for the development by making yourself an essential part of it. I believe you when you say your work couldn’t exist otherwise. But my mental model of (dev– and knowledge-of-humankind–centric) open source (unlike that of user-centered free software) is that its very essence is the original developer making themselves unessential or at least not using legal threats to preserve their special position.
Thus I am explicitly not saying that your non-open-source approach is evil—that would be unjustified purism; morality judgments are difficult and best not employed as one-sided proclamations. I am not expecting that you will go home ashamed or mend your dastardly ways, because you have nothing to be ashamed of, your ways are not dastardly, and I have no idea how you could mend them and still survive. I am saying that your non-open-source approach disagrees with the very core of the open source idea as I best understand it, not on a purist technicality but in a fundamental way.
Bruce Perens co-founded the OSI. He seems to have given up on the idea of open source in the age of mainframe providers wielding armies of star programmers, and probably would not have any issue with your strategy. I am younger and have not given up yet, so I look upon your strategy with sadness and a longing for a better world.
Bruce Perens is much smarter than me in more ways than one.
I really agree with your sentiment to a point, but the problem as I've written elsewhere (ad nauseam) is that software isn't free. Software, particularly easy to use software, is extremely labor intensive. If FOSS is about developers writing themselves out of the equation, it means nobody can make a living making it. If nobody can make a living making it, that puts a pretty huge limit on how much resources can be put toward making FOSS better.
Closed silos don't have that constraint so they will win by sheer muscle power.
The BSL (our current license) sort of sucks, and I look at it as a stopgap until we can come up with something better. I have some ideas, but they need to be developed, and so far I have not had time to do that since coding and running a company are fairly demanding.
I don't necessarily think you should open source your most important bits. I'm OK with binary blobs for the most important bits, and possibly open source clients.
I think people who insist on FOSS are demanding too much. Your choice of license is very reasonable. Don't let the purists and freeloaders get you down. (No, I don't yet use Zerotier, but maybe I should.)
More generally, I like what you said a little while back in the thread about the Mighty remote browser [1]:
> (2) The hopelessly naive idea that "information wants to be free" and everything has to be "free" (as in beer) needs to die, be cut into a thousand pieces, burned, encased in concrete, and sunk to the bottom of the ocean. Nothing is free. Software takes a vast amount of labor to produce, and that must be funded. If it's not funded directly and honestly it will be funded indirectly and dishonestly (surveillance capitalism, cloud lock-in, etc.). "Everything has to be free" and piracy actually help push us toward a surveillance capitalist panopticon future.
That point should be front and center in all discussions on FOSS sustainability (or lack thereof).
You can self host ZeroTier controllers, which are the things that control network access. See the controller subfolder of github.com/zerotier/ZeroTierOne.
Roots in ZeroTier are harder to self host right now (it’s getting easier in the future) but they are just dumb STUN/TURN equivalents. They have no power to grant access and can’t even see what networks you join or what you are doing.
I highly recommend you check out ZeroTier. I've found WireGuard setup to be non-trivial and extremely error-prone, but with ZeroTier there's a web UI from which you can add new services and machines, all as one logical "network".
I believe that the free tier is capable of running clusters of up to 50 machines. Really nice to use + truly trivial to set up.
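For anyone curious what "trivial" means concretely, the client-side flow is roughly this (the network ID is a placeholder you get from the web UI, where you also tick the Auth box for the new member):

    # official install script
    $ curl -s https://install.zerotier.com | sudo bash

    # join your network, then authorize the new member in the web UI
    $ sudo zerotier-cli join <16-character network id>
    $ sudo zerotier-cli listnetworks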
It does look like it's missing some functionality that could be key (DNS, ACLs, namespaces). But, pretty neat that it exists, given "The purpose of writing this was to learn how Tailscale works".
Those aren't really comparable situations to an externally hosted "coordination server" which explicitly allows a third party to control access to your internal networks.
Security breaches happen every day, with serious consequences. These are real threats, and there are ways to mitigate against them.
A target like that coordination server is especially risky because, as we saw with the massive SolarWinds attack, attackers will look for and expend effort to compromise the most attractive targets, and that's a particularly juicy one.
Dismissing such threats with false equivalences is a pretty good way to find yourself on the wrong side of a security breach.
Seems like it would be particularly tough to secure that coordination server. It handles metadata about not just nodes, but routing, ACLs, DNS, pre-auth keys, etc. And apparently in a way where it needs to understand the data versus just passing on encrypted bits that it can't read.
I guess you could split it in half, and pair it with some actual service that the customer runs, and let it just be a dumb relay of encrypted/signed/whatever data.
They are exactly the same situation. When you run other people's code you are explicitly giving them control over your stuff. Ken Thompson pointed this out back in 1984. ( https://dl.acm.org/doi/pdf/10.1145/1283920.1283940 )
Either you trust the entity whose code you are consuming or you don't.
Do you trust WireGuard to not get owned via a supply chain attack? Like SolarWinds.
Do you trust any open source project you currently use to be free from bugdoors? Like University of Minnesota recently demonstrated by submitting intentionally-vulnerable patches to the Linux kernel.
There's a significant difference. When I run code, it is isolated from the people who wrote it. Someone providing a GitHub library I compile into my code has no ability to later unilaterally come in and modify it. There is a gate where I have to ask them in, and I do tend to read the diffs first. There are only isolated windows of temporal vulnerability where I have to trust them, with mechanisms for dealing with that coherently.
An always-on service is an ongoing, continuous trust relationship with continuous temporal vulnerability.
Consequently, while I can reasonably safely apply a lower bar of trust to libraries and code, someone asking to be continuously trusted must be held to a distinctly higher standard.
It can be valid to trust that as well, but you're operating from a crippled starting point for your security engineering if you don't see those as separate categories of issues to address.
>Consequently, while I can reasonably safely apply a lower bar of trust to libraries and code, someone asking to be continuously trusted must be held to a distinctly higher standard.
Also, this seems backwards to me. You trust Tailscale at the network layer. If they abuse your trust to inject routes/keys, your machine becomes network-reachable, but all of your other controls are still intact.
That's not true when your applications violate trust.
You can do AppSec if you don't trust the network.
You can't do NetSec if you don't trust the application.
You are necessarily claiming that a single person (you) is capable of adequately verifying the trustworthiness of the work of hundreds (perhaps even thousands) of developers.
That's just the story you tell yourself to convince yourself that you are in control.
Meanwhile, FAANG have been doing it all wrong by hiring thousands of security engineers. They should've hired you.
The really interesting idea here is that once you change the frame of reference, or start from a different technical basis, features like Taildrop become incredibly easy to build and design - it seems like it almost built itself. It seems like the same goes for interfaces or platforms… once you have a computer in your pocket capable of 11T operations per second, you get a lot of stuff “for free” right out of the box.
Not "his challenger"; it was not a contest. It would be more proper to say "his reviewer, the inventor of Unix pipes, using the review column to advertise UNIX®". See a previous discussion here: https://news.ycombinator.com/item?id=22406070
Incidentally, a couple of days after that thread, the current fastest submissions (which, like Knuth, all use tries) were posted at the question https://codegolf.stackexchange.com/questions/188133/bentleys... mentioned in a sibling comment. (AFAICT that comment should say "200 times faster", not "30 times faster", but anyway in light of the original context it's not meaningful to compare the two approaches either for efficiency or ease of implementation or anything else.)
Indeed. For those who started scratching their heads thinking 'this is impossible', this deep dive from the Tailscale blog will be informative. Perhaps even entertaining: the birthday paradox may have a hand in it.
However, in situations where you have, say, a Juniper SRX scrambling both source and destination ports on both sides of your NAT, the birthday search space is 2^32 rather than 2^16.
With a Cisco ASA or Fortigate, which tend to keep the same source port where possible, you'll converge far more quickly. When there's a central server to help, it's even quicker and most of the time it will just work.
(Sometimes it's not possible to keep the same port when source-natting -- with two devices mapping 192.168.0.1:9000 -> 1.1.1.1:53 and 192.168.0.2:9000 -> 1.1.1.1:53, the second will have to be mapped to a non-:9000 source port -- but in my experience Cisco, Fortigate and Mikrotik (thus Linux) all support the "only change if needed" option.)
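A quick sanity check of that, using the usual birthday approximation 1 - e^(-k^2/N) for k random probes per side over a space of N possibilities (figures are illustrative):

    # ~256 probes per side over a 2^16 port space: a collision is likely
    $ python3 -c "import math; N=2**16; k=256; print(round(1 - math.exp(-k*k/N), 3))"
    0.632

    # the same effort over a 2^32 space (source and destination both scrambled): essentially hopeless
    $ python3 -c "import math; N=2**32; k=256; print('%.1e' % (1 - math.exp(-k*k/N)))"
    1.5e-05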
In a world moving towards "zero trust networking", this appears to be going in the opposite direction, where the network is now being implicitly trusted.
This enables unauthenticated file transfers between hosts on the same network.
Given that in the world we live in today RCE vulnerabilities are relatively common, what happens when host1 gets some malware that uses this to transfer itself onto host2?
I assume this has been considered, and it's been decided that the convenience of the feature outweighs the security and reputation risk considerations?
Someone else mentioned that malware on your machine is basically expected to bypass all security layers. So you're right: we're saying "the 'new' network layer is now as trustworthy as the app layer" rather than claiming everything is perfect.
This is one reason we limited taildrop to only transfer between devices owned by a single user for now, and only to drop files into controlled locations. Tailscale also has ACL policies for when you don’t trust all the endpoints to just do anything.
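For reference, the CLI side of a transfer looks roughly like this today (hostnames are placeholders; the receiving node decides where queued files land):

    # on the sending device
    $ tailscale file cp notes.pdf my-laptop:

    # on the receiving device, pull anything waiting into a directory you choose
    $ tailscale file get ~/Downloads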
Isn't the Tailscale work more to do with authenticating hosts than the network? You trust all communication that checks out against a trusted public key.
That still allows malware on a trusted end device to connect to you, but then if you've got malware on the box chances are it will have access to the private keys, authentication cookies etc, anyway - at least currently.
I guess in an ideal world where everything in your org is behind MFA, app authentication and protected from denial of service, then solutions like tailscale would not be needed.
But my personal experience is that most software is completely hostile to good security practices and you end up having some perimeter security as well.
Awesome work, I wish I could use it. Unfortunately I use tailscale at work too, and until https://github.com/tailscale/tailscale/issues/713 is resolved I'm turned off from using tailscale for my personal stuff as well, since I don't want to physically switch to a different set of machines for work and personal projects.
I use Tailscale every day and I like the simplicity of setting up a new node. My problem is that I don't know how to separate my nodes into isolated groups, so my question is: does it support some form of multi-tenancy?
Thank you, yes we are using this feature but I was thinking of something like creating separate isolated network for each group of machines. For example I manage two different cloud infrastructures that don’t share anything in common. It would be more convenient if I could place them in different tenants.
re:localapi: Are these httpd instances? If so, are they run on-demand or always running? Curious how this plays out on mobile devices with regard to power consumption.
re:whois: I presume this is in control of a central identity service. I see that the private tailscale network is (mostly) p2p and (always) e2e, so I'm wondering if you envision a future where the tailscale network goes decentralized without central control, à la BitTorrent / LimeWire (despite [0])?
re:peerapi: Excuse my naivety, but couldn't this binary transport be used for file transfers too? Or, does http simply provide too many (file transfer) options [1] to not bother re-implementing it in a custom protocol?
The localapi is indeed an http server built into the tailscaled process, which is written in Go. Since we already had an http client in there, the net new code to add an http server is quite low. And it doesn’t take any battery unless it’s being queried. I think people underestimate how cheap http can be (once you’ve paid the up front cost, anyway).
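If you want to poke at it yourself, the CLI goes through that same local server; the raw socket path and endpoint below are from memory and may differ by platform or version:

    # the status subcommand is served by tailscaled's local API under the hood
    $ tailscale status --json | head

    # or hit the unix socket directly (the hostname in the URL is arbitrary)
    $ sudo curl --unix-socket /var/run/tailscale/tailscaled.sock \
        http://local-tailscaled.sock/localapi/v0/status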
You’ve correctly guessed that blog link [0] that explains the reason I don’t think we’d ever want to try making a distributed coordination service. Most importantly, corporate customers absolutely love having a single control and registration point for every corporate authorized device on their network (and thus, a way to instantly deauthorize stolen devices). What we’re going to do though is add private audit trails and tamper proofing, kind of like TLS certificate transparency, so that the central instructions can be validated in a decentralized way, if that makes sense. More on that later. :)
Re: peerapi, there are lots of ways to build app layer protocols once you have tailscale making the connection itself easier. We picked http since it was the fewest lines of code and it makes an easy example.
Re: live video, Jitsi already works fine on a tailscale network if you want to try that.
> And it doesn’t take any battery unless it’s being queried. I think people underestimate how cheap http can be...
Curious about the underlying design decision on why a separate peerapi layer if a golang http/2 server is listening already (or is peerapi running over http, too)?
> What we’re going to do though is add private audit trails and tamper proofing, kind of like TLS certificate transparency, so that the central instructions can be validated in a decentralized way.
> ...there are lots of ways to build app layer protocols once you have tailscale making the connection itself easier.
True. My previous employer built an internal service similar to tailscale but it worked over bluetooth, wifi-direct in addition to ICEing NATs out. It made device discovery, cross-app, cross-device, cross-service communication super easy.
This is really excellent work — congrats! I previously used upspin to securely move files between devices on my network, but I really love the simplicity and ease of use that taildrop now provides.
To give a concrete example, I had been dragging my feet on moving some old but useful keys/tokens between my Windows desktop and MacBook — using taildrop, the transfer was both easy and virtually instantaneous.
Historically, it's always been such a pain to either have to upload/download the file(s) from a cloud service (what if I need to move private keys? that adds another layer of complexity/annoyance because then I need to encrypt them) or find a usb drive, plug it in, copy the files, eject, find the adapter for my macbook, plug it in, copy the files, eject, etc. Taildrop completely fixes all of that — it's amazing!
I'm still a bit confused about the "without the cloud" aspect. It seems that you still need NAT traversal, which means an external STUN server, and a relay for NATs that cannot be traversed - as well as a coordination/signalling broker. Surely those require some services "in the cloud"?
In Tailscale, much like in SDN or SD-WAN, we think of the network in two parts, the control plane and the data plane.
The data plane is how the bulk of your packets get sent from one place to another, which in Tailscale is peer-to-peer (as long as your network is not completely blocking NAT traversal for some reason). Even if NAT traversal is blocked and we have to relay your data through the cloud (through our DERP network) to make it work, the data plane is still end-to-end encrypted using private keys that only exist on each node. The private keys never leave each node. The DERP servers are just relaying opaque byte streams, like any IP router would do.
On the other hand, the Tailscale control plane uses a central coordination service. It's used to exchange public keys and STUN information between nodes, but this is a tiny amount of information updated rarely (and therefore reasonably cheap for us to handle at scale). These public keys are not enough for an attacker (us or anyone else) to be able to decrypt your data traffic.
So when we say taildrop never sends your files through the cloud, that's because taildrop exists entirely inside the data plane, sending data through e2e encrypted tunnels that have already been established with the help of the coordination service, STUN, etc.
Because the coordination service is cheap for us to run, we can have a really generous Tailscale free plan without losing all our money. The paid plans are intended to be for "corporate network" situations where people want more centralized controls, audit trails, and so on.
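One easy way to see the data-plane half of this on your own machines is tailscale ping, which reports whether packets are going direct or via a DERP relay (hostname and output here are illustrative):

    $ tailscale ping my-nas
    pong from my-nas (100.101.102.103) via DERP(nyc) in 41ms
    pong from my-nas (100.101.102.103) via 203.0.113.7:41641 in 9ms

The first reply often goes through a relay while NAT traversal is still being worked out, and then the path upgrades to direct.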
>as long as your network is not completely blocking NAT traversal for some reason
Out of curiosity, how often does this happen in practice? Also, how would you even do this? Isn't NAT traversal a direct consequence of how firewalls work and always possible?
Session Border Controllers were a big part of any carrier VoIP deployment and focused on essentially the same problem: using the signaling plane to normalize a separate direct endpoint packet flow across variable networks and devices. Are you implementing this same logic in your SD-WAN, or do you use something like an Acme Packet (now Oracle) box now that you operate at scale?
MacOS, iOS, and Linux clients can use your native OS updates. Windows needs to be updated by hand or with something like chocolatey or MDM. But more importantly, we have a policy of not breaking old clients if we can possibly avoid it. So far we have never deprecated old clients. We extend our protocols in a backward compatible way, because unilaterally breaking your network infrastructure… really sucks.
The way tailscale networks (tailnets) work is probably not how you’re used to thinking about them. Each node has its own view of the world, based on which nodes and services are shared with it in particular. We have security policy settings per domain, and a node sharing UI that lets you share any of your devices with anyone else.
The default model is that all devices belonging to someone in the same domain, say tailscale.com, can see each other. But we’re working on making that even more flexible since it doesn’t always do what you want for huge orgs (like universities).
Do you think it is sufficient to rely on update channels via distributions? Wouldn't a bug in your code potentially expose an internal node to the internet?
> Each node has its own view of the world
I haven't read the docs enough, but can a node belong to many domains at once? If so, does it need one port per domain that it is shared on?
> Do you think it is sufficient to rely on update channels via distributions?
Tailscale employee here. Most officially supported distributions use our own package repo server (https://pkgs.tailscale.com), which would pull Tailscale updates in your normal system updates. The other distributions that aren't in the package repo server (Alpine, Arch, Gentoo, NixOS, Void Linux, etc.) use packages made by the distribution themselves. We do our best to make sure they get updated (contacting the maintainers can be a slog at times), but we do not completely control the update process for them.
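Concretely, on a distro covered by the package repo, a Tailscale update is just part of your normal package upgrade flow, e.g. on Debian/Ubuntu:

    $ sudo apt-get update
    $ sudo apt-get install --only-upgrade tailscale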
> I haven't read the docs enough, but can a node belong to many domains at once? If so, does it need one port per domain that it is shared on?
> Everything in Tailscale is Open Source, except the GUI clients for proprietary OS (Windows and macOS/iOS), and the 'coordination/control server'.
Keeping the GUI clients for proprietary operating systems closed-source is certainly a valid business decision. It's just unfortunate that it means we don't get to fix any issues we find that may matter more to us than they currently do to the Tailscale team, e.g. accessibility.
Edit: Yes, I know, to be fully consistent on this point, I should run an open-source OS. But then, the proprietary operating systems have accessibility teams, while presumably Tailscale does not.
Is Tailscale / Taildrop fully open source? Such that I can set this up myself for my own purposes 1:1? Wanted to use Tailscale's service but saw this on the sign up page:
> Sign up with your identity provider...
This frightened me enough not to use Tailscale; I don't want Google / Microsoft sharing any more of my data. Is this still the case?
There’s an open source project called headscale (not written by or officially supported by tailscale’s team, but we like it) which you can point tailscale’s clients at. Then your whole system is open source. You can also avoid using a central IdP that way, if that’s what you want. (I strongly recommend caution about that, if you want good security. I know it’s not popular to say so on HN, but most people running their own IdP will do it less securely than one of the big providers. It’s a very hard job, akin to running a root CA.)
Btw, there is no IdP support in Headscale. You need to have access to the machine where you are running it, and use the CLI to register your machines (or use an authkey, ofc).
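The client side of that is just a flag; the headscale-side registration command has changed between versions, so treat this as a sketch (server URL, user, and key are placeholders):

    # on each device, point the client at your own server instead of tailscale's
    $ tailscale up --login-server https://headscale.example.com

    # on the headscale host, register the node (exact subcommand varies by version)
    $ headscale nodes register --user alice --key <machine-key>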
>Such that I can set this up myself for my own purposes 1:1?
I'm guessing both peers ping a central server in order to discover each other, which is why it's integrated with tailscale at all. If you're in the tiny segment of the population who is already paying for a server with a public IP, and have the technical ability to deploy services to it, I feel like it would be marginally less effort to just sftp the file to that server rather than try to clone the feature set of taildrop.
Tailscale could theoretically publish a docker container that contains the guts of this service, but it'd be rather a lot of work, for no money.
>>Sign up with your identity provider...
I have not seen a lot of evidence that regular users care about identity provider privacy, seeing as how Facebook had 2.8 billion monthly active users last quarter. That customer segment (half of the world population) might be more interested in signing up for the free tier of tailscale and getting magic file sharing than they would be in administering their own linux server.
Open source is a good way to get community buy in, because it improves transparency and reduces lock-in. There are lots of open source commercial services, including the Tailscale client service.
If you want an open version, ZeroTier provides a similar kind of seamless networking setup. I'm sure someone will implement an alternative to taildrop on top of it.
Skimmed through their excellent design docs. The code is open source, but requires running a centralized control plane server and may also require running a data plane relay server to cover some relatively common NAT configurations.
If I share my email with tailscale that's sharing data between me and them, no third party involved.
If I use an OAuth integration, I'm involving a third party.
Some people use a third-party email provider, but chances are it isn't supported (even if your mail provider offers OAuth integration) -- the only support is for Google and Microsoft.
Now, that aside, I support the idea of tailscale not creating its own identity provider. BYOI is far better than yet another supplier to leak information and yet another password to manage.
If I were interested in integration with my company's SSO, I'm sure they'd be able to support our OAuth endpoint.
For personal use, chances are you have an identity with one of the providers, and the information leaking to that provider (that you use tailscale) is minimal and seems like a good solution.
The idea is to get there eventually, and taildrop is the first example/experiment in that area.
In theory a bunch of individuals on the free plan should be able to build arbitrarily complex networks by using the (also free) node sharing feature: https://tailscale.com/kb/1084/sharing/
In practice, probably needs more stuff added before that's truly realistic. But we're always looking for feedback on how to make it easier.
This was my first thought as well. I wonder if what they refer to as the control plane[0] could be a DHT that contains keys instead. But as I thought more, I wonder what benefit a VPN-based fabric for P2P would have over protocol/application-level encrypted content. Granted the NAT busting features can help on their own, but otherwise I struggle to see the benefit assuming the protocol has encryption and P2P built in (I am probably just missing something).
Innernet is an open source alternative to tailscale: https://github.com/tonarino/innernet . Since this feature was so easy to build, I imagine someone could pretty easily add it as a PR.
I wonder how uneasy this sort of nicely documented and packaged user-space networking and NAT traversal is making IT admins. It seems like it's pretty easy to, for example, create an internet-facing service that pulls from internal company data sources, over user-space tunnels that probably aren't very easy to find from the IT admin's perspective.
I get that Tailscale didn't invent this capability, but they do a nice job of showing how it would work, some of the pieces (wireguard-go), and so on.
Nice tool. I wonder if it supports resuming downloads?
Might have been better if they built on bonjour/zeroconf rather than make a "tailscale" api though... on the other hand, I guess bonjour/zeroconf works oob on tailscale anyway.
Which makes this more of a marketing for a new api/app platform?
Not yet on resuming downloads. No principled reason, just haven't gotten to it yet.
LAN discovery is handled by Tailscale's network engine, a couple layers of abstraction below the file sending. We're going to add some mdns stuff in there at some point (similar to WebRTC's privacy-preserving LAN endpoint discovery), but the benefit of running this all over Tailscale's network engine is that file transfer works exactly the same whether you're on the same LAN as the target, or halfway across the internet. The only variable is how fast the file moves.
Wide-area DNS-SD exists too, it’s not just for mDNS, so if this was built on a DNS-SD infrastructure then supporting devices off the LAN would be as simple as just offering a wide-area service enumeration and browsing domain.
Unless they do something funky, only unicast packets are routed in wireguard. Bonjour/zeroconf/mDNS all use multicast or broadcast for discovery. Integrating some kind of service discovery relay would be possible, though.
Hm, I thought I'd seen someone comment that mDNS worked for wireguard - but I haven't tried myself. I guess I can't move from ZeroTier for my home network yet.
For the taildrop author: I could see potential for this to be used in the enterprise blockchain area, with a condition: make it easy to set up a virtual network in which each member manually approves the public keys of the consortium members. Working with docker/kubernetes would also be good. Contact me (taildropfromhn at axiologic.net) if you want details. We would love to use such a VPN setup in a real blockchain project for the pharma industry.
For others: are there others who would appreciate the better security offered by using public key cryptography while still having a lightweight but decentralised setup?
You might like the Tailscale Machine Key approval feature. If enabled for your domain, no device can join your tailscale network unless they’re manually approved by your network’s admin or the API.
Not really. Magic-Wormhole's primary use case is for when you don't already have an existing secure channel between two computers. That's why Magic-Wormhole uses a PAKE, which gives you a way to easily authenticate and exchange keys without any other pre-existing trust between systems.
If you have tailscale installed on your machines you already have a secure communication channel available to you.
I've deployed it to my larger-than-average home network with great success. A device can roam outside of the network and still access intranet sites and services, and I can remotely access family devices for helpdesk tasks.
My only complaint is that there is no good authentication option other than Google for small deployments like this. I don't want to pay an auth provider, and none of them offer a simplified plan for 3-4 users.
It's "launched this week" new, yeah. We don't have a way to migrate between auth providers yet, but if you feel strongly you could log out your devices and sign back in with github auth. That'd create a completely new tailnet tied to github auth instead of google.
I would probably use their isolation/segmentation features for this if I ran something similar. Like you said, I wouldn't trust my own devices, let alone family members, to have full access to everything.
https://tailscale.com/kb/1018/acls/
That you can get it up and running in 5 minutes. No joke, I did it one day before jumping into the car to go to the park to work on a day with nice weather.
It let me SSH back into my Mac Mini to compile elixir and run VS Code Remote and not kill my battery, but still have zero latency UIs over cellular tethering.
I tried Zerotier a few years ago and maybe I just did something wrong but I couldn't get it working for an hour and gave up.
Not sure if 5 years ago it was much different, but I've been using it for ~3 years and the process is trivial. Setup your account, install client, join your network, click "auth" on the website by the new client. It just works.
Yeah it’s clear that whatever I had going on may have interfered with zerotier being drop dead simple. I’ll revisit it just because I like to play around with all these ideas.
- Tailscale is an IPv4 vpn. Zerotier is a virtual ethernet switch.
- Tailscale does not do ipv6, ipx, or any other protocol. Zerotier does.
- Zerotier supports multiple networks. Tailscale only has 1 network per account. So it's not possible to be connected to WORK1, WORK2, HOME at the same time.
- Tailscale uses wireguard, which means it's probably going to be more compatible.
- Tailscale forces SSO on certain domains (for example gmail). I really don't like being forced to use this. It's nice to have the option, but usually they require too much information.
- Tailscale uses login credentials to link machines, vs using machine keys in zerotier. I prefer zerotier's method, as I really don't want to do the whole login flow on all machines.
- Tailscale by default expires the sessions after X amount of time. Super annoying when you discover things are not working because you didn't change a setting somewhere.
- On mac, tailscale uses a VPN tunnel. Zerotier creates a virtual interface. I prefer zerotier's method.
- Because of this, tailscale is available via the App Store (mac). I like that. Both are available on the iOS App Store.
- Both have some sort of SDN language / configuration.
- Tailscale's is only an ACL-type config, which is good enough for most
- Zerotier's is more extensive, and allows cool things such as redirecting packets to a different machine for inspection. Great for security monitoring and other stuff.
- Tailscale comes with a built-in DNS, which is nice. Zerotier is working on that.
- Zerotier has public networks. While I think it's a bad idea, it's pretty fun and dangerous
In terms of stability and performance, they're both stable and fast. Somehow zerotier does not get as much love as wireguard. Or nobody talks about it ;-)
Tailscale seems to focus more on higher layer apps (filedrops, service discovery (just a portscan), simple firewall rules), while Zerotier seems to focus more on the technicalities. Tailscale is trying to find out which value-adding features are popular amongst clients.
This makes sense, as tailscale simply uses wireguard, and zerotier is their own tech. Zerotier seems to be run by networking people. That is also the thing that works against zerotier: the UI is clunky, the SDN language is difficult, and there's no service discovery.
Imo zerotier's updates and development are very slow. Not sure why... maybe they lack a roadmap.
For zerotier to be easier to use, they could create a visual editor for the networks, actually showing a switch, with cables on the ports and different colors for different rules. This is more in line with what they do (global ethernet switch), instead of a VPN.
Also, I think zerotier should do more official integrations with routers / NAS boxes. Wireguard will at some point be integrated into all routers, because it's in the Linux kernel.
    $ ip addr show dev tailscale0
    4: tailscale0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1280 qdisc fq_codel state UNKNOWN group default qlen 500
        link/none
        inet 100.67.184.57/32 scope global tailscale0
           valid_lft forever preferred_lft forever
        inet6 fd7a:115c:a1e0:ab12:4843:cd96:6243:b839/128 scope global
           valid_lft forever preferred_lft forever
It's BeyondCorp done at a network layer. You have cryptographic trust that traffic from specific IP ranges are specific people, you get a flat network topology for your infra/team/etc, and it works in the most hostile network environments.
The tailscale team has gone to great lengths to handle a lot of edge cases and hacks in the infrastructure of many corporate networks.
It sounds like you'd want it if you have machines on your local network that you want to use when you're not at home. (Or you have some other machines somewhere else.)
If you just have mobile devices and always take them with you, or don't care about accessing a desktop computer when you're away, it's maybe not that interesting?
It can, see https://tailscale.com/kb/1019/subnets/ . There's more work for us to do for more "automagic" behavior. For example, I want my home desktop to offer proxying to my printer without having to set up a full-fat subnet router.
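For now, the full-fat version is something like this on the node doing the proxying (the subnet is a placeholder, and the advertised route still has to be approved in the admin console):

    # allow the node to forward traffic for its LAN
    $ echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
    $ sudo sysctl -p

    # advertise the LAN subnet to the tailnet
    $ sudo tailscale up --advertise-routes=192.168.1.0/24

    # on devices that should use the route
    $ sudo tailscale up --accept-routes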