With everyone looking at remote work, tons of people must be questioning their VPN strategy right now. I know we are.
I was going to complain that it's a closed-source product (as I don't want to use something closed source for my network access controls), but it looks like the code is on GitHub, just not linked in an obvious way from the website. https://github.com/tailscale/tailscale
Has anyone used this or checked out the code? I may give it a spin.
I'm a Cloudflare employee, so obviously biased (shilling incoming).
Cloudflare Access is worth having on the list (and it's free right now [1]). It is a pretty flexible identity-aware proxy and SSH gateway. We use it internally for those two things for basically all of our infrastructure and internal applications.
If it works for you that's awesome; if not (or you just have general questions), I'd love to hear feedback about why not / what else you might be interested in. I work on our security team but work really closely with the Access team, so I'd be happy to pass on feedback to them.
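For the SSH gateway piece specifically, the usual client-side wiring (per Cloudflare's docs) is a ProxyCommand in ~/.ssh/config; the hostname below is a made-up placeholder:

```
Host vm.internal.example.com
  ProxyCommand cloudflared access ssh --hostname %h
```

With that in place, a plain `ssh vm.internal.example.com` gets tunneled through Access and picks up the identity check along the way.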
I typically don't shill around here, but why isn't ZeroTier on that list? I suppose we don't have a giant marketing budget (yet). We do have a product used by hundreds of thousands for years and it has microsegmentation and other nifty stuff like multicast.
Also not everything on that list is the same. Some of those are app-level gateways, IAM infrastructure, etc. That's a bag of stuff for different use cases.
Some of that stuff is also cloud proxy based, not mesh based. Maybe some people don't care but I'm heavily biased toward peer-to-peer mesh. I find it offensively stupid for packets to travel 1000 miles to reach a system next to me or in the same city. Of course I guess everything has a use case. I might opt for a cloud proxy if the bandwidth were low, the users and/or customer were non-technical, and it was all web stuff.
> I find it offensively stupid for packets to travel 1000 miles to reach a system next to me or in the same city.
1. Security isn't an absolute.
2. Defense in depth.
PDR (protect, detect, respond) is a well regarded strategy, for good reason. All of these are easier when done centrally (cloud proxy based). Detect can be particularly difficult to do well in mesh. You probably find it offensively stupid because zerotier doesn't have a Detect component, nor integration for one, nor any kind of consideration for it at all.
I've been using ZeroTier for a small collection of two devices plus two servers. Honestly, the main pain points I have are UX: Android and Dashboard. Other than that, I have no complaints.
Thank you 'api. I have been using zt at a university group of about 60 users. Must say it is very reliable. As for the UI, we are OK with it. But one thing: the firewall config is too complex, and there are not enough examples.
Recently I saw that zt-laduke posted a simple bridge tutorial on Reddit. It would be great if you added some examples to your wiki:
1. Allow only samba traffic
2. Allow only ssh traffic
3. Allow only RDP
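For what it's worth, the three cases above would look roughly like this in the ZeroTier rules language. These are untested sketches: double-check the keywords and port lists against the official rules documentation (older Samba setups may also need ports 137-139), and note that each numbered block is a complete ruleset for one network, not three rules in one.

```
# 1. Samba only (modern SMB is TCP 445)
accept ethertype arp;                    # keep ARP so peers can resolve IPs
accept ipprotocol tcp and dport 445;
drop;

# 2. SSH only
accept ethertype arp;
accept ipprotocol tcp and dport 22;
drop;

# 3. RDP only
accept ethertype arp;
accept ipprotocol tcp and dport 3389;
drop;
```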
I am sure it is kinda popular at r/datahoarder and related communities. Maybe you need to apply for a GSoC-type project so that academics start using it (it will then later get used in corporate settings).
> I typically don't shill around here, but why isn't ZeroTier on that list? I suppose we don't have a giant marketing budget (yet).
now you know: nothing here is coincidental, not the timing of the submissions, not the type of comments that get quickly upvoted or buried to derail the discussion in a certain direction, not even the links injected into comments on popular threads to improve the SEO of certain companies and projects. Every, again EVERY, WireGuard post over the past 2 months has instantly turned into a shilling party for this company, which literally has no product to offer and whose source code is in a pre-alpha state, yet the party persists in every thread. It seems the owners have really powerful friends all over the place, and that's why they're getting a free ride despite having no product. This recipe of building a popular startup name with no actual product worked very well in the easy times of 2014-2020. Let's see how things work out this time.
Interesting, kinda reminds me of Nebula by Slack. https://github.com/slackhq/nebula - I hope someday there's a CNI plugin for it for Podman and Docker.
I guess right now you have to run these solutions inside the same container you want to allow on the network, so you have to build it as part of your image... unless maybe you had your own custom base image with the VPN software loaded and just passed in the auth cert info at runtime.
However, Tailscale looks like it has a polished UI from the screenshots; it seems meant more for employees to use than for all the server-to-server stuff.
I've been playing around with an idea on and off; I was planning to use some sort of tech like this, but it seems like things are improving. Like, early this year the Podman folks are going to adopt an API like Docker's instead of Varlink. Varlink seems cool too, but I guess not as many people wanted to support it, and there's no lib for every language or runtime, like Node.js, etc.
Also, even on your own private network, it's still recommended to encrypt and authenticate internal services talking to each other. But I guess it's like having multiple layers.
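Encrypting and authenticating internal service-to-service traffic usually means mutual TLS. A minimal Python sketch of the server side; the file paths are hypothetical placeholders, and they're optional here only so the function can be exercised without real certificates:

```python
import ssl

def mtls_server_context(certfile=None, keyfile=None, cafile=None):
    """Server-side TLS context that also demands a client certificate (mTLS).

    certfile/keyfile: this service's own identity (e.g. "svc.pem"/"svc.key").
    cafile: the internal CA that signs the client (peer service) certs.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)   # this service's identity
    if cafile:
        ctx.load_verify_locations(cafile)        # CA trusted for client certs
    # The mutual part: refuse any client that cannot present a valid cert.
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

The same context would be passed to whatever server framework wraps the sockets; the key line is `verify_mode = ssl.CERT_REQUIRED`, which turns one-way TLS into mutual TLS.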
Woah. I had been kinda trialing Tailscale casually for a little while, but I see they've now introduced pricing of USD$10/user/month! That's gonna be a very hard sell for what it offers. I currently handle access via WireGuard VLANs and some Ruby scripts and it works fine - change is nowhere near frequent enough to justify going from $0 to $250+/month just for a little automation.
I'd advise Tailscale to rethink their pricing. I'm not averse to paid services at all, but it's pretty hard for me to justify hundreds of dollars a month for "VPN access management software", even to myself.
I'd like to add a little more productivity to this comment by telling Tailscale exactly what I would pay. I ran the numbers: I'd be paying $310/month for VPN access for my devs and staff. It would be my single most expensive SaaS contract per head. No.
Tailscale, if you're listening and are not too drunk on the patio11 "charge more" "i know what i'm worth" kool-aid: I am smack bang in your target market. Small to medium company, appreciative of wireguard, very security conscious, time poor, happy to spend money to make my life easier - but your pricing is ludicrous for what is essentially a VPN config manager.
I am not paying USD$10/user/month for google login to a VPN. That's more than I pay for actual GSuite. I have to justify, broadly, what I spend. Can you put yourselves in my shoes?
I suggest charging $24.95/month for 1 admin user, with 50 non-admin users free. $50 for 3 admins, $99 for 10, and so on.
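For context, here is the math behind the numbers in this subthread, assuming the quoted $310/month corresponds to 31 seats at USD$10/user/month (my inference; the seat count isn't stated):

```python
def monthly_cost(seats: int, per_seat_usd: float) -> float:
    """Total monthly spend for flat per-user pricing."""
    return seats * per_seat_usd

def annual_cost(seats: int, per_seat_usd: float) -> float:
    """Same spend over a year, for comparing against annual SaaS quotes."""
    return 12 * monthly_cost(seats, per_seat_usd)

# 31 seats at $10/user/month reproduces the $310/month figure above,
# i.e. $3,720/year for what the commenter calls a VPN config manager.
```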
Not partially. It is entirely an ad (SEO content) for tailscale. The author is one of the founders.
I have to say, I'm disappointed in the author's understanding of zero trust. Microsegmentation and zero trust are at odds with each other; one is not a progression to the other. Of course, he has that POV because "his salary depends on it". I rather wish he had coined a new term here: nanosegmentation, perhaps.
Regardless of accuracy/understanding, tailscale is certainly interesting and perhaps a better (more practical) solution than others -- at small scale. So don't take my criticism as poo-pooing their interesting approach. Rather, I am not pleased with their furthering the widespread differing understandings of zero trust, in service to their own needs. Then again, I'm not a principal in a "zero trust" company, so who am I to say what is fair in love and war.
I’m also a tailscale founder so perhaps I’m biased :) but I think you are both kinda right. I see zero trust and microsegmentation as two different axes. Zero trust proxies are a way to establish and authenticate an encrypted stream; microsegmentation stops streams from being established except through explicitly permitted (and presumably supposed to be secure) pathways. One enables good things, the other prevents bad things.
A good security and connectivity system provides both.
> I am leery of jargon. I am as guilty of using it as the next engineer, but there comes a point where there are just too many precise, narrowly-understood terms polluting your vocabulary. The circle of people you can talk to shrinks until going to the store to buy milk feels like an exercise in speaking a foreign language you took one intro course to in college. Less jargon is better.
Even before reading the rest of the post, that opening hit me. That's brilliant writing.
> Zero Trust networking means treating the internal network just like an external network: authenticate every connection, encrypt all traffic, log everything. Plan for every machine (virtual or otherwise) as if it is sitting on a public IP address.
Everyone knows that's the ideal network, but in practice it takes a lot of resources; that's why people prioritize internet-exposed things. I mean, outside of tech companies (and even then...), most companies don't even fix critical security vulns internally fast enough. Behold, exemptions galore!
My problem is that this term is used way beyond networking to mean so many different things. What the author describes is not what half the security vendors think zero trust means.
I don't dislike the term because it's jargon but because the ambiguity leaves too much room to turn it into another box people check to have a false sense of security.
Something like "AAA resource access" (Authentication, Authorization, Auditing; a very old networking concept from Cisco land) might be a better term. You gotta be unambiguous so it's easier to say "No, you have not implemented that".
The entire "BeyondCorp" strategy from Google has probably done more harm than good. Tons of smaller companies and well-known startups paid the price with breaches left and right.
Removing or not deploying basic firewall controls to lock down traffic is ill advised. Tons of exposed s3 buckets and other assets keep showing that.
Zero Trust is the correct strategy of course, but it doesn't mean you have to open up your network to the entire world - it's in addition to already established best practices. Better to continue those traditional practices and be more thorough via micro-segmentation, for instance, with identity on top of it.
BeyondCorp does not imply opening up your network to the entire world! If anything, it means locking down your network tighter, because not even the office is privileged. Production is a black box that you touch by authenticating through the same reverse proxy tier, no matter where outside of it you are. In effect, nginx is your “VPN” server and everyone has to use it.
Plenty of companies paid dearly for trusting every device that merely needed internet access.
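The "nginx is your 'VPN' server" idea above can be sketched with the stock auth_request module. Every name and address below is a made-up placeholder, and a real deployment would hang a proper identity-aware proxy or OIDC layer off the auth endpoint:

```nginx
server {
    listen 443 ssl;
    server_name app.internal.example.com;            # hypothetical app

    location / {
        auth_request /_auth;                         # subrequest must return 2xx
        proxy_pass http://10.0.0.5:8080;             # hypothetical upstream
    }

    location = /_auth {
        internal;
        proxy_pass http://sso.internal.example.com/verify;  # hypothetical IdP
        proxy_pass_request_body off;                 # auth check needs headers only
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }
}
```

The point of the pattern is that every request, from the office or a coffee shop, hits the same authenticated front door; there is no privileged network path around it.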
> BeyondCorp does not imply opening up your network to the entire world!
I think GP is implying that IT departments are taking away that [wrong] message. Just look at the proliferation of zero trust companies that do no such thing.
One of my biggest gripes with infosec is how un-empirical the field is. Best practices and advice are based on what security people think is the best strategy for preventing breaches, but rarely do I see that backed by actual data about how real world breaches actually happen.
This leads to an outsized focus on the latest sexy vulnerabilities (e.g. CPU speculative execution vulnerabilities) and fetishes for things like firewalls. Meanwhile people type 'npm install kitchen-sink' with no worries.
In my own anecdotal but real world experience most breaches result from phishing, downloaded malware, phishing to get the user to download malware, and malware-assisted phishing, in no particular order. Firewalls do nothing for that.
I'll also add that once I'm on a traditional network, ripping through active directory is generally not difficult - my first ever live pentest went from privileged non-AD asset to domain admin within about 4 hours. My average time to compromise has come down significantly since then, too.
There's lots of bad/scam security spending focused on logging and monitoring, weird antivirus products, and securing the wrong things. The last network I compromised had dropped an obscene amount of money on a SIEM product that couldn't detect nmap or pass-the-hash (PtH) attacks; I achieved complete compromise with the same chain of attack as my first ever pentest, because nobody had looked at the fundamentals of implementation/configuration security.
If I could list things that would actually secure traditional networks:
- Active Directory Hardening (See: ADSecurity, Microsoft AD Hardening Guidelines, ACSC Windows 10 Hardening Guidelines)
- Regular Patching and reliance on Microsoft Products (they're actually pretty good!)
Dunno if you'd consider these 'zero trust', but unless you've covered the fundamentals nobody is going to waste time figuring out how to abuse your network with speculative execution or drop a huge amount of budget to develop a perimeter breaching RCE 0day. Especially when in most cases sending shitware.docx.exe to a sales staff member (who is almost always going to run whatever you send them if there's a bonus incentive) will suffice.
"Application Whitelisting" Does that actually work anywhere except in an boring office? Everyone that has a functioning devshop always has too many holes in their whitelists to effectively protect them. Client machines WITH credentials has to be made untrusted.
There's no reason it can't work in say development environments - the simplest approach is to configure whitelisted directories which permit execution in most solutions. It's not perfect but it helps prevent execution of questionable downloads / attachments.
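At its core, the "whitelisted directories" approach is a path-prefix check before execution is permitted. A toy Python sketch with hypothetical directory names; real products (AppLocker, etc.) also verify publisher signatures and handle many more edge cases:

```python
from pathlib import Path

# Hypothetical allowed locations; users/attackers must not have write access here.
ALLOWED_DIRS = [Path("/opt/approved"), Path("/usr/local/tools")]

def is_execution_allowed(exe_path: str) -> bool:
    """Return True only if the (normalized) path sits under an allowed dir."""
    p = Path(exe_path).resolve()  # collapse ../ tricks before checking
    for d in ALLOWED_DIRS:
        try:
            p.relative_to(d)      # raises ValueError if p is outside d
            return True
        except ValueError:
            continue
    return False
```

Resolving the path first blocks simple traversal like `/opt/approved/../../tmp/evil`; the harder problem in practice is keeping users from writing into the allowed directories in the first place.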
> Client machines WITH credentials have to be made untrusted.
Network segregation is better if you truly care about security, despite the costs in productivity. I think you'll find higher-end environments have a 'trusted box' and a 'development box' which physically sit side by side on different networks.
We might be discussing this from different viewpoints. Network segregation is important even if you build everything as zero trust; firewalling something is one of the easiest ways to not trust it.
I think the difference in perspective is that I only really need to work with a very narrow and well-defined interface to the systems that can be hacked, so my view of the attack surface is shaped by that. For me, as long as it's fine to reveal my password to my client, I'm fine. My client can be full of malware without severely affecting security.
Clean rooms, "leave everything connected on the outside", and variants of that are REALLY ineffective. I've measured the waste from that; back then we spent a lot of time classifying which software had to be developed in such an environment. You do not want to work that way.
I don't disagree that segregation is still important, but it really depends on specific environment technical details and threat models.
Firewalling AD networks, for instance, really won't help if the administrative security model is flawed (network admins using privileged accounts to maintain endpoints, privileged local administrative/maintenance credentials being reused on critical infrastructure, etc). The communication protocols for administration and general use iirc pretty much require bidirectional traffic to work.
If you don't trust the host you develop on, then everything produced on that host must be audited by a trusted host. Maybe that works in environments where cost is not an issue, but I would be somewhat skeptical of any environment which attempts that without the appropriate resources. It also doesn't help in situations where source code disclosure is an issue (e.g. a dev posting too much to pastebin/stackoverflow, or inadvertently searching Google with a paste buffer full of data, etc).
The 'BeyondCorp' approach is the opposite of a firewall, and is designed to limit the damage of successful phishing attacks.
For example, if you have a 'trusted' network, a successful phishing attack gets you access to that network. If you have a 'zero trust' network, that attack only gets you access to the compromised user.
Zero Trust includes heavily leveraging attack surface limitation to prevent lateral movement. This means even if an attacker has stolen passwords, cookies, or whatever single-factor token is needed to move laterally, they can't - because there's no connectivity.
> This leads to an outsized focus on the latest sexy vulnerabilities (e.g. CPU speculative execution vulnerabilities) and fetishes for things like firewalls.
sounds like a fantastic plot for an anime. i can even see firewall san in my head :)
seriously though, maybe phishing and malware are common because firewalls are working?
Really? In most deployments the firewall is only outward facing. Local isolation is possible but it breaks a ton of stuff and basically renders the LAN useless.
My own opinion is: Secure. The. Endpoint.
If your devices, OSes, etc. are not secure then your systems are not secure. A firewall will not save an insecure system, and firewalls and netsec in general gets far too much attention. That attention should be focused on OS-level and application level security.
Exactly, endpoints should not be listening on the network, for instance (it's not just about outbound connectivity).
Company laptops often have RDP or SSH open - and newly added software might expose a remote endpoint in future (or a 0 day, like EternalBlue).
And here it comes: then an employee works from home or a coffeeshop and anyone there can attack and try to login! Locking down these things is critical to securing the endpoint.
"Endpoint security" in practice translates to full disk encryption (good) but seemingly also corporate-mandated spyware that logs and reports process and network
metadata, even traffic and keystrokes (bad).
Security isn't the only thing in the optimization equation; endpoint security is only useful — and humane — to a point.
> Tons of exposed s3 buckets and other assets keep showing that.
Would firewall rules help alleviate that though? I can only speak for GCS (GCP's s3 equivalent), but firewall rules don't apply to GCS buckets.
Like a sibling commenter said, I would like to know more instances of companies getting breached specifically for adopting a zero trust networking philosophy.
Zero Trust is the right strategy, but very few have the resources to implement it.
So they do Zero Trust in spirit, but end up with neither basic security controls nor full-fledged micro-segmentation and identity.
If you work at a startup and experience exponential growth, you know that your internal production network likely has no authentication - this is unfortunately not uncommon and can be dangerous.
What old-school security controls prevent are the script-kiddie attacks (oops, that database or Elasticsearch cluster exposed on the Internet) and random automated attacks, of which there are plenty of examples.
ha! if only zero trust were precise! By narrowly-understood, the author means "well understood by those that speak jargon, but impenetrable to outsiders". In this case though, there's a nice irony in that statement.