Great to see them continue this series, and glad that this one touches on what it takes for other companies to achieve something similar. I talk about BeyondCorp a lot as evidence that the Zero Trust model works, and that employees will love it.
The most common feedback I get is that it seems like too much of a stretch for companies that don’t operate at Google scale. That may be true if looking at the system as a whole, but the principles behind the architecture should attract anyone’s attention - remove trust from the network by authenticating and authorizing every request based on what’s known about the user and connecting device at the time of the request.
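To make that concrete, here's a minimal sketch in Go of what a per-request decision at an access proxy can look like. The SSO, inventory, and policy hooks are hypothetical placeholders, not ScaleFT's or Google's actual code:

    package accessproxy

    import (
        "net/http"
        "net/http/httputil"
    )

    // Hypothetical building blocks: a real deployment would back these with an
    // SSO provider, a device inventory service, and a central policy engine.
    type User struct{ Email string }
    type Device struct{ Managed, Encrypted bool }

    type Proxy struct {
        backend   *httputil.ReverseProxy
        authUser  func(*http.Request) (User, bool)       // validate SSO cookie/token
        lookupDev func(*http.Request) (Device, bool)     // device cert -> inventory record
        allow     func(User, Device, *http.Request) bool // central policy decision
    }

    // Every request is evaluated on its own; nothing is trusted just because of
    // where the packets came from.
    func (p *Proxy) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        user, ok := p.authUser(r)
        if !ok {
            http.Error(w, "authentication required", http.StatusUnauthorized)
            return
        }
        dev, ok := p.lookupDev(r)
        if !ok {
            http.Error(w, "unknown or unmanaged device", http.StatusForbidden)
            return
        }
        if !p.allow(user, dev, r) {
            http.Error(w, "denied by policy", http.StatusForbidden)
            return
        }
        p.backend.ServeHTTP(w, r) // only now does the request reach the internal app
    }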
Disclaimer: I work for ScaleFT, a provider of Zero Trust access management solutions.
The major barrier is really for companies that lack a lot of internal IT expertise. It's really dangerous for people who don't understand security and networking to just open things up like this, since most enterprise software is grotesquely insecure out of the box. Everyone assumes LAN = safe = no need to worry about security. This is always false, but it's especially false if you're moving trust away from the LAN.
The illusion that it's okay to run cleartext, unauthenticated services on an internal network is also pretty dangerous. Making it clear that the network is out in public might actually yield a better security posture overall.
If an organization is doing 802.1x, competently manages its endpoints (a tiny, tiny fraction of "managed" Windows sites), etc., then maybe a BeyondCorp-style architecture is a net loss of security.
If an attacker can waltz into a conference room or exploit some salesperson's IE6 and start making requests from the "secure" network, probably best to make it obvious that there is no secure network.
Very true... the "ditch your VPN" sure is a nice soundbite, but in reality it's the last thing you should be doing. I mean that literally... as in it's the last step. Better know what you're doing before getting there.
The first couple of BeyondCorp papers talk a lot about how Google deployed this architecture side by side with their traditional LAN, and slowly migrated applications over, only after closely inspecting and understanding the traffic.
But the real point they make is that Internet != safe = very much worry about security.
I'm not sure I understand the argument you're making here. A VPN offers you direct access to all the servers within your internal network. The BeyondCorp model offers you proxied access to only particular applications that have been opened up based on a wide variety of checks on the user and device accessing the application.
How is the latter going to be less secure than opening up your entire LAN to everyone who needs to access a single resource?
The point was that fundamentally the Internet is not safe, so companies will do the right things to secure their resources. So yes, in BeyondCorp this means running a proxy service that centralizes the auth workflow through policies that check the user and connecting device against the resource at the time of the request.
Exactly. And because you can't be sure that the intervening network is safe, you need to encrypt all the traffic, even after checking authorization and authentication. That's the BeyondCorp mission at Google.
[Disclaimer: I work for Google, and worked on these papers and blog post]
off topic: do bastion servers in ScaleFT's architecture provide any interactive-session auditing capability (e.g. Gravitational Teleport), or do they simply act as a bastion access tunneling tier?
If you have interactive session auditing... you will be hearing from me.
Our first priority in developing our bastion product was to guarantee end-to-end privacy and verifiability, so the cleartext is not available on any bastion. We do have a roadmap item to support customers' desire for visibility into team activity, but we engineered for privacy first. Our current auditing is event-based - device enrolled, credential issued, ssh/rdp login, etc.
Happy to discuss our roadmap further - ivan.dwyer@scaleft.com
In case anyone is unfamiliar with them, ScaleFT is a leader in this space and a team of solid folks. They took the BeyondCorp paper and model and really ran with it. Worth listening.
I commend the Google team for not only deploying an effective and innovative security solution, but also for contributing to the security community through this series of informative articles.
Enterprises need to know that while BeyondCorp is Google-specific, there are similar types of open architectures that they can deploy today, most notably the Software-Defined Perimeter (SDP).
SDP is an open architecture from the Cloud Security Alliance, and with it security teams can ensure that:
- All users are authenticated and authorized BEFORE they can access network resources
- Network resources are inaccessible to unauthorized users, dramatically reducing the attack surface
- Fine-grained policies control access for all users – remote and on-premises – to all resources, whether physical, virtual, or cloud
- All network traffic is encrypted, even if the underlying protocol is insecure
Disclaimer: I led the CSA’s Software-Defined Perimeter working group publication of SDP-for-IaaS, and am leading the current effort to create an SDP Architecture Guide. I also work at Cryptzone, an SDP platform vendor.
My ex-manager who left Google for another well-established company once said the thing he missed most from Google was the ability to work remotely right away on a corp laptop with BeyondCorp.
Disclaimer: I work for Google, not related to BeyondCorp.
Modern VPN solutions allow for full IP roaming. Nothing to maintain really.
I see what these guys are trying to get at - it's essentially how I run distributed services for my small business, but having a VPN in front of those is still a more secure option. A VPN should not mean the keys to the kingdom and should indeed be restricted to a subset of explicitly exposed services.
Directors might get to work remotely. Good for you. I hope you enjoy Palm Springs while your reports are trapped on 101.
Mere Developers are essentially never permitted to work remotely long-term. Google would rather lose someone valuable like Tim Bray to a major competitor than allow him to do so.
If you're a global subject expert like Professor Hinton, maybe you'll be accommodated, but don't you dare mislead people into believing it's remotely common. That would be a lie.
For me it is the glance over to the next cubicle or next aisle to see if the person is on the phone, heads down working, or easily interruptible. I don't have remote technology to do that. The best I have is an IM, which is a cognitive load for me and an interruption for them.
Same here - not at the same level though :-) Undisclosed location in the European Alps - I've always worked remotely, even when my company HQs were in Sunnyvale and I was living a few blocks from the office.
Full-time remote work? If you're going to reach an agreement on where you will live and work, it's better to do it as early in the process as possible. I'd say WELL before the interview and onboarding - like conversation #1 with the recruiter/internal contact. It's about mutual understanding and respect, and making sure your physical location would still let you provide value.
The smaller the team, the better, but it's 100% on you to make the case when there are 70,000 counter-examples in play. The same would go for discussing why a certain regional office (like Seattle) might work vs. Mountain View. You have to be where you will give your best work to yourself and your team. I got really tired of flying back and forth, and holding meetings as the one remote person out of ~12-20 got really ridiculous, so a move was inevitable for me.
When I'm applying for jobs I'll open with an email to their recruiter saying that I'm interested and intend to apply, but only if they can confirm they're open to me working remotely the majority of the time; I also mention my expected salary range. Doing it that way saves us both the time and hassle of going through the motions only to find down the line that it would never have worked out because of either work arrangements or salary.
For a big company, Google is surprisingly willing to make the right thing happen in individual cases. My sense is that Google's ban on working from home is a strong default, a rebuttable presumption that working from home would be a bad idea in a given situation. Rebuttable presumptions can be rebutted.
Remote work can also mean "you are at a customer facility and need to access the corporate intranet to get a document or access the SW repository".
Bring Your Own Device is fine for ChromeOS and mobile. You might not get the same amount of trust as a Google-issued device (for mobile/tablet).
To achieve the highest levels of access in the BeyondCorp model you need a machine with Google's management agents, so we can evaluate device state accurately and pull information from our inventory management system.
But if you don't provision the device yourself, how can you be sure it hasn't been tampered with in a way that just displays "bootloader OK, everything good" while in the meantime it was rooted? Or is that a risk calculated into the "no full amount of trust"?
That protects against newbies, but we’re talking here about Google employees – modifying and cloning the ICs on the board to fake a verified boot status should be a triviality for people who design their own chips and boards for Google’s own servers, right?
That would be covered by policy controls, not technical ones—it's the same issue as someone taking pictures of the screen with their personal phone. You'd need to address the actual issue that's causing people to do that (ill-thought-out policies, employee actually working for $INTELLIGENCE_AGENCY, employee enjoys espionage,…).
A recent example would have been the data that was stolen from Google and given to Uber – the employees who were qualified enough to design their own LIDAR chips and boards would equally be qualified to circumvent any such protections.
Even on a Google-approved device, you can still copy content to another, non-Google-approved device. Nothing is perfect, but at some point you trust your employees.
I don't know why people are negatively marking your post, because this is a thing a lot of people do and it does feel like there is a bit of a stigma attached to doing so.
I work for Duo Security, which this year launched the first major commercial implementation of BeyondCorp as part of our product offering. Using it to jump onto the wiki, for diff reviews, and for other internal resources has been excellent.
In addition to simple primary and second factors, you can design policies for MDM-controlled devices only (i.e. designating endpoints that are trusted for remote access), geolocation, and software versions, on a per-application basis, for example.
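To illustrate the kind of per-application dimensions I mean, here's a purely illustrative Go sketch - not Duo's actual configuration format or API:

    package policy

    // Illustrative only: the kinds of per-application requirements a
    // BeyondCorp-style deployment might express.
    type AppPolicy struct {
        RequireMDMManaged   bool     // only MDM-enrolled endpoints may connect
        AllowedCountries    []string // coarse geolocation restriction
        MinOSVersion        string   // reject clients running out-of-date software
        RequireSecondFactor bool
    }

    var policies = map[string]AppPolicy{
        "wiki.corp.example.com": {RequireSecondFactor: true},
        "code-review.corp.example.com": {
            RequireMDMManaged:   true,
            AllowedCountries:    []string{"US", "CA"},
            MinOSVersion:        "10.15",
            RequireSecondFactor: true,
        },
    }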
I think save for a few use cases (SSH into your datacenter, e.g.), VPNs will be dead before we know it.
I think you misunderstood. My point is that you will still need direct access into the network in order to work on the BeyondCorp servers themselves, for example -- not that SSH shouldn't, or couldn't, be covered under the zero trust model as well.
Are you saying that "the bootstraps" (panic access) is VPN? Why isn't the first level just an open SSH port?
I'm not sure why "when all else fails" is better left a VPN port than an SSH port.
I'd say SSH infrastructure (a server with only pubkey login, maybe behind TCP-MD5 and/or heavily filtered source addresses) is probably more reliable and safer than a VPN.
I did mention two other security layers, so I take issue with you counting only one.
Especially since the SSH access would obviously be on a dedicated jumpgate, so that IS two just right there. Maybe running a different OS and architecture, just to lower the risk of one zero-day piercing both.
Also huproxy, or squid, or anything else that provides network-level access.
But I also think that if you consider network-level access as fundamentally different from other access then that's kinda missing the point of BeyondCorp.
> But I also think that if you consider network-level access as fundamentally different from other access then that's kinda missing the point of BeyondCorp.
I understand what you're saying here. I think to me, the point is, don't trust LAN access more than WAN access. But that doesn't mean that restricting LAN access is a bad idea. One of the benefits to BeyondCorp is that you don't (generally) require LAN access in order to access resources. But if your BeyondCorp server goes down, then what? How will you access it?
I probably still wouldn't want to expose my mission critical services over WAN (though I understand your point that VPN is a service exposed over WAN -- and why not SSH then?) Maybe that's wrong of me (I likely haven't given this as much thought as you), especially if you're using TCP-MD5 (which I actually haven't heard of until now, sorry for missing that), or filtering source IP addresses.
This is an interesting discussion, and I really appreciate your thoughtfulness. I'd love to hear more about huproxy and how it's working for you, if you'd care to discuss it more. My email is jmaguire@duo.com.
This is really awesome. My own venture ZeroTier (www.zerotier.com) was strongly influenced by the original BeyondCorp paper. Our vision is a little different in that we do network virtualization that treats the whole world like one data center. Instead of eliminating the LAN you make it fully virtual and mobile and replace the physical perimeter with a cryptographic one.
Here's a somewhat over-simplified TL;DR on Google's approach:
Make everything in your company a SaaS app that lives on the Internet via cloud hosting or a proxy.
Thank you for creating ZeroTier. It is really awesome. It's so much simpler to set up than e.g. OpenVPN, and the peer-to-peer architecture also makes a lot more sense to me.
Yesterday, I saw an article[1] about Amazon's plans to block websites in their stores (a very bad thing) and was wondering when a company like Google was going to launch a VPN service. I wonder if these things will meet in the long term. If companies that control the network try to limit access to information about their competitors, then their competitors might try to liberate that information.
One of the more interesting insights from the comments (which I agree with) was that the Amazon patent was for defensive purposes in order to prevent other companies from trying to implement such an idea in their stores.
I have never given much thought to the idea of defensive patents, but if this is truly the intent of Amazon's patent then it's brilliant.
Look up Macrovision (VHS copy protection from the 20th century). They came up with the scheme, then patented every way they could think of to break it.
Not a lawyer, but wouldn't that be challengeable in court? I see the ethics of it, but maybe legally it falls in the same bucket as patent trolls that hold onto a patent with no intention of ever commercializing it.
This should be seen as a defensive move so they can sue anyone who comes to market with a product that blocks the shopper's ability to search Amazon while in a given store.
Showrooming benefits Amazon and will continue to until they have a majority of retail space (never).
Back in the day, the one-click patent was claimed to be defensive, too.
These decisions tend to be opportunistic. Or maybe a holder honestly convinces themselves that this particular offensive use is really a "defensive" move.
I give these sorts of declarations the same value I give crime-law proposals where someone pushing it declares that it would never be used in that way.
I see -- I thought the patent was new. I'd ask you why you wrote a patent that allows large companies to block the open flow of online information (considering that it might prompt other companies to block information in different, but similar, ways), but I'm guessing that you won't be able to talk about it.
If you are a good guy, getting a patent for X may help you prevent bad guys from using X.
Also, if you don't patent X, somebody else might and then figure out a way to use it against you.
News sites too often write patent articles in the form "company A plans to do Z" when the only fact available is that company A has applied for a patent on Z. There's an incentive for a company to patent pretty much everything they can, since besides the patenting costs there's no downside I'm aware of in having extra patents. The costs are probably negligible at Google/Amazon scale and when you have good processes.
Amazon employees are encouraged to patent basically anything, at least in AWS. Validity of content or relevance to future business plans isn't really a factor.
They also take into account the state of the machine you're working on. So locked bootloader and probably a client cert in a TPM-like component, plus "device health". Client certs alone are good for authentication (they don't work in HTTP/2 though), but they want to reach an even better target - no malicious software running on your computer.
That's from reading the old papers; I don't know if anything has changed now.
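As a rough illustration of the client-certificate half, here's a Go sketch with made-up hostnames and inventory contents - not Google's actual implementation: the server requires a client cert at the TLS layer, then maps its fingerprint to an inventory record. A real deployment would also verify the issuing CA, revocation, and device health signals.

    package main

    import (
        "crypto/sha256"
        "crypto/tls"
        "encoding/hex"
        "fmt"
        "net/http"
    )

    var deviceInventory = map[string]string{
        // sha256 fingerprint of the device certificate -> device ID (made up)
        "3f79bb7b435b05321651daefd374cdc681dc06faa65e374e38337b88ca046dea": "laptop-1234",
    }

    func handler(w http.ResponseWriter, r *http.Request) {
        if r.TLS == nil || len(r.TLS.PeerCertificates) == 0 {
            http.Error(w, "device certificate required", http.StatusForbidden)
            return
        }
        sum := sha256.Sum256(r.TLS.PeerCertificates[0].Raw)
        id, known := deviceInventory[hex.EncodeToString(sum[:])]
        if !known {
            http.Error(w, "unknown device", http.StatusForbidden)
            return
        }
        fmt.Fprintf(w, "request accepted from device %s\n", id)
    }

    func main() {
        srv := &http.Server{
            Addr:    ":8443",
            Handler: http.HandlerFunc(handler),
            // RequireAnyClientCert keeps the sketch self-contained; production
            // would use RequireAndVerifyClientCert with a ClientCAs pool.
            TLSConfig: &tls.Config{ClientAuth: tls.RequireAnyClientCert},
        }
        srv.ListenAndServeTLS("server.crt", "server.key")
    }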
That's correct. Previous papers touch on the inventory data pipeline and machine health, though perhaps not in as much detail as you might like. Our agents track a wide variety of things on client machines, and we use that inventory data to determine how trustworthy a machine could be.
[I work at Google, and helped make these papers, and blog post, happen]
Interesting design. As far as I understood from the old papers, client certificates are used only to identify the device, while user authentication is handled differently.
Could you elaborate on the technical details on user authentication? (If that's not top-super-secret) I guess it's just like accounts.google.com for Enterprise with mandatory 2FA (username+password+U2F key?). Does it work the same on mobile/Android (U2F via NFC or codes)?
Android supports U2F via NFC and Bluetooth now, which is used for user authentication on Android devices. We've also released an (experimental?) iOS app to support U2F over Bluetooth.
There's TPM and secure boot - does the (presumably signed, in the trusted boot->OS->user binary/service path) agent access signing services from the TPM - backed by a key in the TPM - and use that to identify itself as an authentic agent?
Otherwise I can't see how an (admin) user couldn't extract the key from RAM and run the OS and agent in a VM.
Yes. As far as I understand, the problem was that the requirement for a certificate is a per-request thing, but HTTP 2 can have multiple requests in flight over the same TLS connection at the same time and thus can't just renegotiate the connection when it comes up. There have been proposals to fix this, but nothing has gained the necessary interest and traction.
Servers can ask the client to fall back to HTTP 1.1 instead, and then use client-certificates there.
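If you happen to be terminating TLS with Go's net/http, for example, one simple way to get that fallback is to advertise only HTTP/1.1 in ALPN on listeners that require client certs (a sketch, not the only option):

    package main

    import (
        "crypto/tls"
        "net/http"
    )

    func main() {
        srv := &http.Server{
            Addr: ":443",
            TLSConfig: &tls.Config{
                // Needs a ClientCAs pool in practice; elided here.
                ClientAuth: tls.RequireAndVerifyClientCert,
                // Advertising only HTTP/1.1 via ALPN stops Go's net/http from
                // negotiating h2 on this listener, so per-request client-cert
                // auth keeps working; h2-capable clients just fall back.
                NextProtos: []string{"http/1.1"},
            },
        }
        srv.ListenAndServeTLS("server.crt", "server.key")
    }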
With productivity apps being cloud hosted (Office 365, Google Docs, Tableau, PowerBI, etc.) and with source code and team management services being hosted (GitHub, Visual Studio Online, GitLab, etc.), a huge percentage of people's day-to-day work can seemingly happen without a VPN.
The largest notable exceptions seem to be internal file shares, and remote connections to machines that need to be behind a firewall.
I guess the overall point I have is that with the data files for both productivity and source code being stored cloud-side, VPNs become less and less necessary for a large % of workers.
"The largest notable exceptions seem to be internal file shares, and remote connections to machines that need to be behind a firewall."
Office 365 / OneDrive and Google Drive are even doing away with the requirement for internal fileshares. We used the former heavily at my previous job and I use the latter in my current role. Both have been pretty good alternatives.
The actual framing is WebSockets over TLS once the session is established, and the latency is, practically speaking, no worse than SSH over VPN.
The protocol also supports session resumption in case your connection to the relays is briefly interrupted, but client support is buggy so it's been disabled for years (with few complaints)
To get good performance, one would need a BeyondCorp enabled mosh proxy.
With plain SSH over HTTP over TLS, performance is satisfactory but not great. 4G is just about usable for vim, but you'd probably be best off using sshfs over HTTP over TLS and running vim locally, then compiling and running remotely.
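For anyone curious, the client side of that kind of relay is usually just an ssh_config ProxyCommand pointing at a small helper that bridges stdin/stdout to the WebSocket relay. The helper name and URL below are placeholders, not any real tool's syntax:

    Host devbox
        # "ws-bridge" stands in for whatever binary speaks
        # WebSocket-over-TLS to the relay on ssh's behalf.
        ProxyCommand ws-bridge wss://relay.example.com/proxy/%h/%p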
It almost seems like this could be described as dynamically building a per-user VPN, via inbound proxies for admission control and traffic src/dst filtering, and services hosted behind multiprotocol terminating proxies. Some extra client analysis (practically effective, even if no theoretically valid remote attestation), tedious but necessary work to understand the access patterns for all the internal services, etc.
It seems there can still be lateral re-infection via difficult-to-patch shared services (finance/procurement/obscure wikis). The example in one of the papers (delivery people not needing access to financial systems) is completely bogus - sometimes the worst-engineered, most XSS-y, mission-critical apps have to be accessed by everyone, have insanely hand-coded 'business logic', and no docs. Content-aware behavioral profiling would seem to have a role in managing that risk.
Sorry this will come off as a super dumb question. I use ssh. I can login, edit, develop, run, basically anything. What am I missing? I thought VPNs are for 'admin' types that need access to a MS Excel file.
Google's model allows SSH to "internal machines" over a set of relays that apply the same machine authentication and trust tier logic that's laid out in the papers.
So your workflow would still be supported, and it would likely be more secure than exposing SSH traffic to the internet at large.
"I use ssh. I can login, edit, develop, run, basically anything. What am I missing?"
You're not missing anything and you have an extremely efficient and secure workflow that runs laps around any of this.
The tradeoff is you work in a terminal and understand SSH, etc., which is too much to ask of many non-technical users.
If you wanted to obfuscate your traffic or the direct path to your remote host was blocked for some reason, a VPN might get you there, but you'd still run SSH over that VPN and your workflow should remain unaltered (albeit, higher latency).
To address a sibling post's comment, you can enjoy this very same workflow without exposing your sshd to the global Internet by placing it behind a "knock" with knockd. Highly recommended.
The VPN changes your network route. This can get you around geographic locks (services that only work in certain areas). It can also get you around traffic issues, if your ISP has technical/political routing issues. Like with Comcast/Verizon refusing to add additional peering because they wanted to double-bill netflix traffic.
Some VPN services also advertise additional privacy or anonymity, but trusting a stranger to not sell you out to their local government isn't usually a good idea.
From a business standpoint, you may want web and network services without exposing them to the wider internet. So they're only accessible on IPs in local subnets. VPNs will get you inside the wall.
ssh can be used as a VPN - you can proxy ports and tunnel all sorts of things through it. You can easily drill a connection through to, say, an "internal" Windows or NFS file server and grab docs off it. There is file transfer built in as well, e.g. sftp and scp, with easy rsync integration.
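Concretely, with stock OpenSSH flags (hostnames are just examples):

    # forward a local port to an internal file server, via the ssh gateway
    ssh -L 8445:fileserver.internal:445 user@gateway.example.com
    # or run a local SOCKS proxy and point your tools at it
    ssh -D 1080 user@gateway.example.com
    # file transfer over the same channel
    sftp user@gateway.example.com
    rsync -e ssh docs/ user@gateway.example.com:backup/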
It doesn't really matter whether you use ssh, RDP or whatever for remote system access but you should be aware of the capabilities of your methods and the strengths and weaknesses of them.
If your username and password are reasonably hard to guess, and ideally you use passwordless logins, and you keep your system regularly patched, and you definitely don't allow remote root logins, and you cycle your passwords every 90 days or so, then you should be fine. Do not bother changing port 22 to, say, 2222, or requiring 20+ char passwords. You may want to disable some of sshd's functionality if you don't use it, but that might be a step too far.
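In sshd_config terms, that advice mostly boils down to something like:

    PermitRootLogin no
    PasswordAuthentication no    # key-based logins only
    PubkeyAuthentication yes
    # optionally, restrict which accounts can log in at all
    AllowUsers alice bob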
Also, reset your sshd's keys occasionally and get them into your local ~/.ssh/known_hosts as soon as possible, and read up on and understand why ssh warns you when the keys and names look odd - that could save you from a MitM attack by a bored techy in a hotel with wifi or whatever.
To sum up: a well handled sshd and client can be a fairly decent VPN and remote access solution. However, a separate VPN eg OpenVPN and then ssh over that is better and need not be inconvenient.
This sounds a lot like Microsoft's DirectAccess which has been in the Enterprise version of Windows since Windows 8. Please correct me if I'm wrong though.
Kind of. Microsoft sold it more as an always-on VPN; they weren't selling a radically different philosophy for securing your network with it. But regardless of the differences, Microsoft really hamstrung themselves by making it so Windows-centric.
You never know. SaaS is kind of a gateway drug to BeyondCorp since SaaS isn't inside the firewall to begin with. The next step is to start applying a SaaS mindset to your own internal apps and then you're mostly there.
Trying to secure a traditional corporate "intranet" while enabling productive work is much harder. This is just branded common sense end to end security.
Working on that now, I think I messed up on my end with our internal tool, hope to have the full PDF download from research.google.com in a day or two, maybe next week if I epic failed.
[I work at Google, and helped make these papers, and blog post, happen]
Yes, turn the keys over to Google. I am sure if you are an American Fortune 500 company you have no problem with this. Not so if you are a non-American company. Though a lot of people will jump on board despite the huge security implications of doing something like this and turning all of your security over to Google. Meanwhile, while nation states are exploring how to use quantum encryption to prevent eavesdropping, others are being coerced to simply hand over security to a third party that they hardly trust with any sense of privacy.
It seems from the article this is only being offered as a product to people already using Google Cloud services, specifically for accessing those services? Otherwise it's just a series of papers describing the system.
You're right, the initial version of Identity-Aware Proxy (IAP) is for Cloud applications, but that's not the end of the story, and we're learning from BeyondCorp's 7 year journey to inform the direction of IAP going forward.
[I work at Google, and helped make these papers, and blog post, happen]
Much like 'bigtable' was a Google-internal product with only a set of papers describing the system published, and now we have HBase. Or how 'mapreduce' was a Google-internal product, and now we have Hadoop, etc.
Calling it "Google BeyondCorp" makes it sound like a product; maybe if they called it something like "BeyondCorp architecture" it would be clearer what they're talking about.
It's a way of doing things, nothing specific to Google.
Use any authentication/identity service and publish all internal services as public apps, consolidating access, increasing security, and simplifying maintenance.
Edit: If folks are interested in hearing more about how other companies can achieve something similar, here's video of a talk I gave at Heavybit a few months ago on the subject: https://www.heavybit.com/library/blog/beyondcorp-meetup-goog...