Zanzibar: Google’s Consistent, Global Authorization System (2019) (research.google)
238 points by themarkers on April 29, 2021 | 95 comments



Maybe a dumb question on standalone authorization services: does the authorization service end up having a representation for every single object in all of the rest of your datastores? (e.g. every document, every blob of storage, every user in every org).

If so, does that become a chokepoint in a distributed microservice architecture? Or can that be avoided with an in-process or sidecar architecture in which a given microservice's objects are not separately referenced in auth persistence? If not, how do folks determine which objects to register with the auth service and which to handle independently?


A Zanzibar-style service does not need _every_ object from your DB replicated into it, but only the relationships between the objects that matter for authorizing access. Many of these relationships require little/no metadata in your DB so they can live _solely_ in Zanzibar rather than being in both your DB and Zanzibar. This is pretty great because when permissions requirements change, you can often address them by only changing the Zanzibar schema, completely avoiding a database migration.
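
To make that concrete, here is a rough sketch of the kind of data such a service holds: relation tuples, not the objects themselves (all names below are made up):

    # Relation tuples of the form (object, relation, subject); the
    # document contents stay in the application's own database.
    relation_tuples = {
        ("document:readme", "owner", "user:alice"),
        ("document:readme", "viewer", "group:eng#member"),  # a userset, not a single user
        ("group:eng", "member", "user:bob"),
    }

    def check(obj, relation, user):
        # Naive resolution: direct match, or one hop through a userset
        # reference like "group:eng#member".
        for o, r, s in relation_tuples:
            if (o, r) != (obj, relation):
                continue
            if s == user:
                return True
            if s.endswith("#member") and check(s.split("#")[0], "member", user):
                return True
        return False

    assert check("document:readme", "viewer", "user:bob")  # via eng membership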

>does that become a chokepoint in a distributed microservice architecture?

It actually does the opposite because now all of your microservices can query Zanzibar at any time to get answers to authorization questions that were previously isolated to only a single application.

Full disclosure: I work on https://authzed.com (YC W21) -- a permission system as a service inspired by Zanzibar. We're also planning on doing a PapersWeLove NYC on Zanzibar in the coming months, so stay tuned!


> because now all of your microservices can query Zanzibar at any time

This sounds a bit like a chokepoint. Is the important point here that Zanzibar is distributed, and therefore a good thing to be querying from all over the system (as opposed to one centralised application)?


Contrary to the microservice cargo cult, it's possible to build a relative monolith that scales indefinitely. The bottleneck is the db, but if you have a schema where data is easily sharded, you can scale it indefinitely.

There's plenty of giant monoliths that scale fine, like Google's analytics and Gmail. If you have a database that can scale, microservices are more about isolating code between different teams than about any performance advantage.


The novel aspect of the Zanzibar paper is its application of distributed-systems principles to avoid such a chokepoint. This includes not only the design of the service itself, but also the consistency model exposed through the APIs consumed by applications, which makes many operations cacheable.


As someone who’s not the founder of an authorization provider, I’d tend to agree with you. Sure looks and sounds and quacks like a choke point!

But it’s also fundamentally hard to avoid, isn’t it?

The challenge is that authn is so easy to implement statelessly, since you can verify a token anywhere you have a public key. But authz is far more complicated, since it requires an ACL list of arbitrary length along with the token. It’s not like GitHub can stuff a list of every repository I can access into my access token.
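
For illustration, here's roughly why authn verification is stateless (a sketch using the PyJWT library; key handling is elided):

    import jwt  # PyJWT: pip install pyjwt

    def authenticate(token, public_key):
        # Stateless: any service holding the public key can verify the
        # signature locally, with no call to the identity provider.
        return jwt.decode(token, public_key, algorithms=["RS256"])

No equivalent trick exists for authz: the set of resources a user can reach is unbounded and changes constantly, so it can't ride along inside the token.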


>But authz is far more complicated, since it requires an ACL list of arbitrary length along with the token. It’s not like GitHub can stuff a list of every repository I can access into my access token.

This is exactly the problem that Zanzibar solves and what makes it exciting! I've written about why giant lists of claims are not a good way to structure permission systems[0], and Zanzibar-inspired services do not function this way. Instead, they ask you to query the API server when you need to check access to an item. All API calls return a response along with a revision. The response will always be the same at a given revision, which means you can cache the response. If Zanzibar disappears, your app can keep functioning so long as content is not modified, since a modification would force you to invalidate the revision. And that's only if you want consistency in your permission system -- a feature that not all permission systems even support. Most applications can tolerate just using the cached response regardless, relying on eventual consistency.
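
In sketch form (hypothetical client API; the point is that (object, relation, user, revision) is a sound cache key):

    cache = {}

    def check_cached(client, obj, relation, user, revision):
        key = (obj, relation, user, revision)
        if key not in cache:
            # At a fixed revision the answer never changes, so the
            # cached result can be reused indefinitely.
            cache[key] = client.check(obj, relation, user, at_revision=revision)
        return cache[key]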

All of this also ignores the global availability of the Zanzibar service itself, which it gets from using a distributed database like Spanner and replicating into data centers in every region of the world (which is why you want someone else to run it for you).

[0]: https://authzed.com/blog/identity-isnt-the-foundation/


As with everything, it depends on your requirements.

Say your goal is to externalize just your authorization policies from your code. A simple implementation might look like an OPA sidecar to your services, with the policy itself being sourced from a separate control plane - this might be something as simple as a centrally-managed S3 bucket.

The service implementation provides the attributes to OPA to allow it to evaluate the authorization policy as part of the query. e.g. which groups is this user in, what document are they accessing, is this a read, write or delete operation.
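
Concretely, the service-to-sidecar call might look like this (a sketch against OPA's standard REST API; the policy path and attribute names are illustrative):

    import requests

    def is_allowed(user, groups, document, action):
        resp = requests.post(
            "http://localhost:8181/v1/data/authz/allow",  # OPA sidecar
            json={"input": {
                "user": user,
                "groups": groups,      # which groups the user is in
                "document": document,  # what they are accessing
                "action": action,      # read, write or delete
            }},
        )
        # An undefined decision comes back with no "result" key: deny.
        return resp.json().get("result", False)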

If you want to externalize sourcing of the attributes as well, that becomes more complicated. Now you need your authorization framework to know that Bob is in "Accounting" or that quarter_end_results.xls is a document of type "Financial Results".

You can either go push or pull for attribute sourcing.

The push model is to have the relevant attribute universe delivered to each of the policy decision points, along with the policy itself. This improves static stability, as you reduce the number of real-time dependencies required for authorization queries but can be a serious data distribution and management problem - particularly if you need to be sure that data isn't going stale in some sidecar process somewhere for some reason.

The pull model is to have an attribute provider that you can query as necessary, probably backed with an attribute cache for sanity's sake. The problems are basically the opposite set: liveness is guaranteed but static stability is more complicated.

The methods are not equivalent: in particular, the pull model is sufficient to answer simple authorization questions like 'can X do Y to Z?' - we pull the attributes of X, Y and Z and evaluate the authorization policy.

However, if you need to answer questions like 'to which Z can X do Y?', how does that work? For simple cases you may be able to iterate over the universe of Z's asking the prior question; but it generalizes poorly.
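
In the naive pull model the reverse query degenerates into a scan, something like this (check() standing in for whatever answers the forward question):

    def list_accessible(check, user, action, all_resources):
        # One authorization check per candidate resource: fine for tens
        # of items, hopeless for millions without a reverse index.
        return [z for z in all_resources if check(user, action, z)]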


I recently looked at a similar scenario, but using Ory Keto. I've written about it here: https://gruchalski.com/posts/2021-04-11-looking-at-zanzibar-....

The evaluated scenario was: a company employs a director and IT staff, the director contracts a consultant, and the IT staff subscribe to external services. Find out what the company pays for directly and indirectly.

The new Keto 0.6 works very nicely.


Thank you, that was a cogent summary.


I've been writing about application authorization here: https://www.osohq.com/academy/chapter-2-architecture (I'm CTO at Oso, but these guides are not Oso specific). It covers this in the later part of the guide.

Depending on your requirements, yes that's kind of what happens if you want to centralise. It can make sense for Google-scale problems where you really do need to handle the complex graph of relationships between all users and resources, and doing that in any one service is non-trivial.

In practice though, a lot of service-oriented architectures can get the same benefits by having a central user management service, and keeping most of the authorization in each service. That central service can provide information like what organizations/teams/roles etc. the user belongs to, and then the individual services can make decisions based on that data.

This is the approach I covered as the hybrid approach in the guide. With it you can still implement most complex authorization models.
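
A sketch of the hybrid decision in code (the directory client and the rule itself are hypothetical):

    def can_edit_invoice(user_id, invoice, directory):
        profile = directory.get_user(user_id)  # central lookup: roles, orgs, teams
        # Local, service-specific decision based on that data.
        return ("accountant" in profile.roles
                and invoice.org_id in profile.org_ids)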


Sam, thank you for your talks on the Podcast.__init__ and TalkPython podcasts. I appreciate how well you describe the problem domain.

https://www.pythonpodcast.com/oso-open-source-authorization-...

https://talkpython.fm/episodes/show/294/oso-authorizes-pytho...


<3 Thank you!


This is a really interesting question that gets at the heart of service federation.

I don't know the answer for Zanzibar, but take a look at how AWS IAM solves it. IAM has very few strong opinions in its model of the world (essentially it divides the world into AWS account ID namespaces and AWS service names/namespaces, and there's not much detail beyond that). Everything else is handled through symbolic references (via string/wildcard matching) to principals, resources, and actions in the JSON policies, as well as variables in policy evaluation contexts (and conditions, which are predicates on the values of those variables, or parameters to customizations (policy evaluation helper procedures) provided by each service).
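
For concreteness, a policy of the kind described might look like this (rendered here as a Python literal; the bucket and condition are illustrative):

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],  # an action name defined by the service, not by IAM
            # A symbolic reference: wildcard matching plus a policy variable.
            "Resource": "arn:aws:s3:::example-bucket/${aws:username}/*",
            # A condition: a predicate on a policy-evaluation-context variable.
            "Condition": {"Bool": {"aws:SecureTransport": "true"}},
        }],
    }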

IAM is loosely coupled with the namespaces of the services it serves, and that allows different services to update their authz models independently with pretty much no state or model information centralized in IAM itself. This is a key, underappreciated part of what makes AWS able to move so fast.


I can't speak for Google, but I'm working on something similar as a personal project, and here is my architecture:

Each service has its own store of objects. Each store also has a directory of metadata describing the objects contained in each service.

When you send an auth request to a service, the receiving service looks up which service is the authority for the given object and then routes the request to that service for auth.

You can do away with the metadata store if you offload responsibility for remembering which store to use onto the user: you provide them with a cookie that tells any of your auth servers which store contains this user's data.


You can, and it doesn't have to be a choke point; as far as the ACL is concerned, it's just a namespace (the microservice) and an opaque ID inside that namespace.

What the Zanzibar paper describes is two big things:

(1) The auth service gives those microservices an ability to set their own inheritance rules so that you do not need to store the fullest representation of those ACLs. If you are properly targeting the DDD “bounded context” level with one bounded context per microservice, then in theory your microservice probably defines its own authorization scopes and inheritance rules between them. (A bounded context is a business-language namespace, and it is likely that your business-level users talking about “owners” in, say, the accounting context are different than the users talking about “owners” in a documentation context—or whatever you have.) Some upfront design is probably worthwhile to make the auth service handle that, rather than giving the clients each a library implementation of half of datalog and having each operation send a dozen RPCs to the auth service for each ACL check.
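
A sketch of what such per-namespace inheritance rules could look like (rendered as a Python literal rather than the paper's textproto namespace config; the relations are illustrative):

    doc_namespace = {
        "name": "doc",
        "relations": {
            "owner": [],                       # granted directly
            "editor": ["owner"],               # every owner is also an editor
            "viewer": ["editor",               # every editor is also a viewer...
                       ("parent", "viewer")],  # ...as is any viewer of the parent folder
        },
    }

The auth service expands rules like these at check time, so clients never have to materialize the full ACL.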

(2) The microservices agree on part of a protocol to allow some eventual consistency in the mix for caching: namely, the microservices agree that their domain entities will store these opaque version numbers called zookies (that the auth service generated) whenever they are modified, and hand them to the auth service when performing later ACL checks. This, the paper says, gave them the ability to do things like building indexes behind the scenes to handle request load better, without sacrificing much security. Most of the ACL operations are not going to affect your one microservice over here because they happen in a different namespace or concern different objects in the same namespace: so, I need a mechanism in my auth service to tell me whether I need an expensive query against the live data, or whether I can use a cache as long as it's not too old.
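
From the client's side, the zookie protocol looks roughly like this (hypothetical client API):

    def save_document(db, authz, doc):
        # The auth service returns an opaque version token on write...
        zookie = authz.write(("doc", doc.id), "owner", ("user", doc.owner))
        # ...which the application stores next to the content it protects.
        db.update(doc.id, content=doc.content, zookie=zookie)

    def view_document(db, authz, doc_id, user):
        row = db.get(doc_id)
        # "At least as fresh as this zookie": the service may answer from
        # a cache or index unless this particular object changed more recently.
        if not authz.check(("doc", doc_id), "viewer", ("user", user),
                           at_least_as_fresh=row.zookie):
            raise PermissionError(doc_id)
        return row.content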


This is relevant only if you have zillions of objects of hundreds of types, and more such types of objects are likely to emerge in the future (as you launch more products/use cases).

And you have billions of users and their sharing permission models are complex and likely to keep evolving in the future with more devices, concepts of groups/family etc.

In such a scenario, doing access control in a safe and secure way that scales and evolves well to such a large base is itself a major undertaking. You want to decouple the access control metadata from the data blob storage itself so that they each can be optimally solved for their own unique challenges and they can evolve independently too.


This was talked about 2 years ago on here[0]. This service was also brought up in the discussion[1] of Ory Keto, as it's based on Zanzibar.

[0] https://news.ycombinator.com/item?id=20132520

[1] https://news.ycombinator.com/item?id=26738344



I continue to be amazed at how much over-engineering Airbnb does for what is ostensibly a cleaner couch-surfing broker. They don't actually do much even for a travel site; they have so much investment and could have easily disrupted so many different travel-related fields, yet instead they keep over-engineering software. Not sure how to feel about it (since we do kind of benefit from their busywork).


They have similar incentives to Uber where their main goal is to get engineers to work for them by being interesting, and it doesn't actually have to be profitable. I think Uber also writes blog posts about architecture to trick competitors into thinking it can't be done by sharding each city into one box under someone's desk.


Casbin is another pretty interesting one that I've been evaluating alongside Ory's.

https://casbin.org/


Is it just my impression, or is OPA (Open Policy Agent) the emerging technology in this space nowadays?

It looks like a flexible system to build cross-language and cross-framework authorization systems.


I use OPA with terraform and kubernetes, but I’m looking for something for application ACLs, where I as a resource owner can assign permissions to arbitrary subjects for a resource.

Does OPA support that? If so that would be very very cool.


Certainly! Application and microservice authorization is probably one of the more common use cases for OPA, and there's definitely benefits in having a unified policy engine in an organization or company.


I have only found RBAC and ABAC docs and tutorials for OPA. Do you happen to know of a good source of docs for ACLs like "User A gives User B edit rights on Resource C"?

Update: I swear I’ve looked through the docs 20 times and I’ve never seen this use case, but of course after writing this comment I go back and immediately find what may work :-)

https://www.openpolicyagent.org/docs/latest/comparison-to-ot...


I think Ory Keto would be a better choice because it's easier to manage individual resources on an ad-hoc basis.


There is an open source (Go) implementation of Zanzibar called Keto [0] that integrates with the rest of the Ory ecosystem. We are actually testing it and it looks great so far.

[0]: https://github.com/ory/keto


This comes up every time but I think it’s worth noting that Keto provides literally none of the consistency properties of Zanzibar. All of the distributed systems homework assignments for that project have been left as TODOs.


I'm curious what's driving the resurgence of interest in authorization infrastructure, particularly the Zanzibar paper. As founder of Oso (https://www.osohq.com/), I have my own opinions, and I think this is a good thing. But I would love to hear others' points of view here.


The rise of the zero trust paradigm in corporate networks probably.


Pandemic times and working from home. Companies were already exposed by their employees' mobile devices and by people working on public wifi networks, like catching up on email over coffee at the neighborhood coffee house. Now, with employees more-or-less permanently remote, what is the corporate network? Add to that the realization that as organizations adopt more and more SaaS offerings into their operations, the distinction between "corporate network" and "public network" vanishes. The old VPN/firewall/DMZ perimeter model was leaky anyway.


And it's about time


My guess is that it is mainly driven by the increasing adoption of microservices (or just generally more distributed architectures). Doing fine-grained authorization in that type of architecture is quite difficult, and people are starting to realize that.


Agreed. That, and the fact that customers today are more sophisticated, requiring their vendors to provide the ability to create custom "roles" and "permissions" in the applications they use.


I think the other replies to you are probably correct, but I also can't help thinking that a lot of the small/mid-size businesses that use AD for auth, have been on-prem for years, and weren't really planning to move until the pandemic hit have run face-first into the fact that they're really stuck with Microsoft now: when Azure AD goes down, their whole business tends to go with it. I don't think there's an easy solution here, but I've seen some places come face to face with this reality, and there have been some very mixed feelings and not many alternatives.


Fair, but even still AD only gives you a piece of the puzzle when it comes to authorization. You still have to do all the modeling and implementation inside your app and map it to however that's stored in AD.


Some factors might include increasing usage of microservices, frontend SPAs, serverless, and more early startups looking to integrate with enterprises, who now have high expectations of what's possible thanks to Auth0 and the like.


Never heard of Oso til now. I’m eval’ing a few tools, I really like your policy syntax!


Which tools are you looking at and what is your evaluation criteria?


Thanks!


Here's a decent twitter thread (2019) with some background on the project:

https://twitter.com/LeaKissner/status/1136631437514272768


I'm currently building an abstracted authorization system for PostgreSQL, and one problem I ran into was timing attacks. Granted, I only had an unoptimised prototype, but querying a table and only checking whether the user has permission to read the objects after the fact made it possible to differentiate "no matching object" from "one unavailable matching object". From skimming the paper, it seems Google uses this approach; why are timing attacks not a problem for them? Is it because authorization checks are so fast? Or because they make sure only to query available objects, using Zanzibar only as a final "just in case" guard?


What exactly is the attack you're worried about here?

Why do attackers have direct query access to your database? What useful information can they extract from knowing there is an unauthorized object in the database?


My model attacker is a limited user that has access to an advanced search function with filtering on number inequality and/or string patterns akin to LIKE. Such an attacker could send a search query such as "id = 4829 AND cost > 1000" and measure the time that query took (over multiple executions). From the time data the attacker could then determine if object 4829 has a cost value of over 1000, gaining 1 bit of data. Through a binary search they could obtain the full value in logarithmic time.

If the authorization check were fast enough (which it probably is for performance reasons anyway), this would be reduced to the attacker obtaining statistical information (roughly how many objects have a cost over 1000). That might be acceptable; my problem is that a benign-looking performance problem could become a serious security problem.
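
To spell the attack out in code (search() is a stand-in for the victim's filter endpoint, and took_slow_path for the timing measurement):

    def leaks_bit(search, obj_id, threshold):
        # True iff "id = obj_id AND cost > threshold" matched a row the
        # caller couldn't read, observable as a slower response.
        return search("id = %d AND cost > %d" % (obj_id, threshold)).took_slow_path

    def recover_cost(search, obj_id, lo=0, hi=2**32):
        # Binary search: one leaked bit per query, log2(hi) queries total.
        while lo < hi:
            mid = (lo + hi) // 2
            if leaks_bit(search, obj_id, mid):
                lo = mid + 1   # cost > mid
            else:
                hi = mid       # cost <= mid
        return lo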


Zanzibar doesn't actually contain any object data, only authorization metadata. So you can't run the complex queries you're suggesting against Zanzibar itself, and presumably the databases that store the actual data require authz prior to the actual query (which is fine because Zanzibar is fast).


Could you elaborate on "prior to the actual query"? Do you mean taking some rough queryable subset and then calling Zanzibar for each object in that subset? That's how I'm handling it right now, I just hoped others had a more scalable solution.


No, I mean that a user either has access to the database or not. If they do, you check access prior to the query. I think you're doing something related to row-level permissions within a database.

And ultimately, "implementing side-channel-secure row-level security in a database" is a completely independent problem from "abstract authz checker", which is what Zanzibar is. You might build row-level security infra atop Zanzibar, but you'd probably do that within your database engine, with Zanzibar serving as some sort of authz primitive.


From what I can see, Zanzibar is also intended for "row-level" access checks.

I also don't think it's such a separate problem. If you've got a set of authorization primitives, you should have some simple and foolproof way of applying them to various usecases. You might have the best policy description language and very fast evaluation, but what good is it as a central authz service when you can't securely implement search on top of it?


> From what I can see, Zanzibar is also intended for "row-level" access checks.

Yes, as a primitive for storing ACL relations, not as a magic solve-all-security-problems tool.

I thought about this more, and I think your use case is simply unsolvable. You're allowing an untrusted user to take speculative action on something they may not have access to.

This is the same problem as Spectre (and similarly unfixable). You'd need to do the ACL checks per row prior to the checks on the internal data. That is, as part of the operation `WHERE id = 123`, you need the database engine to check that the user has access to the row, and only if the row passes the ACL, allow the check against X > 100. Otherwise, just pretend that id=123 isn't in the database.
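
In sketch form, the evaluation order I mean (illustrative names):

    def filtered_rows(rows, user, check_acl, predicate):
        for row in rows:
            if not check_acl(user, row.id):  # gate first, constant work per row
                continue                     # indistinguishable from "no such row"
            if predicate(row):               # only now touch protected columns
                yield row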

Of course, this is a simple case; I expect that more complicated cases may not be solvable at all. I think the correct way is to say that certain columns (perhaps all except the primary key) need authorization prior to access.

This is, I think, entirely a database implementation question, and ultimately has nothing to do with Zanzibar itself.

So to answer your question

> what good is it as a central authz service when you can't securely implement search on top of it?

You can, you just can't do it in the way you've described. It's a difficult problem that the central authz service shouldn't solve, and the design of this service is still faster than all of the others.


If your object IDs are 1, 2, 3... then an attacker can check all the IDs. If instead each object ID is a random 128-bit UUID, the attacker can't make a query for every possible object ID.


I'm not sure I understand the concern here. Typically there is a logged-in user, and server asks Zanzibar if the user can or cannot access some document. Whether a certain document exists or not isn't typically a secret i.e. you might get HTTP 403 (forbidden) or 404 depending on whether or not the document exists.


Please see my other comment: https://news.ycombinator.com/item?id=26983342

My concern isn't access to single objects, but rather filtering of complex search results.


This very much depends. GitHub for example will return 404 for a private repository when you are logged out. The idea is balancing HTTP semantics with information leaking.


Does the 404 for a logged-out private repo return in the same amount of time as the 404 for a repo that doesn't truly exist?


Maybe evening out response times is some abstraction on top? It may be useful for protecting much more than just auth, so it would make sense not to repeat it on every layer.


I considered that, but it seems way too fragile to trust, especially if you want to test complex relationships for authorization.


One of the authors is Mike Burrows -- https://en.m.wikipedia.org/wiki/Michael_Burrows


I'm just wondering if there's a one-size-fits-all solution for authz. I spent a few days on a use case:

- users have one or several roles (these are hierarchical)

- there are some objects in the system (hierarchical too, e.g. files and folders)

- there are different features available according to a user's subscription

I ended up with a 30-line program which, given a set of rules, calculates who can access what in less than a millisecond. Is it worth an over-engineered mega system?
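
For concreteness, a minimal sketch of that kind of evaluator (all names illustrative):

    RULES = [
        # (role, object_type, required_feature, allowed_actions)
        ("admin",  "folder", None,      {"read", "write", "delete"}),
        ("editor", "file",   "premium", {"read", "write"}),
        ("viewer", "file",   None,      {"read"}),
    ]

    def can(user_roles, user_features, obj_type, action):
        return any(role in user_roles
                   and obj == obj_type
                   and (feature is None or feature in user_features)
                   and action in actions
                   for role, obj, feature, actions in RULES)

    assert can({"viewer"}, set(), "file", "read")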


The problem isn't the 30 lines, though. The problem is "millions of users, billions/trillions of objects" and both are non-hierarchical with pairwise sharing etc.

If the requirements were simple, the POSIX model would still work too :)


I agree. For my use case, once a user is authenticated, you get their roles and subscription. There's a limited number of features or actions for each object type, and a limited number of object types. So you can ship the set of rules to the client to manage the UI, and apply the same set of rules on the backend in the API. In this use case the authz calculation time will be the same with a million users and a billion objects.


You are not wrong. And this pattern shows up everywhere. e.g. do you need a SaaS for "feature flags", since they're just an if statement?

In the case of authz, the argument for separating it as a concern is that many applications can share the same scheme, and you can have specialized tools for provisioning, auditing, etc.


Exactly. When you cross a certain complexity threshold, it's worth separating concerns. It's true for configuration, it's true for IaC, and also for authorization policy.


> do you need a SaaS for "feature flags", since they're just an if statement?

If you want the ability to remotely enable/disable a feature, then yes.


It'd be remiss of us to let left-pad aaS [0] go unmentioned in this thread... For those in today's 'lucky 10,000'^, you're welcome.

There are definitely good arguments for it, services like feature-flagging I mean, and such things are generally relatively low-cost; it's more the risk of adding a 'disappearable' dependency for anything and everything that'd put me off.

(^And if you don't know about this, OMG how can you not have heard about lucky 10k?! Just kidding. [1])

[0] - http://left-pad.io/

[1] - https://xkcd.com/1053/


Not to be confused with Uber's Zanzibar: https://github.com/uber/zanzibar


Great paper, lots of it got blended into our tech at https://build.security


I’m curious about their approach to handling consistency with object creation and deletion in the client service, i.e. how do clients guarantee that the relevant ACLs are created and destroyed in Zanzibar when they create and destroy their objects?

Destroy can be done asynchronously with durable messaging but asynchronous creation of ACLs is annoying from an api consumer perspective.


Is that a Metal Gear Solid[1] reference?

[1]: https://metalgear.fandom.com/wiki/Zanzibar_Land_Disturbance


Much more likely to be a reference to Brunner's classic work of dystopian fiction, which postulates that the 2010 population of Earth, projected to be around 7 billion people, could all stand shoulder to shoulder on a single island the size of Zanzibar.

https://en.wikipedia.org/wiki/Stand_on_Zanzibar

It's not the kind of literary allusion I'd want to make, if I were a global multinational like Google/Alphabet, but there it is.


One of the project authors (Lea Kissner) relates the story to the naming of the project here:

https://twitter.com/LeaKissner/status/1136691523104280576


Well, that was unexpected. I would have gone with Stand on Zanzibar as the other poster mentioned. Also makes me feel old that they don't remember the spice channel...


Why did they name it Zanzibar?

Zanzibar is an island off the coast of East Africa known for being a place where people traded cotton for enslaved humans.

Not sure the connection.


Hmmh, auditing doesn't seem to be mentioned in that paper. I'd think that's a mandatory feature of an authorization service.


At Google, auditing is handled separately.

The availability guarantees necessary for basic authorization are far stricter than those for auditing. Auth fails closed, audit fails open.

Anything that can be stripped out of auth should be, even if we're talking about a best-effort extra RPC from the auth service.

Auditing typically needs more information than auth as well, and making the auth pipe wide is a risk.


What is the status of XACML-based solutions? Anyone using them?


The ideas (attribute-based access control) have stood the test of time, but the spec is archaic, and there are relatively few implementations. You can achieve a lot of what XACML was intended for with a general-purpose policy engine (OPA).


Should add "(2019)" to the title


How is it not a SPOF?


It ABSOLUTELY is a SPOF and was responsible for several high-profile outages at GCP last year.


(2019)

(maybe?)


[flagged]


Did you reply to the wrong thread?


Yeah, I wanted to reply to the British workplace article. This app is pasting stuff wrong.


Google stands on it.


I also immediately think of the Brunner book


It’s so tempting to make some snide remark about it being cancelled.


It might be. Notice that it's only been in use for about 3 years. The difference is that you don't tend to upset users with underlying infrastructure changes.


Zanzibar has been in use for way more than 3 years. I used to work in the SRE team supporting it 7 years ago, and it already had significant users back then.


Rule of thumb when reading Google papers: if you start now and copy it perfectly, you'll still be at least ten years behind. With few exceptions they don't publish "industry-enabling" papers.


I hope not, I just finished integrating with it at work.


Interesting choice of name.

https://www.researchgate.net/publication/325605315_The_1964_...

>On the fiftieth anniversary of the atrocious killing and raping of the Arabs of Zanzibar in the wake of the 1964 revolution in the Island, this paper sought to establish that this mayhem was genocide. In light of the almost complete failure to notice this tragedy, the paper pursued critical genocide studies and hidden genocide investigations to argue that this Arab tragedy in Zanzibar has been a denied genocide. Worse still, the paper showed that this genocide is commonly ignored even in studies devoted to bring to memory of hidden genocides life.


Here's the story of how its name came to be: https://twitter.com/LeaKissner/status/1136691523104280576


Somewhat off-topic I know, but I'd love to see this extended to some of the features that Sign in with Apple has in terms of private relay.

Signing in with Google yields (at a minimum) the e-mail address to the client, which means that the list of third parties that have your e-mail (and can therefore spam you at will) keeps growing. It would be great if Zanzibar extended the ACLs to include privacy controls for external services.

(Or I'm misunderstanding and this is only the component for internal Google authentication and not external federation for clients).


Zanzibar is an authorization system, not an authentication system.


I can't get over the name because I definitely had a memorable experience going to Zanzibar in Toronto (https://www.yelp.ca/biz/zanzibar-toronto) shortly after turning 19.



