Hacker News
OAuth 2.0 Security Best Current Practice (ietf.org)
237 points by mooreds on May 5, 2020 | 92 comments



I implemented more than a dozen OAuth integrations last year with multiple American and Chinese companies and oh boy it was painful.

I do not know why so many engineers end up reading a clear specification document like RFC-6749 [1] and then ignore 80% of the instructions. I had to deal with so many weird bugs and bad OAuth server implementations, I lost count of how many emails went back and forth trying to make sense of unexpected behavior.

Even Apple engineers got it wrong. They decided to create their own thing based on OpenID Connect for “Sign In With Apple” [2]. OpenID had to write an open letter [3] explaining the repercussions of their changes and fortunately were able to convince Apple to fix the implementation [4].

OpenID was lucky though, they had some leverage, but a no-one like me couldn’t possibly convince a gigantic conglomerate like Tencent to fix their web API. Talking with Tencent engineers has been one of the worst experiences in my career. The company apparently has a culture of constant job rotations, engineers are assigned projects for short periods of time with bonuses for early completion which encourage them to deploy half-finished code before moving to something else.

[1] https://tools.ietf.org/html/rfc6749

[2] https://developer.apple.com/sign-in-with-apple/

[3] https://openid.net/2019/06/27/open-letter-from-the-openid-fo...

[4] https://openid.net/2019/09/30/apple-successfully-implements-...


Oh man, I've done Google, Facebook, Reddit, and all of them have tiny but annoying differences in their auth flow. I've given up on Twitter, because they didn't even seem to support OAuth2 (maybe they do now?), but it was too much of a pain.

What's also very annoying is the need to register an "application" at the provider. For Facebook, for example, you have to log in to their developer portal, create an app, fill out all the stuff they want to know, and then rely on them to approve your app. You have to do this for each and every provider, and for each application you want to support, and of course they all have different developer portals and want to know varying amounts of information... I've given up on Instagram OAuth because it seemed too much hassle to get your application approved.

Don't know why this is necessary; there should be a simple, straightforward spec solely for auth that doesn't require all these steps.


Client registration is so that (1) the authorization server can obtain the client type (confidential or public), then, (2) if appropriate, obtain the client's Redirection Endpoint URI, so that the authorization server's authorization endpoint can avoid needing to be an open redirector, and (3) to obtain any metadata that the authorization server will display to the resource owner just prior to them approving or denying the authorization request. The spec explains this [1].
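To make (1)-(3) concrete, here's what a registration record might hold. The field names loosely follow RFC 7591's client metadata and all values are invented; real authorization servers store this in their own shapes:

```python
# Hypothetical client registration record; all values are invented.
client = {
    "client_id": "s6BhdRkqt3",
    "client_type": "confidential",                # (1) confidential or public
    "redirect_uris": ["https://app.example/cb"],  # (2) pins the redirection endpoint,
                                                  #     so the authorization endpoint
                                                  #     isn't an open redirector
    "client_name": "Example App",                 # (3) shown to the resource owner
    "logo_uri": "https://app.example/logo.png",   #     on the approval screen
}
```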

Also, service providers who are custodians of the resource owners' data want you to be accountable to them and their users, so they want to have some information about you.

There's a proposed standard for dynamic (and/or programmatic) client registration [2]. While there are some interesting use-cases, it offers little benefit to a service provider who wants to vet clients (or appear as if it did).

The spec contains some design recipes if you want a loose association between which clients are allowed to receive data the user approved (i.e. implicit grant), if you care little about whether the user trusts your app (i.e. resource owner password credentials grant), or if you care little about user approval (i.e. client credentials grant). The catch is that your unilateral opinions are insufficient: the client and service provider need to be in agreement about whether these are okay to use.

And, as is often the case with "kitchen sink"-type specs, these other usage modes muddy the waters around the safer and saner usage modes, so much of the current advice on OAuth2 focuses on dissuading their use.

[1] https://tools.ietf.org/html/rfc6749#section-2 [2] https://tools.ietf.org/html/rfc7591


Technically unregistered clients are within spec. IndieAuth is one such implementation, focused on federated services. Aaron Parecki has a good writeup here[0]. Unfortunately that's not going to help you when playing with the big boys.

[0]: https://aaronparecki.com/2018/07/07/7/oauth-for-the-open-web


There's a spec for dynamic registration of clients for Oauth2/OIDC if I recall correctly. There's a reason that you have to register, and it's because you're potentially obtaining information about users, and they need a way to be able to block specific (potentially malicious) clients using their IDP.


When I was looking up the OAUTHBEARER draft, I noticed there's an (OPTIONAL) field in the error response to point to the openid-configuration for dynamic registration. At the time I was implementing things for Thunderbird, no one was using this yet, and I would be surprised if it has actually been implemented by any major provider in any sort of usable way for clients.


Yeah, no one uses it because they explicitly want you to authenticate to register clients on the user's behalf, so they can lock you out if you abuse it. It's annoying, but it makes sense. Otherwise an abuser could just dynamically create new clients anytime their existing clientID is banned.


Did you end up implementing it in Thunderbird? We're looking to do that at my company and I'm currently implementing OAUTHBEARER in cyrus-sasl (which then could be used in cyrus-imapd & postfix)
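For anyone wiring this up: the OAUTHBEARER client's initial response is just a GS2 header plus key/value pairs joined by 0x01 separators (RFC 7628, section 3.1). A minimal sketch, assuming the RFC's message layout (the function name is invented):

```python
def oauthbearer_initial_response(user, host, port, token):
    # Build the SASL OAUTHBEARER initial client response (RFC 7628 sec. 3.1):
    # gs2-header, then kvsep-delimited key/value pairs, ending with two kvseps.
    kvsep = "\x01"
    gs2_header = f"n,a={user},"
    return (gs2_header + kvsep
            + f"host={host}" + kvsep
            + f"port={port}" + kvsep
            + f"auth=Bearer {token}" + kvsep
            + kvsep).encode("utf-8")
```

The server then validates the bearer token exactly as an OAuth resource server would.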


I haven't implemented it myself, and as far as I'm aware, it hasn't been in the past few years. If it is going into cyrus-sasl, then I can probably put together containers for testing (which is on my work todo list anyways).


One particularly nasty annoyance I've discovered recently is SoundCloud, which describes in great detail how to use their API and how to register applications, except when you actually try to do that, you discover that they "temporarily" disabled application registration several years ago and never re-enabled it.


> "Don't know why this is necessary"

Ok so the other replies here clarify why — but really it'd have been nice if there was at least some consistent interface between all the OAuth / OIDC providers, so one wouldn't need to learn new interfaces all the time.

... And, worse, now I need to take screenshots and document all these different interfaces, for an open source project I'm building, whose users might want to register OIDC client apps too. :- /


I can only give my perspective, which is likely to be unique. I'm in the process of implementing an Oauth2 server for the first time. I feel like I have a handle on it now, but it was definitely overwhelming at first. If I were confident that RFC6749 had everything in it that I needed, I would read through and implement it. But it quickly became apparent that there are a dizzying array of RFCs with extensions, deletions, and best practices. So I just skipped the specs, and have been using resources like this, Oauth2 Simplified, and of course YouTube.

So I guess the upshot is I find resources like this and the upcoming 2.1 to be valuable.


Don't implement your own, there's tons of open source that can do that, for example https://github.com/ory/hydra


Thanks for the link. I'm rolling my own for a reason. I'm working on not only an Oauth2 implementation, but also a specification for using Oauth2 for filesystem operations (btw if you're aware of such a thing existing already, I'd love to hear about it). So I need to be intimately familiar with Oauth2. I wasn't originally planning to use it, but ultimately it's close enough to what I need, and for better or worse users are familiar with the flows.


> btw if you're aware of such a thing existing already, I'd love to hear about it

Since OAuth is pretty coarse-grained, you tend to have:

- A client has a policy configured for the file sharing service or file collection, and does not use e.g. the scope parameter to request particular permissions

- A file collection lets scopes be assigned particular permissions, and a client requests access by requesting one or more scopes

Although I am not a fan of the level of complexity it adds, UMA (User Managed Access) makes a pretty strong attempt at solving these sorts of problems as well.


But why do they have to be coarse-grained? If you have a cloud storage provider, why not allow scopes to be set per path, like this:

scope="/dir1:read /dir2:write /dir3/file.txt:read"

Then when the authorization screen is presented, the user could even modify the permissions granted on the fly.

Usually applications only need a single directory to store data, and it shouldn't matter to the app where that directory is in relation to the rest of the user's data. In that case you could do something like this:

scope="dir?:write"

Which tells the authorization server to present the user with a directory picker, so the user has control over where the data is stored. It doesn't make any sense to give write permissions to all your data for every single application.
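A parser for this per-path scope syntax could be as simple as the sketch below. To be clear, the syntax is this comment's own invention, not anything in the OAuth specs:

```python
def parse_path_scopes(scope):
    # Split the space-delimited scope string into {path: {permissions}}.
    # "dir?" (the directory-picker form above) is treated like any other key.
    grants = {}
    for item in scope.split():
        path, _, perm = item.rpartition(":")
        grants.setdefault(path, set()).add(perm)
    return grants
```

The authorization server could then render one consent row per path, letting the user tweak each permission before approving.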

After 5 minutes of poking around, I still don't understand how exactly UMA works in relation to Oauth2. I agree it seems to be pretty complex.


2.1 will hopefully be a one stop shop for implementors. At least, when I was writing a post about OAuth 2.1, that was the intention of the authors, from what I read on the mailing list.


I always wonder why companies have to implement oauth2/open id connect themselves. It seems like a big waste for the industry to have so many different companies spend time implementing the same things.

can you tell me if there is a good drop in component that I can add to my product and integrate easily to get oauth2/open id connect? I heard good things about identity server but would appreciate if you could share your opinion on it.


At least one problem is that Oauth2 isn't a protocol; it's a framework for making protocols. Things like endpoints, scopes, security practices, etc aren't prescribed, so every implementation is different. And unfortunately none of the big players are really incentivized to standardize anything.


OIDC is basically a protocol, and is what you're describing. It's _the_ attempt to standardize on a common set of scopes, discovery mechanisms, etc for making it easier to build apps that use OAuth2.


And the platforms that do support OIDC properly just require a login portal URL to integrate support for it.

Want to add Microsoft logins with an oidc library? One url.

Want to add Twitter / Facebook signin? Go get an OAuth library and write several hundred LOCs to detail the scopes and crap you want from them, because they are non-standard.


Forgive me if I'm mistaken, but isn't OIDC for authentication/profile information? Does it include standards for things like accessing contact lists, reading/writing files, sending email, etc?


OIDC is for authentication and profile information. The standard claims refer to each field of profile information [1].

It doesn't include any domain-specific operations like your examples.

[1] https://openid.net/specs/openid-connect-core-1_0.html#Standa...


A bit late: you're correct, but it does provide a standard API for looking up additional metadata, even if it's non-standard. That's using custom claims and/or the user info endpoint (which can include custom claims). You can use custom scopes to limit what is available as well.


I've recently tried out Keycloak(https://www.keycloak.org/) and have been impressed with it. Saved at least a few weeks on a personal project. It does have a learning curve though.


This, so much.

I've been tasked with implementing oauth/openid at work, and in about one month of spare time I was able to read RFC 6749, install Keycloak, configure it, create a client, create a very basic app (~80 lines of Python and Flask), and log in via oauth/openid with user information and groups pulled via LDAP.

Keycloak is really a game changer.

This presentation is very interesting: https://www.youtube.com/watch?v=FyVHNJNriUQ


I've been using Keycloak as a UMA authorization server, and while it's better than nothing, it has a few bugs that make it not exactly compliant with the spec. Additionally, the UI is incredibly confusing, and the documentation isn't that helpful in some cases. For anything you want to do, you have to spend too much time searching the UI, and some things are not even available: want to assign ownership of a resource to a user? You need to run a curl request. Though I haven't tried other implementations, so maybe this one's the best.


Keycloak is a great open source identity server that supports openid connect. Works out of the box but also allows for full customization as well. Whatever you do, don't write your own oauth server.


Hydra looks like another popular open source solution, but it leaves you to implement the user login and challenge.

I have been writing a hobby OpenID server as a way to learn Go and to understand auth. Reading the OpenID connect spec is SO much more educational & clearer than the OAuth RFC on its own.


> Whatever you do, don't write your own oauth server.

I wish I could upvote this 10 times. There are open source solutions. There are closed source solutions. There are SaaS solutions.

Pick a solution that meets your needs. Build the differentiating features of your app, not an OAuth server.


Any recommendations or favorites, and any to avoid?


Hiya,

I'm employed by a company which provides an OAuth implementation which I think is worth evaluating; you can look at other comments on this post or in my profile for more details. Happy to chat more over email, which is also in my profile, if that'd be helpful.

Since I don't know your use cases or needs, I would like to take a step back and think about what a developer might think about when evaluating this type of decision. Here are some things I'd consider:

* what does your organization currently use? If your org has experience with an identity solution, I'd evaluate that very seriously. It might not fit all your needs, especially if it is focused on IAM (identity and access management) as opposed to CIAM (customer identity and access management) but since your org uses it and presumably knows how to operate it, start there.

* cost of a solution. What does it cost to run the solution every month? What costs to set up the solution? Don't forget to estimate the time of staff to operate--even if I handed you a great solution for free, it would take an engineer's time to get it up and running, and that costs money.

* features and functionality. These could be what you might think of as features (does it support passwordless, what languages have client libraries, what standards like SAML, WS-Fed, OIDC, LDAP are supported) as well as non functional requirements like tenancy, performance and data center locality (sometimes governments have requirements about where user data can live).

I've built a number of software applications and I can tell you that I wish I'd started out with a separate identity provider more often. It makes it easier to create a single sign on experience, add more applications, and enable new functionality. Add that to the fact that identity is a necessary but not sufficient requirement that doesn't often differentiate or add much value to your application. If you're building a todo app, people expect to be able to login but will rarely rave about how smooth the login experience is :) . If you find a solution that works, you can often drop it in and accelerate delivery of the real value of your application, while giving you flexibility for the future.

Finally, here's a video from RailsConf about the dangers of rolling your own user identity management: https://railsconf.com/2020/video/seyed-m-nasehi-why-you-shou... . I don't know the speaker, but certainly some of the issues his hedgehogs (no, really) encountered seemed familiar.


(Full disclosure, FusionAuth is my employer.)

If you want a drop in OAuth/OIDC server that runs anywhere, I'd recommend looking at FusionAuth. You can set it up in 5 minutes and it runs everywhere: https://fusionauth.io/docs/v1/tech/5-minute-setup-guide

Worth noting: it is free as in beer--if you need an open source solution, it's not a fit.

Edit: typo


A drop-in component so your product can do what exactly?

To be an OIDC Relying Party [1]?

To be an OIDC Provider [1]?

To be an OAuth2 client [2]? Keep in mind, being an OAuth2 client is probably not useful by itself -- you'll have to program your app to deal with the resource server's specific APIs to accomplish anything, and request the appropriate scopes for the situation, based on what you're doing.

[1] https://openid.net/specs/openid-connect-core-1_0.html#Termin... [2] https://tools.ietf.org/html/rfc6749#section-1.1


The company I work at spent 2 years implementing OAuth. In that time the application was rewritten 3-4 times, with 3 developers working on it at different times. And guess what? It still doesn't work, and serves a tiny portion of traffic with tons of issues.


I work at WorkOS (we handle Enterprise SSO among other things, https://workos.com). We interface many... should I say "less elegant"... enterprise identity protocols with an OAuth api. There is support for common identity providers such as Azure, AD FS, Okta, VMware, to name a few. If you'd like access to our beta send me an email (max@workos.com) and mention hn.


Were you implementing a client in all of these OAuth integrations?

I thought the point of OAuth was to make it easy to build clients and push the complexity to the authorization servers :) ?


I have implemented OAuth servers and also OAuth clients because our integrations go both ways: some parts of our systems rely on 3rd-party services and other companies rely on us as well. The takeaway from my comment is that even if the software specification is clear (I gave RFC 6749 as an example), some engineers will disregard the instructions and make something completely unexpected. The majority of developers expect good-quality software from big companies with massive engineering teams, but people who have the opportunity (or should I say misfortune?) to work with them realize software engineering quality is often an illusion.


Whatever the intentions may have been, it is definitely not the case that OAuth pushes complexity to authorization servers. My audit checklist for OAuth clients is fairly long.


As someone in the early stages implementing Oauth2 for the first time, I would be interested in seeing that list, if you don't mind sharing.


Agreed, I'd love to read that blog post.


Or Book/booklet, I'd buy that as I'm sure many would.

Indeed, when it comes to best practices, there are some people I respect more than industry standards, since their best practices usually become tomorrow's standards.


I think in addition to a spec, an official test suite that's used to certify an implementation as compliant would also greatly help.


This is why Nylas.com exists now.


This is all good and useful, but I wish OAuth folks would fix the bugs in the spec like, e.g.

"The authorization server MAY issue a new refresh token, in which case the client MUST discard the old refresh token and replace it with the new refresh token. The authorization server MAY revoke the old refresh token after issuing a new refresh token to the client."

This sounds reasonable on the surface, and technically it might be correct (depending on your interpretation of the words). But if someone naively follows this instruction when implementing the server (issue a new refresh token and revoke the old one within the same refresh request), then whenever the client fails to receive the new refresh token due to a transient failure, it effectively loses its authorization: the old refresh token is revoked and the new one never arrived. Thus the protocol becomes "leaky": you lose users because of transient errors.

i.e. "after issuing a new refresh token to the client" needs to be "after the new refresh token has been used at least once", or otherwise invent some other way to ensure the new refresh token was correctly received by the client before revoking the old one.
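One way a server could do this: treat the first use of the new refresh token as the client's acknowledgement of receipt, and only then revoke its predecessor. A minimal in-memory sketch (all names invented; a real store would need persistence, expiry, and per-user scoping):

```python
import secrets

class RefreshTokenStore:
    def __init__(self):
        # token -> {"pred": predecessor token or None, "succ": successor or None}
        self.live = {}

    def issue(self, pred=None):
        token = secrets.token_urlsafe(32)
        self.live[token] = {"pred": pred, "succ": None}
        return token

    def refresh(self, token):
        rec = self.live.get(token)
        if rec is None:
            raise ValueError("invalid or revoked refresh token")
        if rec["succ"] in self.live:
            # Client retried after losing the response: hand back the same successor.
            return rec["succ"]
        # First successful use of this token proves the client received it,
        # so its predecessor (if any) can now be safely revoked.
        if rec["pred"] is not None:
            self.live.pop(rec["pred"], None)
        rec["succ"] = self.issue(pred=token)
        return rec["succ"]
```

Here a lost response is harmless: retrying the refresh with the old token returns the same new token instead of locking the client out.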


"… or otherwise invent some other way to ensure the new refresh token is correctly received at the client before revoking the old."

These sorts of details will typically wind up in an implementation report document. It's rare that an IETF protocol specification document will attempt to pre-determine the solution to all such implementation details up front, and IMO, not good when they do.

In some contexts, it may be much better to require the application to re-obtain resource owner authorisation than to allow a replay of a refresh token. In others, it may be much better to avoid requiring resource owner authorisation. What you see as a bug, others would see as a critical feature.


There are many parts to the spec that make up the full picture. This part specifies that the server MAY issue a new refresh token (clarifying that it isn't required), and that if a new refresh token is issued, the client MUST discard the old one.

I believe the intention here is not to verify whether it was used at least once, as you suggest. Transient failures are a separate concern (how the client detects a lost response would be out of scope here); for example, if the client didn't receive the new token (or lost it), it would likely need to restart the auth cycle.


I could not find what you quoted in the article. What are you referencing?


To this day I have no idea how one should handle client secrets in (F)OSS applications, and I'm often wondering if I'm the only one. I guess what this text says in 4.13 is "just don't put it somewhere for anyone to see", which means that the only solution is to make the id and secret configurable and let the user register the application with its OAuth provider, which of course most users wouldn't know how to do. As a result, applications like Thunderbird simply put the secret in the code with a comment above it "please don't copy these values for your own application". I really don't mean to shame Mozilla, what else are they supposed to do?


It's not just an open source problem. It's fairly trivial to scrape the keys out of a lot of closed source applications. When Twitter first mandated OAuth, I was able to grep a key right out of the apk of their native Android client.

OAuth with client secrets is fundamentally unsuitable for desktop and mobile applications, but that hasn't stopped virtually every major implementor from using it that way. The people who wrote the spec were too busy congratulating each other and downplaying concerns to seriously engage with this issue until it was much too late to do anything about it.

It remains a deeply broken standard, and the people who were responsible for it at an early stage are entirely to blame. Several of them have removed their names from the spec, but I'll not forget the role they played in shouting down critics who raised concerns about this very issue.


Much of the deployment of OAuth2 was a shitshow, because early on, there was a huge debate about OAuth1 vs OAuth2, and the concepts and terminology used in the OAuth2 spec exceeded the mental load of people who were simply looking for a quickstart to some API. Docs by various providers either adopted the OAuth2 terminology wholesale without sufficiently explaining the new concepts and the best practices, or they simplified the client registration to just a few fields and offered no guidance as to how to make use of them in your app.

The spec has been clear from the start that there are clients that can protect their secrets, and ones that can't [1]. Here's a choice quote: "A native application is a public client installed and executed on the device used by the resource owner. Protocol data and credentials are accessible to the resource owner. It is assumed that any client authentication credentials included in the application can be extracted." Unfortunately, much of the rest of the spec is a bit like a jigsaw puzzle: reading it in one sitting doesn't actually build your understanding, and you have to constantly cross-reference various sections to construct a complete picture of the process the spec is trying to define.

It took a long time for people's familiarity with the spec to grow, and for providers and clients to improve in quality and adherence to the ideas and text of the spec.

[1] https://tools.ietf.org/html/rfc6749#section-2.1


If it's a FOSS clientside application, like a mobile or desktop app, it's a public application and the client secret doesn't really apply. If it's a serverside application, a web app, then every instance of the app will be registered independently with auth servers, and will manage their own client secrets.


From my understanding the best practice for applications that run on clients' machines (Javascript SPA, mobile app, email client) is to not use a client secret at all. Just use a client id and assume that anyone can reuse it.

Of course that is if the OAuth provider allows it.


Yup, a SPA can't really keep a secret, right?

That's why the best option is to set up a secure middleware layer to hold your secrets. Here's an example using react: https://fusionauth.io/blog/2020/03/10/securely-implement-oau...

(Full disclosure, this is a post on the blog of the company for which I work.)


People build SPAs with the OAuth2 code flow (and PKCE) all the time.


Sorry, I misspoke. Those SPAs always have middleware to handle the authorization code, right?

It's really the implicit grant (where access tokens are available to the client) that should be avoided.


No, with PKCE you can do auth code from an SPA


With the implicit grant? I read the RFC recently and thought it was only for the Authorization Code grant.

From https://tools.ietf.org/html/rfc7636

"OAuth 2.0 public clients utilizing the Authorization Code Grant are susceptible to the authorization code interception attack. This specification describes the attack as well as a technique to mitigate against the threat through the use of Proof Key for Code Exchange (PKCE, pronounced "pixy")."

At the end of the day, no matter what the grant, if you store the access tokens (or refresh tokens) in the browser running an SPA, it is vulnerable to an XSS attack, right?


This is solvable by use of a web worker as a key vault:

https://gitlab.com/jimdigriz/oauth2-worker


That is very cool, thanks for sharing!

Edit: This seems pretty new, any timeline for a release?


Thanks. I am not wholly convinced of the benefits of releases for a project of this scale, I suspect most will just reimplement the moving parts in their own code for a variety of (usually NIH) reasons.


With PKCE you can use the authorization_code grant flow. The whole issue with SPAs and the authorization_code grant flow isn't the presence or absence of middleware; rather, it's the lack of a confidential client. PKCE gets around the requirement for a client to securely store its secret.
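The PKCE piece itself is tiny. A sketch of generating the verifier/challenge pair per RFC 7636 (the function name is invented; the SPA sends the challenge with the authorization request and the verifier with the token request):

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    # code_verifier: high-entropy string of unreserved characters (RFC 7636
    # sec. 4.1); 32 random bytes base64url-encode to 43 characters.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # code_challenge = BASE64URL(SHA256(verifier)), the "S256" method (sec. 4.2).
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```

The authorization server recomputes the challenge from the verifier at token time, so intercepting the authorization code alone isn't enough to redeem it.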


Right, but even after the authorization code flow, the access token is stored somewhere on the client.

My understanding is that that is suboptimal because the browser has such a large surface area to secure.


> Of course that is if the OAuth provider allows it.

That's the catch. Last time I checked, at least Google didn't. So how can you write a (F)OSS email application for accessing GMail via OAuth2, without putting the secrets into the code (which Google also forbids)?


In this situation, their guidance says to embed the secret, because in this context it's obviously not a secret. Here's the current page [1]; here's the earliest Archive.org snapshot of its one-earlier predecessor page from 2015 [2] -- the advice has been consistent.

[1] https://developers.google.com/identity/protocols/oauth2/nati... [2] https://web.archive.org/web/20150520223809/https://developer...


No, Google is not consistent in the slightest, because their terms of service directly contradict this statement:

"Developer credentials (such as passwords, keys, and client IDs) are intended to be used by you and identify your API Client. You will keep your credentials confidential and make reasonable efforts to prevent and discourage other API Clients from using your credentials. Developer credentials may not be embedded in open source projects."

From: https://developers.google.com/terms


The conversation I had with a Google engineer implied the appropriate course of action was to embed the secret.

https://github.com/openid/AppAuth-JS/issues/46


The Google engineer should read Google's terms of service which explicitly states that you may not do that (see my reply above).


Google Mail's OAuth2 instructions discuss this point explicitly.


Have a link?


"The process results in a client ID and, in some cases, a client secret, which you embed in the source code of your application. (In this context, the client secret is obviously not treated as a secret.)"[0]

[0]: https://developers.google.com/identity/protocols/oauth2


You would not embed the client secret in any application unless you are comfortable with the public seeing and using that secret (most likely not the case for applications you distribute publicly).


Usually you stand up a service that holds the key, and the client application asks that service to make the request on its behalf.

If the user doesn't want to use your service, they configure their own key.


Do you have any examples of this?


Isn't that what PKCE is for?


It is probably worth pointing out on this thread that there is a brand new draft (not yet adopted by the working group) attempting to combine the extensions and best practices of OAuth (including these security practices) into a new "2.1" version.

https://tools.ietf.org/html/draft-parecki-oauth-v2-1-02


I need to read this draft RFC a few times before I can grasp it completely. There is an existing RFC drafted in 2013 with a focus on the OAuth 2.0 threat model and security considerations [1], and it looks like this new RFC makes more specific recommendations on top of it. Maybe read them together.

To be honest, I was wishing for OAuth 2.5 if not OAuth 3.0 to consolidate already fragmented OAuth 2.0 spec and landscape [2]. At this stage, there are too many draft proposals and a majority of them led by vendors with some interest in standardizing their implementations.

For example, this RFC suggests restricting an issued access token to one resource at a time (using the audience parameter). In a microservices landscape this gets really challenging: your client application may be interacting with multiple resources, and neither OpenID Connect nor OAuth 2.0 offers a solution for issuing multiple access tokens (not yet). An API gateway may be a solution, but there's still a lot of ambiguity.

I think the OpenID community has done a better job of organizing their specifications and working groups [3]. Their specifications page tabulates really well which specs are final, currently under implementation, draft, or obsolete. Still, big vendors influence the agenda and direction.

(Disclaimer: I am the founder of https://axioms.io/, which is an OAuth 2/OpenID Connect compliant identity management platform)

[1]: https://tools.ietf.org/html/rfc6819

[2]: RFC 6749, RFC 6750, RFC 6819, RFC 7662, RFC 7009, RFC 7519, RFC 8414, RFC 7591, RFC 7592, and 20 more.

[3]: https://openid.net/developers/specs/


As posted by someone downthread [1], there's an ongoing effort to update the OAuth 2.0 spec with changes since, lessons learned, and best practices [2], in what's currently being called OAuth 2.1. Some more rationale and resources on oauth.net [3].

[1] https://news.ycombinator.com/item?id=23083245 [2] https://tools.ietf.org/html/draft-parecki-oauth-v2-1-02 [3] https://oauth.net/2.1/


Thanks for the pointer. I'm not sure OAuth 2.1 has been adopted by a working group yet. Nonetheless, it's a good consolidated read compared to working through 20 different RFCs.


You need an API gateway; you don't want your clients tightly coupled to the individual services.


> Clients MUST NOT pass access tokens in a URI query parameter

I've been wondering about this lately. It makes sense from a security perspective, but the primary alternative is setting a header like Authorization: Bearer, which makes your request subject to browser cross-origin restrictions and all the fun that comes with CORS. In my case, the extra round trip required for the CORS preflight is simply a non-starter for my application.

It would be nice if there was an escape hatch for developers who understand the risks. More background in this SO question I asked a couple days ago:

https://stackoverflow.com/q/61563348/943814
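For reference, the reason an Authorization header forces a preflight is the Fetch spec's "simple request" rules; a rough, simplified sketch of that check:

```python
# Simplified sketch of the CORS "simple request" test from the Fetch
# spec: a cross-origin request that fails it triggers an OPTIONS
# preflight round trip before the real request.
SAFE_METHODS = {"GET", "HEAD", "POST"}
SAFELISTED_HEADERS = {"accept", "accept-language", "content-language", "content-type"}
SAFE_CONTENT_TYPES = {"application/x-www-form-urlencoded", "multipart/form-data", "text/plain"}

def needs_preflight(method: str, headers: dict) -> bool:
    if method not in SAFE_METHODS:
        return True
    for name, value in headers.items():
        if name.lower() not in SAFELISTED_HEADERS:
            return True  # "Authorization: Bearer ..." lands here
        if name.lower() == "content-type" and value.split(";")[0].strip() not in SAFE_CONTENT_TYPES:
            return True
    return False

# A bare GET is "simple"; the same GET with a bearer token is not.
needs_preflight("GET", {})                                  # False
needs_preflight("GET", {"Authorization": "Bearer abc123"})  # True
```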


Put it in a cookie (Secure, SameSite) or in the request body.


Can you give more details on how you'd use a cookie here? SameSite=Strict/Lax can't be used cross-domain, and SameSite=None is no good for mutating requests due to CSRF.

Request body works, but it forces you to split your API if you want to be cache-friendly. Ideally you want public data to be GETable and thus cacheable. In the system I'm building, any given path can change back and forth between being public and requiring authorization, so if I use request bodies (i.e. POSTs), I'd need to detect whether the data is public before making the request and choose between GET and POST at request time. That might not actually be that bad; I'll have to think about it more.
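The GET-vs-POST selection I'm describing could be sketched roughly like this (the is_public lookup is hypothetical, something the client would resolve from app metadata):

```python
# Rough sketch: pick a cacheable GET for public paths, and a POST
# carrying the token in the body (not a query parameter) otherwise.
# is_public() is a hypothetical per-path lookup.
def build_request(path: str, token: str, is_public) -> dict:
    if is_public(path):
        return {"method": "GET", "url": path, "body": None}
    return {
        "method": "POST",
        "url": path,
        "body": {"access_token": token},  # token stays out of the URL
    }

req = build_request("/private/data", "tok123", lambda p: p.startswith("/public/"))
```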


Worth bearing in mind:

> This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

> ...

> It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."


As a case in point: https://tools.ietf.org/html/draft-wkumari-not-a-draft-05

I think publishing an IETF RFC on security best practices is hard, but I hope these authors manage to pull through.


If there were one perfect way to do third-party auth, we wouldn't need a standard, because all the big companies would already be doing it, and the smaller companies would follow.

The problem is, there's a bunch of decisions with auth, which have to be made, but aren't really important. What should the name of the auth token header be? What should be the format of the responses? These decisions are like deciding what side of the road to drive on: it doesn't matter which side of the road you drive on, as long as everyone drives on the same side of the road. The function of a standard is to decide what side of the road everyone drives on.

OAuth isn't a standard. It's just a description of all the various homegrown third-party auth systems that anyone has implemented over HTTP, with only the absolute worst patterns weeded out. None of the stakeholders wanted to re-implement their third-party auth, so they just made sure their flavor of auth made it into the standard. It's like if the Europeans and the Americans got together to standardize the side of the road that everyone drives on, and the standard they came up with is "You have two options for which side of the road to drive on, the right or the left."

Until a group of visionaries with enough clout to make the world fall in line creates an actual standard, OAuth is going to continue to be crap. And there's a strong disincentive to do that: if you agree to conform to a standard that isn't exactly what you've already implemented, then you have to reimplement. A few years of pain conforming everyone to a standard would drive humanity significantly forward: an incomprehensible number of developer hours have been spent writing custom OAuth integrations, and an actual standard would let us write libraries around it that everyone could use. But corporations don't care about pushing humanity forward when it's contrary to their bottom line.


It's not structurally all that great; it opens with a long section of recommendations backed by citations to later "attack model" sections in the text, which themselves include disjoint lists of recommendations.


I've been working with OAuth 2 for 7 years and I still don't really understand all the dark corners of the spec. Good to see the Resource Owner grant type get called out with a "do not use", although I don't think using it in a private, server-side context is off the cards...
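For anyone unfamiliar, the Resource Owner Password Credentials grant being deprecated looks roughly like this; the client sees the user's raw password, which is the core of why it's discouraged (all names here are hypothetical):

```python
from urllib.parse import urlencode

# Sketch of the (now "do not use") Resource Owner Password
# Credentials grant. Note the client handles the user's raw
# password, which is the main reason the grant is deprecated.
def ropc_token_request(username: str, password: str) -> str:
    return urlencode({
        "grant_type": "password",
        "username": username,               # raw user credential
        "password": password,               # raw user credential
        "client_id": "trusted-server-app",  # hypothetical client id
    })

body = ropc_token_request("alice", "hunter2")
```

In a private, fully trusted server-side context that trade-off can arguably still be acceptable, which is the parent's point.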


Is it weird that I like reading such specs? Many people find them dense, but I find them rewarding to read. Which CS job is best suited to spending time on this kind of reading?


Sysadmin/devops


Since we're talking oauth/openid: does anybody know if there's a way to get the OpenID spec in a format that is nicely formatted for printing?

(ideally, just like rfc 6749...)


The road to hell is paved with best practices.



