Somehow OAuth for native apps is never covered in “complete” guides, and yet it is among the most challenging (how do you verify an inherently untrustworthy client?) and quite frequently used ways of integrating OAuth.
There is an RFC for that though, and I know this will be controversial but my recommendation regarding OAuth in general is to spend a bit of time and just read all the specs first[0][1][2][3]. It informs you on the basics, and gets you a superpower of being able to spot non-compliant implementations from afar.
* figure out your risk posture
* secure your bearer token if you are using it
* if you need higher levels of assurance, then use DPoP https://datatracker.ietf.org/doc/html/draft-ietf-oauth-dpop-04
The other thing to consider is whether you are first party or third party. If you are really using third party auth, then you should follow RFC 8252 as linked above and pop out to the system browser. Otherwise you may use a webview (because if you control both ends, the tradeoff in improved UI may be worth the increased risk).
Definitely agree on reading the specs. If you are interested in OAuth in general, it's worth reviewing the OAuth 2.1 doc, which is a work in progress but will consolidate a lot of best practices: https://datatracker.ietf.org/doc/draft-ietf-oauth-v2-1/
What is meant by “an untrustworthy client”? I always assumed clients are inherently untrustworthy. I assume I’m misunderstanding your sense of “untrustworthy”.
The OAuth terminology can be confusing, and many write-ups assume you already know some of it, which doesn’t help.
In OAuth, an “app” is considered a “client”, even if it’s running in a web server in a data center, as it’s a “client” of the other services involved (e.g. the authentication services and the resources/APIs it will access on behalf of the user).
Now if it is running in a web server in a data center, it can be considered “confidential”, as the code/config is not readily available to attackers. Thus the app can hold things like private keys etc. without too much risk.
On the other hand, if the “client” is an app you can install on a PC/mobile device or load into your browser (e.g. a “SPA” app), then this is a “public client”, as attackers can easily access the code and examine it for stored credentials, etc.
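Since a public client like that cannot keep a baked-in secret, the standard mitigation (PKCE, RFC 7636) replaces the secret with a fresh per-request proof. A minimal sketch in Python (for illustration; real clients would use an OAuth library):

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    # RFC 7636: the client invents a fresh secret (the verifier) for each
    # authorization request and sends only its SHA-256 hash (the challenge)
    # up front, so nothing long-lived has to be stored in the app bundle.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    challenge = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode("ascii")).digest()
    ).rstrip(b"=").decode()
    # challenge goes in the /authorize request; verifier is revealed only
    # in the back-channel /token request, proving both came from this client.
    return verifier, challenge
```

An attacker who intercepts the authorization code cannot redeem it without the verifier, which never left the client.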
In server-to-server setups, the client is considered trustworthy—no third party is supposed to be able to get the source or inspect memory, so an OAuth client secret can be a thing.
But if you are, say, GitHub, how can you prove that the GitHub Desktop instance trying to auth on behalf of the user is genuinely GitHub Desktop and not some malicious impostor who previously extracted all the tokens from the original app and is now trying to get access to the user’s repos?
How do you know that the GitHub Desktop instance hasn't been compromised by eval()'ing some third-party code, and is using the tokens the client already has?
For that matter - what is the difference between a token in a piece of code on the desktop and code on a server to third-party attackers, other than the server is a much more attractive central location to attack so that you compromise everyone at once?
A lot of effort is spent on trying to identify a client as trustworthy and in preventing exfiltration of tokens, but generally a good portion of the security budget should instead be spent on how to mitigate risks.
A trusted system is one where you have accepted you are fubar if it is compromised. It is equally important to evaluate the trustworthiness of systems as it is to reduce the need to have trustworthy systems in the first place. That unfortunately means actually having security considerations as part of the overall system design, and not just firewalling systems with policy at the end.
You can reduce overall risk at authorization servers by not supporting dynamic registration, or limiting such registrations to a small set of acceptable scopes and claims. You also want to make sure you can revoke compromised tokens in a reasonable amount of time, such as due to device compromise, client compromise (or identified malicious client), or user account compromise. That usually can be done with just refresh tokens with appropriately short lifetimes for many systems.
But the underlying protected resources still need to be designed to reduce risk as well. They should not expose functionality which clients do not need, both by having limited scopes and simply not exposing functionality publicly via API at all. They should also minimize destructive actions - actions should be reversible, such as restoring a deleted record or cancelling a pending order.
Additionally, you'll want to invest in phishing-resistant authentication as much as possible. That means sticking to best practices and going to an external agent (browser) when authenticating, and ideally adopting something like WebAuthn or mutual TLS or Kerberos. Determining trustworthy clients or limiting API access does no good if the user hands a malicious piece of software their actual system credentials.
> what is the difference between a token in a piece of code on the desktop and code on a server to third-party attackers, other than the server is a much more attractive central location to attack so that you compromise everyone at once?
Unlike servers, end-user machines are supposed to execute third-party code not vetted by the developer, which makes for a pretty stark difference as to attack surface.
> A lot of effort is spent on trying to identify a client as trustworthy and in preventing exfiltration of tokens
Read the linked documents. The relevant RFC explicitly states that no token bundled with a native app can be considered “secret”, and that preventing its extraction is not a viable security measure.
Clients are either public or confidential. Confidential clients have a registered credential (cert, secret) that's used to prove that eg your server backend is actually your server backend.
Cool to see such a comprehensive guide! At the same time, are people seriously still pushing OAuth2 for first party auth? Shouldn’t everyone know by now that OAuth does not solve registration, account recovery, profile management, 2FA, and the like,
and is generally completely overblown for first party auth? I like to cite the abstract of the OAuth2 RFC:
> The OAuth 2.0 authorization framework enables a *third-party*
application to obtain limited access to an HTTP service
Yet somehow all of these proprietary software companies (FusionAuth, Auth0, …) cling to the only “open protocol” to push “portability”. But the problem of portability is really a problem of data (your users’ data and migrating that data from system A to B), not the way you exchange or validate tokens or cookies.
I agree that OAuth by itself doesn't solve registration, account recovery, profile management, 2fa, etc.
Most OAuth2 servers (whether the ones you mention or others like keycloak, identity server, azure ad b2c, ping identity, etc, etc) augment the standard with this functionality. Other than OIDC for authentication, I am not aware of any other high level standards for these problems (though https://datatracker.ietf.org/doc/html/rfc6238 is useful for 2fa).
I think that software companies push OAuth2 because it solves a real problem and is a standard. If there are other standards around identity, they'll also be used by these folks (SCIM, other OAuth grants). If there are holes in functionality, those will be filled by custom implementations (whether the software is OSS or not).
I don't see how those open source projects that you mention get us closer to data portability. Can you explain that?
FYI, both FusionAuth and Auth0 allow exports of password hashes, which is fundamental to letting the data owner migrate easily.
Integrating Kratos with anything is just so much work. The only nice thing differentiating Kratos from Keycloak is the ability to provide your own UI out of band. In Keycloak, one has to use the template language provided by Keycloak and embed all those templates in a JAR file, which can be an operational hassle. Kratos makes it nice because it decouples the registration/login/recovery/settings flows from the UI but wow - integrating it even with Hydra is a mess. I mean, look at a simple case of passing your subject info to the Hydra token. There's a dedicated admin API in Hydra for that.
“People” don’t want first-party authentication and accounts. With OAuth2 (or rather OpenID Connect) it’s ridiculously trivial to get started with third-party auth (not really third-party but “auth as a service”). This isn’t about portability at all.
Sure you’ll regret it later as new requirements regarding auth as requirements about the flow and page design and whatnot keep coming. However, you have to admit that it’s very tempting. Ready-to-use OIDC/OAuth2 adapters exist for just about anything. Decision-makers and customers can rarely grasp the consequences, but they certainly see the immediate cost-savings.
Hey keephand, I’m working on a project that tries to solve first party auth in a more intuitive way (and doesn’t use OAuth). I’d love to chat more. Don’t see a way to reach you in your bio; if you’re open to it, could you reach out to the email in my profile?
> Cool to see such a comprehensive guide! At the same time, are people seriously still pushing OAuth2 for first party auth? Shouldn’t everyone know by now that OAuth does not solve...
I assume you are using "Auth" for authentication, and not authorization (as OAuth does). Indeed, OAuth does zero for first or third party authentication of users - authentication is a set of extensions provided by OpenID Connect.
The reason people use OpenID Connect as the API for first party authentication is because you _need_ an API for integrating externalized authentication systems, and (excluding SAML) every other system is proprietary with minimal cross-vendor support.
However, that OpenID Connect API only provides for authentication with a few (possibly supported) hints for behavior. It is up to the OpenID Provider software to provide all the IAM features you mention - registration, account recovery, profile management, multi-factor authentication, etc.
There is no standard for registration, account recovery, profile management, or MFA because all of these are business processes or user experience customizations - they are software, not protocols. That is why OpenID Connect is used to integrate with a piece of off-the-shelf software which implements your business policy and which is customized for your desired experience.
These both have the same architecture as say an Auth0. Supertokens API provides a proprietary alternative to a constrained OpenID Connect protocol. Kratos has a companion project which exposes the server via OpenID Connect.
The supertokens API may have been easier for _them_ to implement than all of the potential OAuth and OpenID Connect extensions, but when going with constrained profiles like the OpenID Connect Basic Client Profile it would be difficult to be substantially simpler without being less secure.
It works really well for scaling your systems. In the case of "third party" this can mean disparate services within your platform, permitting you to fully isolate the authentication from literally everything else.
Sorry but that’s just wrong. The biggest auth systems in the world (Google, Amazon, Microsoft) do not rely on OAuth2 for internal services. Isolating auth into a central service and acquiring user consent as a third party, which very literally is the only purpose of OAuth2, are two very different things. Saying that you need a third-party delegation tool to “scale” an auth system is like saying that you need a good combustion engine to build an electric car.
Google and Microsoft do use OpenID Connect for internal services, as well as their consumer-facing services.
While they do not necessarily use consent for delegation, they do use automated policies to restrict API access to what the application needs.
Likewise, having a common protocol for getting authentication and API access allows them to decompose systems. Having to build first party authentication and API access systems would naturally encourage applications to become monolithic.
You are wrong. You do not need to acquire consent, either, when it is implicitly granted because the service is the same system as the IdP. I.e., consent is requested/granted when they sign up with your IdP.
JWT is not OAuth2. In fact, OAuth2 shouldn’t even use JWTs in the first place. For example, all big systems (Google, Amazon, Microsoft) use opaque tokens for OAuth2 (in third party contexts) due to revocation, PII, and other concerns. Complexity doesn’t scale if used in the wrong context. And needing to put a nail in the wall (pass-by-value tokens) justifies buying a hammer (JWT) but not a chainsaw (OAuth).
Heya. I own Microsoft OAuth2. We use JWTs for our access tokens (the consumer service uses encrypted tokens, yes).
Auth0 just championed the RFC for a standard for JWTs as access tokens too, largely informed by the architect there working on our access token format.
CAEP and RISC are how we're tackling revocation. Encrypted JWTs handle PII.
OAuth is a fantastic way to scale complexity - the whole "first party" consideration here is a red herring. Do you think we want Outlook calling the Exchange backend via a different auth protocol from the one we tell 3p clients to use? That's a waste of time to go build.
We also just added support for the JWTs emitted by Google and AWS, so that you can token exchange their JWTs for an access token in our ecosystem.
> OAuth2 shouldn’t even use JWTs in the first place.
OAuth2 considers its refresh tokens to be opaque to all parties except the AS which issued them, and its access tokens to be opaque to the client (only understood by the API resources where they are used).
Thats not to say they need to be self-contained - several OAuth systems will just make these both database indexes, and require resources to make an introspection call to validate and get information on access.
There are many clients which have found out that the server access token is a JWT and have extracted information from it. These are at a minimum breaking their compatibility contract, but also often doing something inherently insecure like using the access token as an authentication statement.
Note also that JWTs can be encrypted as well as signed, which would eliminate any PII leakage.
Revocation at a central location typically doesn't happen in large scale (geographically distributed or otherwise eventually consistent systems) unless it is essential for the business case - instead you just tune how long access tokens are good for, so that the client (and not the resource) needs to go back to the central location for a new policy statement in the form of a new access token.
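To make the opaque-handle approach described above concrete, here is a toy in-process sketch (assuming a single store; a real AS would back this with a database and expose it via an RFC 7662 introspection endpoint):

```python
import secrets
import time

class TokenStore:
    """Opaque access tokens as database handles: the token itself carries
    no claims, so resources must ask the issuer (introspection) and the
    issuer can revoke instantly by deleting the row."""

    def __init__(self) -> None:
        self._tokens: dict[str, dict] = {}   # opaque handle -> metadata

    def issue(self, subject: str, scope: str, lifetime: int = 600) -> str:
        handle = secrets.token_urlsafe(32)   # random; nothing to parse or forge
        self._tokens[handle] = {"sub": subject, "scope": scope,
                                "exp": time.time() + lifetime}
        return handle

    def introspect(self, handle: str) -> dict:
        meta = self._tokens.get(handle)
        if meta is None or meta["exp"] < time.time():
            return {"active": False}         # unknown, revoked, or expired
        return {"active": True, **meta}

    def revoke(self, handle: str) -> None:
        self._tokens.pop(handle, None)       # immediate central revocation
```

The trade-off is exactly the one described above: every resource check is a round trip to the issuer, which self-contained JWTs avoid at the cost of revocation latency.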
> Actually, many vendors use JWTs as the OAuth tokens, though they don't have to. And they are required by OIDC.
Clarification - only the id_token defined by OIDC needs to be a JWT. This is because it is an authentication statement to the client, so the client needs to be able to verify and understand the data.
Fair point. I didn't realize that the access token was referenced in the OIDC spec, but sure enough, there it is:
"After receiving and validating a valid and authorized Token Request from the Client, the Authorization Server returns a successful response that includes an ID Token and an Access Token"
Some gov't services our users need to integrate with have moved to M2M OAuth/OIDC, using organization certificates for authentication.
Suddenly our customers, most of whom have never heard of certificates before, have to figure out how to order the right kind of certificate, figure out which of the two or three certificates they get in the mail to install in our application, and how to extract the certificate from the password-protected, zipped PDF file in the separate mail. The password for the PDF was of course sent as an SMS...
Then they have to log in to the centralized authorization service and configure the integration access, where including the correct scopes is of course essential. Yet another term our users have never heard of and don't understand. Of course this is all very new, so the setup page has changed several times during the last year, so our guides keep getting outdated.
Secondly the whole token exchange thing is quite opaque. If there's something wrong with the certificate or the returned token, it can be quite difficult to figure out what's wrong from our end.
So far it's been quite the support nightmare. Heck even some of our support folks struggle with how all the pieces fit together and what to do (some have more domain than technical knowledge).
Just as an aside, anything using x509 certificates is a nightmare operationally. I sure don’t have an alternative, but it murders me every time. Now when I architect clouds I consider it a core service, not an afterthought - an absolute minimum set of services and some strong runbooks to generate certs, distribute, and renew/rotate them. All applications these days support them, but don’t support managing them well in my experience. There’s glue needed for each app to request a cert and install it.
I recently ran into problems with not being able to pass client certificates through to a server behind a load balancer/ingress (think Apache/Nginx) for the app on the other side to consume. The information online hints that it's either not possible at all due to how TLS is designed, or requires messing around with parsing/setting headers, and all of those alternatives feel hacky.
So I'd like to suggest that not only is x509 problematic, but in certain configurations mutual TLS is also a non-starter, though there are certainly other challenges in regards to certs themselves - for example, managing expiry dates properly, sometimes needing a CA with the proper chain of trust, other times needing to mess around with the formats for particular tech stacks (especially problematic in Java, in my experience, with its keystores and trust stores) etc.
That's one of the reasons why at work we recently moved one of our internal services over to JWT for server-server auth. Now, granted, I did ensure 100% coverage with tests for the code that I wrote and largely relied on already-established libraries for the low-level bits, so I mostly just wrote glue code, but setting everything up was a breeze. Now, of course, you do need to figure out which approach to JWT is better for your app, from symmetric signing with something like HS512 and a shared secret, to something like RS512, which once again deals with private/public keys but doesn't force its own network topology upon you.
In combination with something like overlay networks with additional encryption (in the case of containers or a service mesh), you can get a "good enough" level of security, though depending on the circumstances you might also need to look into how loosely/strictly your tokens are also validated and write additional logic for the subject/claim/issuer/audience fields, issuance date and expiry date checks and so on.
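For a sense of what the symmetric variant involves, here is a minimal HS256 sign/verify sketch using only the Python standard library. This is deliberately hand-rolled to show the mechanics; in practice the established libraries mentioned above are the right choice:

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_jwt(claims: dict, secret: bytes) -> str:
    # header.payload are base64url-encoded JSON; signature is HMAC-SHA256
    # over the joined signing input, keyed by the shared secret.
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_jwt(token: str, secret: bytes) -> dict:
    signing_input, _, sig = token.rpartition(".")
    expected = _b64url(
        hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    ).decode()
    if not hmac.compare_digest(sig, expected):   # constant-time comparison
        raise ValueError("bad signature")
    payload = signing_input.split(".")[1]
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims.get("exp", 0) < time.time():       # expiry is the only claim checked here
        raise ValueError("expired")
    return claims
```

Note this checks only signature and expiry; as the comment above says, real deployments also need issuer/audience/subject validation, which is exactly the sort of additional logic that's easy to get wrong without a library.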
When forced to deal with x509 certificates, however, I think the best option is to go for the simplest possible use cases (e.g. server cert for SSL/TLS, instead of mutual TLS) and to manage them at an ingress level, ideally with something like Let's Encrypt. Using them for more than that is just asking for additional challenges and/or trouble.
This is unfortunate but it is not a requirement of OAuth with certificates or with the OAuth MTLS RFC. It may just be lousy policy on part of those government services.
The only time you should need actual certificates (e.g. trust chain with attributes) is when you have multiple authorization servers run by independent organizations. In that case, a central party may do certificate issuance so that each AS can tell that the client has been centrally vetted.
In all other cases, any use of certificates is because they provide a pocket in which to store a public key. A self-signed certificate is just as good for authentication (proving you are the same party as before), because it is the crypto challenge that is important.
For actual certificate issuance, we are unfortunately in a gap between browsers killing off the <keygen> tag and easy adoptability of newer schemes like ACMEv2. Providing public keys via password-protected zipped PDF files has me thinking that their solution for this problem was possibly ill-conceived, and they are issuing private keys for import rather than using CSRs at all.
Note that for multiple AS systems, one common pattern is to use dynamic client registration from OAuth, which was extended to support software statements. In this way the certificate can be used to get local, non-certificate-based client credentials with a particular AS.
The grants being opaque at the API level is unfortunately a common function of authentication and security functions - they don't want to tell you the thing you did wrong, and that is sometimes used as justification for making the errors mostly pointless. A good AS should have some focus on providing more information within transaction logs. However, if the certificates are being shared over MTLS rather than through JWTs the amount of error information is unfortunately more commonly constrained due to infrastructure layering.
> Secondly the whole token exchange thing is quite opaque. If there's something wrong with the certificate or the returned token, it can be quite difficult to figure out what's wrong from our end.
When you say "token exchange" are you specifically referring to the token exchange type grant that was ratified in 2020 or do you mean calling the token endpoint to get the token as part of the PKCE workflow? If it is the former, I'm curious about your experience with this relatively new feature and why it's so difficult to use.
It doesn’t cover configuring scopes, on-behalf-of flows, getting tokens for different resources, etc. That’s half the stuff I’ve been struggling with lately (that and popup blockers when trying to avoid the redirect flow).
It’s a nice in-depth article for what it does cover though. This topic is so vast and confusing it’s no wonder security is often done poorly.
Although not directly related I’d like to see a modern guide to tokens and cookies. There are lots of choices and then on top of that the choice of storage location. I think OWASP recommend cookies rather than localStorage but it would be good to see the different risks each protocol or storage location comes with in an authorative guide with the pros and cons of each and which shouldn’t be used and why. I.e. there seems to be lots of footguns on top of the complexity of having to choose the appropriate OAuth flow.
It's hard to give overall general advice because the level of care you should take with a token depends on what trouble occurs if it is stolen. Just as someone building a banking app needs to take more care than someone building a recipe sharing app, the proper token storage for each of these cases is different.
> I think OWASP recommend cookies rather than localStorage but it would be good to see the different risks each protocol or storage location comes with in an authorative guide with the pros and cons of each and which shouldn’t be used and why. I.e. there seems to be lots of footguns on top of the complexity of having to choose the appropriate OAuth flow.
The mental model I have for web storage these days is of three buckets per site:
1. Server data like secure-http-only cookies and negotiated information like HSTS
2. Javascript-accessible persistent storage
3. Javascript-accessible session storage
Databases, Javascript storage API and cookies get partitioned into these buckets per site (e.g. eTLD+1). Some browsers may also extend these to support unique state for embedded sites - an advertising script can set a tracking cookie, but it will have a different cookie for each domain that the script is placed on. Likewise to prevent tracking, they may put the actual page caching into per-site storage (possibly a 4th bucket)
The browser may have its own expiration policies for these buckets in addition to any policy you set - Safari for instance will expire javascript-accessible persistent storage after a week of active use without interacting with a site, even if the cookies have an expires header farther in the future.
The browser exposes this data set as cookies, caches, local/session storage api, IndexedDB, and negotiated behavior like HSTS.
A guide to understanding all of these would be useful. But generally, the combined "most restrictive" browser behavior is that these are just different views on top of the same storage buckets.
The significant odd-one-out is a HttpOnly and Secure cookie. This may have a different expiration policy, and may be strongly desirable for say authentication systems which are trying to remember a user or device for an extended period of time, possibly for a risk tracking basis. These cookies also tend to be the ones which are synchronized across browser instances - so they are valuable for single sign-on and behavioral synchronization with installed PWAs as well as for native app use cases using the browser for authentication.
OWASP likely prefers cookies because they have policy options to be secure and http only - I as an attacker cannot downgrade to a fraudulent HTTP site to extract secure cookies, nor can I inject scripts that will exfiltrate HttpOnly session cookies.
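Those two policy options look like this on the wire; a sketch using Python's http.cookies to build the Set-Cookie header (the session value here is a placeholder opaque handle):

```python
from http.cookies import SimpleCookie

# A session cookie in "bucket 1" above: server-controlled, never
# readable by page JavaScript, never sent over plain HTTP.
cookie = SimpleCookie()
cookie["session"] = "opaque-session-handle"
cookie["session"]["secure"] = True        # blocks the HTTP-downgrade attack
cookie["session"]["httponly"] = True      # blocks script exfiltration (XSS)
cookie["session"]["samesite"] = "Lax"     # basic cross-site request mitigation
cookie["session"]["max-age"] = 3600

header = cookie.output()                  # -> "Set-Cookie: session=...; HttpOnly; ...; Secure"
```

Nothing stored via localStorage or the other script-accessible buckets gets an equivalent of HttpOnly, which is the core of the OWASP preference.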
Aren't popups not supposed to be blocked if it's in direct response to a user action, ie clicking a button? Or has that changed? I feel like Firefox has blocked popups I expected to work.
Sounds about right. The “direct” part being key. I think the framework I was using (Vue 2) was adding some “async” intermediary stuff the way I was using it between the DOM click event and the msal.js window.open call. Once it’s in a callback, it’s not “direct”.
I simplified some code, restarted everything (WebPack hot reloading has been quirky too) and it’s working now. Still took an hour or two of my evening in reading and debugging though, and I'm not exactly sure WHY it went from broken to fixed <shrug>
Pop-up blocking is usually configurable now, with the option to allow or deny all pop-ups independent of whether it is a direct response to user action.
Different browsers unfortunately have different definitions of "direct". Chrome, as you might expect, is most aggressive about carrying that user action forward. Others will discard that interaction in more cases, and may also simply lose that interaction on certain callbacks, such as in a fetch response or indexed db response.
Most implementations are focused on OAuth to establish identity, but the test of your application's security begins after authentication. The major portion is authorization, i.e., verifying access to operations and to the data accessed by those operations. Isolating clients' data from each other, delegating access to a portion of data to apps or users, having a hierarchy of descendant users with varying access rights to both operations and data, and similar real-world SaaS/enterprise system scenarios are something left to the application authors to figure out. You usually get guidance to slap roles on as authorization, and sometimes permissions, but the recommendations avoid venturing as far as including key data as part of the authorization construct, because that may seem too application-specific - but it is not. An activity-based, data-aware authorization system [0] can handle various complicated scenarios perfectly well.
> Most implementations are focused on OAuth to establish identity, but the test of your application's security begins after authentication. The major portion is authorization, i.e., verifying access to operations and to the data accessed by those operations.
Agreed. Authentication is quite a solved problem, the major focus for our portfolio of B2B companies is on authorization. They spend significant time on designing scopes which reflect and impact their business.
It is surprising that nearly every solution out there aims to handle both authentication and authorization. We choose to strictly separate these and delegate identity to established third parties or our customers[0].
We recently open sourced The Usher[1], our solution to part of the authorization problem. It is a minimalist server that issues access tokens based on roles and permissions looked up in a database (keyed by the identity token’s `sub` claim). Perhaps folks in this thread will find it useful!
> It is surprising that nearly every solution out there aims to handle both authentication and authorization. We choose to strictly separate these and delegate identity to established third parties or our customers[0].
It is a trade-off. Both the authentication and authorization process are typically customized to business requirements, and splitting them apart gives more flexibility in customizing them.
However, the expectation is often that these are both represented to the user as part of the same cohesive experience. Independent implementations of authentication and authorization can make this more challenging.
There is also the expectation that an authentication product provide services like registration and account management. Account management often includes controlling any granted delegated authorizations, which mandates additional coupling between the two systems.
I keep meaning to do a deep dive into it, but surely we now have enough hardware based modules (several billion smartphones with secure enclaves and the like, motherboards with secure enclaves and millions of Ubikeys) that we should push to go server and client certificate only?
I could imagine it's simple enough for a small company, even forcing customers to do it if the value proposition is high enough.
Not perfect, but has support from the major browser and OS vendors. I've heard some great numbers from vendors about how this increases account security and user satisfaction (no first hand experience).
I wish. Hardware backed credentials are more secure and easier to use.
The lowest common denominator is passwords. The second lowest common denominator is SMS. Even this low bar excludes some older people, customers on foreign travel, and privacy activists. Another notch up is TOTP. Requiring a TEE capable smartphone would mean losing customers or absorbing the cost of buying Yubikeys (and replacements). So this is typically only done for internal enterprise users.
Certificates are awesome on paper (I’m a big fan of WebAuthn for example) but we need some sort of custodianship for less technical users who don’t realize that formatting their laptop means they might not be able to log into their services again
Apple has a tech preview of something they call PassKeys - WebAuthn credentials which are in the same keychain as the system password manager, synchronized between all your devices.
This requires other changes to meet the needs of those who expect and require the behavior of credentials bound to a single device, hence being released in an early access form behind a developer toggle.
Yep. Microsoft lets users upload their drive encryption keys to their Microsoft account. Then the account is pegged to SMS, Microsoft Authenticator, or (best) a FIDO key. The problem is what happens when they lose the key. Falling back to SMS lowers security to social engineering your phone company.
One thing I've found very frustrating is the increasing use of Oauth2 for simple API use-cases (ie I have a service that needs to call an API to fetch some data, not on behalf of any user and there's no third-party involved, aka a 2-party system that in theory should be entirely server-to-server).
In theory, you should be able to use client_credentials / 2-legged flow to simplify this, but many APIs don't support that, so you end up with an awkward flow where you are 2 of the 3 parties.
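When an API does support it, the client_credentials grant really is just one POST to the token endpoint. A sketch of building that request with Python's standard library (the endpoint URL and credentials here are hypothetical placeholders):

```python
import base64
from urllib import parse, request

# Hypothetical values -- substitute your provider's token endpoint and
# the client id/secret it issued to your service.
TOKEN_URL = "https://auth.example.com/oauth2/token"

def client_credentials_request(client_id: str, client_secret: str,
                               scope: str) -> request.Request:
    # RFC 6749 section 4.4: the client authenticates as itself (no user,
    # no browser redirect) and asks for a token with the scopes it needs.
    body = parse.urlencode({
        "grant_type": "client_credentials",
        "scope": scope,
    }).encode()
    req = request.Request(TOKEN_URL, data=body, method="POST")
    # Client authentication via HTTP Basic, per RFC 6749 section 2.3.1.
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    req.add_header("Authorization", f"Basic {creds}")
    req.add_header("Content-Type", "application/x-www-form-urlencoded")
    return req  # send with request.urlopen(req); the JSON response carries access_token
```

Compare that with a 3-legged flow repurposed for machine-to-machine use, with its redirect URIs and authorization codes, and the awkwardness the comment above describes is obvious.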
I've been a little surprised at the adoption of "machine-to-machine authorization" and JWTs in server-to-server APIs. I can understand that it is more secure since the authorization token has a limited lifespan, and that JWTs could increase throughput by skipping the lookup. But it seems unnecessary.
Is the "Bearer " authorization prefix going to stick?
OAuth in M2M is generally about separating out policy from the API interface, including the database of credentials which are valid, which clients are allowed which authorizations. That includes being able to make changes to these over time, without redeploying the API system.
Note that OAuth and JWT are independent, like peanut butter and chocolate. I assume you are talking about JWT access tokens issued from OAuth client credentials.
> Is the "Bearer " authorization prefix going to stick?
The Bearer authorization method has a certain format and behavior. It is unlikely to be changed to meet some desire for aesthetics. Conversely, it MUST be changed if you are sending data in a different format or with different behavior, such as with the DPoP extension to OAuth.
BFF is not great for long term security, notably token binding. It's hard enough to bind an artifact to the browser session from the IDP - having each app implement binding passthrough from browser to app to IDP is even worse.
[0] https://datatracker.ietf.org/doc/html/rfc6749
[1] https://datatracker.ietf.org/doc/html/rfc6819
[2] https://datatracker.ietf.org/doc/html/rfc8252
[3] https://datatracker.ietf.org/doc/html/rfc8628