> If you are writing a new application, use OpenID Connect–skate to where the puck is going!
I am glad to see this advice. At CoreOS we have now built API servers, command line tools, and web apps using OpenID Connect and are happy with its development in Open Source and third-party services as well.
> OAuth2 was left generic so that it could be applied to many authorization requirements,
When we started developing Dex[1] we lovingly referred to OpenID Connect as "OAuth 2.0 with types".
Overall, from integrating OpenID Connect into our products, enabling Kubernetes[2] to use OpenID Connect Providers, and building both an OpenID Connect provider and clients we are pretty happy with the choice we made.
My only complaint is the name of OpenID Connect is simply confusing.
When I did a short 6 month stint at the Australian version of the GDS (the 'Digital Transformation Office', or DTO), my team stood up a prototype/demo of an attributes-based 'digital identity' system using OIDC.
It's almost ideal for the government use case: private-sector IDCs can offer identity/attribute verification services if the government provides an assurance and audit framework. The potential number of digital commercial transactions this could enable is mind-boggling. It's also privacy-preserving due to its attributes-based approach to access control and its federated nature.
Also likely to result in a competitive market for IDCs instead of YAGM (yet another government monopoly) due to the low (technical) barrier to entry: I'm a terrible programmer and I spent less time coding a mock RP/client implementation straight from the spec than I did searching around for pre-canned libraries (which didn't seem to exist at the time).
It's a shame no one would listen to us. From what I hear, they're going with some kind of centralised RBAC system. Boy, it's going to be fun when some welfare payment or tax concession eligibility criterion is tweaked, forcing them to audit and update roles for 10-15 million people :)
> My only complaint is the name of OpenID Connect is simply confusing.
Yeah.. many people only remember the failed OpenID 1 & 2 specs... so, out of the box, there is some developer fatigue associated with Connect. However, federation standards have proven to be pretty sticky--SAML was written 10+ years ago. Hopefully, as Connect continues to be developed, marketed, and implemented by developers, it will overshadow previous unsuccessful versions of the protocol.
It's no more confusing than LDAP. First there was LDAP 1.0, then 2.0, and now we all use 3.0...
OpenID 2.0 is deprecated. Don't use it.
The current OpenID authentication protocol = OpenID Connect.
In the not too distant future, no one will even remember that something called OpenID 2.0 existed.
Maybe OpenID Connect should be OpenID 3.0? I think the idea was to show alignment with Facebook Connect's design...but more open. Remember, at the time Facebook Connect provided some good data--it was a massive user acceptance test.
IMHO, the blame lies with the OIDF for not delivering this message. OpenID Connect has been final for 4+ years. What's taking so long? Maybe it's not in the interest of the OIDF's controlling board members to promote this distributed identity infrastructure? MSFT wants you to use Azure AD. Google wants you to use their IDP. Maybe they don't want to promote it until they have product ready? Stay tuned... I think when it's in the interest of the current financial backers of the OIDF to promote OpenID Connect, no one will remember that old OpenID 2.0 thing.
There's also a flipside: some of us remember the original grassroots community spirit and attempted simplicity of early OpenID, and feel like OpenID Connect misses a lot of the old point of OpenID, and definitely a lot of the intended simplicity.
The problem with OIDC (from my point of view) is the lack of good libraries for Python/Django projects. We ended up building our own atop the now-defunct django-oauth2-provider. Unfortunately, no one (myself and my company included) has committed to adding the functionality to the more-popular django-oauth-toolkit.
This has led to our questioning the need for OIDC, given that we can generate JWTs as access tokens. If the identity info is embedded in the access token, there is no need for the ID token and (more importantly) micro-services no longer need to check in with the authentication provider to validate bearer tokens.
I do like the discovery aspects of the OIDC protocol that make it easy to distribute public keys.
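That discovery piece is small enough to sketch. This is just an illustration (the Google issuer URL is real but used only to show the URL shape; caching and error handling are omitted, and the helper names are mine):

```python
# Sketch of OIDC discovery: the provider publishes its configuration
# (including the jwks_uri used to distribute its public keys) at a
# well-known path directly under the issuer URL.
import json
from urllib.request import urlopen


def discovery_url(issuer: str) -> str:
    """Return the well-known configuration URL for an issuer."""
    return issuer.rstrip("/") + "/.well-known/openid-configuration"


def fetch_jwks_uri(issuer: str) -> str:
    """Fetch the discovery document and return the JWKS endpoint.

    (Network call; a real client should cache this and handle errors.)
    """
    with urlopen(discovery_url(issuer)) as resp:
        config = json.load(resp)
    return config["jwks_uri"]


print(discovery_url("https://accounts.google.com"))
# -> https://accounts.google.com/.well-known/openid-configuration
```

Once you have the JWKS, a JOSE library can verify token signatures locally without further round-trips to the provider.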
Mike replied to a comment about JWTs on the blog. Figured I'd also add it here for good measure.
Feel free to continue the discussion in the comments section of our blog.[1]
"
JWTs are great. That's why they are used extensively in the OpenID Connect spec. The question is: what's in that JWT? What are the claims? How do you know that the JWT you got back in the response relates to the request you sent (i.e., Connect defines the nonce)? How do you know that the access token has not been modified (i.e., Connect defines the at_hash)? How do you know the code has not been modified (i.e., Connect defines the c_hash)? How do you securely send the request? How does your client register and obtain tokens?
So yes, the puck is going towards JWT... but that's like saying the puck is going to JSON/REST... If you are using OAuth2 to authenticate a person, and you are using JWT, then you will have to define a lot of details to do so securely. Instead of making up your own recipe (which you probably don't have the time to do...), leverage the best practices defined by the experts at Google and Microsoft.
"
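To make the at_hash point concrete, here is a rough sketch of the check as defined in OpenID Connect Core §3.1.3.6 (the function names are mine, not from any particular library; SHA-256 is assumed because the ID token is signed with RS256):

```python
# at_hash = base64url encoding of the left-most half of the hash of the
# access token's ASCII octets, using the hash algorithm that matches the
# ID token's signature algorithm (SHA-256 for RS256).
import base64
import hashlib


def compute_at_hash(access_token: str) -> str:
    digest = hashlib.sha256(access_token.encode("ascii")).digest()
    half = digest[: len(digest) // 2]  # left-most 128 bits
    return base64.urlsafe_b64encode(half).rstrip(b"=").decode("ascii")


def access_token_unmodified(access_token: str, id_token_at_hash: str) -> bool:
    """Compare a freshly computed at_hash against the claim from the
    (already signature-verified) ID token."""
    return compute_at_hash(access_token) == id_token_at_hash
```

The point is that the client can detect a swapped or tampered access token without any extra round-trip, because the at_hash claim is covered by the ID token's signature.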
I totally agree with the lack of libraries!!! That's why we're working on oxd at Gluu: https://oxd.gluu.org
If you need a license, just email support@gluu.org
It's not free open source, but it will be very inexpensive. It reduces OpenID Connect to a simple three step process (with three corresponding method calls):
Yes, that is me. I have the expertise, having maintained a fork of django-oauth2-provider for over a year; however, the issue with django-oauth-toolkit seems to be the lack of responsiveness on the part of the maintainers. We at edX are willing to contribute these features, but we'd prefer not to maintain a long-term fork of the project.
Having recently been a part of a new SAML implementation: avoid it if you can.
The technology behind it feels solid (passing signed/encrypted messages around), and using an Identity Provider like Active Directory is an easy choice, but after that is where things get difficult.
The client support, at least in Python, is terrible and the resources to understand SAML are sparse. Everyone does something just a little bit different and there isn't much choice.
We have some Django applications that needed to be SAML-aware. While I was finally able to make it work after two weeks (yikes!!), most of my problems resulted from bugs within the client.
SAML requests can have more than one signing/encryption certificate defined, so when one certificate gets close to expiring you can add a second (or third...), let the first one expire, and well-behaved clients will automatically use the second one. Except when your client always grabs the first one, and when signature validation fails because the IdP signed the message with the other certificate, it doesn't try the second one =(
Then debugging errors is cumbersome: always pasting XML snippets and X.509 certificates into tools to view the actual messages, and Googling the error messages with little to nothing out there.
Perhaps it's just a bad experience with a specific Python module, but given the complexity of SAML and how little knowledge is out there, I'd stick with CAS if given the choice. It might help the technology if whoever is behind it maintained clients that adhere to the spec.
Great write-up. I'm dealing with my own employer and the constant confusion around OAuth vs. OIDC vs. JWT, and having to explain that they aren't "vs." at all! They are all stacked on OAuth itself, and don't really have much to say about the actual authentication of a user (that is just up to the identity provider).
True, though it's perhaps worth noting that in OpenID Connect an RP (website or app) can actually request a specific authentication mechanism using the acr_values parameter (assuming the OP supports multiple authentication mechanisms). This can give the application some input over how people are identified before authorizing access to protected resources.
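Roughly, the request side looks like this. Everything below (the endpoint URL, client_id, redirect URI, and the ACR URN) is a made-up placeholder; the shape of the acr_values request parameter is the only part taken from the spec:

```python
# Sketch of an OIDC authentication request asking the OP for a
# particular assurance level via the acr_values parameter.
from urllib.parse import urlencode

AUTHZ_ENDPOINT = "https://op.example.com/authorize"  # placeholder endpoint

params = {
    "response_type": "code",
    "client_id": "my-client-id",                       # placeholder
    "redirect_uri": "https://rp.example.com/callback", # placeholder
    "scope": "openid profile",
    "state": "af0ifjsldkj",
    # Ask the OP to authenticate with (e.g.) a multi-factor ACR, if supported.
    "acr_values": "urn:example:acr:mfa",               # placeholder URN
}

auth_url = AUTHZ_ENDPOINT + "?" + urlencode(params)
print(auth_url)
```

The OP then reports which ACR it actually satisfied in the ID token's acr claim, which the RP should check rather than assume.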
I think this is one of the great virtues of OAuth/OIDC. It makes you realise that, in most cases, you don't care about identity per se when making access control decisions. We just tend to use it as a proxy to infer attributes...
There's been some discussion regarding the merits of using JWTs (JSON Web Tokens) instead of Connect. Mike added some clarification in the form of a comment on the blog.
I also added it here [1]. Feel free to join the discussion on the blog.[2]
I built an enterprise site that needed SSO to work for a number of different enterprises.
I tried 3rd-party solutions (PingFederate, Auth0, Azure, etc.) and these were nightmares to configure and get working.
Turns out it was way easier just to read the SAML RFC and handle the tokens myself.
SAML 2.0 is finally old enough that most enterprises support it.
So for me Enterprise SSO is a solved problem. For others having a hard time finding 3rd party plugins / services etc. I highly recommend whipping out the RFC and DIYing a solution. Only took me a couple of days. Far less than what I spent on other solutions.
I can only speak to OAuth and SAML. I've done many integrations of each.
OAuth is a bit easier to understand.
SAML is tougher to grasp. I think it is a terminology and flexibility thing. It seems like each vendor or implementation is using their own terms for the same things. SAML has many super configurable levers, but in reality people tend to use only one or two common variants.
SAML is nice in that it does not require direct access between the backends of the client system (SP, or service provider) and the identity provider. All data flows through the web browser. As long as the web browser can access the identity provider (often on a private network) and the client system, it can work.
As far as different people implementing SAML in different ways, it is absolutely true. Getting two systems to work together takes real effort. I've had similar, albeit less fatiguing, issues integrating OAuth. One system requires exact redirect URLs, others allow partial matching; some require the redirect URL in each call, others only in some; and then there's the case where human intervention is required to deliver data that would normally be sent to the redirect URL (looking at you, AWeber!). At least in the OAuth world, people call a duck a duck and it is usually clear what is being referred to.
I'm by no means an expert in SAML (like with general relativity, I think there are only five people in the world that truly understand SAML), but have a good working practical grasp of the technology.
OIDC can act just like SAML. It can carry the attributes (claims, in OIDC terms) as well. You do not need a backend call at all if you use the implicit flow.
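As a sketch of that: the claims ride inside the ID token itself, so the RP can read them locally. Note that a real RP must verify the token's signature (e.g. with a JOSE library against the OP's JWKS) before trusting any claim; this illustration deliberately skips that step and uses a made-up toy token:

```python
# An ID token is three base64url segments separated by dots; the middle
# segment is the JSON claims set.
import base64
import json


def b64url_decode(segment: str) -> bytes:
    # Restore the padding that JWTs strip off.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))


def unverified_claims(id_token: str) -> dict:
    """Extract claims WITHOUT signature verification -- illustration only."""
    header, payload, signature = id_token.split(".")
    return json.loads(b64url_decode(payload))


# Build a toy token for illustration (empty signature, made-up claims).
claims = {"sub": "248289761001", "email": "jane@example.com",
          "iss": "https://op.example.com"}
toy = ".".join([
    base64.urlsafe_b64encode(json.dumps({"alg": "none"}).encode()).decode().rstrip("="),
    base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("="),
    "",
])
print(unverified_claims(toy)["email"])  # jane@example.com
```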
Interesting. I have used CAS extensively and was involved in building SAML support for a commercial app at one time. I didn't really understand the POST-binding thing at all--CAS mostly uses the back-channel method.
My experience with SAML was that, not only was it a bear to get set up, the second client who wanted to use it was using a different SAML implementation and found things weren't as standardized as we had assumed. Both implementations supplied a principal, only one supplied an email address by default. I think it got sorted out after I left.
I don't love CAS a lot, but it is a lot more turnkey than SAML.
We're investigating JWT now, but I have a feeling it will turn out to be an implementation detail within a larger solution that we haven't even begun to conceive.
I've also implemented shitty poor-man's federation on top of CAS. My recommendation there would be: don't do it. If you need federation CAS really can't help you, don't hack on it.
The other thing I've learned is that none of these things is going to help you with authentication at all, and that tends to be something that isn't well thought-out before implementation time comes.
CAS is definitely useful for internal authentication - you have a database of users and a pile of apps. It's really easy to hack CAS authentication into almost any web app, and you can write a custom server if you like.
It's significantly less useful for external authentication, where you want your apps to auth against other people's user databases. SAML 2.0 is usually the standard for that, but as you say, it has to be configured properly on both ends.
JWT = JSON Web Token. That's not an authentication protocol at all. It's about using authentication tokens and JSON, which everyone's already been doing for years.
SAML = Security & Authentication Mega cLusterfuck. It does get the job done, but it is the hell of a beast to understand and use.
I just don't think it's documented all that well. Aside from Wikipedia, where on earth do you go to learn how the dratted thing is meant to work? It seems to have multiple profiles, and I can't find a decent article about SAML artifacts anywhere...
Frankly, though... you're usually not supposed to be implementing this stuff yourself. Put Shibboleth's mod_shib_24.so in Apache in front of your login endpoint, and it behaves as a SAML service provider for you with consistent configuration and lots of help available for when it breaks. It provides you with HTTP headers for all the attributes provided about the user. If you're not using Apache for most of your app, you could just use it for a login script which shoves everything into a cookie.
Of course, if you're developing ASP.NET for Windows, you get your sysadmin to throw up an ADFS instance which deals with the federation, then use the built-in libraries to auth against ADFS, and it all works like magic.
You can get the SAML spec; it's a standard distributed by OASIS, last I checked.
It will help you a bit. Then you will only have two major issues.
1) The protocol is a really complex beast of giant XML messages, with many variants of workflows.
2) It's very modular, to allow interfacing with lots of stuff: custom authentication, custom attributes, custom systems. So only 50% of the protocol is clearly "standardized"; the other 50% is left as use-case-specific details with limited practical examples.
I'm a believer that one doesn't do or understand SAML. It's more like: you have two specific products that need to authenticate together for the business, you read the integration documentation for both, and you try to find whatever micro-subset of the protocol they both speak and make it interface.
Thanks for all the upvotes everyone! If you'd like to connect with the author of the blog, Mike Schwartz, feel free to do so on LinkedIn: https://www.linkedin.com/in/nynymike
The power of Okta lies in its thousands of pre-configured SSO relationships. It's important to remember that each one was set up manually with a unique client ID and secret, metadata file, etc.
Application integration is the most difficult part of most identity and access management projects. That is why outsourcing identity to a SaaS provider like Okta or OneLogin is so appealing--they have done all the hard work for you.
There are no open source projects like Okta because if you're using open source software, you will inevitably have to integrate and test each application you want configured for SSO.
Also, God help you when it's time to swap out your IdP's signing certificate... almost no SPs/RPs support automatic metadata retrieval and updating, so a mad rush to do everything on a Saturday is par for the course. Some, but not most, IdPs support granular signing certificate config.
The fact that they have a "catch-all" DNS resolver for an SSL-based website, serving as an interface for admins and end users alike, that is guaranteed to fail says a lot about their ability to sell a secure product.
And how else would you let your users have a subdomain login URL? Why bother generating specific certs and DNS entries for each when you can just use the Host header for the login and use a wildcard?
This is totally common and used in a lot of places..
This was designed for exactly that sort of scenario. If you're running a SaaS then this is totally common practice; it's used by at least AWS, Slack, and GitHub, off the top of my head. Are you saying these companies are doing something stupid?
How could you phish GitHub or AWS by using their DNS/cert setup as they intended it to be used?
That's more the nature of SSL certs than their inability to set them up correctly. *.example.com covers www.example.com, but not www.something.example.com. And as far as I know, you can't get a "super wildcard" cert that allows that.
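As far as I know you're right: per RFC 6125, the wildcard label matches exactly one DNS label. A toy matcher makes the rule obvious (real certificate validation belongs to your TLS stack; this is purely an illustration of the label rule):

```python
# Toy hostname matcher illustrating single-label wildcard behavior:
# "*.example.com" stands in for exactly one left-most label.
def wildcard_matches(pattern: str, hostname: str) -> bool:
    if not pattern.startswith("*."):
        return pattern.lower() == hostname.lower()
    suffix = pattern[2:].lower()
    labels = hostname.lower().split(".")
    # The wildcard covers the first label only; the rest must match exactly.
    return len(labels) >= 2 and ".".join(labels[1:]) == suffix


print(wildcard_matches("*.example.com", "www.example.com"))            # True
print(wildcard_matches("*.example.com", "www.something.example.com"))  # False
```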
If I'm wrong on the last point, please let me know, as I know a lot of people that will be very happy to find out!
The power also lies in its ability to be used as an external user store that can be called via APIs from your own app, as well as its OIDC/integrated MFA support.
Leaving those thousands of predefined integrations aside, do you know of any good combination of a user-management/access-management/federated-access system besides Gluu?
One thing to keep in mind is that Gluu is an IAM platform, not just an OpenID Connect Provider. It's a comprehensive suite that includes central authentication, authorization, FIDO authentication (and support for many two-factor technologies), mobile software, and client software. Starting in version 3.0, we will bundle a special version of OpenLDAP, provided by Symas (another great FOSS vendor), specifically optimized for the Gluu Server.
An IAM platform is many products integrated together, and operationally scalable to meet mission critical requirements. An OpenID Provider is just one element of an IAM platform!
You don't, actually, but you need to build it yourself instead of using their binaries (it's OSS).
Friends don't let friends OpenAM, though; don't bother with WSO2 either. I'd do everything possible to make this someone else's problem, and if you absolutely have to do it in house, then Ping's is the easiest I've worked with thus far.
Given any sort of control over such a project, I'd push as hard as possible to outsource that component, make it someone else's problem, and stay as far away as possible from the whole show.
Building an SSO, even with nice friendly software (hint: none of it is; it's really, really complex and easy to fuck up, and the stakes are pretty high), is a horrible, horrible experience, and if you've been through it you'll probably never want to do it again.
I've done it twice, perhaps I was just unlucky but it'll take a lot to convince me that the smarter plan isn't just to outsource that whole requirement instead of trying to build it / OSS it in house. It's pretty generic and can be a 'black box' on your arch diagram, your time is probably better spent building business specific things instead...
If I were absolutely forced to pick something to run on-prem for IdP/federation/entitlement, I'd get Ping Identity on site and have them build it for me (trust me, this will be cheaper than trying to do it yourself, even if their day rate makes your eyes bleed).
I'd quit before seeing the OpenAM mgt console again.
[1] https://github.com/coreos/dex [2] http://kubernetes.io/docs/admin/authentication/#openid-conne...