OpenID Connect is effectively an evolution of OAuth.
OpenID Connect (OIDC) is mostly concerned with authentication. On the other hand, OAuth (or, to be more specific, OAuth v2.0) is concerned with authorization.
>> OpenID Connect is effectively an evolution of OAuth.
In my opinion, OpenID Connect is actually an evolution of OpenID – in its vision/spirit:
- OIDC, like OpenID, primarily focuses on users' identity and authentication;
- OIDC, unlike OpenID, didn't (re)invent its own, significantly different authentication workflows. Instead, it built its authentication workflows on top of the existing OAuth spec (which was already being (ab)used for authentication in some places – and, unfortunately, still is) to achieve its main objective (i.e. authentication).
---
Edit: rephrased to better communicate my thoughts (still not perfect; but, as the saying goes, perfect is the enemy of the good, so I'll stop here).
Didn't OpenID predate OAuth? What should OpenID have built upon?
Yes, you're right about the "OpenID predates OAuth" part.
However, from my point of view, the main source of confusion here is that the word OpenID is used in more than one sense:
- First, OpenID as the name of the original OpenID authentication protocol, developed around 2005, which communicates the idea of a decentralized online digital identity where one way a user can assert their online digital identity is via a URL under their control.
- Second, OpenID as part of the compound noun "OpenID Connect" (which, as per Wikipedia, is the "third generation of OpenID technology", published in 2014[1]), which implements user identity and authentication via workflows built on top of the OAuth2 spec.
Now, in my earlier comment, i.e. "OIDC, unlike OpenID, ... built on top of existing OAuth spec ... to achieve its main objective ...", I was using OIDC ("OpenID" in the second sense) in comparison to the original OpenID authentication protocol ("OpenID" in the first sense).
I hope it helps.
---
As an aside, looking at all the comments about "OpenId" and "OpenID Connect" as nouns, I'm reminded of the following post: Two Hard Things[2]
---
[1] - https://en.wikipedia.org/wiki/Openid#OpenID_Connect_(OIDC)
That definition doesn't work for OpenID Connect. Is OpenID a noun any more? I don't think it is.
It is, to date, the only non-selfhosted service with which I can use my self-hosted SSO setup.
Gotta get that sweet sweet SSO Tax revenue, and justify it by blaming setup and integration expense for SAML?
That's a common refrain and it's quite inaccurate in that it's using a rather unorthodox definition of these terms. It's like the classic "The United States is not a Democracy, it is a Republic", where the speaker reinterprets Democracy as "Direct Democracy" and Republic as "Representative Democracy"[1].
Same goes for "authorization" and "authentication" in OAuth and OIDC. In the normal sense, authentication deals with establishing the user's identity, while authorization determines what resources the user can access.
OpenID Connect does indeed deal with authentication: it's a federated authentication protocol. The old OpenID also tried to introduce the concept of a globally unique identity in addition to authenticating that identity, as the GP mentioned. But OpenID Connect still supports federation: an application (the consumer) can accept users logging in through a completely unrelated service (the identity provider). I believe this was originally specified mostly with third-party authentication in mind (using Google or Apple to log in to another company's service, or conversely using your corporate SSO to log in to a SaaS web app). But with microservices being so popular nowadays, even services that don't support external login often use OIDC as the authentication protocol between their authentication microservice and other services.
OAuth, on the other hand, started as a method for constrained access delegation. It allowed web services to issue a constrained access token that is authorized for doing a certain set of operations (defined by the "scope" parameter and often explicitly approved by the user). Getting constrained access through OAuth requires performing authentication, so you can say OAuth is also an authentication standard in a sense. But since OAuth was designed for pure delegation, it does not provide any user identity information along with its access token, and there was no standard way to use it for federated authentication. OpenID Connect essentially takes OAuth and adds identity information on top. To make things more complicated, there have been a lot of OAuth specifications published as RFCs[2] over the last decade, and many of them deal with client authentication and explicitly mention OpenID Connect (since arguably most OAuth implementations are also OIDC implementations now).
In short, OpenID Connect is quite accurately described as an authentication standard. But OAuth 2.0 has little to do with authorization. It allows clients to specify the "scope" parameter, but does not determine how scopes are parsed, when a user or client is permitted to request or grant a certain scope, or what kind of access control model (RBAC, ABAC, PBAC, etc.) is used. That's OK, since it leaves implementers with a lot of flexibility, but it clearly means OAuth 2.0 is not an authorization standard. It only concerns itself with requesting authorization in unstructured form[3].
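To make the contrast concrete, here's a rough sketch (all endpoints and values are illustrative, not any particular provider's) of what OIDC layers on top of a plain OAuth2 token response: the extra id_token, a signed JWT carrying identity claims:

```python
# Rough sketch (illustrative values only): the same token endpoint response,
# first as plain OAuth2 delegation, then with the OIDC identity layer on top.
import json

plain_oauth2_response = {
    "access_token": "opaque-or-jwt-access-token",  # grants constrained access
    "token_type": "Bearer",
    "expires_in": 3600,
    "scope": "calendar.read calendar.write",       # what the client may do
}

oidc_response = {
    **plain_oauth2_response,
    # OIDC adds a signed JWT whose claims identify the authenticated user.
    "id_token": "eyJhbGciOiJSUzI1NiJ9.eyJ...",      # header.payload.signature
}

# Decoded (illustrative) id_token payload - this is the identity information:
id_token_claims = {
    "iss": "https://idp.example.com",  # who authenticated the user
    "sub": "248289761001",             # stable identifier for the user
    "aud": "my-client-id",             # which client the token is meant for
    "iat": 1700000000,                 # when the authentication happened
    "nonce": "n-0S6_WzA2Mj",           # ties the token to the original request
}
print(json.dumps(id_token_claims, indent=2))
```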
Better examples of proper authorization standards are declarative authorization specification DSLs like XACML, Alfa, or Rego (the language used in Open Policy Agent). I guess you could also put RADIUS in as an example of a protocol that implements both authentication and authorization (although the authorization is mostly related to network resources).
---
[1] To be fair, the meanings of "Democracy" and "Republic" have changed over time, and back in the 18th century, when the US was founded, it was popular to view the Athenian Democracy as the model case of a pure direct democracy and to use the term "Democracy" in this sense. Over time, the meanings changed and we got a weird aphorism that remains quite baffling to us non-Americans.
[3] RFC 9396 is a very recently published optional extension to OAuth that supports structured authorization requests, and defines a standard method for resource servers to query the requested and granted authorization data.
Basically both are concerned with different parts of authentication, initial vs. ongoing (though 2-legged OAuth2 is also an initial authentication step).
The line between authentication and authorization can get quite fine, especially if the authorization policy is as simple as "this set of services can talk to me" if you use mTLS and a fixed set of trusted services in a trust-store.
This misses the mark - scopes are abstractions for capabilities granted to the authorized bearer (client) of the issued access token. These capabilities are granted by the resource owner, let's say, a human principal, in the case of the authorization_code grant flow, in the form of a prompt for consent. The defined capabilities/scopes are specifically ambiguous as to how they would/should align with finer-grained runtime authorization checks (RBAC, etc), since it's entirely out of the purview of the standard and would infringe on underlying product decisions that may have been established decades prior. Moreover, scopes are overloaded in the OAuth2.0/OIDC ecosystem: some trigger certain authorization server behaviours (refresh token, OIDC, etc), whereas others are concerned with the protected resource.
It's worth noting that the ambiguity around scopes and fine-grained runtime access permissions is an industry unto itself :)
RFC 9396 is interesting, but naive, for a couple of reasons: 1) it assumes that this information should be placed on the front channel; 2) it does not scale in JWT-based token systems without introducing heavier back-channel state.
I personally do not view OIDC as an authentication standard - at least not a very good one - since all it can prove is that the principal was valid within a few milliseconds of the iat on that id_token. The recipient cannot and should not take receipt of this token as true proof of authentication, especially when we consider that the authorization server delegates authentication to a separate system. The true gap that OIDC fills is the omission of principal identification from the original OAuth2.0 specification. Prior to OIDC, many authorization servers would issue principal information as part of their response to a token introspection endpoint.
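For reference, the pre-OIDC pattern I mean looks roughly like this sketch of an RFC 7662 token introspection call (the endpoint and credentials are made up); the authorization server reports token validity and principal information in one response:

```python
# Sketch of an RFC 7662 token introspection call; endpoint and credentials
# are made up. Requires the 'requests' package.
import requests

def introspect(access_token: str) -> dict:
    resp = requests.post(
        "https://auth.example.com/oauth2/introspect",
        data={"token": access_token},
        auth=("resource-server-id", "resource-server-secret"),  # client auth
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()

info = introspect("eyJhbGciOi...")
# Typical response mixes validity with principal information, e.g.:
# {"active": true, "sub": "248289761001", "username": "jdoe",
#  "scope": "calendar.read", "exp": 1700003600}
if not info.get("active"):
    raise PermissionError("token is expired or revoked")
print(info.get("sub"), info.get("scope"))
```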
"Authorization", in the context of OAuth 2.0, means whether a third-party application is "authorized" to take actions on-behalf-of a resource-owner on some resource server. And, if the answer is yes, what is the "scope" of this "authorization".
From the OAuth 2.0 RFC's abstract[1]:
The OAuth 2.0 authorization framework enables a third-party
application to obtain limited access to an HTTP service, either on
behalf of a resource owner by orchestrating an approval interaction
between the resource owner and the HTTP service, or by allowing the
third-party application to obtain access on its own behalf. ...
It's very clear that OAuth2 is all about third-party applications and their access to a "resource owner's" resources. As far as users and their access to their own resources are concerned, they are the "resource owner" and have all the "power" to do whatever they like (with their own resources, of course). For example, in the early days of Facebook, FarmVille needed user permission in order to post on users' Facebook walls and/or message users' friends when something interesting happened in FarmVille while they were playing. And this is just one funny example to get my point across; there are many use-cases where it's super useful if users can grant a third-party application permission so that it can do some useful work (whatever that happens to be) on their behalf.
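To sketch how that delegation gets requested in practice (all identifiers and scopes below are made up), the third-party application simply sends the user to the provider's authorization endpoint and names the access it wants via "scope":

```python
# Sketch of an OAuth2 authorization request (authorization code grant).
# The endpoint, client_id, redirect_uri and scopes are illustrative.
import secrets
from urllib.parse import urlencode

AUTHZ_ENDPOINT = "https://social.example.com/oauth2/authorize"

state = secrets.token_urlsafe(16)  # CSRF protection, checked on the redirect back

params = {
    "response_type": "code",
    "client_id": "farm-game-app",
    "redirect_uri": "https://farmgame.example.com/callback",
    # The requested delegation, shown to the resource owner on the consent screen:
    "scope": "wall.post friends.message",
    "state": state,
}

print(f"{AUTHZ_ENDPOINT}?{urlencode(params)}")
# The user approves (or refuses) this request; the app then exchanges the
# returned code for an access token limited to the granted scopes.
```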
> Better examples for proper authorization standards are declarative authorization specification DSLs like XACML ...
I'm very familiar with XACML and similar access-control-policy standards; in fact, I've recently built an ABAC-based access-control service using an XACML-like spec for one of our customer-facing business applications.
Yes, XACML and similar specs are good for some user access / "authorization" use-cases (based on business needs and threat model). Yet I'm not sure anyone would recommend them for third-party application authorization. Of course, it's not impossible and it can be done; however, I doubt anyone would recommend doing it when simpler solutions are available – unless there is a strong business case from the business-risk and security (threat-modelling) point of view.
---
[1] - The OAuth 2.0 Authorization Framework: https://datatracker.ietf.org/doc/html/rfc6749
So even systems where OIDC compliance is a non-goal are often partially OIDC compliant – there really is no reason to reinvent the wheel if part of the OIDC standard already provides all you need.
I don't think very many people know that OAuth 2.1 exists, though.
So while technically 2.1 is a draft, in practice not following it means not following today's best practices. You don't have to meet it fully, but if you care about security you really should treat it as a strict/strong recommendation. At least _long term_, not doing so could be seen as negligence for high(er)-security applications.
The most important changes are listed here https://oauth.net/2.1/
For people looking for an easy-to-follow interoperability/security profile for OAuth2 (assuming they're most interested in the authorization code flow, though it's not exclusive to that), FAPI2 is a good place to look; the most recent official version is here:
https://openid.net/specs/fapi-2_0-security-profile.html
This is a good bit shorter than the 50 or so pages of security BCP :)
FAPI2 should also be moving to 'final' status within the next few months.
(Disclaimer: I'm one of the authors of FAPI2, or will be once the updated version is published.)
Except for PAR, these extra requirements are harder to implement than their alternatives and I'm not sold that they increase security. For instance, DPoP with mandatory "jti" is not operationally different than having a stateful refresh token. You've got to store the nonce somewhere, after all. Having a stateful refresh token is simpler, and you remove the unnecessary reliance on asymmetric cryptography, and as a bonus save some headaches down the road if quantum computers which can break EC cryptography become a thing.
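To illustrate the point (sketch only; the key handling and URL are made up), every DPoP-protected request carries a freshly signed proof JWT whose "jti" the server has to remember somewhere if it wants to detect replay - which is exactly the kind of server-side state a stateful refresh token needs anyway:

```python
# Rough sketch of a DPoP proof JWT (RFC 9449); assumes PyJWT + cryptography.
# The token endpoint URL is made up.
import json, time, uuid

import jwt
from jwt.algorithms import ECAlgorithm
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())
public_jwk = json.loads(ECAlgorithm.to_jwk(private_key.public_key()))

def dpop_proof(method: str, url: str) -> str:
    return jwt.encode(
        {
            "htm": method,             # HTTP method the proof is bound to
            "htu": url,                # target URI
            "iat": int(time.time()),
            "jti": str(uuid.uuid4()),  # unique per request; the server must
                                       # remember seen values to reject replays
        },
        private_key,
        algorithm="ES256",
        headers={"typ": "dpop+jwt", "jwk": public_jwk},
    )

print(dpop_proof("POST", "https://auth.example.com/token"))
```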
In addition, the new requirements increase the reliance on JWT, which was always the weakest link for OIDC, by ditching client secrets (unless you're using mTLS, which nobody is going to use). Client secrets have their issues, but JWT is extremely hard to secure, and we've got so many CVEs for JWT libraries over the years that I've started treating it like I do SAML: Necessary evil, but I'd minimize contact with it as much as I can.
There are also some quirky requirements, like:
1. Mandating "OpenID Connect Core 1.0 incorporating errata set 1 [OIDC]" - "errata set 2" is the current version, but even if you update that, what happens if a new version of OIDC comes out? Are you forced to use the older version for compliance?
2. The TLS 1.2 ciphers are weird. DHE is pretty bad whichever way you're looking at it, but I get it that some browsers do not support it, but why would you block ECDSA and ChaChaPoly1305? This would likely result in less secure ciphersuites being selected when the machine is capable of more.
In short, the standard seems to be much better than FAPI 1.0, but I wouldn't say it's in a more complete state than OAuth 2.1.
Stateful refresh tokens have other practical issues; we've seen several cases in OpenBanking ecosystems where stateful refresh tokens resulted in loss of access for large numbers of users when things went wrong.
The quirks you mention are sorted in the next revision. The cipher requirements come from the IETF TLS BCP [1] (which is clearer in the new version). If you think the IETF TLS WG got it wrong, please do tell them.
As other people said elsewhere, this isn't about completeness - OAuth 2.1 is a framework, FAPI is something concrete you can for the large part just follow, and you can then use the FAPI conformance tests to confirm whether you implemented it correctly. If you design an authorization code flow following all the recommendations in OAuth 2.1, you'll end up implementing FAPI. Most people not in this space who implement OAuth will struggle to know how to avoid the traps once they stop following the recommendations, as "implementing OAuth securely" isn't usually their primary mission.
As in: use the OAuth 2.1 recommendations, including the OAuth Best Current Practices, to structure your clients inside the OIDC framework. This will mostly "just work" (if you are in control of the auth server), as it only removes from OIDC the possible setups you decide not to use.
Though I'm not sure if requiring (instead of just allowing) PKCE is strictly OIDC compliant, implementations should support it anyway ("should" as in it's recommended for them to do so, not as in they probably do so).
It's technically not compliant, but people definitely do so, and there are definite security advantages to requiring it.
Technically the 'nonce' in OpenID Connect provides the same protections, and hence the OAuth Security BCP says (in a lot more words) that you may omit PKCE when using OIDC. However in practice, based on a period in the trenches that I've mostly repressed now, the way the mechanisms were designed means clients are far more likely to use PKCE correctly than to use nonce correctly.
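For anyone following along, PKCE itself is tiny, which is probably part of why clients get it right more often. A sketch in plain Python:

```python
# PKCE (RFC 7636) in a nutshell: the client keeps a random verifier secret,
# sends only its S256 hash with the authorization request, and reveals the
# verifier only at the token endpoint - so a stolen authorization code alone
# is useless.
import base64, hashlib, secrets

def make_pkce_pair() -> tuple[str, str]:
    code_verifier = secrets.token_urlsafe(64)  # well within the 43-128 char limit
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    code_challenge = base64.urlsafe_b64encode(digest).decode("ascii").rstrip("=")
    return code_verifier, code_challenge

verifier, challenge = make_pkce_pair()
# Authorization request: &code_challenge=<challenge>&code_challenge_method=S256
# Token request:         &code_verifier=<verifier>
print(challenge)
```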
I've used countless ID providers and, while they offer a very valuable service, the flows are too complicated and many implementations have a lot of security flaws because the users don't understand the implications.
The implicit flow was trying to make it easier.
OpenID Connect is an extension to OAuth2 (RFC 6749) that adds an authentication layer (to identify a user) on top of the OAuth2 authorization framework (that grants permissions)
Earlier versions of OAuth (1.0 and 1.0a) and OpenID (1 and 2) are unrelated, incompatible protocols that share similar names but are largely irrelevant in 2024.
Google with care.
Have you ever seen OAuth used alone? I'm looking for examples of this and they seem to be few and far between.
Examples: Slack (e.g., notify you of events on your calendar, create a GMeets meeting), services like cal.com, whatsapp (store backups on your Google Drive).
Even server to server calls, ie daemons, service principals, what have you, still rely on a client identity.
I think the closest to true agentless access I've seen widely used are SAS for Azure Storage and of course deploy keys in GitHub, which we're building off ramps for. Agentless authz just is not a good idea
So it is not pure authorization, but both authentication and authorization.
Pure authorization would be like a car key, with no identity mixed in.
There are a few folks out there doing pure OAuth, but much of the time it is paired with OIDC. It's pretty darn common to want someone to be authenticated the same time they are authorized, especially in a first party context.
I wrote more on OIDC here: https://fusionauth.io/articles/identity-basics/what-is-oidc
“”” David Harris, author of the email client Pegasus Mail, has criticised OAuth 2.0 as "an absolute dog's breakfast", ””” https://en.m.wikipedia.org/wiki/OAuth
I keep on trying and failing to implement / understand OAuth 2 and honestly feel I need to go right back to square one to grok the “modern” auth/auth stack
I’d say it definitely helps to implement an authorisation server from scratch once, and you’ll realise it actually isn’t a complex protocol at all! Most of the confusion stems from the many different flows there were at the beginning, but most of that has been smoothed out by now.
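If it helps anyone taking that advice, the core of the client side really is just two HTTP exchanges; here's a rough sketch of the second one, redeeming the authorization code at the token endpoint (all URLs, identifiers and secrets are placeholders):

```python
# Sketch of redeeming an authorization code at the token endpoint.
# All URLs, identifiers, secrets and codes below are placeholders.
import requests

def exchange_code(code: str, code_verifier: str) -> dict:
    resp = requests.post(
        "https://auth.example.com/oauth2/token",
        data={
            "grant_type": "authorization_code",
            "code": code,                                    # from the redirect back
            "redirect_uri": "https://app.example.com/callback",
            "client_id": "my-client-id",
            "code_verifier": code_verifier,                  # PKCE proof
        },
        auth=("my-client-id", "my-client-secret"),           # confidential clients only
        timeout=5,
    )
    resp.raise_for_status()
    # access_token, expires_in, and possibly refresh_token / id_token
    return resp.json()

tokens = exchange_code("SplxlOBeZQQYbYS6WxSbIA", "previously-stored-code-verifier")
print(tokens.get("access_token"))
```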
I don't think I agree with every point he makes, but I think he had the right gist. OAuth 2.0 became too enterprise-oriented and prioritized enterprise-friendliness over security. Too many implementation options were made available (like the implicit grant and the password grant).
[1] https://gist.github.com/nckroy/dd2d4dfc86f7d13045ad715377b6a...
Was OAuth2 wrong to land exactly where it did on security back in 2012 or before? It seems really hard to say - it clearly didn't have great security, but it was easy to implement and where would we have ended up if it had better security but much poorer adoption?
Does the OAuth working group recognise those failures and has it worked hard to fix them over the years since? Yes, very much so.
Has OAuth2 being adopted in use cases that do require high levels of security? Yes, absolutely, OpenBanking and OpenHealth ecosystems around the world are built on top of OAuth2. In particular the FAPI profile of OAuth2 that gives a pretty much out-of-the-box recipe for how to easily comply with the OAuth security best current practices document, https://openid.net/specs/fapi-2_0-security-profile.html (Disclaimer: I'm one of the authors of FAPI2, or at least will be when the next revision is published.)
Is it still a struggle to try and get across to people in industries that need higher security (like banking and health) that they need to stop using client secrets, ban bearer access tokens, to mandate PKCE, and so on? Yes. Yes it is. I have stories.
But hindsight is 20/20, I guess. I think the point that has withstood the test of time best is that OAuth 2.0 was more of a "framework" than a real protocol. There are too many options and you can implement anything you want. Options like the implicit flow or password grant shouldn't have been in the standard in the first place, and the language regarding refresh tokens and access tokens should have been clearer.
Fast forward to 2024, I think we've started going back to cryptography again, but I don't think it's all good. The cryptographic standards that modern OAuth specs rely on are too complex, and that leads to a lot of opportunity for attacks. I'm yet to see a single cryptographer or security researcher who is satisfied with the JOSE/JWT set of standards. While you can use them securely, you can't expect any random developer (including most library writers) to be able to do so.
If you implement OIDC you almost certainly must provide a configurable mapping from source claim names to your internal representation of a user object.
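A rough sketch of what such a mapping can look like (the claim names and user fields are just examples, not from any particular IdP):

```python
# Sketch of a configurable claim-to-user-field mapping; names are illustrative.
from dataclasses import dataclass

@dataclass
class User:
    external_id: str
    email: str
    display_name: str

# Per-identity-provider configuration: source claim name -> internal field.
CLAIM_MAP = {
    "sub": "external_id",
    "email": "email",
    "preferred_username": "display_name",  # another IdP might map "name" here instead
}

def user_from_claims(claims: dict, claim_map: dict = CLAIM_MAP) -> User:
    fields = {internal: claims.get(source) for source, internal in claim_map.items()}
    return User(**fields)

print(user_from_claims({"sub": "42", "email": "a@example.com",
                        "preferred_username": "alice"}))
```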
Or OAuth2 had a specific use case but has since been wrestled to do anything auth related
(I'm at a loss to explain what benefit comes from being assigned an ISO standard versus putting a HTML document on the Internet.)
From the article:
"[ISO certification] should foster even broader adoption of OpenID Connect by enabling deployments in jurisdictions around the world that have legal requirements to use specifications from standards bodies recognized by international treaties, of which ISO is one."
(Unless we're now considering the IETF/W3C an international standards body? I can't find a good list of these anywhere.)
Looked on the OpenID mailing list site[0] and didn't find any discussion, so I can't offer any other insight. I suppose you could contact Mike[1] and ask why it is such a big deal?
> Unless we're now considering the IETF/W3C an international standards body?
Most of what I know about standards bodies I learned from Heather Flanagan, who is/was active in a lot of these and did a great presentation at Identiverse in 2022[2] about this very topic.
0: https://lists.openid.net/mailman/listinfo
>The World Wide Web Consortium (W3C) is the main international standards organization for the World Wide Web.
>The Internet Engineering Task Force (IETF) is a standards organization for the Internet and is responsible for the technical standards that make up the Internet protocol suite (TCP/IP).
I would also add IEEE to this list. I think it's pretty clear these groups are international standards organizations; I think it's pretty odd that OpenID Connect wasn't circulated as an RFC and that they went the ISO route instead.
This seems like a made up requirement.
* Internationally recognized standards organizations, such as ISO (literally International Organization for Standardization). Republic of Backwoods and Kingdom of Flyover can't really do much against the majority of the whole world agreeing on something.
* Bigger Gun and/or First Past The Post Adoption, mostly exercised by the US in recent decades. Examples include practically all IT standards, aviation standards, and so on.
If you manage to combine them, you're basically unstoppable at conquering the world.
Sometimes you see the same thing with organizations referring to web RFCs. It's likely because of a general culture of "don't try to invent new things if you already have a reference for it", although it doesn't really tend to make those documents readable.
Single source of truth. The internet has been plagued by numerous incompatible implementations of the same thing. There are numerous tests [0] showing incompatibilities in even a simple serialization format like JSON. How many times have you heard "Yeah, nice feature, but virtually nothing implements it"? A standard becomes whatever the majority of highly adopted implementations do instead of the formal specification. This is what you get for putting an HTML document on the internets. ISO standardization somewhat reduced this effect.
> but I don't think there's a meaningful "toll gate" in this case: the standards are already free and public
Major problem with ISO standards is that they cross-reference each other. It's rare NOT to find definition "X as defined in ISO 12345". Complex product may need to reference hundreds of ISO standards.
Somewhat tautologically, I agree with you: in reality, things are probably going to be implemented by referencing subtly incompatible tutorials on the internet while claiming ISO compatibility.
So to get a single source of truth we make, presumably, the same truth have more sources.
I think I know what you mean (sources as in standards organizations, not individual standards), but I also think that people arrive at this odd position because they aren't actually thinking about the practicality and the absurdity of making the world more complex and confusing.
See Adobe and PDF: PDF 1.7 was available gratis from Adobe and also (“technically identical to”) an ISO standard. At the time, people expressed concerns about ISO’s paywalls and Adobe reassured them there was an agreement to ensure that wouldn’t happen. Indeed it did not... until PDF 2.0 came along, developed at the ISO, and completely paywalled.
I seem to remember (but don’t quote me on that) that AVIF and JPEG XL standards were at one point downloadable free of charge. In any case, they aren’t today.
Even some RFCs are basically available as ISO standards and vice versa, e.g. for time/date formats you almost never need to buy ISO8601 and can just read RFC3339 (which is technically a 'profile' of ISO8601).
OpenID Connect does of course remain free to view/use, but now people in the above situation have an easier option available to them.
I also worked with mp4 in the past only to realize that the ISO was not enough as Apple had some changes in their stack.
While I understand this particular frustration, in my book it is a feature. The critique usually devolves into hypothetical scenarios, e.g. changes to DST. "I want to be able to specify 14:00 in four years in Absurdistan local time, whatever that is in relation to UTC, but cannot!" is a common critique of ISO 8601. However, if you go a little bit further with hypotheticals, Absurdistan might add some overseas territories, might join some alliance and change timezone/DST, etc.
When you think about the problem statement, the definition of "local time" itself may change; therefore it is impossible to specify "local time" in the future without *exhaustively* defining all possible changes.
So you either define a number of atomic ticks (TAI) in the future and resolve the local time at the time of use, or specify some static time and resolve it to local time at the time of use.
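In Python terms, the two options look roughly like this (the zone is purely illustrative); either way the local reading is only pinned down when you resolve it:

```python
# Two ways of saying "14:00, four years from now, local time" (zone illustrative).
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

tz_name = "Europe/Tallinn"

# Option 1: store an absolute instant (UTC/TAI-style) and convert to local
# wall-clock time only at the time of use; if the zone's rules change in the
# meantime, the displayed local time shifts.
instant = datetime(2028, 7, 1, 11, 0, tzinfo=timezone.utc)
print(instant.astimezone(ZoneInfo(tz_name)))

# Option 2: store the static wall-clock time plus the zone name and resolve it
# to an absolute instant only at the time of use, with whatever rules apply then.
wall_clock = ("2028-07-01 14:00", tz_name)
naive = datetime.strptime(wall_clock[0], "%Y-%m-%d %H:%M")
print(naive.replace(tzinfo=ZoneInfo(wall_clock[1])))
```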
I wonder if the new JS temporal API handles that.. they went pretty deep.
https://news.ycombinator.com/item?id=37339535
https://www.mooncratertycho.com/the-12-lunar-calendars-still...
See even RFC 3339 seems unnecessarily lax. You can use _ or lowercase t or space as the date and time separator? Why? Just pick one instead of complicating all the parsers.
The intersection of the two specs seems pretty good though.
[0] https://en.cppreference.com/w/cpp/links#C.2B.2B_standard_doc...
Details matter. So not so open after all?
Identity is an innate and inalienable property of an individual, not something that anyone else (another person, company/website, government, or whoever else) can "provide". They can merely attest to it by providing a credential, e.g. by issuing a passport.
At least Webauthn got this right.
I suppose I've used some identities in enough places that it would be hard to deny to certain entities that the identity was mine, but even in that case it's a small subset of entities which have seen the identity that could prove that it's me.
However, there are practical consequences. There are plenty of stories how people got their Google/Facebook/Apple/... accounts blocked; or domain name lapsed, for self-hosting folks - and thus lost "their" "identity" (quotes for a lack of better words).
One can back up their credentials and attestations - losing a password (or a keypair) is preventable, since it's all first-party. One cannot back up a third-party service.
Self-hosting is not a solution, though, unless we're talking about a system like yours where - if I understood you correctly - there's no third party. I used to love "classical" OpenID back in the mid-'00s (I hadn't really thought about it back then, and it was convenient) and self-hosted an OpenID provider. I stopped doing so when OpenID went out of fashion; and later I learned that one cannot own a domain name, merely lease it from an authority (despite whatever anyone says, that's just how it is in practice), so tying an identity to that is a bad idea. I-names/i-numbers are also dead. And that's how I started to think about it and eventually came to the understanding that identity provisioning is a fundamentally broken idea.
OIDC as a machine-to-machine protocol is fine, save for the associated language. But how it works in, what I suspect, the majority of real-world implementations is just conceptually wrong (and practically harmful).
Having backups is key. The only difference is legal ID's can be backed up without registering the backups with each service ahead of time. Perhaps this could be engineered into OIDC too: a backup field with hashes of other OIDC tokens or something.
I wanted to open an account on Tailscale without using my Github account the other day but couldn't.
I believe openid.net and Ubuntu One were providing this service a long time ago but they discontinued it.
Sadly, the amount of money you need to spend on security & support for such a service makes offering it (particularly for free) not viable for smaller entities; there are some big economies of scale that make these things viable, which work particularly well if you can also get big companies to pay for commercial offerings.
It's using Python to do the work, but it should be straightforward to implement it in anything you want. Most of the complex stuff is related to decoding and checking the JWT tokens.
I've been using exactly this hand-written client on a production project to authenticate with Keycloak for about a year and everything's working perfectly!
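To give a flavour, the JWT decoding and checking boils down to something like this sketch (using PyJWT's built-in JWKS client as one option; the realm URL, audience and token are placeholders):

```python
# Rough sketch of verifying a Keycloak-issued token with PyJWT's JWKS client.
# The realm URL, audience and token are placeholders.
import jwt

ISSUER = "https://keycloak.example.com/realms/myrealm"
jwks_client = jwt.PyJWKClient(f"{ISSUER}/protocol/openid-connect/certs")

def verify(token: str) -> dict:
    signing_key = jwks_client.get_signing_key_from_jwt(token)  # selected by 'kid'
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience="my-client",  # reject tokens minted for a different client
        issuer=ISSUER,         # reject tokens from another realm/server
    )

claims = verify("eyJhbGciOiJSUzI1NiIs...")
print(claims["sub"], claims.get("preferred_username"))
```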
PS: I know that there are way too many ads on my site. Unfortunately I haven't found the time to properly configure Google Ads and haven't found anything better :( Use an ad-blocker to read it.
PS. Be cautious with using subjective words like "simple". It can be really off-putting as a reader if you think something is difficult and the author claims it's simple.
I agree it can be a little intimidating for a novice in the space!
I'm still not convinced that OIDC is easy. Keycloak hides enormous complexity and that's not because the developers were bored. One example is the huge number of settings for various timeouts: SSO timeouts, client timeouts, various token timeouts.
One lesser-known hack is to search the friendly Estonian site [1] for a cheaper version of the standard - they often create their own versions of the standards which pretty much contain the exact same content as the original. Unfortunately, in this case, it seems they are only offering the actual standard at a similar price [2]. Sad dog face.
It could be worthwhile to monitor the website to see if they release their own version for a better price in the future. Usually, their prices are ~10% of the original price (one more data point that Estonia does cool stuff).
We deal with the rather shady standardization organizations quite a lot as we work in medical device compliance [3]. I've heard all the usual arguments: "But standardization costs money!", "These organizations are doing good work!", etc., etc. No. I completely disagree. If something's a standard, that in my opinion makes it similar to a law - people should be able to follow it, and that requires people to be able to access it freely. The EU Advocate General seems to agree [4]. And there are lots of standards which don't rely on shadily offering PDFs for money: ECMAScript and ANSI C come to mind, but the list goes on.
[1] https://evs.ee [2] https://www.evs.ee/en/search?OnlySuggestedProducts=false&que... [3] https://openregulatory.com/accessing-standards/ [4] https://openregulatory.com/maybe-eu-standards-are-becoming-f...
Congratulations to the achievement that is OIDC!
ISO standards are not publicly accessible. OIDC was always publicly accessible; now, I dunno. An ISO OIDC is and will be meaningless. ISO is a racket to keep people with no useful skills employed, and the organization should be fined for the taxpayer money it took and should be sanctioned by every nation in the world.
Hopefully the draft versions are freely available somewhere and closely resemble the final product^W specification.
Still, I agree with your main point: let's hope future OIDC ISO spec(s) is/are still freely available in some form.
> Publicly Available Specifications have a maximum life of six years, after which they can be transformed into an International Standard or withdrawn.
So is the plan to transform this into a standard after that period? Was a PAS application chosen because (it sounds like) it goes through the standards body quicker? So this gives an intermediate ISO seal of validation until this becomes a full International Standard?
And if you are especially unlucky, it might be so painful to use that you could say it doesn't work.
But for a lot of use cases it does work well.
In some contexts it might save you not just developer months, but years (e.g. certain use cases with certification/policy enforcement aspects).
But it can also be a story of running from one rough edge into another, where your project lead/manager starts doubling or tripling any time estimate for stories involving Keycloak.
E.g. the REST API of Keycloak (for programmatic management of it) is very usable but full of inconsistencies and terribly badly documented (I mean, there is an OpenAPI spec, but it very rarely gives you satisfying answers about the meaning of a listed parameter beyond a non-descriptive three-word description). (It's also improving with each version.)
Similarly, multi-tenancy can be a pain, depending on what exactly you need. UMA can be great or a catastrophe, again depending on your specific use cases. SSO user management can be just fine or very painful. There is a UI for customizing the auth flow, but it has a ton of subtle internal/implementation-detail constraints/mechanics that are not well documented and which you have to know to use it effectively, so unless you can copy-paste someone else's solution or only need trivial changes, this can be a huge trap (looking like an easy change but being anything but)...
The built-in mail templates work, but can get your mail delivery (silently) dropped (not sure why; maybe some scammers used Keycloak before).
The default user-facing UI works, but you will have to customize it, even if just for consistent branding, and it uses its own Java-specific rendering system (and consistent branding here isn't just a fancy-looks goal: one of the first things taught in scam-avoidance courses for non-technical people is that if it looks very different, it's probably a scam and you shouldn't log in).
I think Keycloak is somewhat of a no-brainer for large teams, but can be a dangerous trap for very small teams.
BUT there's also a lot of pain in using Keycloak, especially when you build on top of it. One of the great things about Keycloak is that it can be seen as an extensible platform. Nearly everything is customizable. However, it takes a lot of time and effort to learn the programming model, and especially the documentation is a PITA. It was even worse two years ago, but it's still a lot of fiddling around, and I usually find myself reading the Keycloak source (which is of very mixed quality, btw) as no documentation is available.
Multi tenancy is indeed also a hard topic. We heavily utilize multiple realms (I think we should have hit 100 already) for that, but newer developments in Keycloak now offer an additional model based around organizations [1].
One thing that bothers me personally is that the storage model of Keycloak does not offer zero-downtime migrations, and the work on refactoring it has been paused for now. Since we have a solution based on Keycloak, we deploy Keycloak regularly with every feature. However, the embedded Infinispan cache makes this hard at scale, as during redeployments Keycloak becomes unresponsive due to cache leader synchronization and eventually drops some caches, leading to user sessions being terminated. Fortunately, Keycloak finally has support for an external Infinispan cache [2] and moved to protocol buffers for serialization [3], which should make upgrades smoother and paves the road to zero-downtime deployments.
After all this, let me emphasize that for most of it the Keycloak team is aware and is working hard on moving forward. I started using Keycloak with version 11 and a lot has changed for the better.
> I think Keycloak is somewhat of a no brainer for large teams, but can be a dangerous traps for very small teams.
I would not agree with that. You don't necessarily need a large team for operating Keycloak, and smaller teams are probably not that big of an issue - at least not for operating Keycloak. What I feel is more of a problem is that many (at least of our) clients don't understand OAuth 2.0 and OIDC well, which puts a larger burden on support work. Also, there must be at least someone very knowledgeable about Keycloak to give guidance to the rest of the team, but in general I would not say it's dangerous. We even have a team operating their own Keycloak and using our Keycloak to perform OIDC federation (don't ask me why, though; I never understood it).
[1] https://www.keycloak.org/2024/06/announcement-keycloak-organ... [2] https://github.com/keycloak/keycloak/issues/28745 [3] https://www.keycloak.org/docs/latest/release_notes/index.htm...
This is why I said large team: you need people with knowledge, but on small teams you likely can neither hire them (if the team already exists) nor spare the resources to give someone a lot of time to get into it. And having just one specialist would be very risky, so you actually need 1.5-2 people. Things become worse if you also need plugins and are not a Java shop at all (as even if you don't write them, you should review 3rd-party plugins).
Though I guess that doesn't apply if you only do very straightforward things, like no multi-tenancy, no fancy anything; then a small team works well, too.
I mainly said it can be dangerous because small new startups are often on a very, very tight schedule, so the delays you can run into if you don't have a Keycloak expert can hurt them quite a bit.
KeyCloak is also very good, but I'd run it as a container due to the quick release/update cycle. If I had to do our infrastructure over, I'd probably go for KeyCloak, just because it's the most used.
Looks like it doesn't support multiple issuers: " CAS primarily supports a single issuer per deployment/host." Have you run into any issues with that?
It also looks like it supports a number of optional standards: DPoP, JARM, PAR. Have you seen use cases for these?
Yeah, we only have the one issuer, so it's not a concern. For pretty much every KeyCloak project I've done, we have also favored doing separate deployments for separate issuers, so I'd say it's not much of an issue, in most cases.
Regarding the optional standards, no. We've not run into any clients that would require, or in many cases even support, anything but the most basic OpenID Connect. I'm sure there's a point to supporting them, but I've never seen it being needed for your average use case.
closed source self hosted: adfs
hosted: okta (auth0) google microsoft github amazon
these are just the ones that were viable 2 years ago
When I looked at it a few years ago[0] it seemed like a modernization of OAuth (which still uses form posts(?!?)). But I'm worried about uptake, myself. Haven't had a single client request it or bring it up.
[0] https://mailarchive.ietf.org/arch/msg/txauth/uEXUsdk43TbPlls...
https://www.rfc-editor.org/rfc/rfc9635.html
The group's other major work, the GNAP RS specification, has passed IESG final review and is in the queue with the RFC editor. This means that it's finished except for some final text edits and will be a final RFC soon too:
https://www.rfc-editor.org/current_queue.php#draft-ietf-gnap...
The group closed because it didn't have any more work to do.
As for adoption, we've seen it implemented in a handful of spaces around the internet, most of them in places where there are core problems with the OAuth model or GNAP offers cleaner support for some key feature like ephemeral clients or key management.