SSH certificates have been around for a while now. You can create an in-house SSH CA so that certificates are short-lived (compared to on-laptop keys) and you have to authenticate to get a fresh one.
To automate getting SSH certs there are a number of options, including the step-ca project, which can talk to OAuth/OIDC systems (Google, Okta, Microsoft Entra ID, Keycloak):
* https://smallstep.com/docs/step-ca/provisioners/#oauthoidc-s...
as well as cloud providers:
* https://smallstep.com/docs/step-ca/provisioners/#cloud-provi...
There are commercial offerings as well:
* https://www.google.com/search?q=centrally+managed+ssh+certif...
The advantage of opkssh is that there is only one trusted party, your IDP.
While not available in opkssh yet, OpenPubkey even has a way of removing the trust assumption in your IDP.
I wonder if step-ca would ever consider using opkssh or the OpenPubkey protocol.
Ideally, IDPs would be CAs for identity and ID Tokens would have a public key field.
There are neat projects and standards to do this, like OIDC-squared [0] and OIDC4VC [1], but it is unclear if IDPs will implement them if they are standardized. We do have DPoP now [2], but it isn't available for any of the use cases that are important to me. OpenPubkey is largely a productive expression of my frustration with public keys in tokens being a promised feature that never arrives.
[0]: OIDC-squared https://jonasprimbs.github.io/oidc-squared
[1]: OIDC4VC https://identity.foundation/jwt-vc-presentation-profile/
[2]: RFC 9449, OAuth 2.0 Demonstrating Proof of Possession (DPoP) https://datatracker.ietf.org/doc/html/rfc9449
> Unfortunately, while ID Tokens do include identity claims like name, organization, and email address, they do not include the user’s public key. This prevents them from being used to directly secure protocols like SSH
This seems like a dubious statement. SSH authentication does not need to be key-based.
I understand the practicality of their approach, but I would have preferred this to be a proper first-class authentication method instead of smuggling it through the publickey auth method. The SSH protocol is explicitly designed to support many different auth methods, so this does feel like a missed opportunity. I don't know OpenSSH internals, but could this have been implemented through GSSAPI? That's the traditional route for SSH SSO. If not GSSAPI, then something similar to it.
Let's say you just use an ID Token as a bearer token to authenticate to SSH. The SSH server now has the secret you used to authenticate with. Doesn't this introduce replay attacks where the SSH server can replay your ID Token to log into other SSH servers?
Whereas if your ID Token functions like a "certificate" issued by your IDP binding your identity to a public key, it is no longer a secret. You can just use your public key to prove you are you. No secrets leave your computer.
My motto: always use public key rather than a bearer secret if possible.
> I understand the practicality of their approach, but I would have preferred this to be proper first-class authentication method instead of smuggling it through publickey auth method
Me too. I have a PR open to SSH3 (not connected with OpenSSH) so it can support OpenPubkey as a built-in authentication mechanism.
Because that has two trusted parties: the IDP and the SSH CA. OPKSSH has just one trusted party: the IDP.
> This ensures that the user is both in possession of the physical device and that the credential can't be stolen without stealing the device, unlike the bearer token examples here. Currently we're offering support for GitHub and GitLab authentication but it works out of the box with standard ssh tooling as well. It just currently requires manually handling user provisioning for standard ssh access.
That sounds valuable.
Have you looked into OpenPubkey? The cosigner protocol supports binding hardware tokens to ID Tokens. It's not as fancy as having the SSH key pair live in the hardware token, but maybe we could figure out a way to get the best of both worlds.
I have looked into OpenPubKey briefly in the past but haven't spent a ton of time with it. We were going in a very different direction and it didn't seem particularly useful based on our goals or what we wanted to achieve.
edit: Looking at the documentation https://docs.bastionzero.com/openpubkey-ssh/openpubkey-ssh/i... it seems like to use OpenPubKey you also need a fairly modern version of OpenSSH. It also requires that the user authenticating have sudo access on the machine, which doesn't sound great. It's not clear to me whether it's possible for the existing authorized_keys file to co-exist, or whether removing it is just to stop access via existing keys. Standard SSH certs, by contrast, do co-exist, allowing a non-binary rollout if there are use cases that need to be worked around.
> It seems like to use OpenPubKey you also need a fairly modern version of OpenSSH.
On versions of OpenSSH older than 8.1 (2019), you may run into issues if you have a huge ID Token. That shouldn't be a problem for standard-sized ID Tokens, but some enterprise OIDC solutions put the phone book in an ID Token, and we have to care about that.
> It also requires that the user authenticating have sudo access on the machine, which doesn't sound great.
The user authenticating does not need sudo access. You only need sudo access to install it. You need sudo to install most software on servers.
> It's not clear to me whether it's possible for the existing authorized_keys file to co-exist or whether that's just to stop access using existing keys
opkssh works just fine in parallel to authorized_keys. We are using the AuthorizedKeysCommand config option in sshd_config, so opkssh functions like an additional authorized_keys file. My recommendation is that you use authorized_keys as a breakglass mechanism.
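For context, the sshd side of that wiring is just a couple of lines (a sketch from memory of the install docs; check the README for the exact flags):

```
# /etc/ssh/sshd_config (sketch): hand unknown keys to opkssh for validation,
# running the command as a dedicated low-privilege user.
AuthorizedKeysCommand /usr/local/bin/opkssh verify %u %k %t
AuthorizedKeysCommandUser opksshuser
```

The %u/%k/%t tokens expand to the target username, the offered public key, and its key type, which is the data opkssh needs to validate the smuggled ID Token.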
Secrets can be made unique per connection and single-use.
GSSAPI can be more secure than public/private key if configured right.
Kerberos can be very secure, much more so than CA-based or generally asymmetric-crypto-based approaches. Kerberos (if you ignore some extensions) uses symmetric cryptography, so it is less vulnerable to quantum computers. Use AES256 and you are fine; a quantum attacker can at most degrade this to a 128-bit level (according to current theories). Also, no weak RSA exponents, elliptic curve points at zero, variable-time implementations or other common pitfalls of asymmetric crypto.

The trusted third party ("KDC" in Kerberos) distributes keys ("tickets") for pairs of user and service-on-server, so mutual authentication is always assured, not like in ssh or https where you can just ignore that pesky certificate or hostkey error. Keys are short-lived (hours to weeks usually), but there are builtin mechanisms for autorenew if desired. Each side of a key can also be bound to a host identity, so stealing tickets (like cookies in HTTP) can be made harder (but not impossible).

The KDC can (theoretically, rarely implemented) also enforce authorisation by preventing users from obtaining tickets for services they are not authorized to use (the original idea of Kerberos was to use it just for authentication, with the authorisation step done by the service after authentication has been established).
Single-Sign-On with all the usual protocols is included automatically, you log in to your workstation and get a ticket-granting-ticket that can then be used to transparently get all the subsequent tickets for services you are using. The hardest one to implement is actually HTTP, because browsers suck and just implement the bare minimum.
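For those who have not lived it, the daily flow is something like this (realm and hostnames are made up, and it assumes OpenSSH built with GSSAPI support):

```
# Authenticate once in the morning; everything afterwards is ticket-based SSO.
kinit alice@EXAMPLE.ORG      # obtain the ticket-granting-ticket
klist                        # show cached tickets and their lifetimes

# SSH with no password or key prompt; the server must also permit GSSAPI.
ssh -o GSSAPIAuthentication=yes -o GSSAPIDelegateCredentials=yes shell.example.org
```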
However, the whole of Kerberos implementations is ancient, 1990s era software, packed with extensions upon extensions. You don't really want to expose a KDC to the open internet nowadays. The whole thing needs a redesign and rewrite in something safer than 1990s era C.
Oh, and there are mechanisms for federation like trust relationships between realms and KDCs, but nobody uses those beyond merging internal corporate networks.
Due to the incredible bloat of AD and the entire Windows/Azure ecosystem, it has an enormous attack surface (multiply the universe of the Windows ecosystem by the decades of old versions being supported for compatibility), and any vulnerability in the ecosystem (past and present) can lead to escalation and compromise of the Active Directory itself.
So is Kerberos secure? As a protocol it is fine, because it was developed at MIT by smart people.
Is the MSFT AD/Windows ecosystem secure? HELL NO, stay away.
The thing is most enterprises want "user disabled" to be instant.
Which of course leads to SSH keys all over the place anyway.
Though I'd still prefer to authenticate to something like Vault's SSH engine and get a very short lived SSH certificate instead. No new software to install on your servers, just the CA key.
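As a sketch of that flow (the `ssh-client-signer` mount and role name follow the Vault docs' conventions; adjust to your setup):

```
# Client: have Vault sign your existing public key into a short-lived cert.
vault write -field=signed_key ssh-client-signer/sign/my-role \
    public_key=@$HOME/.ssh/id_ed25519.pub > ~/.ssh/id_ed25519-cert.pub

# Server: the only change is trusting the CA, e.g. in /etc/ssh/sshd_config:
#   TrustedUserCAKeys /etc/ssh/vault-ssh-ca.pem
```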
- client connects to SSH server at IP X.X.X.X or hostname SomeHost
- client is redirected to an OAuth server
- client signs in and receives a token scoped to X.X.X.X or hostname SomeHost
- client provides the token to the SSH server
(Am also not really a fan of having to eventually use a browser for authenticating a terminal session, but that's another problem.)
That sounds awful, I hope this is not the direction we are heading towards.
It's not a common way to do it, but it's definitely a possibility.
[0] https://developers.cloudflare.com/cloudflare-one/connections...
I mean, the idea is nice. There's an alternative implementation already being used in some parts of the world, but with their own OIDC provider of choice.
Decentralization is the key here.
I can neither confirm nor deny the pun is intended.
Is it better than passwords? 100%. Is it perfect? It does not have to be for a lot of use cases.
See: https://github.com/EOSC-synergy/ssh-oidc
It's not hard to install, and works as advertised, plus it can talk with any OIDC provider of your choice, including yours.
If it gets breached, there will be significantly more problems than unauthorized ssh login.
(and this is the beauty of this compared to something like an SSH CA: there is only one party that you need to trust, and you can choose a party that's unlikely to be breached)
But that is rarely the case any more. People use their own devices (BYOD) that aren't integrated into AD at all, they're using them outside of the office which means there is no VPN available at boot time to deal with token issuance, and the modern "zero trust" crap that uses weird packet filtering black magic instead of proper tun/tap virtual ethernet devices often doesn't play too nice with archaic authentication tools.
On top of that, implementing support for Kerberos in a Dockerized world is just asking for pain.
Kerberos is old and clunky, but conceptually it got so much right. I'm so sick of the modern idea that I should wake up and babysit my machine through N different OAuth dances to log in to all the services I need on a daily basis. Once I authenticate once, I should be implicitly authenticated everywhere.
Doing this on the web requires really careful design because you can't trust a JavaScript client sent to you by the party whose scope you want to control. They could just send you a JavaScript client that approves a different scope. You still need to do something like the OAuth/OIDC origin-based isolation dance.
this meant that at most you had a short flash on screen for web apps... which is a bit like OIDC/SAML login on windows domains (but I did it with keycloak back then)
[1] https://docs.goauthentik.io/docs/users-sources/sources/proto...
This is for three reasons:
First, the SSH-CA+Hardware method does not require call-out to third-party code from SSHD, and thus minimises attack surface and attack vectors.
Second, the SSH-CA+Hardware method completely prevents key exfiltration or re-use attacks. Yes, I understand that the SSH keys issued by OPKSSH (or similar tools) are short-lived. But they are still sitting there in the .ssh directory on your local host, and hence open to exfiltration or re-use. Yes, it may be a short timeframe, but much damage can easily be done in a short timeframe: for example, exfiltrate the key, log in, install a backdoor, and continue your work via the backdoor.
Finally, the SSH-CA+Hardware method has fewer moving parts. You don't even need software tools like step-ca. You can do everything you need with the basic ssh-keygen command. Which means from a sysadmin perspective, perhaps especially a sysadmin "emergency break-glass" perspective, you do not need to rely on any third-party services as gatekeeper to your systems.
Depends how far up the chain you want to go (e.g. whether to use step-ca or not), but at the most primitive level you are looking at something along the following lines (based off my rough notes, I might have missed something).
Note that I have ignored any Yubikey setup considerations here like setting PIN, touch-requirement etc. etc.
I have also assumed plain Yubikey, not the YubiHSM. The YubiHSM comes with SSH certificate signing functionality "out of the box".
Client Yubikey:
- Use Yubikey ykman[1] to generate a PIV key according to your tastes
- Grab the key in ssh format with `ssh-keygen -D $path_to/libykcs11 -e > $client_key.pub`
Issuer Yubikey:
- Use Yubikey ykman[1] to generate a PIV key according to your tastes
- Grab the key in ssh format with `ssh-keygen -D $path_to/libykcs11 -e > $issuer_key.pub` (save this for the next step and also put it into your sshd CA config)
- Sign with the issuer Yubikey with `ssh-keygen -s $issuer_key.pub -D $path_to/libykcs11 -I $whatever_identity -n $principal_list -V +$validity_period $client_key.pub`
(libykcs11 is the Yubikey library; it ships with yubico-piv-tool[2])
[1] https://docs.yubico.com/software/yubikey/tools/ykman/PIV_Com...
[2] https://developers.yubico.com/yubico-piv-tool/Releases/
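To sanity-check the result, dump the signed certificate and confirm the identity, principals and validity window came out as intended:

```
ssh-keygen -L -f $client_key-cert.pub
```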
==== Edit to add links to various more verbose discussions on the subject (in no particular order):
- https://liw.fi/sshca/
- https://goteleport.com/blog/how-to-configure-ssh-certificate-based-authentication/
- https://jamesog.net/2023/03/03/yubikey-as-an-ssh-certificate-authority/
- https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/deployment_guide/sec-ssh_certificate_pkcs_11_token
Also some verbose discussions on the possibility of doing so with FIDO2. Although note that the native version of ssh on Apple OS X does not support FIDO2, so if you want native Apple support you are best sticking with the PIV method instead.
- https://developers.yubico.com/SSH/Securing_git_with_SSH_and_FIDO2.html
- https://medium.com/@harrishcluo/yubikey-ssh-git-super-secure-your-development-workflow-2-2-1899379fb882
- https://blog.millerti.me/2021/05/16/strengthen-github-ssh-access-with-fido2s-pin-support/
We keep all this stuff behind WireGuard, which is what I would recommend everybody do.
But Tailscale ssh has the identity stuff built in too.
It’s similar to SSH in that it streams stdio from the server to a thin client, but that’s where the similarities end. It has additional commands, like open a browser to a URL and set “cookies” on the client terminal from the server.
When all of those commands are put together, you get something like https://github.com/terminalwire/demodx/blob/main/app/termina... that can open the user’s browser to an SSO, authorize the CLI, then set a token/nonce/whatever as a “cookie” in the user’s terminal so they can authenticate and run commands against the SaaS.
My intention isn’t to replace SSH—it’s still the best protocol for a lot of things, but I have found it cumbersome to use it to build CLIs for SaaS, which is why I built Terminalwire.
What did you use to build that slick explainer video?
I've been looking for something to record demos of opkssh. What I have now isn't cutting it.
That agent (Python, single-file https://github.com/userify/shim) sticks with decentralized regular keys and only centralizes the control plane, which seems to be more reliable in case your auth server goes offline: you can still log in to your servers (obviously with no new users or updates to existing keys). It just automates user and sudo configuration using things like adduser and /etc/sudoers.d. (It also actively kills user sessions and removes the user account when they're deleted, which is great for when you're walking someone out, in case they have cron jobs or a long-running tmux session with a revenge script.)
This project looks powerful but has a lot of heavy dependencies, which seems like an increased surface area (like Userify's Active Directory integration, but at least that's optional).
You benefit from more reliable shipping delivery times, no more mysterious city-of-industry->ftmeade->sanfrancisco detours or hardware that fails prematurely due to uncleaned flux or whiskers from implant installations.
Walking this through, given that OpenID Connect is specifically mentioned vs. bare OAuth2, I assume the ID token signatures are themselves verified by looking up ${ISSUER_URI}/.well-known/openid-configuration and following the jwks_uri found there. Is the JWKS response cached? Can it be pre-seeded and/or replaced with an offline copy?
[1]: https://github.com/openpubkey/opkssh/blob/main/README.md#etc...
[2]: https://github.com/openpubkey/opkssh/blob/main/README.md#etc...
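To spell out the discovery chain I mean (using Google's issuer purely as an example):

```
ISSUER=https://accounts.google.com
JWKS_URI=$(curl -s "$ISSUER/.well-known/openid-configuration" | jq -r .jwks_uri)
curl -s "$JWKS_URI"   # the public keys used to verify ID Token signatures
```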
Thanks for asking this. I don't see any reason why you couldn't use the same IdP with two different Client-IDs. I haven't tested this, but if it doesn't work currently, I'd like to add it as a feature.
Your description of the protocol is spot on. OpenPubkey currently only works with ID Tokens.
>Is the JWKS response cached? Can it be pre-seeded and/or replaced with an offline copy?
Currently the JWKS response is not cached, but caching support is a feature we want to add.
What is the interest in a pre-seeded copy? Availability concerns?
As for pre-seeded/offline JWKS, yeah the biggest concern is around availability. The pre-seeded case would handle fresh VM setups before networking might be fully configured (though other auth methods as fallback would be good enough in most cases, I think). Completely offline JWKS would also be useful for machines with no outbound connectivity. Both use cases are again pretty niche though.
I've been thinking about this as a breakglass problem: how do you get into your server if your IDP is offline or you lose internet connectivity? My recommendation has been to keep a standard public-key SSH account as breakglass.
A pre-seeded JWKS or alternative JWKS would let you have the same policy controls but allow you to create valid keys in extreme circumstances. I really like this.
I created an issue to track this. Let me know if you want to do the implementation: https://github.com/openpubkey/opkssh/issues/44
The good news is you only have to login through the browser once in the morning and then you can use the generated ssh key all day long.
It seems that the goal is to minimize the number of applications into which users view entering their credentials as "normal". The obvious missing piece then is some standardized FOSS project to handle CLI login.
Of course that is also unlikely to go over well since what the centralized providers actually want (which is fundamentally incompatible with user freedoms) is attestation. Arbitrarily blessing a few browser implementations that are extremely difficult to compile on your own is just a roundabout method to approximate that (IMO anyway).
Edit: It seems like a mistake to me to conflate anti-phishing efforts and IdP. Anti-phishing should be via hardware tokens or TOTP or whatever. IdP should be about large corporations managing user accounts, or about individuals gaining convenience, including by serving as an adapter so that the latest popular standard can be used without each downstream service needing to adopt it.
The users disliked the redirects to and from the MFA provider just to log in and receive a signed SSH certificate, but there was no practical way to perform logins in the terminal without creating a whole new protocol and expanding the timeline of the project.
Doesn't it "just" require a CLI client that can speak both FIDO U2F and whatever the MFA provider uses? But yeah point taken.
Even if Google and Microsoft don't support it could a FOSS CLI client capable of speaking all the relevant protocols have resolved your issue? With OpenPubkey gaining support it seems to me that a service could potentially support exclusively that and in doing so cover all necessary auth methods simultaneously, at least assuming the provider is comfortable self hosting an IdP solution.
If I take a sabbatical then writing a client like that sounds interesting. Part of it would be inventing a new MFA provider, as the existing MFA APIs don't expose a "sign with U2F device" authentication method, as far as I know.
Keycloak for example provides an API for the password flow. So webauthn via API definitely isn't too far off of what it already provides.
Will test this for my current use-case and hopefully contribute in the future!
2. Looks like this streamlines the server trusting the client. Does it do anything for the client trusting the server? I feel gross saying it, but I almost wonder if we should be moving towards some sort of TLS-based protocol for remote login rather than doubling down on SSH, due to the assurances provided by CAs.
(see https://ubuntu.com/blog/authd-oidc-authentication-for-ubuntu...)
OPKSSH covers only logging in through SSH to an existing user account, while authd covers all forms of login (console, graphical, SSH) and user/group management. The latter makes it much more of a full AAA product rather than just a new way to login with SSH. This means it's a deeper investment, with implications for network file systems (as covered in the docs), while OPKSSH can be added on top of just about any existing infrastructure.
In terms of process, authd uses the Device Authorization Flow to handle logins, which is more vulnerable to phishing. It also requires both sides to have online access to the IdP, whereas the ID token-based approach of OPKSSH allows the authenticating side to have no (*) or limited outbound connectivity. Also, authd seems to support only Microsoft and Google as IdPs right now, whereas OPKSSH (since it builds on OpenPubkey) supports any OpenID Connect IdP.
* = In theory, at least; the current implementation doesn't fully deliver on this, though the one online resource it does need is fairly static and quite cacheable
Sadly I can not offer an opinion as I don't know how authd works. I intend to find out.
As long as you can upload some kind of key to an external system (eg: short-lived ssh certificate) you can then query that certificate via AuthorizedKeysCommand.
Edit: just saw the comment by the author of the post (https://news.ycombinator.com/item?id=43471793). Yep, it's AuthorizedKeysCommand.
Good job!
If you just try to stuff an ID Token into an SSH key and use AuthorizedKeysCommand, you introduce replay attacks: the SSH server can pull your ID Token out, stuff it into another SSH key, and replay it to other SSH servers to impersonate you. Opkssh doesn't have this weakness because it uses OpenPubkey [0].
The real trick here is OpenPubkey. OpenID Connect gives you ID Tokens which don't contain public keys. OpenPubkey tricks your OpenID Connect IDP into including a public key you choose in the ID Token it issues. This turns ID Tokens into certificates without requiring any changes to the IDP. This makes ID Tokens safe to use in SSH.
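A rough sketch of the trick (the encoding here is illustrative, not the exact opkssh format): the client commits to its freshly generated public key inside the OIDC `nonce`, which the IDP then signs as part of the ID Token.

```
# Derive the OIDC nonce from the user's fresh public key plus randomness.
rz=$(openssl rand -hex 32)   # blinding value so the nonce stays unpredictable
nonce=$( { cat ~/.ssh/id_ed25519.pub; echo "$rz"; } \
         | openssl dgst -sha256 -binary | openssl base64 -A )

# Run a normal OIDC login with nonce=$nonce. The ID Token's signed nonce
# claim now commits the IDP to this key: a verifier recomputes
# sha256(pubkey || rz) and compares it to the claim.
```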
The signed IdP claims aren't a secret. In OpenPubkey, they function like a certificate for the user's public key. This makes them useless for replay attacks in opkssh.
The signed IdP claims are also scoped to a Client-ID specific for opkssh, so non-opkssh OpenID Connect services will reject them.
With OpenPubkey, and by extension opkssh, your IDP functions like the SSH CA by signing the public key that you make the SSH connection with. Thus you have one fewer trusted party, and you don't have to maintain and secure an SSH CA.
Beyond this, rotating SSH CAs is hard because you need to put the public key of the SSH CA on each SSH server, and SSH certs don't support intermediate certificates. Thus if your SSH CA is hacked, you need to update the CA public key on all your servers and hope you don't miss any. OpenID Connect IDPs rotate their public keys often, and if they get hacked they can immediately rotate their public keys without any update to relying servers.
A new CA is minted, its public key is added to the accepted list, clients start signing with the new CA, and you remove the old one after a short while.
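Concretely, sshd makes the overlap window easy because the trusted-CA file can hold several keys at once:

```
# /etc/ssh/sshd_config
TrustedUserCAKeys /etc/ssh/user_ca_keys.pub

# /etc/ssh/user_ca_keys.pub: one CA public key per line, so the old and
# new CAs are both accepted until you delete the old line.
ssh-ed25519 AAAA...old... old-ssh-ca
ssh-ed25519 AAAA...new... new-ssh-ca
```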
If missing servers is a common problem it sounds like there are some other fundamental problems outside just authenticated user sessions.
> If missing servers is a common problem it sounds like there are some other fundamental problems outside just authenticated user sessions.
On one hand yes, on the other hand that is just the current reality in large enterprises. Consider this quote from Tatu Ylonen's (Inventor of SSH) recent paper [0]
“In many organizations – even very security-conscious organizations – there are many times more obsolete authorized keys than they have employees. Worse, authorized keys generally grant command-line shell access, which in itself is often considered privileged. We have found that in many organizations about 10% of the authorized keys grant root or administrator access. SSH keys never expire.”
If authorized keys get missed, servers are going to get missed.
opkssh was partially inspired by the challenges presented in this paper.
[0]: Challenges in Managing SSH Keys – and a Call for Solutions https://ylonen.org/papers/ssh-key-challenges.pdf
The trick is to use your SSH config to intercept SSH connections so they go to a local SSH server; this triggers ProxyCommand and lets you create the cert and then forward those packets into an outgoing SSH connection you don't intercept.
SSH --> Local SSH Server --> ProxyCommand (create cert) --> SSH --> Remote SSH Server
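A minimal single-hop variant of the same idea in ~/.ssh/config, where `mint-cert` is a stand-in for whatever refreshes the short-lived certificate:

```
Host *.prod.example.com
    # Refresh the cert, then pipe the connection through to the real host.
    ProxyCommand sh -c 'mint-cert %h && exec nc %h %p'
    CertificateFile ~/.ssh/id_ed25519-cert.pub
```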
A missing piece of the puzzle for me is general OSS tooling to provision the Linux OS users. While it works in some environments to grant multiple parties access to the same underlying OS users, it’s necessary (or at least easier) in others to have users access named user accounts.
Step-ca makes good use of NSS/PAM to make this seamless when attached to a smallstep account (which can be backed by an IdP and provisioned through SCIM). While I could stand up LDAP to accommodate this use case, I’d love a lightweight way for a couple of servers to source users directly from the most popular IdP APIs. I get by with a script that syncs a group every N minutes. And while that’s more than sufficient for a couple of these use cases, I’ll own up to wanting the shiny thing and the same elegance of step-ca’s tooling.
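The "script that syncs a group" approach really can be that small; `fetch-idp-group` below is a placeholder for whatever call your IdP's API needs:

```
# Cron-driven sketch: create local accounts for any group members we lack.
for user in $(fetch-idp-group engineering); do
    id "$user" >/dev/null 2>&1 || useradd -m -s /bin/bash "$user"
done
```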
And a walkthrough (2020): http://tech.ciges.net/blog/openssh-with-x509-certificates-ho...
I’m not trying to downplay actually doing it, but it’s been possible since openid connect was invented.
It has been possible since OpenID Connect was invented, but figuring out how to get a public key into an ID Token without having to update IDPs or change the protocol in any way was not known until we published OpenPubkey[0]. OpenID Connect was not designed to do this.
Figuring out how to smuggle this additional information into OpenSSH without requiring code changes or adding an SSH CA required a significant amount of work. I could be wrong, but as far as I am aware the combined use of smuggling data in SSH public keys with AuthorizedKeysCommand to validate that data was not done until opkssh.
This was three years of careful work reading through OpenID Connect specs and SSH RFCs, and reading OpenSSH source code, to get this to be fully compatible with existing IDPs and OpenSSH.
[0]: OpenPubkey: Augmenting OpenID Connect with User held Signing Keys (2023) https://eprint.iacr.org/2023/296
It’s just that nobody really wants to (OpenID Connect became a lot easier to understand when I read the spec, but I never got anywhere close to enjoying it); hence, we didn’t have this until now.
I think tailscale SSH requires you to run their daemon on the server, correct?
Is anybody aware of something like this that can be automated for things like ansible or see a way to use this there?
Doesn't ansible already get you all of this? What is the feature gap you are looking to fill?
That said, you can definitely use opkssh in automation. OpenPubkey already supports the github-action and gitlab-CI OpenID Providers, so in theory you could use opkssh to let a github-action or gitlab-CI workflow ssh into servers under that workflow's identity. That is, have a policy on your SSH server that allows only "workflows from repo X triggered by a merge into main ...".
Additionally you can always do machine identity using OpenID Connect by running your own JWKS server.
While this works in OpenPubkey, we haven't added it to opkssh yet, but we have an issue for it. If you want support for this, add your use case as a comment.
My big concern is how we centralize accounts. Not just data access, but like how EVERYTHING is tied to your email. Lose access? You're fucked. Worse, it's very very hard to get support. I'm sure everyone here is well aware of the many horror stories.
Personally I had a bit of a scare when I switched from Android to iPhone. My first iPhone needed to be replaced within 2 weeks, and I hadn't gotten everything moved over; not all my 2FAs had transferred to the new phone. Several had to be reset because adding a new 2FA OTP voided the old ones. And since for some reason Bitwarden hadn't synced all my notes, I had to completely fall back on a few. Which made me glad I didn't force 2FA on all accounts (this is a big fail!!!)
Or even this week, Bitwarden failed on me to provide security keys to sites. The popup would appear but the site had already processed the rejection. Took a few restarts before it was fixed.
The problem I'm seeing here is that if we become so dependent on single accounts, this creates a bigger problem than the illness we're trying to solve. While 90% of the time things are better, when things go wrong they go nuclear! That's worse!
Yeah, I know with SSO you don't have to use Google/Apple and you can be your own authority. But most people aren't going to do that. Hell, many sites don't even offer anything except Google and Apple! So really we're just setting up a ticking time bomb. It'll be fine for 99% of people 99% of the time, but for the other cases we're making things catastrophic. There's billions of people online so even 1% is a huge number.
Even worse, do we trust these companies will always be around? In your country? To give you proper notice? Do you think you'll even remember everything you need to change? These decisions can be made for you. Even Google accidentally deletes accounts.
So what I really want to see is a system that is more distributed. In the way that we have multiple secure entries. Most methods today are in the form of add 2FA of their choosing and suggest turning off fallback, which is more secure but can fuck you over if it fails. So if we go SSO then this shouldn't replace keys, like the article suggests. Keys are a backup. There should be more too! But then you need to make people occasionally use them to make sure they have that backup. And yes, I understand the more doors there are the bigger attack surface but literally I'm just arguing to not put all our eggs in one basket
The value of opkssh makes sense in an environment in which you already have OpenID Connect as the foundation for identity in your system.
OpenPubkey[0], the protocol opkssh is built on, supports cosigners, which provide parallel identity attestations. OpenPubkey is currently designed to use cosigners purely for security, i.e., to remove the IDP as a single point of compromise.
OpenPubkey is built on JSON Web Signatures, and JSON Web Signatures can support any number of signers. One could easily extend OpenPubkey to something like: 0x1234 is Alice's public key if it is signed by 7 out of 10 identity cosigners.
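Structurally this is straightforward because the JWS general JSON serialization (RFC 7515) already carries an array of signatures over a single payload; all values below are placeholders:

```
{
  "payload": "eyJ...claims-binding-Alice-to-her-public-key...",
  "signatures": [
    { "protected": "eyJ...", "header": { "kid": "idp-key" },    "signature": "..." },
    { "protected": "eyJ...", "header": { "kid": "cosigner-1" }, "signature": "..." },
    { "protected": "eyJ...", "header": { "kid": "cosigner-2" }, "signature": "..." }
  ]
}
```

A 7-of-10 policy is then just a verification rule over that array, not a change to the signature format.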
What you are describing is the same dream I have: decentralized, secure, human-meaningful names. This is hard to build [1] and you have to start sometime, so I started with the existing identity provider infrastructure, but that's just the beginning. If you are interested in building this future, come work on https://github.com/openpubkey/openpubkey/
[0] OpenPubkey: Augmenting OpenID Connect with User held Signing Keys https://eprint.iacr.org/2023/296
[1] Zooko's triangle is a trilemma of three properties that some people consider desirable for names of participants in a network protocol https://en.wikipedia.org/wiki/Zooko%27s_triangle
I'm glad to hear that the protocol supports cosigners. (Next part is definitely described poorly) Is there going to be expansion so that there are "super authorities"? I'm thinking something like how tailscale's taillock works. So there are authorities that can allow access but super-authorities that allow for the most sensitive operations.
I am interested but like many, have other priorities. Unfortunately I think for now I'll be off on the sidelines, but I do like to learn more and I appreciate your explanations.
... apparently in the form of a whole new implementation.
Not realistic. If it's not in OpenSSH, it effectively doesn't exist.
opkssh uses the OpenSSH AuthorizedKeysCommand configuration option like AWS instance-connect to add OpenID Connect validation to OpenSSH authentication.
```
opkssh login
```
This generates a valid SSH key in `~/.ssh/`.
Then run bog-standard ssh or sftp:
```
ssh user@hostname
```
ssh will pull this key from `~/.ssh/` and send it to sshd running on your server. If the key isn't in an authorized_keys file, sshd will send it to the AuthorizedKeysCommand, which, if configured to be `opkssh`, will check your OpenID Connect credentials.
I do see <https://github.com/openpubkey/opkssh/issues/6#issuecomment-2...> so I'm glad it's conceptually on the radar, I'm just saying I'm surprised it wasn't part of Cloudflare's best practices already
1: https://github.com/openpubkey/opkssh/blob/v0.3.0/commands/lo...
1: although I don't think I'm the target audience for trail-blazing SSH auth; am a much, much bigger fan of just using X509 CA auth using short-term certs; it's much easier to reason about IMHO
I'm cheating you a little bit, though, because for the most part once a VM gets kubelet on it, I'm off to the races. Only in very, very, very bad circumstances does getting on the actual Node help me
I also recently have started using <https://docs.aws.amazon.com/systems-manager/latest/userguide...> to even get sequestered cluster access via $(aws ssm start-session --document-name AWS-StartPortForwardingSessionToRemoteHost) although the "bootstrapping" problem of finding the instance-id to feed into --target is a pain. I wish they offered https://docs.aws.amazon.com/systems-manager/latest/userguide... in the spirit of "yeah, yeah, just pick one" versus making me run $(aws ec2 describe-instances --filter | head -n1) type thing
"Key management improvements on the client and SSH agent support": https://github.com/openpubkey/opkssh/issues/6#issuecomment-2...
Looks like this is a sidecar application. So potentially very useful, also potentially very brittle.
Let's hope no backdoor will be added there.
You can deploy and use in a completely closed system.
At least that is my belief; do people here think my speculation is correct? I checked https://undeadly.org and found no mention of anything like this.
FWIW, I will never use this.
opkssh uses the AuthorizedKeysCommand field in sshd_config. OpenBSD added this config field to OpenSSH to enable people to do stuff like opkssh or instance-connect without needing to patch the code. OpenSSH is really smart about enabling functionality like this via the config.