[0] https://github.com/lucia-auth/lucia
has he written up why? lots to learn here
edit: oh: https://github.com/lucia-auth/lucia/discussions/1707
this is great. he saw the coming complexity explosion, that the library was no longer useful to him personally, and took the humble route to opt out of the Standard Model of library slop development. rare.
The thing is, 99% of people really do just need 'log in / log out', and this is an incredibly useful thing to have as a library.
If you need Web 8.0 passkeys served via WASM elliptic curve sockets or whatever, sure, roll your own or use Auth0. But it feels really silly for the consensus around auth to be 'oh, you're making a CRUD cooking app to share your love of baking? cool, well here's the OAuth spec and a list of footguns, go roll some auth'. It's not a good use of that person's time - they should be focused on their actual idea rather than being forced to reinvent plumbing - and tons of people are going to get it wrong and end up with effectively no auth at all.
Would love to learn where we missed on the developer experience. Can you email me? mg@workos.com
We have hundreds of happy customers using AuthKit including high-demand apps like Cursor. Lots more features coming too.
> Email addresses are case-insensitive.
From https://thecopenhagenbook.com/email-verification
The email standard says they are case sensitive.
If you lowercase emails during send operations, the wrong person may get the email. That's bad for auth.
Some (many) popular email providers choose to offer only case-insensitive emails. But a website about general auth should recommend the general case.
https://stackoverflow.com/questions/9807909/are-email-addres...
Side remark: It is not always clear/obvious what the other case of a given character is, and it may change over time. For example, the German capital ß was added to Unicode in 2008. So it's best to avoid case sensitivity where you can, in general programming.
The standard says one thing, yet implementers do another. In this case following the letter of the standard gets you in trouble in the real world.
I agree. First, you have tons of websites using the wrong input field (“text” instead of “email”) which often results in capitalized inputs without user intent. Then you have the non-techies who would absolutely not remember this little gotcha, and put in randomly interchangeable casing depending on who knows what. Some people still think capitalization looks more formal and correct, for instance.
So what’s the benefit of adhering to the standard strictly? Nothing that solves real-world issues afaik. There is only downside: very simple impersonation attacks.
That said, there is a middle ground. Someone put it like this: store and send the address the way the user entered it, but use the canonical form for testing equality, e.g. in the database.
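A sketch of that middle ground in Python (`canonicalize_email` and the column names are invented for illustration):

```python
# Keep the address as entered for outgoing mail; compare and deduplicate
# on a canonical form. Lowercasing the local part is the pragmatic choice
# most providers make, not what RFC 5321 strictly allows.
def canonicalize_email(email: str) -> str:
    local, _, domain = email.rpartition("@")
    return local.lower() + "@" + domain.lower()  # domains are case-insensitive anyway

# stored row: email_as_entered = "Sandy.MacAdam@Example.com"
#             email_canonical  = canonicalize_email(email_as_entered)
# login lookups and uniqueness checks hit email_canonical;
# outgoing mail always uses email_as_entered.
```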
The official email standards basically say to treat email addresses as a binary format. You aren't even allowed to do NFC / NFD / NFKC etc normalization.
https://github.com/whatwg/html/issues/4562#issuecomment-2096...
Unicode has some standards which are slightly better, but they're only for email providers to restrict registering new email addresses, and it still doesn't suggest case-insensitivity.
https://www.unicode.org/reports/tr39/#Email_Security_Profile...
I'm tempted to write an email standard called "Sane Email" that allows providers to opt into unicode normalization, case insensitivity (in a well-defined way), and sane character restrictions (like Unicode's UTS #39).
Currently the standards allow for pretty much _any_ unicode characters, including unbalanced right-to-left control characters, and possibly even surrogates.
Websites are supposed to store email addresses as opaque binary strings.
I think the overly permissive standards are what are holding back unicode email addresses.
This means that you send emails to the case-sensitive address originally entered, but the user is free to login case insensitively.
The downside is that you cannot have two distinct users with emails that only differ in their case. But I feel rather OK about that.
I would argue the risk matrix of having case-insensitive emails looks much better than the risk matrix of having case-sensitive emails (meaning, you should lowercase all the emails and thus The Copenhagen Book is right, once again).
In most cases an accented character is a typo. If you have a non-ASCII email address, I guess you are used to pain on the internet.
MacAdam is a surname, like the Scottish engineer John Loudon McAdam who invented the road construction known as "macadam". "Sandy.MacAdam@example.com" comes across rather different than "sandy.macadam@example.com".
A hypothetical DrAbby@example.com probably would prefer keeping that capitalization over "drabby@example.com".
I'm sure there are real-world examples.
On a related note, I knew someone with an Irish O'Surname who was very particular that the computer systems support his name. (As https://stackoverflow.com/questions/8527180/can-there-be-an-... puts it, "People do have email addresses with apostrophes. I see them not infrequently, and have had to fix bugs submitted by angry Hibernians.") No doubt some of them also want to see the correct capitalization be used.
A possibly better alternative is to recommend that the normalization be used only for internal use, while using the user-specified address for actual email messages, and to at least note some of the well-known issues with normalizing to lower-case.
And they can keep that capitalization when they type in their login or otherwise share their email address with the world. Are you suggesting that this Dr. Abby user would be offended that the website’s authentication infrastructure ends up working with it as lowercase?
Crypto relies on number theory and complexity-theoretic assumptions (the existence of one-way/trapdoor functions, which in particular implies P != NP).
I think it is opaque by the very nature of how it works (math).
Understanding finite fields or elliptic curves (integer groups really) made me able to grok a lot of crypto. It is often a form of the discrete-logarithm problem somehow.
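Since so much bottoms out in that problem, a toy Diffie-Hellman exchange may make it concrete (parameters far too small to be secure, for illustration only):

```python
# An eavesdropper sees g, p, A and B; recovering a or b from them is
# exactly the discrete-logarithm problem.
import secrets

p = 2**61 - 1                      # a Mersenne prime; real DH uses far larger groups
g = 5
a = secrets.randbelow(p - 2) + 2   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 2   # Bob's secret exponent
A = pow(g, a, p)                   # Alice -> Bob
B = pow(g, b, p)                   # Bob -> Alice
assert pow(B, a, p) == pow(A, b, p)  # both sides derive the same shared secret
```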
- I learned the other day here on HN that SHA256 is vulnerable to a length extension attack, so if you want 256 bits of SHA goodness, you should use SHA512 and truncate it to 256 bits (sketched after this list). This is terrible naming! Name the bad one "SHA-DoNotUse" if it's broken and this is known from the start. Why does it even exist?
- For the first decade or so of JWT library support, many verifiers happily accepted "alg: 'none'" payloads, letting attackers trivially bypass any actual verification. If you wanted JWT safely, you were supposed to know to tell the verifier to only accept the algorithms you were going to use when creating tokens.
- Hash algorithms have names such as "MD5", "SHA1", "bcrypt" and "argon2", ie meaningless character soup. I can't blame novice programmers for just using whatever hash algorithm is the default of their language's hash function, resulting in MD5-hashed passwords being super common until about a decade ago.
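The truncation workaround from the first bullet, sketched in Python (and when a key is involved, HMAC is the standard answer regardless):

```python
# Truncating SHA-512 to 256 bits hides the internal state that a
# length-extension attack would need in order to continue the hash.
import hashlib
import hmac

def sha512_256(data: bytes) -> bytes:
    return hashlib.sha512(data).digest()[:32]

def mac(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()  # extension-safe by construction
```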
Security resources and libraries for programmers should be focused on how a thing should be used, not on how it works. That's what this book gets right (and what its page on elliptic curves gets so wrong).
Or, for another example, my favourite bit of crypto library design is PHP's `password_hash()` function[0]. They added it to the language after the aforementioned decade of MD5-hashed passwords, and that fixed it in one fell swoop. `password_hash()` is great, because it's designed for a purpose, not for some arbitrary set of crypto properties. The purpose is hashing a password. To verify a hashed password, use its brother `password_verify()`. Easy peasy! It's expertly designed, it supports rehashing passwords when necessary, and you don't need to understand any crypto to use it! I don't understand why all other high level programming languages didn't immediately steal this design.
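For comparison, a rough stdlib-Python analog of that pair might look like the following (illustrative only: scrypt stands in for PHP's tuned defaults, and a real version would encode its cost parameters so old hashes can be upgraded, the way `password_needs_rehash()` allows):

```python
import hashlib
import hmac
import secrets

def password_hash(password: str) -> str:
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return "scrypt$" + salt.hex() + "$" + digest.hex()

def password_verify(password: str, stored: str) -> bool:
    _, salt_hex, digest_hex = stored.split("$")
    digest = hashlib.scrypt(password.encode(),
                            salt=bytes.fromhex(salt_hex), n=2**14, r=8, p=1)
    return hmac.compare_digest(digest.hex(), digest_hex)  # constant-time compare
```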
I mean why can't all security libraries be like this? Why do most encryption libs have functions named "crypto_aead_chacha20poly1305" instead of "encrypt_message_symmetrically"? Why do they have defaults that encourage you to use them wrong? Why do they have 5 nearly identically named functions/algorithms for a particular purpose but actually you shouldn't use 4 of them and we won't tell you which ones? Do you want GCM or CCM? Or do you prefer that with AEAD? Do you want that with a MAC, an HMAC, or vanilla? Gaah I just want to send a secret! Tell me how to get it right!
[0] https://www.php.net/manual/en/function.password-hash.php
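For what it's worth, something close to `encrypt_message_symmetrically` does exist in Python's third-party `cryptography` package: Fernet picks the algorithms, handles the nonce, and authenticates the message for you.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # generate once, store securely
token = Fernet(key).encrypt(b"attack at dawn")
assert Fernet(key).decrypt(token) == b"attack at dawn"
```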
I don’t agree with you for various reasons.
I learnt all I know about crypto from online resources. It’s perhaps a question of taste, so let’s just skip that one.
It’s all good that you can easily hash a password in PHP without knowing what happens[0]. If you need to interface with another language/program however, it’s not as convenient anymore.
I am a fan of understanding what you are doing. Also in crypto.
[0]: But not really though. You need to trust that the PHP-team is competent and understand security. They don’t have the best track record there IMHO.
(Note that I disagree that "HMAC-SHA256" qualifies as an abstraction. It's just more character soup)
Who do you think writes these standards? NSA loves footguns, the more footguns the better. Also these things are contextual, length extension is a problem for MAC, but not a problem for a password hash or a secret token.
It just so happens that for PHP that library is the STL.
How should e.g. a C++ program know how PHP encrypts something through “encrypt_message_symmetrically”?
Embedded machinery has other needs and resources than e.g. online banking. So we can’t just have one algorithm for symmetrical/asymmetrical crypto.
It's also perfectly imaginable for such a library to evolve over time, as insights in the security community improve. Eg it could add support for more algorithms, change defaults, etc. And it could provide helpful tools for long-time users to migrate from one algorithm/setting to another with backward compatibility.
It's hard to do, sure. But it rubs me the wrong way that the same people who keep repeating "don't roll your own crypto!" make it so needlessly hard for non-crypto people to use their work.
I think libsodium comes close to this ideal, but I still feel like it's pretty hard to navigate, and it mixes "intended for noobs" with "know what you're doing" functions in a single big bag. In a way, JWT is another thing that comes close, if only it was more opinionated about algorithms and defaults. Paseto (a JWT contender that, afaik, never made the splash I'd hoped it would) seems great, and I guess my entire rant boils down to "why doesn't something like Paseto exist for every common security use case?"
I haven't read it, but I plan to, eventually. There's a book titled "Cryptography Engineering: Design Principles and Practical Applications" that could help you.
Yes
There's nothing stopping library authors from choosing good defaults for well defined use cases. My beef is that this mostly isn't done, and security documentation usually doesn't do it either. That's what I like about this "Copenhagen Book": it gets this right. It starts with the use case, and then goes down to clear recommendations on how to combine which crypto primitives. Most resources take it the other way around: they start by explaining the crypto primitives in hard-to-understand terms, then, if you're very lucky, tell you what pitfalls to avoid, and mostly never even make it to the use case at all.
With all due respect, I'm a bit skeptical about this document, for these reasons:
- The name is quite pompous. It's a very good marketing trick: naming the document as if it were written by a group of researchers at a Copenhagen university. :)
Yes, Lucia is a relatively popular library, but that doesn't mean it promotes best practices, or that its author should be considered an authority in such an important field, unless proven otherwise.
- I don't like some aspects of Lucia's library design: when a user's token is almost expired, instead of generating a new security token, Lucia suggests just extending the life of the existing one. I see this as very insecure behavior: the token lives forever and can be abused forever. It violates the security best practice of limited token lifetime.
But both Lucia and the "Copenhagen Book" encourage this practice [1]:

```go
if time.Now().After(session.ExpiresAt.Add(-sessionExpiresIn / 2)) {
	session.ExpiresAt = time.Now().Add(sessionExpiresIn)
	updateSessionExpiration(session.Id, session.ExpiresAt)
}
```
[1]: https://thecopenhagenbook.com/sessions#session-lifetime
The link you posted shows code to extend the session, which is common practice (it's called rolling session), not to "extend" the token's life (which should be impossible, a token needs to be immutable in the first place, which is why refreshing a token gives you a new token instead of mutating the original).
In the first scenario, the attacker steals the token and uses it forever. In the second, the attacker steals the token and they'll get a fresh one near its expiry. They can still impersonate the user forever. The user might notice something when they get kicked out (because the attacker renewed the token, rendering the old one invalid) but it's unlikely. For good UX, you need a grace period anyway, otherwise legitimate users can have problems with parallel requests (request one causes a token refresh, request two gets rejected because it was initiated before the first one was completed).
You can use a second token (a refresh token) but it only pushes the risk to the second token. Now we need to worry about the second token being stolen and abused forever.
Refresh tokens are useful for not having to hit the database on every request though: Typically, the short lived session token can be validated without hitting the db (e.g. it's a signed JWT). But it means that you can't invalidate it when stolen, it will be valid until expiry time so the expiry time has to be short to limit the damage. For the refresh token, on the other hand, you do hit the db. Using a second token doesn't add any security, hitting the db does, because the refresh token can be invalidated (by deleting it from the db).
Lucia always hits the db (at least in their examples), so you can invalidate tokens anytime. To mitigate risks, you can allow the user to see and terminate their active sessions (preferably with time, location, and device info: "Logged in 12 AM yesterday from an iPhone in Copenhagen"). You could also notify the user when someone logs in from a new location or device.
That's about all you can do. There's simply no fully secure way of implementing long-lived sessions.
- Session token has two timepoints: validUntil and renewableUntil.
- If now > validUntil && now < renewableUntil, I'm regenerating the session token.
This way the user is not logged out periodically, but the session token does not stay the same for 5 years.
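A hypothetical sketch of that scheme (all names invented): accept the token as-is while it's valid, rotate it inside the renewal window, reject it after that.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
import secrets

@dataclass
class SessionToken:
    value: str
    valid_until: datetime
    renewable_until: datetime

def check(token: SessionToken) -> SessionToken | None:
    now = datetime.now(timezone.utc)
    if now < token.valid_until:
        return token  # still valid, unchanged
    if now < token.renewable_until:
        # expired but renewable: issue a *new* token rather than extending
        return SessionToken(
            value=secrets.token_urlsafe(32),
            valid_until=now + timedelta(days=1),
            renewable_until=now + timedelta(days=30),
        )
    return None  # past the renewal window: user must log in again
```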
But maybe I'm just overthinking it. :)
For my application the token is valid for a few months, but we will automatically issue you a new one when you make requests. So the old token will expire eventually. But the client will update the token automatically making your "session" indefinite.
So when you throw away a drive that you had sitting in the junk drawer for a year that token is inert. Even if you are using a cloned machine that is still extending the same "session".
Nice to hear someone touch on one of them: you absolutely NEED to use a transaction as a distributed locking mechanism when you use a token.
This goes double/quadruple for refresh tokens. Use the same token more than once, and that user is now signed out.
It doesn't matter if your system runs on one machine or N machines; if you have more than one request with a refresh token attached in flight at once - happens all the time - you are signing out users, often via 500.
Refresh tokens are one-time use.
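A minimal sketch of enforcing that one-time-use rule atomically, so two in-flight requests can't both spend the token (schema invented for illustration; `UPDATE ... RETURNING` needs SQLite 3.35+):

```python
import sqlite3

def consume_refresh_token(db: sqlite3.Connection, token_hash: str) -> int | None:
    with db:  # one transaction: the UPDATE either wins the race or misses
        row = db.execute(
            "UPDATE refresh_tokens SET used = 1 "
            "WHERE token_hash = ? AND used = 0 "
            "RETURNING user_id",
            (token_hash,),
        ).fetchone()
    return row[0] if row else None  # None -> already spent: sign out / alert
```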
The other thing devs and auth frameworks miss is the "state" parameter.
Seriously, the way auth in general is developing right now, I think we're approaching a point of insecurity through obscurity.
And with application state, you need to adapt the application logic to authentication: the application then has to check whether someone may have stolen your refresh token.
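For reference, the "state" parameter round-trip mentioned above, sketched (the endpoint URL and session storage are illustrative):

```python
# The state value ties the OAuth callback to the browser session that
# started the flow, which is what blocks login-CSRF.
import secrets

def begin_oauth(session: dict) -> str:
    state = secrets.token_urlsafe(32)
    session["oauth_state"] = state
    return "https://provider.example/authorize?...&state=" + state

def handle_callback(session: dict, returned_state: str) -> None:
    expected = session.pop("oauth_state", None)  # single use
    if expected is None or not secrets.compare_digest(expected, returned_state):
        raise PermissionError("state mismatch: possible CSRF")
```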
In that document, refresh token rotation is preferred, but it also addresses the obvious difficulty in clustered environments: https://datatracker.ietf.org/doc/html/rfc6819#section-5.2.2....
What do you mean?
I'm so tired of having my day constantly interrupted by expiring sessions. GitHub is my least favourite; I use it ~weekly, so my sessions always expire, and they forced me to use 2FA so I have to drag my phone out and punch in random numbers. Every single time.
As well as being terrible UX, though I have no evidence to back this up, I'm pretty sure this constant logging in fatigues users enough to where they stop paying attention. If you log into a site multiple times a week, it's easy for a phishing site to slip into your 60th login. Conversely if you've got an account that you never need to log into, it's going to feel really weird and heighten your awareness if it suddenly does ask for a password.
Regardless, companies should learn that everyone has a different risk appetite and security posture, and provide options.
Side-note, GitHub's constant session expiry & 2FA annoyed me so much that I moved to Gitea and disabled expiry. That was 90% of the reason I moved. It's only available on my network too, so if anything I feel I gained on security. Companies 100% can lose customers by having an inflexible security model.
Roughly, that it should continue for 30 days if used within 30 days.
A few weeks back I logged in right before a recurring meeting to take notes, and for several weeks running it's been interrupting me in the middle of that meeting to force me to log in again.
Part of the issue is I use it weekly from 3 different devices, so there's always one device that needs another login.
I know it's not my browser, as on my own web apps I set the maxAge of my session cookies to 10 years and they work perfectly.
This is at least what we do for our web application, where users are refreshed automatically and indefinitely unless they are inactive for more than a few days (enough to cover Saturday/Sunday when they are not working). We have an access token that is refreshed at 5-minute intervals. The refresh request also provides a new refresh token with an extended expiration. A deactivated user can keep going for a maximum of a few minutes, until the access token expires, because the refresh request will fail. It's fine for our use case, but it may not be for everyone. We could potentially add a token blacklist in the backend for emergencies, but we haven't seen the need for it yet.
Instead of implying that others are ignorant in a low-value comment, list out why you think this is a bad idea.
"All admins" pushed the ridiculous password rules and renewals for decades with similar confidence that they now push the progressively more byzantine MFA schemes.
I recently learned about the SRP protocol [1], and I’m surprised that it’s not more widely used/mentioned: with a relatively simple protocol, you can do a ZKP and generate a session token between the server and client in one fell swoop.
[1]: https://en.wikipedia.org/wiki/Secure_Remote_Password_protoco...
I'll keep an eye on these comments to see if there are any dissenting opinions or caveats but I know I'll be reviewing this against my own auth projects.
One thing I would like to see would be a section on JWT, even if it is just for them to say "don't use them" if that is their opinion.
One of my most frequent criticisms of teams responsible for security is that they spend a lot of time telling people what not to do instead of proactively providing them with efficient tools or methods for doing what they want to do.
It looks like they mean authentication but it would be nice if they were clear.
Plus there’s the less-often-discussed task of protecting some of your users from other users, such as Google vetting their html5 ads for malware, and military (all B2B?) contractors trying to write tools that aren’t useful to insider threats. It’s worse than either auth* domain IMO, as it usually involves unavoidable tradeoffs for benign users; I haven’t read this book in full but I suspect it didn’t make the list!
TBF, I’m not sure it even has a standard name yet like the other two… anyone know enough to correct me? Maybe… “encapsulation”? “Mitigation”? The only “auth*” term left is arguably “authorship”, which doesn’t really fit https://www.thefreedictionary.com/words-that-start-with-Auth
Edit; I think I just taught myself what complex authorization is! I’ve always treated it as role management, but “what roles can do what” does also fit, I have now realized. Sorry y’all - leaving it up in case it’s a learning experience for others lol
There's also lots of potential levels of granularity and thus complexity, with the most granular (that I've seen) being able to model access through time as a continuum down to the individual field of each object in the business, based on wide arrays of arbitrary other factors. Think modeling problems like:
> "If condition X in the business is true then I want user X to be unable to view/edit the 'foobar' field of entity 'powzap', and I only want this rule to be true on Tuesdays of the months April and October".
That's a tough problem to tackle with a lot of subtlety to wrangle.
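A toy predicate for the quoted rule, with invented names, just to show how arbitrary the conditions can get:

```python
from datetime import date

def can_edit_foobar(user_id: str, today: date, condition_x: bool) -> bool:
    tuesday_in_apr_or_oct = today.weekday() == 1 and today.month in (4, 10)
    if condition_x and tuesday_in_apr_or_oct and user_id == "user_x":
        return False  # user X loses access to powzap.foobar in this window
    return True
```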
- Complicated authorization systems bleed through everything else, adding exponential complexity. Maybe, as an industry, we should seek better tradeoffs? One example I can think of is preferring auditing over authorization. It's a lot easier to build a generic, unified auditing system and interface than to build sleek, fluent UIs that also have to accommodate arbitrarily complex authz behaviors.
- OTOH, I'm very keen on fine-grained controls over what data I grant third parties access to. For example, I want to be able to say, "grant this lender access to the last 18 months of account balance for this specific account" and exactly no more or less.
Side note: there is a trivial case where authentication is reduced to “whoever is physically holding/interacting with the system”. This is when either the operation to be authorized is relatively low risk (changing the channel on the TV with the line-of-sight IR remote control) or when you’re depending on physical security controls to prevent access to people who shouldn’t be doing the thing, e.g. requiring data center technicians to badge in before they can go into the server room and start disconnecting things.
Embedded browsers make it impossible (literally in some cases, figuratively in others) to use social OAuth. If you click a link on Instagram, which by default opens in Instagram's browser, and that link has "Sign in with Google", it simply will not work, because Google blocks "insecure browsers", and Instagram's is one of them. There are even issues getting "Sign in with Facebook" to work, and Meta owns Instagram and Facebook! The Facebook embedded browser suffers from similar issues.
It's virtually never useful to me when I click on a link in Slack or whatever, then respond to a text message, and go back to my browser expecting to find my page there, and it's nowhere because Slack has gobbled it up in its own browser.
Fortunately I just checked and there's a way to disable the embedded browser in Slack.
> When comparing password hashes, use constant time comparison instead of ==.
If you were comparing plaintext you'd get some info, but it seems overly cautious when comparing salted hashes. Maybe anticipating an unknown vulnerability in the hash function?
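For reference, the primitive the book is recommending, in Python: unlike `==`, which can return at the first mismatching byte, `hmac.compare_digest` takes time independent of where the inputs differ.

```python
import hmac

def hashes_equal(a: bytes, b: bytes) -> bool:
    return hmac.compare_digest(a, b)  # constant-time comparison
```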
At a previous employer, people built some tool that auto-built Kube manifests and so on. To be honest, I much preferred near raw manifests. They were sufficient and the tool actually added a larger bug space and its own YAML DSL.
It seems MitID isn't mentioned in The Copenhagen Book: https://www.google.com/search?q=site%3Athecopenhagenbook.com...
Iceland and the Faroes follow the same one-for-all approach: https://www.audkenni.is/, https://www.samleikin.fo/.
Things are a bit more fragmented in Finland, Norway and Sweden: https://www.norden.org/en/info-norden/electronic-identificat..., https://www.norden.org/en/info-norden/electronic-identificat..., https://www.norden.org/en/info-norden/electronic-identificat...
So, it's maybe not too much of a stretch to say that "a Copenhagen way" to authenticate is to integrate with MitID, either through a certified broker or by becoming one: https://www.mitid.dk/en-gb/broker/broker-certification/
Two tradeoffs I see are that it is a bit abstract, and also a bit brief/succinct in some places, where it just says it as it is and not the why. But neither of those are really negatives in my book, just concessions you have to make when doing a project like this. You can dig deeper into any topic, and nowadays libraries have pretty good practical setups, so as a place where it is all bound together as a single learning resource it is AMAZING. I'm even thinking of editing it and printing it!
Also when it's set to strict? Or if it requires a PUT or other method that doesn't work with top-level navigation? Is it about ancient or obscure browsers that didn't/don't implement it (https://caniuse.com/same-site-cookie-attribute)?
I especially appreciated the note that while UUIDv4 has a lot of entropy, it’s not guaranteed to be cryptographically secure per the spec. Does it matter? For nearly all applications, probably not, but people should be aware of it.
> Set all the other bits to randomly (or pseudo-randomly) chosen values.
Section 4.5, which is actually scoped to UUIDv1, does hint at it being a good idea:
> Advice on generating cryptographic-quality random numbers can be found in RFC1750.
But absolutely nothing stops you from doing this:
```python
import random
import string

def terrible_prng(k: int) -> int:
    _int = int("".join(random.choices(string.digits, k=k)))
    return _int << (128 - _int.bit_length())

def make_uuid_v4(_int: int) -> str:
    _int &= ~(0xC000 << 48)  # clear the two variant bits...
    _int |= 0x8000 << 48     # ...and set the RFC 4122 variant (10xx)
    _int &= ~(0xF000 << 64)  # clear the four version bits...
    _int |= 4 << 76          # ...and set version 4
    _uuid_hex = "%032x" % _int
    return "%s-%s-%s-%s-%s" % (
        _uuid_hex[:8],
        _uuid_hex[8:12],
        _uuid_hex[12:16],
        _uuid_hex[16:20],
        _uuid_hex[20:],
    )
```
They'll all be RFC4122-compliant (depending on how you interpret "randomly chosen values", since with `k=10`, for example, only the first field will be unique), but terrible, e.g. '83f1a1ea-0000-4000-8000-000000000000'.

In fairness, RFC9562, which supersedes RFC4122, says this in Section 6.9:
> Implementations SHOULD utilize a cryptographically secure pseudorandom number generator (CSPRNG) to provide values that are both difficult to predict ("unguessable") and have a low likelihood of collision ("unique").
And the RFC2119 definition of SHOULD requires that you "[understand and carefully weigh the] full implications before choosing a different course."
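For contrast: CPython's own `uuid4()` already follows that SHOULD; it is essentially this one-liner over `os.urandom`:

```python
import os
import uuid

good = uuid.UUID(bytes=os.urandom(16), version=4)  # what uuid.uuid4() does
assert good.version == 4
```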
I don't think it protects against a timing attack, because the common way of doing it is just to use sha256 and use the resulting hash to do a lookup in the database. That is not a fixed-time operation.
- Use libraries like zxcvbn to check for weak passwords.
These rules might be good for high-security sites, but it's really annoying to me when I have to generate a length-15 string password with special characters and uppercase for some random one-off account that I use to buy a plane ticket or get reimbursed for a contact lens purchase.
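For reference, the zxcvbn check recommended above estimates guessability instead of imposing composition rules, which at least addresses that annoyance (this assumes the Python port, `pip install zxcvbn`, and its documented dict output):

```python
from zxcvbn import zxcvbn

result = zxcvbn("Tr0ub4dour&3")
print(result["score"])     # 0 (very guessable) .. 4 (very unguessable)
print(result["feedback"])  # suggestions you can show the user
```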
Now they can be told they only have to remember a single password and that makes a difference, though it does need to be stressed that this particular password should be more secure than "password". They remember a single password -- which is ideally hard to guess -- then copy the randomly generated password for whatever account and paste it in the login form.
A real worry is the possibility of a password manager service being compromised. However, these companies hire security experts and do regular audits of their systems and practices, which, when compared to the opsec of those who choose "password" for their password, is obviously beneficial. So of course we collectively decided that single points of failure are "good"; they are far better than what we had before.
(Admittedly, perhaps one attack that's enabled is to discover services that are used by an individual via compromised data from the password manager service. I still get the feeling that such a compromise, even on a wide scale, is more easily done elsewhere.)
1. Back up my passwords on their server for a fee. Well, that's (alas) hackable, so if someone gets their password they will have everyone's password file.
2. Except each one is encrypted with that user's password, and in my case it's really long. So they'd then have to break each individual one.
3. Except signing in with my password on a new device requires my YubiKey as well, or one of my lost-my-YubiKey tokens, which also only I possess.
So I'm not as worried as I probably should be :-)
Security is always as weak as the weakest link.
Or is it just that AVAST is aghast because some site is trying to spread some common sense? (against silicon snake-oil maybe?)
Do you think you should blindly trust AVAST?
Maybe it's just their servers suffering from some hiccups, causing their clients to have burps?
This happened before to all of those applications from any vendor, including that built-in microsoft-thing.
Multiple times.
Maybe it's just because it's using too many technical terms about cryptology, in unusual ways.
Must be a bad h4xx0r then!
Bang! Automagically blacklisted by some black-box.
Don't be a cargo-culting fashion victim.
(...zalgorithms on crack, brainz out of whack...)
Obviously there are exceptions
Prediction: in 10 years nearly everyone will be using a password manager; it will come with their OS (Android or iOS) with browser plugins for other OS’s, and the integration with mobile apps and mobile web will be so tight that people will not even realize they are using passwords, most of the time.
Apple just massively revamped their own manager in the latest iOS release. They already have pretty good integration with mobile web and with App Store apps.
In the next couple of years I expect to see pw manager integration made a firm requirement for App Store apps, and I expect to see web standards for account signup and login that make pw managers reliable.
I suspect Google will follow suit although I am not familiar with Android’s capabilities in that area.
So in a few years you will not type an email address and password to sign up for things; the OS will prompt you: “foo.com is asking you to sign up, would you like to do this automatically?” and if you respond in the affirmative you’ll get a site-specific email address and password automatically created and stored for you, and that will be used whenever you want to log in. Recovery will shift to a mobile account centric workflow (Apple ID or Google account) rather than email based password reset links.
If a data breach is reported the pw manager app can notify you and give you a one-button-click experience to reset your password.
The downside is that if you get canceled by Apple or Google it will be a special kind of hell to recover.
I recently ran into an interesting problem -- my Microsoft account (used as a spam lightning rod) borked a passkey stored on a FIDO token and refused a passwordless sign-in. The same thing happened with a second backup token made by a different company. If I didn't have a password fallback, and that account was important, I would have a massive problem with no way to solve it. But the world has not yet gone completely insane, so I fired up my trusty KeePassXC and was in in less than a minute.
I love the idea of passkeys; I hate the experience of passkeys, especially when it comes to having to reach for my phone to log into a desktop web site.
And then realize that you need to support them, because they are the most universal solution there is for an average user. Email/Username + Password is the most portable way to do login as a user that we have invented.
Every project has some amount of "being quirky/different" capital. If your project is not explicitly trying to innovate, or does not for some particular reason need to be very secure, then do not spend that capital on confusing users with the login flow. You'll turn a bunch of users away and cause a whole lot of support tickets, for very little benefit. Make users only think about stuff by making it unintuitive or different if it's really worth it to your product.
It's a lot simpler to implement (just one flow instead of signin / signup / forgot), less catastrophic when your data is breached, piggybacks on the significant amount of work that already goes into securing email, gives you 90% of the benefits of 2FA / FIDO / Web Authn / whatever for free with 0 implementation cost, makes account sharing harder (good for business), and is easy to extend/replace with oAuth for specific domains.
No I won’t log into my email multiple times per day because you are too lazy to hash passwords.
It always depends on the audience but if your users are somewhat technically literate you need passwords.
Wouldn't systems like this put a lot of trust in their users? Say you use a magic link on a compromised wifi network, like in a hotel, coffee shop, airport and so on, without being on a VPN. Which some users will inevitably do.
I completely agree with the "most use cases" though. As long as you can't change the associated e-mail without additional requirements.
(I've also seen phone + phone OTP, but oh please never ask me for a phone number ever again. My phone number should always only be for making and receiving calls, not for verifying any sort of identity or personhood.)
Of course, nothing beats the security and privacy of username + password + TOTP (or security key), but you can't necessarily expect normal users to know to do that (or how).
Hell, I've seen at least one site that keeps the login username (what you actually use to sign into your account) separate from the public username (what everyone else sees), just to even more disconnect the login credentials from anything a potential attacker would have access to. But this is overkill for most scenarios (that particular platform does have a good reason).
Also, magic links need to be designed so that I can login on my PC, and click the link on my phone, and be logged in on the PC.
Though I've really enjoyed using QR codes to login, that has been a really smooth modern experience.
I feel that way too - I hate it when I'm trying to log in on desktop and the email shows up as a push notification on my phone.
The problem is what happens if someone enters someone else's email address and that person unwittingly clicks on the "approve" link in the email they receive. That only has to happen once for an account to be compromised.
So now you need "enter the 4 digit code we emailed you" or similar, which feels a whole lot less magical than clicking on a magic link.
Presumably there are well documented patterns for addressing this now? I've not spent enough time implementing magic links to have figured that out.
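One commonly documented pattern, sketched (the storage and URL are illustrative): the link carries a single-use, short-lived random token, only a hash of it is stored server-side, and clicking it logs in the clicker directly on the device where the link was opened.

```python
import hashlib
import secrets
from datetime import datetime, timedelta, timezone

PENDING: dict[str, tuple[str, datetime]] = {}  # token_hash -> (email, expiry)

def send_magic_link(email: str) -> str:
    token = secrets.token_urlsafe(32)
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    PENDING[token_hash] = (email, datetime.now(timezone.utc) + timedelta(minutes=10))
    return "https://app.example/login?token=" + token  # emailed to the user

def consume_magic_link(token: str) -> str | None:
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    entry = PENDING.pop(token_hash, None)  # pop: the link is single-use
    if entry and datetime.now(timezone.utc) < entry[1]:
        return entry[0]  # start a session for this email on this device
    return None
```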
Eh? In a sane magic link system, clicking the magic link grants the clicker access to the account. Right then and there, in the browser that opened the link.
If I enter my email in SomeSite, they send a magic link to my email address, and then Mallory intercepts that email and gains access to my SomeSite account just by opening the link (i.e. the link acts as a bearer token), that's completely broken.
If email is your master key to everything I would worry.
No.
If magic links only log you in on the device you click them on, they prevent a lot of phishing attacks.
With a setup like that, there's literally no way to impersonate your website and steal user credentials.
This comes at a cost of making logins on public computers less secure, and which of these is more important should be weighed on a service-by-service basis.
A website for making presentations should obviously choose "more phishing and easier to use on public computers", a service for managing your employees' HR records should obviously choose the opposite.
Two scenarios I had recently, where I absolutely, utterly hated this pattern:
* I did not remember the mail address for such a thing because I started (too late) to use a different mail address for every service, thanks to Apple's iCloud hidden addresses. And because there was no corresponding password, there was no entry in my password manager. I have since rectified that, but it's annoying.
* I tried to login on an older Windows PC - the magic mail landed on my iPhone. And because cross-system technical standards are a thing of the past the only possibility to get the magic link to the other system was to transcribe it.
I absolutely despise this. Every time I want to quickly log into an app and check something, just to sit in front of my synchronising mail client, wondering if the email will arrive, be caught by the spam filter, or just have random delay of a few minutes. Awful.
It’s a nightmare if they also insist on short lived sessions.
So for me, email + email OTP is the way to go.
A security key is at least somewhat better than TOTP because it's not phishable (or at least less so).
> Of course, nothing beats the security and privacy of username + password + TOTP (or security key), but you can't necessarily expect normal users to know to do that (or how).
Honestly, this just seems like a UX problem. The ways this is currently implemented are often terrible, but not always. I'll give an example: I recently did a stint at "Green company" and they gave me a YubiKey. They also used Microsoft for most things. To log in with Microsoft Authenticator I type in my username and password, click yes on the next page, and then click yes on my phone. But using the YubiKey was needlessly frustrating. First, Microsoft doesn't let you use it as the default method (hardware key). So then you have to click "use another form of authentication", "hardware key", "next" (why? Idk), and then finally you enter the PIN and tap the key. A bunch of needless steps there, and I'm not convinced this wasn't intentional. There are other services I've used, working at other places, where it's clean and easy: username + password, then PIN + tap key (i.e. the hardware key is the default!).
I seriously think a lot of security issues come down to UX. There's an old joke about PGP
How do you decrypt a PGP encrypted email?
You reply to the sender "can't decrypt, can you send it back in clear?"
It was a joke about the terrible UX. That it was so frustrating that this outcome was considered normal. But hey, we actually have that solved now. Your Gmail emails are encrypted. You have services like Whatsapp and Signal that are E2EE. What was the magic sauce? UI & UX. They are what make the tools available to the masses, otherwise it's just for the nerds.

The world is not improved or made more robust if every experience online must be gated through some third-party vendor's physical widget (or non-trivial software).
There are parts of our lives that benefit from the added security that comes alongside that brittleness and commercial dependence, and parts that don't. Let's not pretend otherwise.
It might help your mental model to think about them as identical to hardware security keys. Except now you don't need to buy a specific hardware key, your password manager is it. You can also just use your hardware key as your passkey, same thing (as long as the key supports FIDO2).
Specifically, for your question on what happens if the face/fingerprint sensor fails: assuming you use Android/iOS's password managers, even with biometrics failing you can just use the code you set on your device, as both have fallbacks.
Couldn't we just make password managers pretend that they're a YubiKey or similar?
Is it that YubiKeys don't offer any extra (master password / biometric) authn, and hence are only suitable as a second factor, where password managers can be used as both?
There are two critical things you lose with OAuth. First, it's centralization, so you must trust that player, and if that account is compromised, everything downstream is too (already a problem with email, the typical authority). Second is privacy. You now tell those players that you use said service.
Let me tell you, as a user, another workflow. If you use Bitwarden you can link Firefox Relay to auto-generate relay email addresses. Now each website has not only a unique password, but a unique email. This does wonders for spam and determining who sells your data, AND makes email filters much more useful for organization. The problem? Terrible UX. Gotta click a lot of buttons, and you destroy your generated password history along the way (if you care). No way could I get my parents to do this, let alone my grandma (the gold standard of "is it intuitive?" E.g. WhatsApp: yes; Signal: only if someone else does the onboarding).
There are downsides of course. A master password, but one you do control. At least the password manager passes the "parent test" and "girlfriend test", and they even like it! It's much easier to get them (especially parents) to use that one complicated master passphrase that they can write down and put in a safe.
A lot of security (and privacy) problems are actually UI/UX problems. (See PGP)
OAuth recognized this, but it makes a trade with privacy. I think this can be solved in a better way. But at minimum, don't take away password as an option.
Sure, many places only implement Google/Meta/GitHub/Discord etc., but that's not a requirement, especially for your own app. You can implement and run your own OAuth server if you so wished, much good it would do you.
But regardless, that's why FIDO2 and WebAuthn were developed, but even they have their issues.
> You are assuming a lot about who your oAuth provider is
> Sure many places only implement
This doesn't change my concern, but yes, it deepens it. Sure, I know there can be an arbitrary authority, but does it matter when 90% don't allow another authority? I can't think of more than one time I have seen another authority listed.