  • ozuly · 3 weeks ago
If I'm not mistaken, this is written by the author of Lucia, a popular auth library for TypeScript [0]. He recently announced that he will be deprecating the library and replacing it with a series of written guides [1], as he no longer feels that the Lucia library is an ergonomic way of implementing auth. He posted an early preview of the written guide [2], which I found enjoyable to read and which complements The Copenhagen Book nicely.

[0] https://github.com/lucia-auth/lucia

[1] https://github.com/lucia-auth/lucia/discussions/1707

[2] https://lucia-next.pages.dev/

  • swyx · 3 weeks ago
> he no longer feels that the Lucia library is an ergonomic way of implementing auth

has he written up why? lots to learn here

edit: oh: https://github.com/lucia-auth/lucia/discussions/1707

this is great. he saw the coming complexity explosion, that the library was no longer useful to him personally, and took the humble route to opt out of the Standard Model of library slop development. rare.

  • ozuly · 3 weeks ago
There is a GitHub Discussion where he goes into more detail. He also talks about it on his Twitter:

https://github.com/lucia-auth/lucia/discussions/1707

https://x.com/pilcrowonpaper/status/1843258855280742481

What an insanely impressive dude. All that and he just started going to university this year.
I often wonder what university brings to students who are already performing in professional life. I'm tempted to respond to such stories with "So what is he going to teach?"
  • baq · 3 weeks ago
Awareness about fundamentals and networking with other similarly gifted people, hopefully.
Maybe they were always interested in drama or philosophical studies; CS is not the only interesting topic in this life :)
Networking, physical and information resources, help with learning things he might have had trouble grokking, and the signal provided by a degree to get into organizations or events that might not have heard of him. If a very smart and already accomplished person doesn't worry about money, academia (at least attendance) can be a very good choice.
  • troad · 3 weeks ago
Yeah, but Lucia is just going to be immediately replaced with some other popular auth library.

The thing is, 99% of people really do just need 'log in / log out', and this is an incredibly useful thing to have as a library.

If you need Web 8.0 passkeys served via WASM elliptic curve sockets or whatever, sure, roll your own or use Auth0. But it feels really silly for the consensus around auth to be 'oh, you're making a CRUD cooking app to share your love of baking? cool, well here's the OAuth spec and a list of footguns, go roll some auth'. It's not a good use of that person's time - they should be focused on their actual idea rather than being forced to reinvent plumbing - and tons of people are going to get it wrong and end up with effectively no auth at all.

Haha, I've been working on my cooking app[0] (not ready yet, join the waiting list!), and for the last month I've been implementing auth with AuthKit (bad experience IMHO, should have just self-hosted SuperTokens in hindsight), experiencing what you described here 1:1.

[0] https://prepbook.app

Hey, I’m the founder of WorkOS (which makes AuthKit).

Would love to learn where we missed on the developer experience. Can you email me? mg@workos.com

We have hundreds of happy customers using AuthKit including high-demand apps like Cursor. Lots more features coming too.

Been rolling my own auth today for luls. Thanks for this :)
  • nh2 · 3 weeks ago
This is wrong:

> Email addresses are case-insensitive.

From https://thecopenhagenbook.com/email-verification

The email standard says they are case sensitive.

If you lowercase emails during send operations, the wrong person may get the email. That's bad for auth.

Some (many) popular email providers choose to offer only case-insensitive emails. But a website about general auth should recommend the general case.

https://stackoverflow.com/questions/9807909/are-email-addres...

Side remark: it is not always clear/obvious what the other case of a given character is, and that can change over time. For example, the capital German ß was only added to Unicode in 2008. So it's best to avoid case sensitivity where you can, in general programming.

  • Sammi · 3 weeks ago
The top stackoverflow answer you link to disagrees with you: "In practice though, no widely used mail systems distinguish different addresses based on case."

The standard says one thing, yet implementers do another. In this case following the letter of the standard gets you in trouble in the real world.

Yeah, case-sensitive email addressing seems like a horrid idea for a standard, for exactly the reason pointed out: lowercasing could result in the wrong person receiving emails. Expecting users who type in email addresses to respect case-sensitivity is wishful thinking at best.
> Expecting users who type in email addresses to respect case-sensitivity is wishful thinking at best.

I agree. First, you have tons of websites using the wrong input field ("text" instead of "email"), which often results in capitalized inputs without user intent. Then you have the non-techies who would absolutely not remember this little gotcha and put in randomly interchangeable casing depending on who knows what. Some people still think capitalization looks more formal and correct, for instance.

So what’s the benefit of adhering to the standard strictly? Nothing that solves real-world issues afaik. There is only downside: very simple impersonation attacks.

That said, there is a middle ground. Someone put it like this: store and send user input the way they entered it, but use the canonical address for testing equality, e.g. in the database.

The other side of that is handling case insensitivity in Unicode in a way that is bug-for-bug compatible with email providers.
> handling case insensitivity in Unicode bug for bug compatible with email providers.

The official email standards basically say to treat email addresses as a binary format. You aren't even allowed to do NFC / NFD / NFKC etc normalization.

https://github.com/whatwg/html/issues/4562#issuecomment-2096...

Unicode has some standards which are slightly better, but they're only for email providers to restrict registering new email addresses, and they still don't suggest case-insensitivity.

https://www.unicode.org/reports/tr39/#Email_Security_Profile...

I'm tempted to write an email standard called "Sane Email" that allows providers to opt into unicode normalization, case insensitivity (in a well-defined way), and sane character restrictions (like Unicode's UTS #39).

Currently the standards allow for pretty much _any_ unicode characters, including unbalanced right-to-left control characters, and possibly even surrogates.

Websites are supposed to store email addresses as opaque binary strings.

I think the overly permissive standards are what are holding back unicode email addresses.

My practice on this is to store the user-provided case, but do case insensitive lookups.

This means that you send emails to the case-sensitive address originally entered, but the user is free to login case insensitively.

The downside is that you cannot have two distinct users with emails that only differ in their case. But I feel rather OK about that.
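For what it's worth, a minimal Go sketch of that pattern, assuming Postgres and a users(id, email) table (names and schema are just illustrative; an index on LOWER(email) keeps the lookup fast):

```
package auth

import "database/sql"

// CreateUser stores the address exactly as the user typed it, so outgoing
// mail uses their preferred casing.
func CreateUser(db *sql.DB, email string) error {
	_, err := db.Exec(`INSERT INTO users (email) VALUES ($1)`, email)
	return err
}

// FindUserByEmail looks the account up case-insensitively, so logins work
// regardless of how the address is typed. A unique index on LOWER(email)
// enforces "one account per address".
func FindUserByEmail(db *sql.DB, email string) (id int64, storedEmail string, err error) {
	err = db.QueryRow(
		`SELECT id, email FROM users WHERE LOWER(email) = LOWER($1)`,
		email,
	).Scan(&id, &storedEmail)
	return id, storedEmail, err
}
```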

For applications storing user data on Postgres, the citext (case-insensitive text) type does just that.

https://www.postgresql.org/docs/current/citext.html

  • ikety · 3 weeks ago
Seems like the optimal solution to me
  • nh2 · 3 weeks ago
Yes, this is also what we do.

It also allows adding exceptions, in case a customer shows up where you do need to support two users that differ only in casing.

I once had a case where you could register with your email and the email then became the username used to log in. The email was case-insensitive but the username created from it was case-sensitive: if you created the account with an uppercase letter in the email, that was your username, and the email with different casing didn't work for login.
  • nsbk · 3 weeks ago
Let’s abolish capitalization altogether. Such a waste. And time zones next. UTC FTW! Then we can do away with languages, and cultural differences. Gazillions of LOC down the drain. I can hear the repositories shrinking already
While in theory it's true, in practice I've seen multiple systems that, due to early implementation bugs, had multiple casings of the same email which obviously belonged to the same person: think JohnDoe@example.com on one user entry and johndoe@example.com on another. These were also coming from different systems, which made it all even more troublesome.

I would argue the risk matrix of having case-insensitive emails looks much better than the risk matrix of having case-sensitive emails (meaning, you should lowercase all the emails and thus The Copenhagen Book is right, once again).

It's worth noting that most reputable transactional email services only accept ASCII characters in email addresses, so it's at least worth notifying the user that non-ASCII emails are not allowed.

In most cases an accented character is a typo. If you have a non-ASCII email I guess you are used to pain on the internet.

That still doesn't make it a good idea to normalize to lowercase. Some people are very particular about capitalization.

MacAdam is a surname, like the Scottish engineer John Loudon McAdam who invented the road construction known as "macadam". "Sandy.MacAdam@example.com" comes across rather different than "sandy.macadam@example.com".

A hypothetical DrAbby@example.com probably would prefer keeping that capitalization over "drabby@example.com".

I'm sure there are real-world examples.

On a related note, I knew someone with an Irish O'Surname who was very particular that the computer systems support his name. (As https://stackoverflow.com/questions/8527180/can-there-be-an-... puts it, "People do have email addresses with apostrophes. I see them not infrequently, and have had to fix bugs submitted by angry Hibernians.") No doubt some of them also want to see the correct capitalization be used.

A possibly better alternative is to recommend that the normalization be used only for internal use, while using the user-specified address for actual email messages, and to at least note some of the well-known issues with normalizing to lower-case.

  • danso · 3 weeks ago
> A hypothetical DrAbby@example.com probably would prefer keeping that capitalization over "drabby@example.com".

And they can keep that capitalization when they type in their login or otherwise share their email address with the world. Are you suggesting that this Dr. Abby user would be offended that the website’s authentication infrastructure ends up working with it as lowercase?

I am suggesting that showing the normalized address, perhaps using it as the "To:" in an email or presenting it in the UI, may annoy some users.
Wow, this is very nice. One of my pet peeves is how 90% of security resources seem designed to be absolutely inscrutable to non-security experts, especially anything from cryptography. Every single page in here, however, is clear, concise, to the point, and actionable. Love it! (Except the one on elliptic curves, which I find about as incomprehensible as most crypto resources.)
Just a small comment/opinion on the inscrutability of crypto:

Crypto relies on number theory and the complexity-theoretic assumption that P != NP (i.e. that one-way/trapdoor functions exist).

I think it is opaque by the very nature of how it works (math).

Understanding finite fields or elliptic curves (integer groups really) made me able to grok a lot of crypto. It is often a form of the discrete-logarithm problem somehow.

That’s not really true. I don’t have a link handy (might try to find later), but I read a rather scathing critique of the state of crypto math by a mathematician. The short summary is that crypto math is overwhelmingly unnecessarily inelegant.
I, for one, would be very interested in reading that.
I did not make my point well enough. I don't mind that crypto is inscrutable, that's fine (and unavoidable). Plenty other tech that I use every day is inscrutable (eg TCP, or HyperLogLog, or database query planners, or unicode text rendering, etc). I mind that resources about how to use crypto in software applications are often inscrutable, all the way down to library design, for no good reason. I mean stuff like:

- I learned the other day here on HN that SHA-256 is vulnerable to a length extension attack, so if you want 256 bits of SHA goodness, you should use SHA-512 and truncate it to 256 bits (see the sketch after this list). This is terrible naming! Name the bad one "SHA-DoNotUse" if it's broken and this is known from the start. Why does it even exist?

- For the first decade or so of JWT library support, many verifiers happily accepted "alg: 'none'" payloads, letting attackers trivially bypass any actual verification. If you wanted JWT safely, you were supposed to know to tell the verifier to only accept the algorithms you were going to use when creating tokens.

- Hash algorithms have names such as "MD5", "SHA1", "bcrypt" and "argon2", ie meaningless character soup. I can't blame novice programmers for just using whatever hash algorithm is the default of their language's hash function, resulting in MD5-hashed passwords being super common until about a decade ago.
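To make the first point concrete, here's a minimal Go sketch of the distinction (standard library only); for anything keyed you'd normally reach for HMAC instead of hashing secrets directly:

```
package main

import (
	"crypto/sha256"
	"crypto/sha512"
	"fmt"
)

func main() {
	msg := []byte("amount=100&to=alice")

	// Plain SHA-256: if this were H(secret || msg), an attacker who knows the
	// digest and the secret's length could compute a valid digest for
	// msg || padding || extra without knowing the secret (length extension).
	fmt.Printf("SHA-256:     %x\n", sha256.Sum256(msg))

	// SHA-512/256: same 256-bit output size, but the wide internal state is
	// truncated, so the length extension trick above does not apply.
	fmt.Printf("SHA-512/256: %x\n", sha512.Sum512_256(msg))
}
```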

Security resources and libraries for programmers should be focused on how a thing should be used, not on how it works. That's what this book gets right (and what its page on elliptic curves gets so wrong).

Or, for another example, my favourite bit of crypto library design is PHP's `password_hash()` function[0]. They added it to the language after the aforementioned decade of MD5-hashed passwords, and that fixed it in one fell swoop. `password_hash()` is great, because it's designed for a purpose, not for some arbitrary set of crypto properties. The purpose is hashing a password. To verify a hashed password, use its brother `password_verify()`. Easy peasy! It's expertly designed, it supports rehashing passwords when necessary, and you don't need to understand any crypto to use it! I don't understand why all other high level programming languages didn't immediately steal this design.

I mean why can't all security libraries be like this? Why do most encryption libs have functions named "crypto_aead_chacha20poly1305" instead of "encrypt_message_symmetrically"? Why do they have defaults that encourage you to use them wrong? Why do they have 5 nearly identically named functions/algorithms for a particular purpose but actually you shouldn't use 4 of them and we won't tell you which ones? Do you want GCM or CCM? Or do you prefer that with AEAD? Do you want that with a MAC, and HMAC, or vanilla? Gaah I just want to send a secret! Tell me how to get it right!

[0] https://www.php.net/manual/en/function.password-hash.php
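For comparison, Go's golang.org/x/crypto/bcrypt has roughly the same purpose-built shape; a minimal sketch (PHP's password_hash() also defaults to bcrypt, as of this writing):

```
package main

import (
	"fmt"

	"golang.org/x/crypto/bcrypt"
)

func main() {
	// Hashing: the salt is generated for you and embedded in the returned hash.
	hash, err := bcrypt.GenerateFromPassword([]byte("correct horse battery staple"), bcrypt.DefaultCost)
	if err != nil {
		panic(err)
	}

	// Verifying: nil means the password matches the stored hash.
	if bcrypt.CompareHashAndPassword(hash, []byte("correct horse battery staple")) == nil {
		fmt.Println("password ok")
	}
}
```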

I get your point, thanks for clarifying.

I don’t agree with you for various reasons.

I learnt all I know about crypto from online resources. It’s perhaps a question of taste, so let’s just skip that one.

It’s all good that you can easily hash a password in PHP without knowing what happens[0]. If you need to interface with another language/program however, it’s not as convenient anymore.

I am a fan of understanding what you are doing. Also in crypto.

[0]: But not really though. You need to trust that the PHP-team is competent and understand security. They don’t have the best track record there IMHO.

FWIW SHA-256 isn't broken, it's just that you need to be careful when using it to build a MAC (naive H(key || message) is length-extendable, which is what HMAC fixes). This follows what other comments are saying, where you shouldn't use crypto primitives directly and should use abstractions that take care of the rough edges.
Yes, this is exactly what I'm arguing for. Abstractions like PHP's `password_hash()`.

(Note that I disagree that "HMAC-SHA256" qualifies as an abstraction. It's just more character soup)

>Why does it even exist?

Who do you think writes these standards? The NSA loves footguns; the more footguns the better. Also, these things are contextual: length extension is a problem for a MAC, but not a problem for a password hash or a secret token.

Counterpoint: these crypto APIs are inscrutable for the same reason poisonous mushrooms are brightly coloured; they're warning you away. It is extremely easy to mishandle crypto primitives, so if you're reaching for crypto_aead_chacha20poly1305 and are confused, it's probably because you should be using a library that provides encrypt_message_symmetrically.

It just so happens that for PHP that library is the standard library.

  • Sammi · 3 weeks ago
No you are still misreading the point. The complaint isn't that something like crypto_aead_chacha20poly1305 exists. It's that encrypt_message_symmetrically doesn't exist in most places.
I am not sure I get the point here. Do you want just one standard for symmetrical encryption?

How should e.g. a C++ program know how PHP encrypts something through “encrypt_message_symmetrically”?

Embedded machinery has other needs and resources than e.g. online banking. So we can’t just have one algorithm for symmetrical/asymmetrical crypto.

It's perfectly imaginable for a library to exist that is designed for a specific use case (eg securely send a message to a recipient that already knows the key), is implemented across many languages and platforms, and defaults to the same algorithms and settings on all those platforms.

It's also perfectly imaginable for such a library to evolve over time, as insights in the security community improve. Eg it could add support for more algorithms, change defaults, etc. And it could provide helpful tools for long-time users to migrate from one algorithm/setting to another with backward compatibility.

It's hard to do, sure. But it rubs me the wrong way that the same people who keep repeating "don't roll your own crypto!" make it so needlessly hard for non-crypto people to use their work.

I think libsodium comes close to this ideal, but I still feel like it's pretty hard to navigate, and it mixes "intended for noobs" with "know what you're doing" functions in a single big bag. In a way, JWT is another thing that comes close, if only it was more opinionated about algorithms and defaults. Paseto (a JWT contender that, afaik, never made the splash I'd hoped it would) seems great, and I guess my entire rant boils down to "why doesn't something like Paseto exist for every common security use case?"

  • 3 weeks ago
> I mind that resources about how to use crypto in software applications are often inscrutable, all the way down to library design, for no good reason.

I haven't read it, but I plan to, eventually. There's a book titled "Cryptography Engineering: Design Principles and Practical Applications" that could help you.

Schneier’s book? I can fully recommend that. It comes at the solutions from a practical point of view instead of the theoretical one.
> Schneier’s book?

Yes

What would you have called the algorithms?
The character soup is fine. The problem is that people stop after that. Want security? Here, a bucket of character soup! Good luck!

There's nothing stopping library authors from choosing good defaults for well-defined use cases. My beef is that this mostly isn't done, and security documentation often doesn't do it either. That's what I like about this "Copenhagen Book": it gets this right. It starts with the use case, and then goes down to clear recommendations on how to combine which crypto primitives. Most resources take it the other way around: they start by explaining the crypto primitives in hard-to-understand terms, then, if you're very lucky, tell you what pitfalls to avoid, and mostly never even make it to the use case at all.

Would be nice to see alternative documents for similar topics (e.g. something like the OWASP Cheat Sheets but from a more practical point of view).

With all due respect, I'm a bit skeptical about this document, for these reasons:

- The name is quite pompous. It's a very good marketing trick: naming the document as if it were written by a group of researchers from a Copenhagen university. :)

Yes, Lucia is a relatively popular library, but that doesn't mean it promotes best practices or that its author should be considered an authority in such an important field unless proven otherwise.

- I don't like some aspects of Lucia's library design: when a user token is almost expired, instead of generating a new security token, Lucia suggests just extending the life of the existing one. I see that as very insecure behavior: the token lives forever and can be abused forever. This violates the security best practice of limited token lifetime.

But both Lucia and the "Copenhagen Book" encourage this practice [1]:

```
if time.Now().After(session.ExpiresAt.Sub(sessionExpiresIn / 2)) {
	session.ExpiresAt = time.Now().Add(sessionExpiresIn)
	updateSessionExpiration(session.Id, session.ExpiresAt)
}
```

[1]: https://thecopenhagenbook.com/sessions#session-lifetime

> when user token is almost expired - instead of generating new security token Lucia suggesting just to extend life of existing one

The link you posted shows code to extend the session, which is common practice (it's called rolling session), not to "extend" the token's life (which should be impossible, a token needs to be immutable in the first place, which is why refreshing a token gives you a new token instead of mutating the original).

My point is that the token stays the same the whole time instead of being rotated over time, even within the same session.
If you destroy a token and send a reply to the client with a new token, but the client has already sent you another request with the old token, that request will be denied.
I don't think there's any difference between extending the existing session and creating a new one in Lucia's context.

In the first scenario, the attacker steals the token and uses it forever. In the second, the attacker steals the token and they'll get a fresh one near its expiry. They can still impersonate the user forever. The user might notice something when they get kicked out (because the attacker renewed the token, rendering the old one invalid) but it's unlikely. For good UX, you need a grace period anyway, otherwise legitimate users can have problems with parallel requests (request one causes a token refresh, request two gets rejected because it was initiated before the first one was completed).

You can use a second token (a refresh token) but it only pushes the risk to the second token. Now we need to worry about the second token being stolen and abused forever.

Refresh tokens are useful for not having to hit the database on every request though: Typically, the short lived session token can be validated without hitting the db (e.g. it's a signed JWT). But it means that you can't invalidate it when stolen, it will be valid until expiry time so the expiry time has to be short to limit the damage. For the refresh token, on the other hand, you do hit the db. Using a second token doesn't add any security, hitting the db does, because the refresh token can be invalidated (by deleting it from the db).

Lucia always hits the db (at least in their examples), so you can invalidate tokens anytime. To mitigate risks, you can allow the user to see and terminate their active sessions (preferably with time, location, and device info: "Logged in 12 AM yesterday from an iPhone in Copenhagen"). You could also notify the user when someone logs in from a new location or device.

That's about all you can do. There's simply no fully secure way of implementing long-lived sessions.

I think you’re reading into the name a little, haha. I’m interested in your alternative method for session token replacement, though! I think you make a good point, but I’m not an expert by any means.
Usually on low-risk projects where I don't want to bother with handling token pairs (or where it's impossible), I have a similar simplified approach, but regenerating the token:

- The session token has two timepoints: validUntil and renewableUntil.
- If now > validUntil && now < renewableUntil, I regenerate the session token.

This way the user is not logged out periodically, but the session token doesn't stay the same for 5 years.
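A minimal Go sketch of that two-timestamp check (field names and durations are just assumptions for illustration):

```
package session

import (
	"errors"
	"time"
)

// Session follows the two-timepoint idea above: the token is accepted until
// ValidUntil, and between ValidUntil and RenewableUntil it gets swapped for a
// fresh one instead of being reused.
type Session struct {
	Token          string
	ValidUntil     time.Time
	RenewableUntil time.Time
}

var ErrExpired = errors.New("session expired")

// Check returns the session to keep using, minting a new token when the old
// one is past ValidUntil but still renewable.
func Check(s *Session, now time.Time, newToken func() string) (*Session, error) {
	switch {
	case now.Before(s.ValidUntil):
		return s, nil // still fresh, keep the same token
	case now.Before(s.RenewableUntil):
		return &Session{
			Token:          newToken(),
			ValidUntil:     now.Add(24 * time.Hour),
			RenewableUntil: now.Add(30 * 24 * time.Hour),
		}, nil
	default:
		return nil, ErrExpired
	}
}
```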

But maybe I'm just overthinking it. :)

I agree with this. I think all tokens should expire. If you accidentally zip up an auth token in an application's config directory it is nice if it becomes inert after a while. If you extend the token it could live forever.

For my application the token is valid for a few months, but we will automatically issue you a new one when you make requests. So the old token will expire eventually, but the client will update the token automatically, making your "session" indefinite.

So when you throw away a drive that has been sitting in the junk drawer for a year, that token is inert, even if you are still using a cloned machine that keeps extending the same "session".

There are two things that everybody misses about OAuth and they fly under the radar.

Nice to hear someone touch on one of them: you absolutely NEED to use a transaction as a distributed locking mechanism when you use a token.

This goes double/quadruple for refresh tokens. Use the same token more than once, and that user is now signed out.

It doesn't matter if your system runs on one machine or N machines; if you have more than one request with a refresh token attached in flight at once - happens all the time - you are signing out users, often via 500.

Refresh tokens are one-time use.
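A sketch of what that can look like with a relational database, using a row lock inside a transaction so concurrent requests with the same refresh token serialize (the table and helpers are assumptions, not from any particular framework):

```
package auth

import (
	"context"
	"crypto/rand"
	"crypto/sha256"
	"database/sql"
	"encoding/base64"
	"encoding/hex"
	"errors"
)

// RotateRefreshToken marks the presented token as used and issues a new one,
// all inside one transaction. Assumes a table
// refresh_tokens(token_hash, user_id, used).
func RotateRefreshToken(ctx context.Context, db *sql.DB, presented string) (string, error) {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return "", err
	}
	defer tx.Rollback()

	// FOR UPDATE locks the row: a second in-flight request with the same
	// token waits here and then sees used = TRUE instead of racing us.
	var userID int64
	var used bool
	err = tx.QueryRowContext(ctx,
		`SELECT user_id, used FROM refresh_tokens WHERE token_hash = $1 FOR UPDATE`,
		hashToken(presented)).Scan(&userID, &used)
	if err != nil {
		return "", err
	}
	if used {
		return "", errors.New("refresh token reuse detected")
	}

	if _, err := tx.ExecContext(ctx,
		`UPDATE refresh_tokens SET used = TRUE WHERE token_hash = $1`,
		hashToken(presented)); err != nil {
		return "", err
	}

	next := newToken()
	if _, err := tx.ExecContext(ctx,
		`INSERT INTO refresh_tokens (token_hash, user_id, used) VALUES ($1, $2, FALSE)`,
		hashToken(next), userID); err != nil {
		return "", err
	}
	return next, tx.Commit()
}

// newToken draws 256 bits from a CSPRNG.
func newToken() string {
	b := make([]byte, 32)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	return base64.RawURLEncoding.EncodeToString(b)
}

// hashToken stores only a hash of the token server-side.
func hashToken(t string) string {
	sum := sha256.Sum256([]byte(t))
	return hex.EncodeToString(sum[:])
}
```

Whether reuse should hard-fail like this or fall into a short grace period is the tradeoff discussed downthread.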

The other thing devs and auth frameworks miss is the "state" parameter.

There is no "one-time" over the network. Invalidating the refresh token immediately when the server recieves it is asking for trouble.
Too complicated, not suitable for everyday authentication in my opinion.

Seriously, the way auth in general is developing right now, I think we're approaching a point of insecurity through obscurity.

And with application state, you need to adapt the application logic to authentication, and the application would then have to check whether someone might have stolen your refresh token.

Most systems implement a grace period for refresh token reuse for similar reasons. Transactions don’t really solve it. (Ex: You open two tabs quickly, hitting the server with the original refresh token twice)
You are probably familiar with a document called OAuth Threat model.

In that document, refresh token rotation is preferred, but it also addresses the obvious difficulty in clustered environments: https://datatracker.ietf.org/doc/html/rfc6819#section-5.2.2....

Sorry, but that doesn't work in real-world applications. Multiple requests are fired simultaneously all the time, e.g. browsers starting with multiple tabs, smartphone apps starting up and firing multiple requests, etc.
  • rank0 · 3 weeks ago
> The other thing devs and auth frameworks miss is the "state" parameter.

What do you mean?

I wish more websites would grant you the option to say "I never want my session to expire until I log out, I understand the risks". The "remember me" button does nothing these days.

I'm so tired of having my day constantly interrupted by expiring sessions. GitHub is my least favourite; I use it ~weekly, so my sessions always expire, and they forced me to use 2FA so I have to drag my phone out and punch in random numbers. Every single time.

As well as being terrible UX, though I have no evidence to back this up, I'm pretty sure this constant logging in fatigues users enough to where they stop paying attention. If you log into a site multiple times a week, it's easy for a phishing site to slip into your 60th login. Conversely if you've got an account that you never need to log into, it's going to feel really weird and heighten your awareness if it suddenly does ask for a password.

Regardless, companies should learn that everyone has a different risk appetite and security posture, and provide options.

Side-note, Github's constant session expiry & 2FA annoyed me so much that I moved to Gitea and disabled expiry. That was 90% of the reason I moved. It's only available on my network too, so if anything I feel I gained on security. Companies 100% can lose customers by having an inflexible security model.

This resource has a good recommendation on how session lifetimes should work.

Roughly, that it should continue for 30 days if used within 30 days.

https://thecopenhagenbook.com/sessions#session-lifetime

Notion is the worst offender for me, not because of its frequency but because it will actually interrupt an active session and force you to log back in. Worse, the session seems to last just a minute or two longer than one week, which means it will boot you this week just a little bit after you started your work last week.

A few weeks back I logged in right before a recurring meeting to take notes, and for several weeks running it's been interrupting me in the middle of that meeting to force me to log in again.

Logout on a Saturday and reset the cycle
If I'm right about the timings that would only work for a week unless I intentionally log out and in every Saturday. Otherwise it'll always log me out soon after the time that I first needed Notion the week before.
Are you sure you haven't toggled something in your browser settings? There are a plethora of settings that would wind up rendering those "remember me" buttons useless. Anecdotally, I haven't had any issues with staying logged into websites. Github being one of them.
Yes I am definitely confident with how my browser is configured. Our usage patterns/tolerances must simply be different.

Part of the issue is I use it weekly from 3 different devices, so there's always one device that needs another login.

I know it's not my browser, as on my own web apps I set the maxAge of my session cookies to 10 years and they work perfectly.

  • Tade0 · 3 weeks ago
Some authentication providers cap token lifetime depending on pricing plan.
At least you can enjoy security theater.
Um, my session with GitHub never expires - weird.
Maybe it automatically extends it when you are using it?

This is at least what we do for our web application, where sessions are refreshed automatically and indefinitely unless the user is inactive for more than a few days (enough to cover Saturday/Sunday when they are not working). We have an access token that is refreshed in 5-minute intervals. The refresh request also provides a new refresh token with an extended expiration. A deactivated user can keep going for at most a few minutes until the access token expires, because the refresh request will fail. It's fine for our use case, but it may not be for everyone. We could potentially include a token blacklist in the backend for emergency use, but we haven't seen the need for it yet.

I suppose it's the _gh_sess cookie, which expires at the end of the session. The cookie lives as long as the browser is open.
[flagged]
Random users might not, but this is HN. You're getting down voted for your condescending tone about an issue that, for most technical people here, has extremely obvious risks.

Instead of implying that others are ignorant in a low-value comment, list out why you think this is a bad idea.

Seems like an "I understand the risks, keep me signed in" button would affect more than just HN users. Unless you have to prove you know the risks by logging into a HN profile with at least 1000 karma, or taking a short quiz on authentication best practices, or something.
  • baq · 3 weeks ago
He’s right, though. Saying "I understand the risk" of a session that never expires doesn't change the math: no sane person should ever request a token which never expires. Your fat tail of risk dominates the whole distribution of outcomes.
Nah, it's a stupid opinion to think you should just cache creds forever, and it doesn't deserve more than I've given. On a fuckin tech site of this caliber. This isn't a Facebook group, and it's frankly frightening that I'd have to spell this out here, of all places.
It's not a stupid opinion to have different productivity/risk tradeoffs. A lot of "all admins" don't seem to care much about productivity, yank and unintended consequences.

"All admins" pushed the ridiculous password rules and renewals for decades with similar confidence that they now push the progressively more byzantine MFA schemes.

Nice.

I recently learned about the SRP protocol [1], and I’m surprised that it’s not more widely used/mentioned: with a relatively simple protocol, you can do a ZKP and generate a session token between the server and client in one fell swoop.

[1]: https://en.wikipedia.org/wiki/Secure_Remote_Password_protoco...

Its adoption was hindered by patents.
Just chiming in that I appreciate this resource. A lot of security advice is esoteric and sometimes feels ridiculous, like a lawyer who advises you not to do anything ever. This guide was refreshingly concise, easy to follow and understand, and has good straightforward advice.

I'll keep an eye on these comments to see if there are any dissenting opinions or caveats but I know I'll be reviewing this against my own auth projects.

One thing I would like to see would be a section on JWT, even if it is just for them to say "don't use them" if that is their opinion.

  • stult · 3 weeks ago
> Like a lawyer who advises you not to do anything ever.

One of my most frequent criticisms of teams responsible for security is that they spend a lot of time telling people what not to do instead of proactively providing them with efficient tools or methods for doing what they want to do.

I would have liked to see a section on SAML and a high-level overview of how to implement it.
We at WorkOS wrote about it here: https://workos.com/blog/the-developers-guide-to-sso
I'd recommend that email flows like email verification and password reset last for several days if the secret token is strong enough. Email may be treated as a more secure system, so it may not be available to the user immediately and everywhere.
  • efitz · 3 weeks ago
By “auth” do they mean “authn” (authentication) or “authz” (authorization)?

It looks like they mean authentication but it would be nice if they were clear.

They discuss session tokens, passwords, and WebAuthn, so both.
All of those things are authentication, not authorisation. https://www.okta.com/identity-101/authentication-vs-authoriz...
  • bbor · 3 weeks ago
A) fair, b) I think this common distinction is a little overblown. Authorization is just a particularly straightforward CRUD feature, perhaps with some inheritance logic — authentication seems to be where 99% of all security sadness comes into play.

Plus there’s the less-often-discussed task of protecting some of your users from other users, such as Google vetting their html5 ads for malware, and military (all B2B?) contractors trying to write tools that aren’t useful to insider threats. It’s worse than either auth* domain IMO, as it usually involves unavoidable tradeoffs for benign users; I haven’t read this book in full but I suspect it didn’t make the list!

TBF, I’m not sure it even has a standard name yet like the other two… anyone know enough to correct me? Maybe… “encapsulation”? “Mitigation”? The only “auth*” term left is arguably “authorship”, which doesn’t really fit https://www.thefreedictionary.com/words-that-start-with-Auth

Edit; I think I just taught myself what complex authorization is! I’ve always treated it as role management, but “what roles can do what” does also fit, I have now realized. Sorry y’all - leaving it up in case it’s a learning experience for others lol

Authz is usually much more complex than strict authN since authz gets much more into the thorny people problems, things like "how do you build a system allowing arbitrary organizations of people (your customers) to systematize how they want the people within their organization to be able to access/change things." A better term I've heard is "governance" which is more indicative of the stodgy, thorny, people-oriented nature of the problem, just like governments!

There's also lots of potential levels of granularity and thus complexity, with the most granular (that I've seen) being able to model access through time as a continuum down to the individual field of each object in the business, based on wide arrays of arbitrary other factors. Think modeling problems like:

> "If condition X in the business is true then I want user X to be unable to view/edit the 'foobar' field of entity 'powzap', and I only want this rule to be true on Tuesdays of the months April and October".

That's a tough problem to tackle with a lot of subtlety to wrangle.

I have two reactions to this:

- Complicated authorization systems bleed through everything else, adding exponential complexity. Maybe, as an industry, we should seek better tradeoffs? One example I can think of is preferring auditing over authorization. It's a lot easier to build a generic, unified auditing system and interface than to build sleek, fluent UIs that also have to accommodate arbitrarily complex authz behaviors.

- OTOH, I'm very keen on fine-grained controls over what data I grant third parties access to. For example, I want to be able to say, "grant this lender access to the last 18 months of account balance for this specific account" and exactly no more or less.

  • efitz · 3 weeks ago
What people want is authorization. Authentication is a painful activity that must be performed in order to do authorization properly in most cases.

Side note: there is a trivial case where authentication is reduced to “whoever is physically holding/interacting with the system”. This is when either the operation to be authorized is relatively low risk (changing the channel on the TV with the line-of-sight IR remote control) or when you’re depending on physical security controls to prevent access to people who shouldn’t be doing the thing, e.g. requiring data center technicians to badge in before they can go into the server room and start disconnecting things.

To be fair, once someone has physical access to the machine, them having full access is just a matter of time and effort. So at that point it's security-through-too-much-effort-to-bother.
There should be a unified theory that all auth can be stacked on top of. Like, a theory of secure communication, that deals with the problem of adding security/reliability/etc. properties to a communication channel.
  • efitz · 3 weeks ago
There is a lot of formal research on both authentication and authorization. IIRC Butler Lampson’s Turing Award was based on work he did in that area: https://amturing.acm.org/award_winners/lampson_1142421.cfm
Authorization sadness is just hidden under complexity and you don't suspect anything until a breach. It needs xfail tests, but who writes them?
The first two are both authn. Is webAuthn about authz? (I don't doubt it.)
Somewhat related, I have a short rant about embedded browsers killing the web.

Embedded browsers make it impossible (literally in some cases, figuratively in others) to use social OAuth. If you click a link on Instagram, which by default opens in Instagram's browser, and that link has "Sign in with Google", it simply will not work, because Google blocks "insecure browsers", which Instagram is one. There are even issues getting "Sign in with Facebook" to work, and Meta owns Instagram and Facebook! The Facebook embedded browser suffers from similar issues.

Embedded browsers have many great use cases, but navigating to arbitrary links is not one of them.

It's virtually never useful to me when I click on a link in Slack or whatever, then respond to a text message, and go back to my browser expecting to find my page there, and it's nowhere because Slack has gobbled it up in its own browser.

Fortunately I just checked and there's a way to disable the embedded browser in Slack.

What is the rationale behind the following?

> When comparing password hashes, use constant time comparison instead of ==.

If you were comparing plaintext you'd get some info, but it seems overly cautious when comparing salted hashes. Maybe anticipating an unknown vulnerability in the hash function?
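For reference, this is what the book is asking for, in Go; a minimal sketch using the standard library (with bcrypt/argon2 libraries the provided compare function typically already does this internally):

```
package auth

import "crypto/subtle"

// HashesEqual takes the same amount of time whether the inputs differ at the
// first byte or the last, so response timing leaks nothing about the stored
// value. A naive comparison (e.g. bytes.Equal) can return early at the first
// mismatching byte.
func HashesEqual(a, b []byte) bool {
	return subtle.ConstantTimeCompare(a, b) == 1
}
```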

Good stuff. The model of "how to build" vs. "library that does" is a good idea when there's combinatorial explosion and you want to reduce the design space.

At a previous employer, people built some tool that auto-built Kube manifests and so on. To be honest, I much preferred near raw manifests. They were sufficient and the tool actually added a larger bug space and its own YAML DSL.

Anyone know why it's called the Copenhagen Book?
The guy seems to have quite a few projects with geography references for no specific reason. So I guess the answer is: it's catchy and easy to remember for most people.
He said that he picks the names of his projects by randomly choosing locations on a map.
  • fleb · 3 weeks ago
Authentication with Danish services tends to rely on MitID ("my ID"): https://www.mitid.dk/en-gb/about-mitid/

It seems MitID isn't mentioned in The Copenhagen Book: https://www.google.com/search?q=site%3Athecopenhagenbook.com...

Iceland and the Faroes follow the same one-for-all approach: https://www.audkenni.is/, https://www.samleikin.fo/.

Things are a bit more fragmented in Finland, Norway and Sweden: https://www.norden.org/en/info-norden/electronic-identificat..., https://www.norden.org/en/info-norden/electronic-identificat..., https://www.norden.org/en/info-norden/electronic-identificat...

So, it's maybe not too much of a stretch to say that "a Copenhagen way" to authenticate is to integrate with MitID, either through a certified broker or by becoming one: https://www.mitid.dk/en-gb/broker/broker-certification/

I was just reading this and was gonna recommend it to a friend! I saw the announcement about Lucia moving to a resource-based repo and, digging deeper, found The Copenhagen Book, which IMHO is the best auth-related resource I've seen in my 10+ years of career, all very well put together.

Two tradeoffs I see are that it is a bit abstract, and also a bit brief/succinct in some places where it just says how things are and not why. But neither of those is really a negative in my book, just concessions you have to make when doing a project like this. You can dig deeper into any topic, and nowadays libraries have pretty good practical setups, so as a place where it is all bound together as a single learning resource it is AMAZING. I'm even thinking of editing it and printing it!

> CSRF protection must be implemented when using cookies, and using the SameSite flag is not sufficient.

Also when it's set to strict? Or if it requires a PUT or other method that doesn't work with top-level navigation? Is it about ancient or obscure browsers that didn't/don't implement it (https://caniuse.com/same-site-cookie-attribute)?

This looks very useful if you're about to implement an auth system. But it's worth noting that many things can be offered without authentication, i.e. without an account; e.g. e-commerce can be (and in some rare but appreciated cases is) offered in "guest mode". Especially for smaller or more niche shops where return customers are less frequent, it's just good to keep that in mind.
It’s nice to see something other than “don’t roll your own, it’s dangerous.”

I especially appreciated the note that while UUIDv4 has a lot of entropy, it’s not guaranteed to be cryptographically secure per the spec. Does it matter? For nearly all applications, probably not, but people should be aware of it.

UUIDv4 has no more entropy than its generator, which is why it should only be generated by a CSPRNG. If you use a generator with 32 bits of entropy, you can generate only 4 billion UUIDs, and they will begin to collide after about 64k IDs due to the birthday paradox.
RFC4122 doesn’t require the use of a CSPRNG. From Section 4.4:

> Set all the other bits to randomly (or pseudo-randomly) chosen values.

Section 4.5, which is actually scoped to UUIDv1, does hint at it being a good idea:

> Advice on generating cryptographic-quality random numbers can be found in RFC1750.

But absolutely nothing stops you from doing this:

    import random
    import string


    def terrible_prng(k: int) -> int:
        # k random decimal digits, shifted left so the most significant bit sits at bit 127
        _int = int("".join(random.choices(string.digits, k=k)))
        return _int << (128 - _int.bit_length())


    def make_uuid_v4(_int: int) -> str:
        # force the variant field (top two bits of octet 8) to binary 10
        _int &= ~(0xC000 << 48)
        _int |= 0x8000 << 48
        # force the version nibble (high nibble of octet 6) to 4
        _int &= ~(0xF000 << 64)
        _int |= 4 << 76
        _uuid_hex = "%032x" % _int
        return "%s-%s-%s-%s-%s" % (
            _uuid_hex[:8],
            _uuid_hex[8:12],
            _uuid_hex[12:16],
            _uuid_hex[16:20],
            _uuid_hex[20:],
        )
They'll all be RFC4122-compliant (depending on how you interpret "randomly chosen values", since with `k=10`, for example, only the first field will be unique), but terrible, e.g. '83f1a1ea-0000-4000-8000-000000000000'.

In fairness, RFC9562, which supersedes RFC4122, says this in Section 6.9:

> Implementations SHOULD utilize a cryptographically secure pseudorandom number generator (CSPRNG) to provide values that are both difficult to predict ("unguessable") and have a low likelihood of collision ("unique").

And the RFC2119 definition of SHOULD requires that you "[understand and carefully weigh the] full implications before choosing a different course."

  • wg0 · 3 weeks ago
It doesn't talk about ID tokens, JWTs, etc. for the API-only security use case?
you only need to roll your own auth once and you can drop it in anywhere
I wonder why they recommend hashing server tokens in some cases. Is it so that someone who can read the database can’t hijack an account? Or am I misunderstanding why hashing is used?
  • jeltz · 3 weeks ago
My guess is that it's so people who manage to access a database backup cannot hijack accounts, plus it gives a good defence against timing attacks as a bonus.
More generally it protects against anybody who has access to the database, including bad actors if it's leaked.

I don't think it protects against timing attacks, because the common way of doing it is just to SHA-256 the token and use the resulting hash to do a lookup in the database. That is not a fixed-time operation.
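For context, the lookup pattern being described is roughly this (a sketch; the table and column names are assumptions):

```
package auth

import (
	"crypto/sha256"
	"database/sql"
	"encoding/hex"
)

// SessionIDFromToken hashes the presented token and looks the hash up, so the
// database only ever stores hashes of tokens. The index lookup itself is not
// constant-time, which is the point being made above.
func SessionIDFromToken(db *sql.DB, token string) (userID int64, err error) {
	sum := sha256.Sum256([]byte(token))
	err = db.QueryRow(
		`SELECT user_id FROM sessions WHERE token_hash = $1`,
		hex.EncodeToString(sum[:]),
	).Scan(&userID)
	return userID, err
}
```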

Suppose you used the timing attack to recover the hash of a token, now you need to compute a preimage of the hash.
Exactly that. (Hijack session rather than account: any competently designed system should require re-auth before any action that would allow permanent account takeover).
[dead]
Well, web applications are not web sites, as the latter would support authentication for noscript/basic (X)HTML browsers.
This is a great guide, thanks.
Great guide, thank you.
- Passwords must be at least 8 characters long....

- Use libraries like zxcvbn to check for weak passwords.

These rules might be good for high-security sites, but it's really annoying to me when I have to generate a length-15 string password with special characters and uppercase for some random one-off account that I use to buy a plane ticket or get reimbursed for a contact lens purchase.

This is why password managers are wonderful - let it make one for you, and then it remembers it for the next time you need to hit that site.
Somehow with password managers we collectively decided that single points of failure were good…
For the 99% user, they're a huge step up in security. The password many will use for every site is itself a single point of failure on top of often being an incredibly guessable thing like "password" or "abc123". It being the same password for everything poses the security risk that a compromise of one company's data exposes your password for another company.

Now they can be told they only have to remember a single password and that makes a difference, though it does need to be stressed that this particular password should be more secure than "password". They remember a single password -- which is ideally hard to guess -- then copy the randomly generated password for whatever account and paste it in the login form.

A real worry is the possibility of a password manager service being compromised. However, these companies hire security experts and do regular audits of their systems and practices, which, when compared to the opsec of those who choose "password" for their password, is obviously beneficial. So of course we collectively decided that single points of failure are "good"; they are far better than what we had before.

(Admittedly, perhaps one attack that's enabled is to discover services that are used by an individual via compromised data from the password manager service. I still get the feeling that such a compromise, even on a wide scale, is more easily done elsewhere.)

I definitely see your point, but let's look at what Bitwarden does:

1. Back up my passwords on their server for a fee. Well, that's (alas) hackable, so if someone gets their password they will have everyone's password file.
2. Except each one is encrypted with that user's password, and in my case it's really long. So they'd then have to break each individual one.
3. Except signing in with my password on a new device requires my YubiKey as well, or one of my lost-my-YubiKey tokens, which also only I possess.

So I'm not as worried as I probably should be :-)

A rogue update to bitwarden gets uploaded by an attacker and the entire edifice collapses at once.

Security is always as weak as the weakest link.

All password managers should support downloading and backing up your passwords, right? You can even self host if you want, at least for Bitwarden.
You misunderstood: the risk isn't that the password manager can lose your password, it's that it can be compromised, and when it is, all your accounts are compromised at once.
Yeah, at least most people eventually abandoned the ridiculous idea of mandatory password renewal every six months…
I see that most examples are implemented in Go; if possible, I would also ask the author to consider adding Python and Node.js snippets.
What's with the name?
[flagged]
[flagged]
[flagged]
Your antivirus is arguably malware itself. You don't need to give a third party your entire internet browsing content to have a secure computer.
[flagged]
Why would that be? Did they try to sell you some silly con snake-oil?

Or is it just that AVAST is aghast because some site is trying to spread some common sense? (against silicon snake-oil maybe?)

Do you think you should blindly trust AVAST?

Maybe it's just their servers suffering from some hiccups, causing their clients to have burps?

This happened before to all of those applications from any vendor, including that built-in microsoft-thing.

Multiple times.

Maybe it's just because it's using too many technical terms about cryptology, in unusual ways.

Must be a bad h4xx0r then!

Bang! Automagically blacklisted by some black-box.

Don't be a cargo-culting fashion victim.

(...zalgorithms on crack, brainz out of whack...)

I like it, but (and I know it's nitpicky) is there a PDF or other book-like thing one can "hold"?
If you're doing auth in 2024, please consider not supporting passwords. Most people will never use a password manager, and even if they did it's not as secure as key-based approaches or OAuth2.

Obviously there are exceptions

  • efitz · 3 weeks ago
> Most people will never use a password manager

Prediction: in 10 years nearly everyone will be using a password manager; it will come with their OS (Android or iOS) with browser plugins for other OS’s, and the integration with mobile apps and mobile web will be so tight that people will not even realize they are using passwords, most of the time.

Apple just massively revamped their own manager in the latest iOS release. They already have pretty good integration with mobile web and with App Store apps.

In the next couple of years I expect to see pw manager integration made a firm requirement for App Store apps, and I expect to see web standards for account signup and login that make pw managers reliable.

I suspect Google will follow suit although I am not familiar with Android’s capabilities in that area.

So in a few years you will not type an email address and password to sign up for things; the OS will prompt you: “foo.com is asking you to sign up, would you like to do this automatically?” and if you respond in the affirmative you’ll get a site-specific email address and password automatically created and stored for you, and that will be used whenever you want to log in. Recovery will shift to a mobile account centric workflow (Apple ID or Google account) rather than email based password reset links.

If a data breach is reported the pw manager app can notify you and give you a one-button-click experience to reset your password.

The downside is that if you get canceled by Apple or Google it will be a special kind of hell to recover.

Can you imagine a world where instead of sites prohibiting pasting into password fields, they prohibit hunt-and-pecking passwords? It's beautiful.
In 10 years time everyone will be using passkeys, not passwords.
And then losing access to everything when moronic automated Google systems ban your account for $REASON with no chance to appeal it.

I recently ran into an interesting problem -- my Microsoft account (used as a spam lightning rod) borked a passkey stored on a FIDO token and refused a passwordless sign-in. The same thing happened with a second backup token made by a different company. If I didn't have a password fallback, and that account was important, I would have a massive problem with no way to solve it. But the world has not yet gone completely insane, so I fired up my trusty KeePassXC and was in, in less than a minute.

Well, they'd have to ban your account and destroy your device with the passkey before you could change it. I don't think they have that power (yet).
  • efitz · 3 weeks ago
Hahahaha good one.

I love the idea of passkeys; I hate the experience of passkeys, especially when it comes to having to reach for my phone to log into a desktop web site.

In 10 years' time we'll have as many "why you should never use passkeys" posts on HN as we have for JWTs nowadays.
  • gomox · 3 weeks ago
Please, for the love of god, no.
It certainly looks that way. It's either going to be cell phone integration or AR glasses (e.g. Meta Ray-Bans). I would like to see the incorporation of a ring: a real, unintrusive wearable NFC device I can activate or press (for presence confirmation) with my thumb by just raising my hand above the keyboard. (For illustration: you ever seen guys spin their wedding band with their thumb as a twiddling activity?)
> If you're doing auth in 2024, please consider not supporting passwords.

And then realize that you need to support them, because they are the most universal solution there is for an average user. Email/Username + Password is the most portable way to do login as a user that we have invented.

Yes, made this mistake in the past.

Every project has some amount of "being quirky/different" capital. If your project is not explicitly trying to innovate, or does not for some particular reason need to be very secure, then do not spend that capital on confusing users with the login flow. You'll turn a bunch of users away and cause a whole lot of support tickets, for very little benefit. Make users only think about stuff by making it unintuitive or different if it's really worth it to your product.

I wish I could upvote more than once!
Email + magic link is a lot better for most use cases.

It's a lot simpler to implement (just one flow instead of signin / signup / forgot), less catastrophic when your data is breached, piggybacks on the significant amount of work that already goes into securing email, gives you 90% of the benefits of 2FA / FIDO / Web Authn / whatever for free with 0 implementation cost, makes account sharing harder (good for business), and is easy to extend/replace with oAuth for specific domains.
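As a rough sketch of that single flow in Go (one table, no separate signup/signin/forgot; the table, link format, and mail helper are all assumptions):

```
package auth

import (
	"context"
	"crypto/rand"
	"crypto/sha256"
	"database/sql"
	"encoding/base64"
	"time"
)

// StartLogin is the entire entry point: whether the address is new or known,
// we store a hash of a fresh single-use token and email the link.
func StartLogin(ctx context.Context, db *sql.DB, email string, sendEmail func(to, body string) error) error {
	raw := make([]byte, 32)
	if _, err := rand.Read(raw); err != nil {
		return err
	}
	token := base64.RawURLEncoding.EncodeToString(raw)
	hash := sha256.Sum256([]byte(token))

	_, err := db.ExecContext(ctx,
		`INSERT INTO login_links (email, token_hash, expires_at) VALUES ($1, $2, $3)`,
		email, hash[:], time.Now().Add(15*time.Minute))
	if err != nil {
		return err
	}
	return sendEmail(email, "Sign in: https://example.com/auth/verify?token="+token)
}
```

Verifying is the mirror image: hash the token from the URL, look it up, check the expiry, delete the row, and only then create the user record if it doesn't exist yet.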

I actively avoid services that only offer magic-link auth - it's the most annoying, shitty method that pushes all the work onto me.

No, I won’t log into my email multiple times per day because you are too lazy to hash passwords.

It always depends on the audience, but if your users are somewhat technically literate you need passwords.
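
And to be fair, hashing passwords really is only a couple of lines. A sketch assuming the `bcrypt` npm package; `password` and `submittedPassword` are placeholders for user input.

  import bcrypt from "bcrypt";

  // At signup: store only the hash (a cost factor of 12 is a reasonable default today).
  const passwordHash = await bcrypt.hash(password, 12);

  // At login: compare the submitted password against the stored hash.
  const passwordOk = await bcrypt.compare(submittedPassword, passwordHash);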

> Email + magic link is a lot better for most use cases.

Wouldn't systems like this put a lot of trust in their users? Say you use a magic link on a compromised wifi network, like in a hotel, coffee shop, airport and so on, without being on a VPN. Which some users will inevitably do.

I completely agree with the "most use cases" though. As long as you can't change the associated e-mail without additional requirements.

After you have logged in, you will get a session cookie/key that you have to send on every request. An adversary can just steal that session key from the compromised connection.
HTTPS end-to-end encrypts what’s in the address, except for the domain.
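
The usual mitigation for the stolen-session worry lives in the cookie attributes. A sketch assuming an Express-style `res.cookie` API and a placeholder `sessionToken`:

  res.cookie("session", sessionToken, {
    httpOnly: true,   // not readable by page JavaScript
    secure: true,     // only ever sent over HTTPS, never plaintext HTTP
    sameSite: "lax",  // withheld on most cross-site requests
    maxAge: 30 * 24 * 60 * 60 * 1000,
  });
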
And that's also a great way to piss the user off…
Email + magic link is a pattern I keep seeing that's far more secure in practice. So is email + email OTP.

(I've also seen phone + phone OTP, but oh please never ask me for a phone number ever again. My phone number should always only be for making and receiving calls, not for verifying any sort of identity or personhood.)

Of course, nothing beats the security and privacy of username + password + TOTP (or security key), but you can't necessarily expect normal users to know to do that (or how).

Hell, I've seen at least one site that keeps the login username (what you actually use to sign into your account) separate from the public username (what everyone else sees), just to even more disconnect the login credentials from anything a potential attacker would have access to. But this is overkill for most scenarios (that particular platform does have a good reason).
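
For what it's worth, the TOTP half of that combination is only a few lines as well. A sketch assuming the `otplib` npm package; `userSuppliedCode` is a placeholder for what the user types in.

  import { authenticator } from "otplib";

  // At enrolment: generate a secret and show it to the user (usually as a QR code).
  const secret = authenticator.generateSecret();
  const otpauthUrl = authenticator.keyuri("user@example.com", "ExampleApp", secret);

  // At login: check the 6-digit code against the stored secret.
  const valid = authenticator.check(userSuppliedCode, secret);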

I find it problematic if I do not have access to my email at that moment, or if there is a glitch in the flow and I need to wait some minutes for the mail. But that can also happen during 2FA, if email is used for it.

Also, magic links need to be designed so that I can login on my PC, and click the link on my phone, and be logged in on the PC.

Though I've really enjoyed using QR codes to login, that has been a really smooth modern experience.

"Also, magic links need to be designed so that I can login on my PC, and click the link on my phone, and be logged in on the PC."

I feel that way too - I hate it when I'm trying to log in on desktop and the email shows up as a push notification on my phone.

The problem is what happens if someone enters someone else's email address and that person unwittingly clicks on the "approve" link in the email they receive. That only has to happen once for an account to be compromised.

So now you need "enter the 4 digit code we emailed you" or similar, which feels a whole lot less magical than clicking on a magic link.

Presumably there are well documented patterns for addressing this now? I've not spent enough time implementing magic links to have figured that out.
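
One pattern I've seen (a sketch, not a spec): generate a short code, store only a hash of it against the pending login attempt, and require the code to be entered in the same browser session that requested it. `db` and `sendMail` are placeholder helpers; expire quickly and cap attempts.

  import { randomInt, createHash } from "node:crypto";

  async function startEmailOtp(email: string, pendingLoginId: string) {
    const code = randomInt(0, 1_000_000).toString().padStart(6, "0");
    const codeHash = createHash("sha256").update(code).digest("hex");
    await db.set(`otp:${pendingLoginId}`, {
      email,
      codeHash,
      attempts: 0,
      expiresAt: Date.now() + 5 * 60 * 1000,
    });
    await sendMail(email, `Your sign-in code is ${code}`);
  }

Because the code is only accepted in the session that started the flow, a stranger typing in your email address can't be "approved into" your account by you clicking something in an email.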

> someone enters someone else's email address and that person unwittingly clicks on the "approve" link

Eh? In a sane magic link system, clicking the magic link grants the clicker access to the account. Right then and there, in the browser that opened the link.

I would argue that a magic link system has to only allow the click-through to grant access on the machine that initiated the login flow.

If I enter my email in SomeSite, they send a magic link to my email address, and then Mallory intercepts that email and gains access to my SomeSite account just by opening the link (i.e. the link acts as a bearer token), that's completely broken.
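
A rough sketch of how that binding could work: when the login is requested, set a random nonce cookie in the initiating browser and store it alongside the token; when the link is clicked, only finish the sign-in if the same cookie comes back. `db` and `sha256` here are placeholder helpers.

  async function verifyMagicLink(token: string, nonceCookie: string | undefined) {
    const record = await db.get(`magic:${sha256(token)}`);
    if (!record || record.expiresAt < Date.now()) return null;
    if (!nonceCookie || record.nonce !== nonceCookie) return null; // clicked on a different device/browser
    await db.delete(`magic:${sha256(token)}`); // single use
    return record.email; // caller creates the session for this address
  }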

If someone has access to your email, they can recover passwords to everything. Email is the master key, treat it that way.
Use MFA and that is not the case.

If email is your master key to everything I would worry.

I assure you, most systems - even ones with MFA - can be reset via email.
That's a bit weird for me: I sat down at my laptop and attempted to sign into a site on my laptop, and at the end of the sign-in flow I'm not signed in on my laptop, I'm signed in on my phone.
> Also, magic links need to be designed so that I can login on my PC, and click the link on my phone, and be logged in on the PC.

No.

If magic links only log you in on the device you click them on, they prevent a lot of phishing attacks.

With a setup like that, there's literally no way to impersonate your website and steal user credentials.

This comes at a cost of making logins on public computers less secure, and which of these is more important should be weighed on a service-by-service basis.

A website for making presentations should obviously choose "more phishing and easier to use on public computers", a service for managing your employees' HR records should obviously choose the opposite.

> Email + magic link

Two scenarios I had recently, where I absolutely, utterly hated this pattern:

* I did not remember the mail address for such a thing, because I started (too late) to use a different mail address for every service, thanks to Apple's iCloud hidden addresses. And because there was no corresponding password, there was no entry in my password manager. I have since rectified that, but it's annoying.

* I tried to log in on an older Windows PC - the magic mail landed on my iPhone. And because cross-system technical standards are a thing of the past, the only way to get the magic link to the other system was to transcribe it.

  • 9dev
  • ·
  • 3 weeks ago
  • ·
  • [ - ]
> Email + magic link is a pattern I keep seeing that's far more secure in practice.

I absolutely despise this. Every time I want to quickly log into an app and check something, I just sit in front of my synchronising mail client, wondering if the email will arrive, be caught by the spam filter, or just show up after a random delay of a few minutes. Awful.

  • efitz
  • ·
  • 3 weeks ago
  • ·
  • [ - ]
If the authentication session is long-lived then this is usually not too onerous; one round trip the first time you use it.

It’s a nightmare if they also insist on short lived sessions.

I hate it too. I always prefer TOTP. I never said this isn't shitty. Just that for normal users, it's more secure than passwords.
I first saw this with Anthropic. I clear my browser pretty regularly and this flow just adds so much friction. With a password manager plus totp I never really felt burdened by logging in every time I used a service. I hope this doesn't catch on.
I'm not a fan of email + magic link. I know of two security gateways which "click" on the link to check to see if it ends up going to a known malicious website. So then the end user calls in a trouble ticket, because the login authorization page says their magic link is already expired (before they even got it).

So for me, email + email OTP is the way to go.
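
For completeness, the scanner problem with magic links is often worked around by not consuming the token on the GET at all: the link renders a confirm button, and only an explicit POST exchanges the token for a session. A sketch assuming Express-style handlers, with the token arriving as a query parameter:

  app.get("/login/verify", (req, res) => {
    // Just render a button; the token is not consumed here, so a gateway
    // that prefetches the link doesn't invalidate it.
    res.send(`<form method="post"><button>Confirm sign-in</button></form>`);
  });

  app.post("/login/verify", async (req, res) => {
    // Only here is req.query.token looked up, invalidated, and exchanged for a session.
  });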

> nothing beats the security and privacy of username + password + TOTP (or security key)

security key is at least somewhat better than TOTP because it's not phishable (or at least less so)

  > Of course, nothing beats the security and privacy of username + password + TOTP (or security key), but you can't necessarily expect normal users to know to do that (or how).
Honestly, this just seems like a UX problem.

The ways this is currently implemented are often terrible, but not always. I'll give an example: I recently did a stint at "Green company" and they gave me a YubiKey. They also used Microsoft for most things. To log in with Microsoft Authenticator I type in my username and password, click yes on the next page, and then click yes on my phone. But using the YubiKey was needlessly frustrating. First, Microsoft doesn't let you use a hardware key as the default method. So you have to click "use another form of authentication", "hardware key", "next" (why? Idk), and then finally you enter your PIN and tap the key. A bunch of needless steps there, and I'm not convinced this wasn't intentional. There are other services I've used, working at other places, where it's clean and easy: username + password, then PIN + tap key (i.e. the hardware key is the default!).

I seriously think a lot of security issues come down to UX. There's an old joke about PGP:

  How do you decrypt a PGP encrypted email? 
  You reply to the sender "can't decrypt, can you send it back in clear?"
It was a joke about the terrible UX: it was so frustrating that this outcome was considered normal. But hey, we actually have that solved now. Your Gmail emails are encrypted. You have services like WhatsApp and Signal that are E2EE. What was the magic sauce? UI & UX. They are what make the tools available to the masses; otherwise it's just for the nerds.
Better advice is to be honest about your product/project's social scope and make appropriate choices for that scope, or else let your users make that choice for themselves.

The world is not improved or made more robust if every experience online must be gated through some third-party vendor's physical widget (or non-trivial software).

There are parts of our lives that benefit from the added security that comes alongside that brittleness and commercial dependence, and parts that don't. Let's not pretend otherwise.

That seems false. I understand key-based approaches to be less secure than passwords, albeit of course not if someone is reusing passwords found in breaches.
  • wg0
  • ·
  • 3 weeks ago
  • ·
  • [ - ]
And I can't get my head around passkeys yet. Haven't switched to them. Haven't developed a clear model of where my private key is exactly, how many of them there are, and how to get to them if my camera or fingerprint sensor isn't working, etc.
Your keys are in your password manager of choice, and you can create one per service+manager (e.g. your Google account can have one passkey using iCloud Keychain, and another one using Bitwarden). If you lose access to your password manager, the other recovery processes kick in.

It might help your mental model to think about them as identical to hardware security keys. Except now you don't need to buy a specific hardware key, your password manager is it. You can also just use your hardware key as your passkey, same thing (as long as the key supports FIDO2).

Specifically, for your question about what happens if your camera or fingerprint sensor isn't working: assuming you use Android's or iOS's built-in password manager, even if biometrics fail you can just use the passcode you set on your device, as both have that fallback.
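
If it helps the mental model, here's a rough browser-side sketch of what creating a passkey looks like with the WebAuthn API. The challenge and user details below are placeholders; in a real flow they come from your server.

  const credential = await navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
      rp: { id: "example.com", name: "Example" },
      user: {
        id: new TextEncoder().encode("user-123"),
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: { residentKey: "required", userVerification: "preferred" },
    },
  });
  // The private key stays in the authenticator (password manager, platform
  // keychain, or hardware key); the server only ever stores the public key.
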

Why did we need a new standard for this, exactly?

Couldn't we just make password managers pretend that they're a YubiKey or similar?

Is it that YubiKeys don't offer any extra (master password / biometric) authn, and hence are only suitable as a second factor, whereas password managers can be used as both?

Good explanation, IMO.
I hate this take. I understand it and I don't want OAuth2 to not exist, but it isn't a *replacement*.

There are two critical things you lose with OAuth. First, centralization: you must trust that player, and if that account is compromised, everything downstream is too (already a problem with email providers, who are the typical authorities). Second, privacy: you now tell those players that you use said service.

Let me tell you, as a user, another workflow. If you use Bitwarden you can link Firefox Relay to auto-generate relay email addresses. Now each website has not only a unique password, but a unique email. This does wonders for spam and for determining who sells your data, AND makes email filters much more useful for organization. The problem? Terrible UX. Gotta click a lot of buttons, and you destroy your generated password history along the way (if you care). No way could I get my parents to do this, let alone my grandma (the gold standard of "is it intuitive?"; e.g. WhatsApp: yes; Signal: only if someone else does the onboarding).

There are downsides of course: a master password, but one that you control. At least the password manager passes the "parent test" and "girlfriend test", and they even like it! It's much easier to get them (especially parents) to remember that one complicated master passphrase that they can write down and put in a safe.

A lot of security (and privacy) problems are actually UI/UX problems. (See PGP)

OAuth recognized this, but it makes a trade with privacy. I think this can be solved in a better way. But at minimum, don't take away passwords as an option.

  • jpc0
  • ·
  • 3 weeks ago
  • ·
  • [ - ]
You are assuming a lot about who your OAuth provider is...

Sure, many places only implement Google/Meta/GitHub/Discord etc., but that's not a requirement, especially for your own app. You can implement and run your own OAuth server if you so wished, for all the good it would do.

But regardless, that's why FIDO2 and WebAuthn were developed, but even those have their issues.

  > You are assuming a lot about who your oAuth provider is

  > Sure many places only implement 
This doesn't change my concern, but yes, it deepens it. Sure, I know there can be an arbitrary authority, but does it matter when 90% don't allow another authority? I can't think of more than one time I have seen another authority listed.
You need the password for the lost passkey flow. Well, you don't need it, but it's an extra layer.