I think the primary issue is not the "send your face" (face info) to a server part. The problem is that private entities are greedy for user data, in this case tying facial recognition to activities related to interacting with other people, most of them probably real people. So this creates a huge database - it is no surprise that greedy state actors and private companies want that data. You can use it for many things, including targeted ads.
For me the "must verify" is clearly a lie. They can make it "sound logical" but that does not convince me in the slightest. Back in the age of IRC (I started with mIRC in the 1990s, when I was using windows still), the thought of requiring others to show their faces never occurred to me at all. There were eventually video-related formats but to me it felt largely unnecessary for the most part. Discord is (again to me) nothing but a fancier IRC variant that is controlled by a private (and evidently greedy) actor.
So while it is good to have the information on how to bypass anything there, my biggest gripe is that people should not think about it in this way. Meaning, bypassing is not what I would do in this case; I would simply abandon the private platform altogether. People made Discord big; people should make Discord small again if it sniffs after them.
I know you meant as a service provider, but as an avid IRC chatter (and a player of an online game that conventionally alt-tabbed into an IRC-like chat window) as a young preteen in the 90s and 00s, I made a lot of online friends whose real-life faces I would not discover for decades, some never. People I was gaming with in the 90s, I would see for the first time what they looked like on FB in the 10s, in a group made for the now-almost-dead game. It was like "swordfish - man, where are you now? I don't even know your real name to find ya. shardz - you look exactly like I would picture ya!"
Just some musings.
Since there were no other websites like that back then, it was eventually overrun by non-IRC-users and transformed into what we'd now call a more generic social media platform. Something like the eternal September I guess. People started calling the gallery "IRC" as shorthand, which royally pissed off the original userbase. Fun times.
Then Facebook appeared and everyone moved there.
It's still up, but it's more of a historical relic these days. Not sure who, if anyone, still uses it: https://irc-galleria.net/
Wikipedia: https://en.wikipedia.org/wiki/IRC-Galleria
Poland's social media of choice was "Nasza Klasa" (lit. "Our Class"), the American alternative was called "Classmates" as far as I know. It was intended as a service that let you re-unite with your old classmates, designed with the way the Polish school system worked in mind. It was used for far more than that though, and was quite popular among kids who were still at school.
We're still in that era with messaging apps somehow. While the local alternatives have mostly died out, the world is now a patchwork of WhatsApp, Messenger and Telegram, with islands of iMessage, Line, KakaoTalk and WeChat thrown into the mix. Most countries have basically standardized on one of these, but they can't agree on which one.
As another 90s preteen, sure, but the internet today has a lot more pedos and groomers online than in the 90s, and preteens today easily share footage of themselves with those adult weirdos, which mostly didn't happen in the 90s because of the limitations of the technology.
But if you look at tiktok live it's full of preteen girls dancing, and creepy old men donating money to them, to the point where tiktok live is basically a preteen strip club. We can't ignore these obvious problems just because we grew up with internet in the 90s and turned out alright.
We have to separate kids from adults on the internet somehow. I distrust age-verification systems because they basically remove your anonymity, but a solution is inevitable, even if it will be faulty and unpopular and people will try to bypass it.
If laws need to be made about something it should be to punish those parents who neglect to safeguard their children using the tools already available to them.
If the parental controls currently provided aren’t sufficient then they should be modified to be so - in addition to filtering, they should probably send a header to websites and a flag to apps giving an age/rating.
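Roughly what I have in mind, as a toy sketch only: no such standard exists, the header name and age band are made up, and I'm using a mitmproxy-style addon as a stand-in for the parental-control layer.

    # Hypothetical parental-control proxy addon: attach an advisory age-band
    # header to every outgoing request so sites can apply age-appropriate defaults.
    from mitmproxy import http

    AGE_BAND = "13-15"  # configured by the parent; value is illustrative

    def request(flow: http.HTTPFlow) -> None:
        flow.request.headers["X-User-Age-Band"] = AGE_BAND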
Not only is the internet forever, but what is on it grows like a cancer and gets aggregated, sold, bundled, cross-linked with red yarn, multiplied, and multiplexed. Why would you ever want cancer?
It's a false equivalence only if you decide to equate the two. My question wasn't worded that way. I'm curious to know whether someone who opposes this type of law is also for or against other laws dealing with similar issues in other contexts.
Also, as I said in another post, there are plenty of places, online, where you have to identify yourself. So this is already happening. But again, I'm personally interested in people's intuitions when it comes to this because I find it fascinating as a subject.
There isn't any way to achieve the same digitally.
I hope this becomes more widespread / standardized; the precursor for iDIN is iDEAL which is for payments, that's being expanded and rebranded as Wero across Europe at the moment (https://en.wikipedia.org/wiki/Wero_(payment)), in part to reduce dependency on American payment processors.
Just allowing a service provider to receive a third-party attestation that you are "allowed" still allows the third party to track what you are doing even if the provider can't. That's still unacceptable from a privacy standpoint; I don't want the government, or agents thereof, knowing all the places I've had to show ID.
I'm personally more interested in the intuition people have when it comes to squaring rejecting age verification online while also accepting it in a multitude of other situations (both online and offline)
In real-world scenarios, I can observe them while they handle my ID. And systematic abuse (e.g. some video that gets stored and shows it clearly) would be a violation taken seriously.
With online providers it's barely news worthy if they abuse the data they get.
I'm not against age verification (at least not strongly), but I'd want it in a 2-party, 0-trust way. I.e. one party signs a JWT-like thing containing only one bit, and the other validates it without ever contacting the issuer about the specific token.
So one knows the identity, the other knows the usage, but they are never related.
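Something like this, as a rough sketch (payload format and names are mine, not any real scheme): the issuer signs a one-bit payload, and the site verifies it offline with the issuer's public key, never contacting the issuer about that specific token.

    import json, time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Issuer side (knows your identity, e.g. a government IdP)
    issuer_key = Ed25519PrivateKey.generate()
    issuer_pub = issuer_key.public_key()  # published out of band

    payload = json.dumps({"over_18": True, "iat": int(time.time())}).encode()
    signature = issuer_key.sign(payload)

    # Relying party side (knows the usage, never learns the identity)
    try:
        issuer_pub.verify(signature, payload)
        print("age check passed:", json.loads(payload)["over_18"])
    except InvalidSignature:
        print("attestation rejected")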
I could be wrong but I think this is how the system we have in place in Italy works. And I agree that it's how it should work.
Personally I'm still trying to figure out where my position is when it comes to this whole debate because both camps have obvious pros and cons.
And even then, only when I'm in foreign countries.
It's weird how radicalized people get about banning books compared to banning the internet.
I don't think asking for age verification is the same as banning something. Which connection do you see between requiring age and free speech?
Second, it's turn-key authoritarianism. E.g. "show me the IDs of everyone who has talked about being gay" or "show me a list of the 10,000 people who are part of <community> that's embarrassing me politically" or "which of my enemies like to watch embarrassing pornography?".
Even if you honestly do delete the data you collect today, it's trivial to flip a switch tomorrow and start keeping everything forever. Training people to accept "papers, please" with this excuse is just boiling the frog. Further, even if you never actually do keep these records long term, the simple fact that you are collecting them has a chilling effect because people understand that the risk is there and they know they are being watched.
Maybe I'm wrong (not reading all the regulations that are coming up), but the scope of these regulations is not to ban speech but rather to prevent people under a certain age from accessing a narrow subset of the websites that exist on the web. That to me looks like a significant difference.
As for your other two points, I can't really argue against those because they are obviously valid but also very hypothetical and so in that context sure, everything is possible I suppose.
That said something has to be done at some point because it's obvious that these platforms are having profound impact on society as a whole. And I don't care about the kids, I'm talking in general.
Under most of these laws, most websites with user-generated content qualify.
I'd be a lot more fine with it if it was just algorithms designed for addiction (defining that in law is tricky), but AFAIK a simple forum where kids can talk to each other about familial abuse or whatever would also qualify.
I'm currently scrolling through this list https://en.wikipedia.org/wiki/Social_media_age_verification_... and it seems to me these are primarily focused on "social media" but missing from these short summaries is how social media is defined which is obviously an important detail.
Seems to me that an "easy" solution would be to implement some sort of size cap this way you could easily leave old school forums out.
It would no be a perfect solution, but it's probably better than including every site with user generated content.
An alternative to playing whac-a-mole with all the innovative bad behavior companies cook up is to address the incentives directly: ads are the primary driving force behind the suck. If we are already on board with restricting speech for the greater good, that's where we should start. Options include (from most to least heavy-handed/effective):
1) Outlaw endorsing a product or service in exchange for compensation. I.e. ban ads altogether.
2) Outlaw unsolicited advertisements, including "bundling" of ads with something the recipient values. I.e. only allow ads in the form of catalogues, trade shows, industry newsletters, yellow pages. Extreme care has to be taken here to ensure only actual opt-in advertisements are allowed and to avoid a GDPR situation where marketers with a rapist mentality can endlessly nag you to opt in or make consent forms confusing/coercive.
3) Outlaw personalized advertising and the collection/use of personal information[1] for any purpose other than what is strictly necessary[2] to deliver the product or service your customer has requested. I.e. GDPR, but without a "consent" loophole.
These options are far from exhaustive and out of the three presented, only the first two are likely to have the effect of killing predatory services that aren't worth paying for.
[1] Any information about an individual or small group of individuals, regardless of whether or not that information is tied to a unique identifier (e.g. an IP address, a user ID, or a session token), and regardless of whether or not you can tie such an identifier to a flesh-and-blood person ("We don't know that 'adf0386jsdl7vcs' is Steve at so-and-so address" is not a valid excuse). Aggregate population-level statistics are usually, but not necessarily, in the clear.
[2] "Our business model is only viable if we do this" does not rise to the level of strictly necessary. "We physically can not deliver your package unless you tell us where to" does, barely.
We know we cannot trust service providers on the internet to take care of our identifying data. We cannot ensure they won't turn that data over to a corrupt government entity.
Therefore, we can not guarantee free speech on these platforms if we have a looming threat of being punished for the speech. Yes these are private entities, but they have also taken advantage of the boom in tech to effectively replace certain infrastructure. If we need smart phones and apps to interact with public services, we should apply the same constitutional rights to those platforms.
https://en.wikipedia.org/wiki/List_of_pseudonyms_used_in_the...
Are private social media platforms "public services"? And also, you mentioned constitutional rights. Which constitution are we talking about here? These are global-scale issues; I don't think we should default to the US constitution.
> We know we cannot trust service providers on the internet to take care of our identifying data.
Nobody needs to trust those. I can, right now, use my government-issued ID to identify myself online using a platform that's run by the government itself. And if your rebuttal is that we can't trust the government either, then yeah, I don't know what to say.
Because at some point, at a certain level, society is built on at least some level of implicit trust. Without it you can't have a functioning society.
This is somewhat central to remaining anonymous.
Protesters and observers are having their passports cancelled or their TSA precheck revoked due to speech. You cannot trust the government to abide by the first amendment.
Private services sell your data to build a panopticon, then sell that data indirectly to the government.
Therefore, tying your anonymous speech to a legal identity puts one at risk of being punished by the government for protected speech.
Again, this is a global issue. There is no first amendment here where I live. But the issue of the power these platforms have at a global level is a real one and something has to be done in general to deal with that. The problem is what should we do.
This is a stopgap at best, and to be blunt, it's naive. They can go on their friends' phones, or go to a shop and buy a cheap smartphone to circumvent the parental controls. If the internet is locked down, they'll use one of many "free" VPN services, or just go to school / library / a friend's place for unrestricted network access.
Parents can only do so much, realistically. The other parties that need to be involved are the social media companies, ISPs, and most importantly the children themselves. You can't stop them, but they need to be educated. And even if they're educated and know all about the dangers of the internet, they may still seek it out because it's exciting / arousing / etc.
I wish I knew less about this.
Not if the rule includes easy rule circumvention. For example, if you could parent-control lock the camera roll to a white list of apps.
Want to post on social media so your friends would see? No can do, but you can send it to them through chat apps. Want to watch tik-tok? Go ahead. Want to post on tik-tok? It's easier to ask parent to allow it on the list, then circumvent, and then the parent would know that their child has a tik-tok presence, and — if necessary — could help the child by monitoring it.
The current options for parent control are very limited indeed. You can't switch most apps to readonly, even if you are okay with your child reading them — it's posting you are worried about.
But in an ideal world there would be better options that would provide more privacy and security for the child, while helping parents restrict options if they feel their child isn't ready to use some of the functions.
So I could explore things but not get into anything naughty.
When I decided to get into software dev I got my own cpu and my own phone once I had a job in dev.
Might seem pretty conservative but it worked, and I'm technical enough now. I wish I would have got into coding earlier but I've done alright so :shrug: Depending on the environment for my kids I'd move the timeline back a little, but not too much. Having too much time and just the unfiltered internet to fill it is too dangerous for young teens.
tl;dw it's quite capable for the money and could easily get on social media apps/sites.
I would not be here if I didn't get my start in my early teen years.
A smart phone is too disconnected of a device when compared to the desktops we all grew up on. No one is talking about fully banning <18s from the internet (at least no one serious) - it's a discussion about making sure that the way folks <18 use the internet is reasonably safe and that parents can make sure their children aren't being exposed to undue harm. That's quite difficult to do with a fully enabled smart phone.
By 16 I was regularly ignoring my parents to go to bed when I was up coding or gaming and doing dumb script kiddie stuff on IRC.
I had an adult introduce me to Astalavista (https://en.wikipedia.org/wiki/Astalavista.box.sk)
Thinking back to that I was very well aware of the fucked up part of the internet much more so than most adults around me. People did in fact meet up in person with strangers from the internet even back then.
I think it's more important to teach around age 10-14 about the dark side of the internet so that late teens can know how to stay safe. Rather than simply throwing them into the reality of it unprepared as "adults".
Also frankly I don't want to know the search history of a late teen. There's a degree of privacy everyone is entitled to.
It's also important to acknowledge that not every kid used the internet in our day, and the usage of the internet varied wildly. Whereas nowadays it's an expectation for everyone to be at least moderately online (often required by academia), and often that their presence is tied to their real name.
It's not the porn or the LiveLeak gore content that would have me worried. It's groomers and other adults with bad intentions. Not something you can easily block and not something this ID check will stop. A groomer will slow burn a social relationship with someone until they are legal adults. That's something you can only teach someone to look out for. And even adults are susceptible to this.
Secondly the parents need some similar education, either face-to-face education or information material sent home.
It will not prevent everything, but we cannot just expect kids and parents to already know about parental-control features, uBlock Origin-type tools, or what dangers are out there.
We have to trust parents and kids to protect themselves, but to do that they need knowledge.
Of course some parents and kids don't care or do not understand or want to bypass any filters and protections, but at least a more informed society is for the better and a first step.
Yeah but many parents are stupid and want the government to force everyone to wear oven mitts to protect their kids from their poor/lack of parenting. What do you do then?
Remember how, after a lot of men died in WW2, kids grew up in fatherless homes, which led to a rise in juvenile delinquency? Instead of admitting fatherless homes were the issue, the government and parents let the "researchers" blame violent comic books, so the government, with support from parents, introduced the Comics Code Authority regulations.
People and governments are more than happy to offload the blame for societal issues messing up their kids onto external factors: be it comic books, rock music, MTV, shooter videogames, now the internet platforms, etc.
And they didn't even try to hide very much.
Look at the story from darknet diaries, where the interviewee talks about setting up an AOL account with girlie name and instantly getting flooded with messages, 9/10 of them being from pedos.
https://darknetdiaries.com/transcript/56/
Don't have any examples myself because I was a spectrum kid at that time, quite oblivious to the idea.
Without some data analysis I honestly don't know. Even before Internet (ex: FidoNet) there was plenty of very bad stuff out there, I don't see any clear reason why the pedos and groomers would have avoided it.
> We have to separate kids from adults on the internet somehow
I think what is much worse than in other mediums is the actual lack of a community that observes. In real life, for many cases, you would have multiple people noticing interactions between kids and adults (sports, schools, parks, shops, etc.), so actions might be taken when/before things get strange. On some of the social networks on the internet it is too much one-to-one communication which avoids any oversight.
So, for me, the idea of "more separation" seems likely to generate even more problems in the long term, because of the lack of (healthy) interactions and a community.
I think it's technically possible to build a privacy-preserving age verification. I also think it should be done by the government, because the government already has this information.
There were not fewer pedos and groomers online in the 90s, you were just lucky to have avoided it.
Nowadays, with the number of internet users slowly converging toward the total population, the percentages are probably converging as well.
The challenge with "protect the children" is not only evildoers targeting them, but also the supposed targets actively seeking things out themselves. They'll be the first ones looking for ways to circumvent age verification.
That said, I don't expect this to happen, switching is very hard for many reasons.
Historical precedent: prohibition.
Alternate future: the big websites start losing billions because people just use the internet less or not at all because it's a hassle with no return, and tax revenue drops. Then the politicians start to worry.
Even in the absence of democracy, public opinion affects politics.
The compliance industry has grown from zero to $90B after we cracked the nut that everything needs compliance.
Here is a good book about the topic https://www.amazon.com/Compliance-Industrial-Complex-Operati...
US tech companies are constantly under FTC audit relating to how they use user data. This is certainly not something that needs to be seriously worried about, certainly less so than say the way in which cameras placed all over cities are used to track all sorts of people or storing GPS locations attached to a specific devices UUID.
Do platforms want to counter it?
Seems to me with an unreliable video selfie age verification:
* Reasonable people with common sense don't need to upload scans of their driving licenses and passports
* The platform gets to retain users without too much hassle
* Porn site users are forced to create accounts; this enables tracking, boosting ad revenue and growth numbers.
* Politicians get to announce that they have introduced age controls.
* People who claimed age checks wouldn't invade people's privacy don't get proven wrong
* Teens can sidestep the age checks and retain their access; teens trying to hide their porn from their parents is an age-old tradition.
* Parents don't see their teens accessing porn. They feel reassured without having to have any awkward conversations or figure out any baffling smartphone parental controls.
Everyone wins.
* authorities get to selectively crack down on sites for not implementing "proper" age verification. The sites never had a widespread problem with grooming to begin with but just so happened to have a lot of other activity that the authorities didn't like.
Having everyone operate in a gray area is dangerous and threatens the rule of law.
It wouldn't be hard to imagine a situation where social media sites leaning towards the government (e.g. Truth Social, X or the like) will be getting a free pass on using age verification methods which are easy to bypass, while social media sites that are more critical (e.g. Reddit) will be sanctioned into implementing the strictest and most privacy-invading measures. The end result is that people choosing the path of least resistance will be led to the government-leaning sites.
We already had a half-assed solution, where websites would require you to press the button that says "I am over 18". Clearly somebody decided that wasn't good enough. That person is not going to stop until good enough is achieved.
They're designed to destroy anonymity, to give the in-group a pretext to persecute the out-group. It will be propagandized as accountability, but it will be anything but.
The US is repealing section 230, and it appears to be a pretext for shutting down platforms that don't block anti–Trump speech. Australia has an age verification law that seems to actually be about keeping kids off social media.
> Reasonable people with common sense don't need to upload scans of their driving licenses and passports
Cue random bans.
> People who claimed age checks wouldn't invade people's privacy don't get proven wrong
And? Is that supposed to change anything?
Only if the lawmakers agreed.
I'm curious what traffic loss the sites that enforce this ("your state has banned...") are seeing. Because I'm not gonna sign up for a porn site lmao, the stigma
My guess is that's probably one of the reasons Google tried to push for Play Store only apps, provide a measurable/verifiable software chain for stuff like this.
It's not the fancy structured light of phone-style Face ID, but it still protects against the more common ways of fooling biometrics, like holding up a photo or wearing a simple paper mask.
Maybe new ones are different but that’s how they used to be. Little Kinect devices, really, for sensing faces instead of whole people.
https://learn.microsoft.com/en-us/windows-hardware/design/de...
These cameras are considered a "secure biometric" device and AFAIK nobody has faked them. I've flagged the poster who said "try two flat images"
Hello there! It appears you are misapplying the flagging system. While the suggestion may be incorrect, it is not an "egregious comment".
In addition, your comment doesn't follow the Hacker News Guidelines:
https://news.ycombinator.com/newsguidelines.html
Don't feed egregious comments by replying; flag them instead. If you flag, please don't also comment that you did.
Have a great day!
I don't think this will happen in the US but I can see it in more privacy-respecting countries.
Apple and Google may also add some kind of “child flag” parents can enable which tells websites and apps this user is a child and all age checks should immediately fail.
Like, you’d enroll it by adding a DOB and the computer/phone/etc would just intentionally fail all compatible age checks until that date is 18 years in the past. To remove it (e.g. reuse a device for a non-child), an adult would need to show ID in person at Apple.
Government IDs could be used to do completely privacy-preserving verification: basically OpenID Connect, but with no identifying properties, just an "isEighteenOrMore" claim. However, I agree it'll never happen in the US because "regular" people still don't know how identity providers can attest without identifying, and thus would never agree to use their government ID to sign into a porn site. And on top of all that, yeah, nobody trusts the government, basically in either party, so they'd be convinced the government was secretly keeping a record of which porn sites they use. Which to be fair is not entirely unlikely. Heck, they'd probably even do it by incompetence via logs or something and then have people get blackmailed!
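For what it's worth, the shape of such a token is simple. A toy sketch with PyJWT; the claim name "isEighteenOrMore" and the audience are made up, this is not any real government API:

    import jwt, time
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Government IdP key pair (illustrative; the public key would be published)
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    priv_pem = key.private_bytes(serialization.Encoding.PEM,
                                 serialization.PrivateFormat.PKCS8,
                                 serialization.NoEncryption())
    pub_pem = key.public_key().public_bytes(serialization.Encoding.PEM,
                                            serialization.PublicFormat.SubjectPublicKeyInfo)

    # Token carries no name, no DOB, no stable subject -- just the one claim.
    token = jwt.encode({"isEighteenOrMore": True, "aud": "example-site",
                        "exp": int(time.time()) + 300}, priv_pem, algorithm="RS256")

    # The site verifies the signature and learns only the boolean.
    claims = jwt.decode(token, pub_pem, algorithms=["RS256"], audience="example-site")
    print(claims["isEighteenOrMore"])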
I never put in my real birthday. It's just one more datapoint to leak in an inevitable hack and help scammers exploit me.
Just because a website sticks a field on a form, doesn't mean you need to fill it out.
I can think of maybe 1 website I use that has a legitimate use to know this info about me... and a dozen that use my fictitious birthday for no other purpose than an excuse to market at me under the shallow guise of a 'Happy Birthday' email.
When it's actually required by some law or regulation (e.g. financial stuff) I give my actual birthday. But when some site is just wanting to comply with age verification? Yep, I'm over 30, so you don't need to see my identification. (Jedi hand wave).
"I am altering the deal. Pray I do not alter it any further."
IIRC, it went like this: the account creation screen prompted them for a birthdate. They entered a fictitious one and pretended to be over 13. (I saw my niece do this in front of me, and I just sighed a very heavy sigh. She was way more interested in Club Penguin.)
Then later, they let the cat out of the bag. They tell their friends "lol I'm only 10! Today's my birthday, so give me a hat!" or something. And so if they claimed they're 10 they got 3 years suspension.
I think there was never any verification done, and no verification was possible: think about it, under COPPA, a service in the USA cannot collect PII from children under 13, so what do you do when a kid gives you two contradicting datapoints? Err on the side of caution.
I gave Yahoo! a false birthdate when I signed up. I was 27, but I also just felt they weren't entitled to knowing it. However, I soon found that maintaining a fraudulent identity is tiresome and error-prone. And Yahoo! wouldn't let me simply change my birthdate as often as I wanted to.
I once had a conversation with a friend about cheating on IRS taxes. She said "can you lie to a piece of paper?" like fudging numbers wasn't like lying to an auditor's face. It was a rhetorical question, of course.
ID checks aren't very worthwhile if anyone can use any ID with no consequences.
How long would it take for someone's 18 year old brother to realize they can charge everyone $10 to "verify" everyone's accounts with their ID, because it doesn't matter whose ID is used?
The older brother could also rent an R (or x) rated movie, buy cigarettes, lighters, dry ice, and give them to the kids. The point of the age check is to prevent kids from getting access without an adult in the loop, not to prevent an adult from providing kids access
The "oh my god, think of the children" is similar to "oh my god, think of the terrorists". I am not saying all of this is propaganda 1:1 or a lie, but a lot of it is and it is used as a rhetoric tool of influence by many politicians. Both seems to connect to many people who do not really think about who influences them.
South Korea also has had various versions of this even going back to ~2004 I think.
That looks like it should make things like privacy compatible age verification "trivial".
In Japan, there are already multiple apps which use something like this to verify user's age via the "my number card" + the smartphone's NFC reader.
It's more or less impossible to forge without stealing the government's private keys, or infiltrating the government and issuing a fraudulent card.
Of course, the US isn't a functioning state, the people don't trust it with their identity and security and would rather simply give all their information to private companies instead.
Does this also leak your identity to the app?
If you use the _digital_ MyNa card (e.g. the one in the Wallet.app, not the plastic one), the iOS SDK lets you request only the "is user more than XX years old" flag, without getting the actual identity: https://developer.apple.com/documentation/passkit/requesting...
Now, AFAICT nobody actually does this, but the technical ability is there.
Credit cards don't have photos.
> How many Americans wouldn't be able to present a CC or ID?
The number of Americans who don't have a government issued photo ID is estimated around 1%. The number gets larger if you start going by technicalities like having an expired ID that hasn't been renewed yet.
The intersection between the 1% of 18+ Americans who don't have an ID and those who want to fully verify their Discord accounts is probably a very small number.
Same in the UK, but Steam uses credit cards for age verification there and refuses if you provide a debit card instead. Evidently the payment backends can tell credit and debit apart.
It sometimes asks for my age for viewing a game and I can input any ol' date I want to. It doesn't even flinch if I input a different date every time.
I also don't recall them asking about my age when I was actually underage and paid using a PaySafeCard, but then again they didn't have porn on the platform at that point either.
They only enforce it in the "mature sexual content" category, which mainly applies to porn games. For everything else, including the "some sexual content" category, they still just take your word for it.
Yeah those are parallel systems for reasons that amount to technical debt.
For DL alone:
>Data indicates that approximately 84% to 91% of all Americans hold a driver's license, with roughly 237.7 million licensed drivers in the U.S. as of 2023.
Add in an ID and Passport and we are likely closer to 99%
> Nearly 21 million voting-age U.S. citizens do not have a current (non-expired) driver’s license. Just under 9%, or 20.76 million people, who are U.S. citizens aged 18 or older do not have a non-expired driver’s license. Another 12% (28.6 million) have a non-expired license, but it does not have both their current address and current name. For these individuals, a mismatched address is the largest issue. Ninety-six percent of those with some discrepancy have a license that does not have their current address, 1.5% have their current address but not their current name, and just over 2% do not have their current address or current name on their license. Additionally, just over 1% of adult U.S. citizens do not have any form of government-issued photo identification, which amounts to nearly 2.6 million people.
From https://cdce.umd.edu/sites/cdce.umd.edu/files/pubs/Voter%20I...
> Additionally, just over 1% of adult U.S. citizens do not have any form of government-issued photo identification, which amounts to nearly 2.6 million people.
The rest of the statistic is about driver's licenses specifically, including technicalities like expiration dates and address changes. The online ID checks for age verification don't care about the address part anyway, in my experience.
If someone has an expired drivers' license or they changed their name and haven't updated their IDs, they have bigger problems than age-verifying their Discord accounts.
I actually only renewed it to get medical care and because renewing the license was only a little more expensive than getting an ID-only card.
It did prevent me from using some porn sites because my state requires ID verification but many sites just ignore the requirement so I just didn't use the sites that required ID.
It’s less like a TLS handshake and more like OpenID for Verifiable Presentations (OID4VP). The "non-free" hardware requirement serves as Remote Attestation—it allows a verifier to cryptographically prove that the identity hasn't been cloned or spoofed by a script. The verification happens offline or via a standard web flow using the DMV’s public key to validate the data signature, ensuring the credential is authentic without requiring a phone-home to the issuer.
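To make the "no phone-home" part concrete, here's a heavily simplified sketch of the selective-disclosure idea behind mdoc-style credentials. Field names and structure are mine, not the actual ISO/OID4VP encoding: the issuer signs salted per-attribute digests, the holder reveals only the age flag plus its salt, and the verifier checks everything against the issuer's public key offline.

    import os, json, hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def digest(name, value, salt):
        return hashlib.sha256(salt + json.dumps([name, value]).encode()).hexdigest()

    # Issuance (DMV): sign the list of salted digests, not the values themselves.
    dmv_key = Ed25519PrivateKey.generate()
    attrs = {"family_name": "Doe", "birth_date": "2001-04-02", "age_over_18": True}
    salts = {k: os.urandom(16) for k in attrs}
    digest_list = json.dumps(sorted(digest(k, v, salts[k]) for k, v in attrs.items())).encode()
    issuer_sig = dmv_key.sign(digest_list)

    # Presentation: the holder discloses only one attribute and its salt.
    disclosed = ("age_over_18", True, salts["age_over_18"])

    # Verification: offline, using only the DMV's public key.
    dmv_key.public_key().verify(issuer_sig, digest_list)   # digest list is authentic
    assert digest(*disclosed) in json.loads(digest_list)   # disclosed attr was signed
    print("age_over_18 verified without learning name or birth date")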
I think you're... missing the point of the pushback. People DO NOT WANT to be identified online, for fear for different types of persecution.
My guess is that 95% or more of all Discord users do not care and simply upload their selfie or ID card and be done with it. I know I will (although they did say that they expect 80%+ to not require verification since they can somehow infer their age from other parameters)
I've already cancelled my Nitro account. I'm quite active on a ~5k member programming server and we're giving Zulip another try. I think it's unlikely we'll stay on Discord.
Obviously anecdotal, but eventually this adds up.
This whole thing being "for the safety of kids" is obviously a farce just to get more user data because Nitro users supposedly will have to do the ID check as well, but if you're paying with a CC/Paypal, you are obviously of sufficient age to not require an ID check.
Are you a minority, LGBTQ+, etc or of a "different" political persuasion that might have any reason to be distrustful of the US government? If so, you probably wouldn't just "be done with it".
Yes but for completely different reasons: I will not bother to play the game and stop using the platform.
That's the endgame and what the EU really wants. No poasting unless they can arrest you for inconvenient memes.
Weird thing: the people who want this validation fully expect you to pay for it, maintain it, keep it valid, and pay for upkeep/service, all for their desires. Honestly, this is something that SHOULD get very aggressive pushback, but most people accept it for no reason.
Are you sure it's that simple? How high does the resolution need to be for the camera to not be able to tell? And I'm sure there are subtle clues. Remember, you can't modify the photo or change the camera.
Ad-hoc identification can occur via other means like dynamic knowledge based authentication. The sources of this mechanism can be literally anything. Social media itself being one obvious source for the target cohort.
You can walk into many US financial institutions without an ID and still get really far using KBA workflows. The back office will hassle you for a proper scan of a physical ID, but you can often get an account open and funded with just KBA.
This basically only gets used for businesses that need a fig leaf for regulatory purposes. You know, $30 loans for uber eats and tiny loans like that.
In the nomenclature of Multi-Factor Authentication, "something you know" is one factor. So if you know a password and you have a hardware token, that's 2 factors and combining different types is the key to MFA.
Many "knowledge based authentication" tries to string together "things you know" without a different type, and that's a weakness.
However, it can be strengthened through various techniques. If a human is authenticating you in real-time, they may choose a factoid that an impostor is unlikely to know which may be agreed in advance. For example, the security questions combined with other challenges, or a "curve ball" that may elicit a stutter, pause, or prevarication. This is a dynamic method that bob refers to.
In fact, knowledge-based quizzes are used routinely by credit reporting agencies -- the big ones like Experian. And they've been presented by background check services, too. They work like this: they scrape your credit reports and public records in a deep dive for your old addresses, employers, contact info, a whole smorgasbord of stuff. Maybe attackers know some of it. But it's multiple choice: "which of these did you live at? None of the above? All of them?" "Which one of these wasn't your employer?" And the attacker would need to have the same list of public records, and also know the wrong answers! Knowing the wrong answers is the "curve ball" here! How many attackers know that I didn't work for Acme, Inc, and I never lived in San Antonio?
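A toy sketch of how such a quiz gets assembled (data obviously made up): mix the on-file record with plausible decoys, so an attacker also has to know which answers are wrong.

    import random

    on_file = {"past address": ["12 Oak St", "9 Birch Ave"],
               "past employer": ["Initech"]}
    decoys  = {"past address": ["77 Pine Rd", "3 Elm Ct", "450 Maple Blvd"],
               "past employer": ["Acme, Inc", "Globex", "Hooli"]}

    def make_question(label):
        correct = on_file[label][0]
        options = [correct] + random.sample(decoys[label], 3) + ["None of the above"]
        random.shuffle(options)
        return f"Which of these was a {label} of yours?", options, correct

    question, options, answer = make_question("past employer")
    print(question, options)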
It's also worth pointing out that I've opened at least 3 bank accounts without setting foot in a bank. Even if yours is brick-and-mortar, they probably have a flow on their website for account creation and funding. It is not difficult to satisfy their ID requirements. If they glitch, then you're just flagged a bit, and you follow up as instructed. I've also authenticated identity to the federal government agencies, and accessed several DMV services, using only the apps and websites.
People may feel reticent about establishing their identity online, but isn't it better that you do it first before someone else does? If your identity is known and registered and builds up data points that correspond to you, aren't you less likely to be a victim of fraud or identity theft when things don't add up?
Yes - and they don't work.
> They work like this: they scrape your credit reports and public records in a deep dive for your old addresses, employers, contact info, a whole smorgasbord of stuff.
Most of which don't work on an 18-year-old. No credit history, no past employers, no bill payments, no history of moving houses, address is their parents' house.
There is no smorgasbord. There's name, date of birth, parents' address - all of which are widely known matters of public record (which is why the credit rating agency has them in the first place).
> But it's multiple choice: "which of these did you live at? None of the above? All of them?" "Which one of these wasn't your employer?"
Fantastic, the credit rating agency has just told the fraudster several of your past addresses, and your past employers.
Sure, there's a phony or two in the list - but the fraudster can try as many times as they want, comparing employer and address lists between different credit applications.
Also, they will probably find that out, and the moment people do so, they become suspicious to state actors. I understand the rationale behind the work around you described; I just don't think it will be a huge factor. I see this elsewhere too - for instance, I use ublock origin a lot. But how many people world wide use it? I think never above 30%, most likely significantly fewer (or perhaps all anti-advertisement extensions, I think it most definitely is below 50% and probably below 30% too).
There are a lot of countries and US states where such validation is possible.
Given the state is mandating these checks, it only makes sense that the state should be responsible for making it possible to perform these checks.
Gross.
(I'm not verifying anywhere unless required for official business. Still have my non-KYC sim for people)
The issue is that age verifiers (like Discord) are not really trying.
They also have you move your head in multiple directions.
It would be interesting to see a model completely indistinguishable from a real human in behavior, as well as real-time reflection off different surfaces, etc.
The next step would be to make a complete digital clone of a person based on surreptitiously recording them with hidden cameras. I doubt it's possible.
We've had FaceRig for over a decade now, and face filters more recently. It's not hard anymore.
Your better bet would be to generate a face as an image; then you can easily generate that same face in different expected poses and conditions. You can then use existing models where you get to select the starting image and the ending image. Add some filters and noise to make it look like a normal crappy low-light camera.
As for the color, that's another expected condition and can be overlaid or pre-generated.
See: Login.gov (USPS offline proofing) and other national identity systems.
(digital identity is a component of my work)
That's going to be a no from me, dawg. I'm sympathetic to ID checks like if you're buying beer or whatever, but not linking my real life identity to discord or whatever.
Pornhub is fighting state age verification and keeps losing state by state, for example.
Using a government issued eID system. The EU is going to rollout eID in a way that a site can just ask “is this person > age xy?”. The answer is cryptographically secure in the sense that this person really is this age, but no other information about you has to be known by the site owner.
Which is the actual correct way to do it.
I don't understand why all the sites go crazy with flawed age verification schemes right now, instead of waiting until the eID rollout is done.
EDIT: I forgot to mention that it’s only the correct way if the implementation doesn’t give away to your government on which sites you browse… Which I believe is correctly done in the upcoming EU eID but I could be wrong about it.
This would in total make sure that only one account can be created with the private key, while exposing no information about the private key aka user to the provider. I am fairly certain that should work with our cryptographic tools. It would ofc put the trust on the user not to share their eID private key, but that is needed anyway. Either you manage it or it gets managed (and you lose some degree of privacy).
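One way to get that property, as a sketch of the idea rather than the actual EUDI design: the wallet derives a deterministic per-site pseudonym from a secret that never leaves it, so duplicate accounts on one site collide while pseudonyms across different sites stay unlinkable.

    import hmac, hashlib, os

    user_secret = os.urandom(32)  # lives inside the eID wallet / secure element

    def site_pseudonym(secret: bytes, site_id: str) -> str:
        # Same person + same site => same pseudonym (duplicates detectable);
        # different sites => unrelated pseudonyms (no cross-site linking).
        return hmac.new(secret, site_id.encode(), hashlib.sha256).hexdigest()

    print(site_pseudonym(user_secret, "chat.example"))
    print(site_pseudonym(user_secret, "forum.example"))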
"The actual correct way" is an overstatement that misses jfaganel99's point. There are always tradeoffs. EUDI is no exception. It sacrifices full anonymity to prevent credential sharing so the site can't learn your identity, but it can recognize you across visits and build a behavioral profile under your pseudonym.
> Since Proof of Age Attestations are designed for single use, the system must support the issuance of attestations in batches. It is recommended that each batch consist of thirty (30) attestations.
It sounds like the application would request a batch of time-limited proofs from a government server. Proofs get burned after a single use. Whether or not you've used any, the app just requests another batch at a set interval (e.g. 30 once a month), so you're rate-limited on the backend.
Edit: it seems issuing proofs is not limited to the government; e.g. banks you're a client of can also supply you with proofs (if they want to partake for some reason). I guess that would multiply the number of proofs available to you.
> Relying Party SHALL implement the protocols specified in Annex A for Proof of Age attestation presentation.
> A Relying Party SHOULD implement the Zero-Knowledge Proof verification mechanism specified in Annex A
If that ever repeats, the same ID was used twice. At the same time, the site ID would act as salt to prevent simple matching between services.
If you need to verify even more accounts the government can have some annoying process for you to request another batch of IDs.
A token is generated that has a timestamp and is signed by a private key with payload.
The public key is available through a public api. You throw out any token older than 30 seconds.
Unlimited IDs.
That's basically what you want.
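A minimal sketch of that (the format and the 30-second window are just what's described above, nothing standardized):

    import json, time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    issuer_key = Ed25519PrivateKey.generate()
    issuer_pub = issuer_key.public_key()       # served via a public API

    token = json.dumps({"over_18": True, "ts": time.time()}).encode()
    sig = issuer_key.sign(token)

    def accept(token: bytes, sig: bytes, max_age: float = 30.0) -> bool:
        issuer_pub.verify(sig, token)          # raises if forged
        return time.time() - json.loads(token)["ts"] <= max_age

    print(accept(token, sig))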
Do they? The UK's population is more than double Australia's, and some websites (e.g. imgur) are outright blocking the UK.
They can't afford to and will never strike off 100m Brits and Aussies, and that number will only rise with more high-income countries making regulation.
While it's not without faults (services do not always support alternative authentication which may support foreigners having the right to live in the country), it has been quite reliable for so many years.
So just to say, you can have successful alternatives to a government controlled system as many actors may decide it is quite valuable to develop and maintain such a system and that it aligns with their interest, and then have it become a de-facto standard.
{"error":"error parsing webview url"}
Edit: Apparently my discord account is in some kind of A/B feature test that uses a different verification provider, Persona
But in practice, this only holds if regulators are either inattentive or satisfied with checkbox compliance. If a government is competent and motivated, this approach won’t hold up—and it may even antagonize regulators by looking like bad-faith compliance.
I’ve also heard that some governments are already pushing for much stricter age-verification protocols, precisely because people can bypass weaker checks—for example, by using a webcam with partial face covering to confuse ID/face matching. I can’t name specific vendors, but some providers are responding by deploying stronger liveness checks that are significantly harder to game. And many services are moving age verification into mobile apps, where simple JavaScript-based tricks are less likely to work.
...source?
I sincerely doubt that Discord's lawyers advocated for age verification that was hackable by tech savvy users.
It seems more likely that they are trying to balance two things:
1. Age verification requirements
2. Not storing or sending photos of people's (children's) faces
Both of these are very important, legally, to protect the company. It is highly unlikely that anyone in Discord's leadership, let alone compliance, is advocating for backdoors (at least for us.)
Point is, these kinds of schemes exist, where internal communication is deliberately hobbled so a company can comply maliciously with requirements while still being completely in the clear as far as any actual recorded evidence goes. And there's always at least one person piping in with a naive "source?" as if people would keep recorded evidence of their criminal conspiracies.
1. Removes the pain of age verification, encouraging some people to stay in the proprietary walled garden when everyone would be better served by open platforms (and network effects).
2. Provides a pretext for more invasive age verification and identification, because "the privacy-respecting way is too easily circumvented".
3. Encourages people to run arbitrary code from a random Web site in connection with their accounts, which is bad practice, even if this one isn't malware and is fully secure.
The code was released, therefore it is not arbitrary (problem #3). Should companies react with more invasive techniques (problem #2), users can always move to other platforms (problem #1).
Until the cycle restarts again with new platforms.
Also, I am convinced that self-hosting or getting a new platform (including a return to traditional forums) to run may well be bureaucratically harder at this point, given the case of lfgss' shutdown: https://news.ycombinator.com/item?id=42433044
Oh cool, which ones?!
…aaaand there's the problem.
There are multiple open-source tools that do everything Discord does. There are few-to-none that offer everything Discord does, and certainly none that are centralized, network-effect-capture-ready.
Short term:
* Small group chats with known friends: Signal, whatsapp, IRC, Matrix
* Community chat: Zulip, Rocket.chat
* Community voice: Mumble, Teamspeak
* Video / screen sharing and voice chat: Zoom, BigBlueButton, Jitsi
I've heard about Stoat but haven't read up on it.
Is there an architectural opportunity to build a "self-hosted push notification" app and business, where the push broker builds an app to deploy to Play, and the self-hosted apps build trust with the broker? The broker app sends push notifications to the user device, which can inform them of the message sent and open arbitrary app windows.
There's an implementation sample here: https://fluffy.chat/en/faq/#push_without_google_services
Thread OP here. This is exactly my point: today isn't that day. I need it to be.
Some future server that I can migrate my community to isn't useful.
narrator> And that's when he discovers his account has now been hacked...
;)
It's a pre-emptive move against any (potential) legislative pressures.
The fact that Safari refuses to support modern features and is forced on iOS devices makes it even worse.
Calling it a "mild effort" assumes skills that older generations took for granted but many young people seem to have been actively trained out of. We're past the era where I take for granted that aspiring programmers need to have the basics of a terminal or shell explained to them, into one where they might need an explanation for the basics of a file system and paths. I wouldn't be surprised to hear that hardly any of them could touch-type, either. (I wonder what the speed record is for cell phone text input...)
Yes, they can query a search engine (kind of) or, I guess nowadays, ask ChatGPT. But there's going to be more to setting up an alternative than that. And they need to have the idea that an alternative might exist. (After all, they're asking ChatGPT, not some alternative offering from a company that provides alternatives to Google services....)
Look at the Amnezia VPN. It's an app that helps you buy a VPS from a range of cloud providers, then sets it up, completely from the phone, as an exit node under user control.
I don't see why a chat server cannot be set up and managed this way. It only takes one dedicated developer to produce.
by a system with an incentive to keep them in centralized black boxes, yes.
>The rest will be taken care of.
It's never the tech that's hard, but the networks. If people were able to just jump on a whim, a lot of the dynamics of modern corruption would fall apart.
The Network Effect.
That's it. Their friends are there so they're there.
What's the problem? You're filtering out people who don't really care about participation in whatever group or society is there. People who want to participate will move to an acceptable service and those who feel that is too much effort probably weren't participating much (if at all) anyway - in that case the only difference is the visible list of people with accounts going down, not the actual "users".
It's also a futile effort, since age checks for adult content are becoming the law around the world, so soon any platform you move to will have the same checks.
Most people just care about being able to talk to each other, not their devotion to some "group or society".
You underestimate how many people would rather do nothing than be inconvenienced, sadly. If you're not the personality that the community is rotating around, you'll find the migration pretty lonely.
Heck, even established personalities can only do so much. Remember that Microsoft paid top Twitch streamers tens of millions to move to Mixer for exclusive streaming. Even that wasn't enough to give it a leg up.
The effort to coordinate everyone to move at the same time is bordering on impossible.
I don't think asking people to abandon a platform works. We need to fight for open protocols.
In the gaming sphere it's so universally used that all the friends you've ever made while gaming are on it, as well as all your chat history, and the entire history of whatever server you met them on. And if you want to make new friends, say to play a particular game, it's incredibly easy to find the official game server and start talking to people and forming lobbies with them.
My main friend group in particular has a server that we've had running since we were teenagers (all in our mid-20s now) which is a central place for all of the conversations we've ever had, all of the pictures we've ever sent each other, all the videos we've ever shared, and so on. That's something I search back through frequently looking for stuff we talked about years ago.
So I'm not saying it's impossible to move, but understand that it would require:
- Intentionally separating from the entire gaming sphere, making it so, so much harder to make new friends or talk to people.
- Getting every single one of your friends that you play games with to agree to downloading and signing up for this new service (in my case that would be approx. a dozen people)
- Accepting that this huge repository of history will be wiped out when moving to the new service (I suppose you could always log back in and scroll through it, but it's at least _harder_ to access, and is separated from all your new history)
On top of this, every time I've looked for capable alternatives to Discord I've come up empty-handed. Nothing else, as far as I can tell supports free servers, the ability to be in multiple servers, text chat divided into separate channels, optional threaded communication, voice chat joinable at any time with customizable audio setup (voice gate, push-to-talk, etc), game streaming from the voice chat at any time, and some "friend" system so that DMs and private calls can be made with each other. And even if I found one, then again I can't express enough that in the gaming sphere effectively _zero_ people use it or even know what it is.
Anyways, I'm not saying that nothing could make me abandon Discord, I'm just saying that doing so is a tremendous effort, and the result at the end will be a significantly worse online social life. So not a mild inconvenience.
This is true, but one needs to regularly back this up elsewhere if you care about it. If you're not in control of it, it can go away in an instant; Discord could one day decide to ban your server or anything else, and then it's gone.
There are a lot of barriers between kids and better solutions, one of which is that anything needs a domain and a server, and that means a credit card.
And yet here we all are, still in an uproar every time GitHub goes down. Change is slow, we can't all leave GitHub in a day. Same with Discord users.
Getting everyone to switch away from Discord has been hard because getting everyone to spontaneously switch with no clear benefit hasn't worked. They want to just keep using the app and get back into a game with their friend.
It's different to lock a door and task users with getting the key to come back in. This is more similar to an MMORPG that kills their audience because they cause the core group to stop playing and then all of the other players experiences get worse, which causes a downward trend that avalanches.
Somehow Discord pulled it off. It really didn't have much of an edge over the other chat apps at launch, just was slightly easier to use because it was simpler. A new site launching now could easily have that over Discord.
From experience, I know if I leave that few of my friends will follow. So I understand the resistance.
because that's not how they view it. For most Gen Z users and younger their digital identity already is their identity and they have no problem verifying it because the idea of being anonymous on a social network defeats the purpose of being there in the first place.
They grew up being watched. They know what these data harvesting operations are and how dangerous this is. They've got front row seats to the dystopia. The difference is that they can't / couldn't do anything about it.
They think the world is broken and that you broke it. They're pissed off. And powerless. Not a good combination
Even McKinsey is now reporting on it:
> Some Gen Zers push back on a lack of privacy, creating online subcultures that fantasize about anonymity: the pastoral “cottagecore” aesthetic, inspired by tiny cabins and homegrown greens, was one of Gen Z’s first major trends.
> Some opt out; the New York Times recently reported on a group of self-described Luddite teens who found community by kicking smart devices in favor of the humble flip phone.
> Even if you don’t go that far, many young people are veering away from “everyone knows everything” social media to curate a close group of friends and carefully monitor how much they put online.
https://www.mckinsey.com/~/media/mckinsey/email/genz/2023/01...

Looking at the numbers that TikTok or Meta are doing, I think you can unequivocally say that the vast majority of young people do not care, at all; the 'luddite teen' is the digital version of, and about as real as, the Gen Z 'trad wife'.
If you're going to a CCC event you're much more likely to see resistance in the form of someone like Cory Doctorow, an actually angry middle-aged guy who, to my knowledge, has not converted to flip-phone cottagecore to stick it to the man.
Indeed, this reads as a case of somebody forgetting that the news doesn't report what's absolutely normal to everybody. It reports what's unusual. (Plus all the articles that misrepresent people's opinions either deliberately for clicks, or accidentally through lack of understanding, sometimes due to being given a quota of articles to rush out per day.)
Perhaps the universalizing mistake is going a little bit in both directions here.
There's a huge current trend where people love to tar an entire generation with the same brush. When a person a generation or more removed (in either direction) says something we personally disagree with, it's become the norm to put down that entire generation as though they share the same viewpoint. It's a very unfortunate trend IMO because it often comes across as arrogant and/or patronising.
I don't, but I'd expect there to be a hole where they used to be, and I haven't seen one, or any concern from the platforms whose income and future depend on those users. So I'd probably shelve that as yet another exodus story that did not, in fact, happen.
{"error":"failed to execute k-id privately action (status=404)"}
I'm very much an adult; this whole thing is ridiculous. Ban me, I don't care.

If I recall, I had a fairly decent view of their various checks because it was delivered completely unminified, including a couple of amusing sections and unimplemented features. (A gesture detector with the middle-finger gesture commented out in the enum, for example...)
Another attack vector that I speculated upon was intercepting and replacing their tflite model with one's own, returning whatever results were required.
Additionally, I believe they had a check for virtual camera names in place: checks would quietly fail with a generic message in the interface, but the responses showed the reason as being a virtual camera. (Camera names are mutable, though, so...)
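For illustration, a naive label-based check like the one described might look something like the sketch below. This is a hypothetical reconstruction of the idea, not k-ID's actual code; and since device labels come from the driver and most virtual-camera software lets you rename them, a denylist like this is trivial to sidestep.

```typescript
// Hypothetical sketch of a label-based virtual-camera check (not k-ID's real code).
// Device labels are reported by the OS/driver and can usually be changed,
// so a denylist like this offers little real protection.
const VIRTUAL_CAMERA_HINTS = ["obs virtual", "droidcam", "manycam", "v4l2loopback"];

async function looksLikeVirtualCamera(): Promise<boolean> {
  // Labels are only populated after the user has granted camera permission.
  const devices = await navigator.mediaDevices.enumerateDevices();
  return devices
    .filter((d) => d.kind === "videoinput")
    .some((d) => VIRTUAL_CAMERA_HINTS.some((hint) => d.label.toLowerCase().includes(hint)));
}
```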
There is no alternative for Discord for bigger groups.
If there was, I still couldn’t move multiple social circles to it, no matter how much I evangelised.
The “just don’t use the less morally aligned platform” argument has always been valid only for those without a strong need for it, whether it’s X or Discord.
Are you saying that people who don't talk to their friends over Discord don't have friends?
Is that a statement you genuinely find reasonable?
Signal for direct messaging and calls
So once you have friends, all connected parties are required to install Discord. How does that work?
Are your parents friendless? Do they use Discord?
However, the orgs don’t get to capture verified adult user identity to pad the value of their user data profiles…
[0] https://blog.google/company-news/inside-google/around-the-gl...
https://gist.github.com/mary-ext/6e27b24a83838202908808ad528...
The official app/client is 100% legally compliant in its unmodified state. But doing something like using another client, having your PDS say you're age verified, or using a uBlock Origin rule to change where the geolocation API thinks you are completely sidesteps it.
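As a rough illustration of the geolocation part: a userscript or scriptlet only needs to shadow the Geolocation API before the page reads it. The snippet below is a hypothetical sketch of that idea, not an actual uBlock Origin rule, and it only helps against sites that use the browser API rather than server-side IP lookups.

```typescript
// Hypothetical sketch: override navigator.geolocation so the page sees fixed coordinates.
// A uBlock Origin scriptlet or userscript would inject something equivalent at document-start.
const fakePosition = {
  coords: {
    latitude: 48.8566, // placeholder coordinates for illustration
    longitude: 2.3522,
    accuracy: 20,
    altitude: null,
    altitudeAccuracy: null,
    heading: null,
    speed: null,
  },
  timestamp: Date.now(),
} as unknown as GeolocationPosition;

navigator.geolocation.getCurrentPosition = (success) => success(fakePosition);
navigator.geolocation.watchPosition = (success) => {
  success(fakePosition);
  return 0; // dummy watch id
};
```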
I've heard, but haven't confirmed, that they also detect you opening developer tools using various methods, and remove your auth keys from localStorage while you have it open to make account takeovers harder (but not impossible).
Opening the browser console in a separate window mitigates some of that detection.
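For context, one common (and easily fooled) detection heuristic compares the window's outer and inner dimensions, which only diverge when the tools are docked; something like the hypothetical sketch below, which is my own illustration and not Discord's actual check. An undocked console window doesn't change those dimensions, which is presumably why a separate window helps.

```typescript
// Hypothetical sketch of a crude docked-devtools heuristic (not Discord's actual check).
// It only fires when the tools are docked, because an undocked console window
// doesn't change the gap between outer and inner dimensions.
const DOCK_THRESHOLD_PX = 170; // illustrative threshold

function devtoolsProbablyDocked(): boolean {
  const widthGap = window.outerWidth - window.innerWidth;
  const heightGap = window.outerHeight - window.innerHeight;
  return widthGap > DOCK_THRESHOLD_PX || heightGap > DOCK_THRESHOLD_PX;
}

setInterval(() => {
  if (devtoolsProbablyDocked()) {
    // A hostile client could clear auth material here, as described above.
    // "token" is a placeholder key name for illustration.
    localStorage.removeItem("token");
  }
}, 1000);
```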
Every time I open the dev tools on Safari (to reverse-engineer some random broken website that doesn't let me do what I need to and forces me to write yet another Python script using Beautifulsoup4), Google logs me out of all of my accounts.
To add insult to injury, Google's auth management is so broken that if I log in to the "wrong" account first by accident (e.g. when joining a work meeting from Calendar.app), that account now becomes primary for Google Search / YouTube, and there's no way to change that without logging out of all accounts and then logging into them again.
You can open the network tab, click an API request, and copy the token from the Authorization header.
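If you go that route, the copied value can just be reused as-is in your own requests. A minimal sketch, where the endpoint and response shape are placeholders rather than a documented API:

```typescript
// Minimal sketch: reuse a bearer/auth token copied from the network tab.
// The URL is a placeholder, not a documented endpoint.
async function whoAmI(token: string) {
  const res = await fetch("https://example.com/api/v1/users/@me", {
    headers: { Authorization: token }, // value used verbatim, exactly as copied
  });
  if (!res.ok) throw new Error(`request failed: ${res.status}`);
  return res.json();
}
```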
No, they just keep moving it between updates. It's still there. It just gets harder to extract.
Seems I'm not the only one either: https://github.com/xyzeva/k-id-age-verifier/issues/7
Only annoyed adults, who don't see the point in pursuing a bypass, will supply their actual ID, which is what will eventually get breached in the inevitable yet-another-breach.
These schemes only place the honest at risk.
To be clear - this is a wholly discretionary act on their part to implement this in jurisdictions that have no such legal requirement.
I've never used twitter on a phone, yet that's the only official way to go through the age verification process. Youtube too.
I attempted to get through the youtube one on a new account to see an age-gated video, but couldn't finish the process and gave up. At the time, I remember thinking it would be easier for me to buy an age verified google account from someone.
Discord will require a face scan or ID for full access next month - https://news.ycombinator.com/item?id=46945663 - Feb 2026 (1999 comments)
Discord Alternatives, Ranked - https://news.ycombinator.com/item?id=46949564 - Feb 2026 (456 comments)
Discord faces backlash over age checks after data breach exposed 70k IDs - https://news.ycombinator.com/item?id=46951999 - Feb 2026 (21 comments)
What's less common, but still seen occasionally, is their opposite: "fuckings".
Apparently Twitch doesn't like Mozilla Firefox...
> the presence of clunky workarounds like this doesn't affect it if it doesn't reach the mainstream.
I suspect that the mainstream would eventually find it - like how VPNs suddenly became very popular in the UK.
Anyone got a clue what that means?
That is why we, the [Blue / Red] party are announcing today a manifesto pledge to outlaw all computers that allow unsigned booting of unauthorized platforms, to outlaw all browsers that do not participate in the chain of trust this provides, and to outlaw all websites that do not verify the code path from boot to browser.
Only with complete trust and authorization will we be able to sleep safe in the knowledge our children’s faces are being scanned by law abiding patriots and not subverted by evil hackers like xyzeva and Dziurwa.
— General Secretary gorgoiler
.. .. ..
*What do you do, btw, if you extend your political machine into another country by subsuming their party into yours, but their colour is traditionally X and yours is traditionally Y? Mixed light: the White party? Mixed paint: the Brown party?
Edit: might only be a minor API call issue[2]
You can also self-host the backend from https://github.com/xyzeva/k-id-age-verifier.
CC everyone.
There are many ways in which such a system could be implemented. They could have asked people to use a credit card. Adult entertainment services have been using this as a way to do tacit age verification for a very long time now. Or, they could have made a new zero-knowledge proof system. Or, ideally, they could have told the authorities to get bent.
Tech is hardly the first industry to face significant (justifiable or unjustifiable) government backlash. I am hesitant to use them as examples as they're a net harm, whereas this is about preventing a societal net harm, but the fossil fuel and tobacco industries fought their governments for decades and straight up changed the political system to suit them.
FAANG are richer than they ever were. Even Discord can raise more and deploy more capital than most of the tobacco industry at the time. It's also a righteous cause. A cause most people can get behind (see: privacy as a selling point for Apple and the backlash to Ring). But they're not fighting this. They're leaning into it.
Let's take a look at what they're asking from people for a second, the face scan:

> If you choose Facial Age Estimation, you’ll be prompted to record a short video selfie of your face. The Facial Age Estimation technology runs entirely on your device in real time when you are performing the verification. That means that facial scans never leave your device, and Discord and vendors never receive it. We only get your age group.
Their specific ask is to get depth data by having you move the phone back and forth. This is not just "take a selfie": they're getting the user to move the device laterally to extract facial structure. The "face scan" (how is that defined?) never leaves the device, but that doesn't mean the biometric data isn't extracted and sent to their third-party supplier, k-ID. From the article: "k-id, the age verification provider discord uses doesn't store or send your face to the server. instead, it sends a bunch of metadata about your face and general process details."
The author assumes that "this [approach] is good for your privacy." It's not. If you give me the depth data for a face, you've given me the fingerprint for that face. A machine doesn't need pictures; "a bunch of metadata" will do just fine.

Discord is also doing profiling along vectors (presumably behavioral and demographic features) which the author describes as:
> after some trial and error, we narrowed the checked part to the prediction arrays, which are outputs, primaryOutputs and raws.
> turns out, both outputs and primaryOutputs are generated from raws. basically, the raw numbers are mapped to age outputs, and then the outliers get removed with z-score (once for primaryOutputs and twice for outputs).
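To make that concrete, z-score filtering over an array of raw age predictions would look roughly like the sketch below. This is my own illustration of the technique the article names, not the verifier's actual code, and the threshold and sample numbers are invented; "applying it twice" just means running the same filter again on the survivors.

```typescript
// Illustrative sketch of z-score outlier removal over raw age predictions.
// Not the verifier's actual code; it only demonstrates the described technique.
function removeOutliers(values: number[], maxZ = 1.5): number[] {
  const mean = values.reduce((sum, v) => sum + v, 0) / values.length;
  const variance = values.reduce((sum, v) => sum + (v - mean) ** 2, 0) / values.length;
  const std = Math.sqrt(variance);
  if (std === 0) return values; // all values identical, nothing to filter
  return values.filter((v) => Math.abs((v - mean) / std) <= maxZ);
}

// Per the quoted description: filter once for primaryOutputs, twice for outputs.
const raws = [19.2, 21.7, 20.4, 55.0, 22.1]; // made-up example values
const primaryOutputs = removeOutliers(raws);
const outputs = removeOutliers(removeOutliers(raws));
```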
Discord plugs into games and allows people to share what they're doing with their friends. For example, Discord can automatically share which song a user is listening to on Spotify with their friends (who can join in), the game they're playing, whether they're streaming on Twitch, etc. In general, Discord seems to have fairly reliable data about the other applications the user is running. Discord also has data about your voice (which they say they may store) and now your face.

Is some or all of this data being turned into features that are being fed to this third-party k-ID? https://www.k-id.com/
https://www.forbes.com/sites/mattgardner1/2024/06/25/k-id-cl...
https://www.techinasia.com/a16z-lightspeed-bet-singapore-par...
k-ID is (at first glance) extracting fairly similar data from Snapchat, Twitch etc. With ID documents added into the mix, this certainly seems like a very interesting global profiling dataset backstopped with government documentation as ground truth. :)
The root problem is that Discord is asking users for their real identity in exchange for accessing social media content. That is a line that simply should not be crossed.
They can change the implementation later. They can make it harder to bypass. They can identify users who bypassed it and start them over from square one. They can change what type of content is blocked. They can alter the deal, but users cannot take back their identity once it is handed over.
Discord has become a platform that is outwardly adversarial to its users. Don't try to fight it. Don't keep investing in a platform that's actively hostile to you. Cut your losses now and find something else.
We really need to teach people to stop being fooled by this; a "bunch of metadata" is often enough to fully reconstruct a face.
There's often a degree of uncertainty with the data advertisers have. This would heavily reduce that uncertainty and enable worse behavior on the part of advertisers.