this post was submitted on 05 Apr 2026
814 points (99.4% liked)

Technology

[–] mrmaplebar@fedia.io 26 points 7 hours ago (3 children)

What's their motivation? "Protecting the children" from pictures of boobs?

Doubtful.

Instead, I think the AI companies are looking for ways to more easily distinguish human-made "content" from bot-made content, in order to decrease the amount of generative slop that ends up being fed back into their training data.

[–] anon_8675309@lemmy.world 6 points 3 hours ago

They want other entities to be responsible for it so they don’t have to be.

[–] buddascrayon@lemmy.world 13 points 6 hours ago* (last edited 6 hours ago)

Considering the fact that AI based age verification now usually involves taking a head shot and uploading it to the servers of the company, I'm guessing there is an element of facial data collection involved as well.

[–] brucethemoose@lemmy.world 7 points 5 hours ago* (last edited 5 hours ago) (1 children)

It’s anticompetitiveness.

They want to squash open models, and anyone too small to comply with this.

I say this in every thread, but the real AI “battle” is open-weights ML vs OpenAI style tech bro AI. And OpenAI wants precisely no one to realize that.

[–] mrmaplebar@fedia.io 0 points 5 hours ago (1 children)

"Open weights" isn't worth a shit either.

If you don't have the access to training data or the computational power to bake it into a model, you are beholden to somebody's (or rather, some corporation's) binary blob. We're talking about the difference between freeware and FOSS effectively.

Not that it matters, because all of this generative AI stuff is for talentless tech bro chuds in the first place...

[–] brucethemoose@lemmy.world 1 points 3 hours ago* (last edited 3 hours ago)

Even not-fully-reproducible open-weights models are extremely important because they're poison to OpenAI, and they know it. It makes what they're trying to commodify and control effectively free and utilitarian.

But there are fully open models, too, with public training data.

[–] sturmblast@lemmy.world 52 points 13 hours ago (1 children)

Anyone want to bet Palantir is behind this push?

[–] WorldsDumbestMan@lemmy.today 6 points 12 hours ago

You know what they are going to be saying to the stockholders once they get wiped from Europe?

I-ran.

[–] qevlarr@lemmy.world 115 points 17 hours ago* (last edited 17 hours ago) (1 children)

It's identity verification. No more anonymity. The wet dream of marketers and autocratic governments alike

[–] BygoneNeutrino@lemmy.world -4 points 5 hours ago* (last edited 5 hours ago) (1 children)

I'm pretty sure the vast majority of the people who are anonymous are robots and stealth marketers. Normal people aren't usually willing to put in the effort to maintain their privacy. They use their "anonymous" social media and AI accounts on devices tied to their verified credentials (Google, Amazon, Walmart, Microsoft, etc.)

My theory is that these companies want identity verification to prevent swarms of bot farms from clogging up their servers. Up until this point, the drawbacks of identity verification outweighed the positives.

[–] qevlarr@lemmy.world 4 points 4 hours ago

No man. Second time running into you on this very topic on a 7 day old account. I'm going to tag you.

[–] Tollana1234567@lemmy.today 80 points 18 hours ago* (last edited 18 hours ago) (20 children)

It's backed by Meta AI and OpenAI; they all want to sell the data to the govt, that's where the money is. And the last one I forgot was Palantir.

[–] NateNate60@lemmy.world 128 points 20 hours ago (5 children)

It's possible to construct an age-verification system that allows a user to verify they are over the age of 18 without divulging any other information whatsoever.

But that would defeat the point of "age" verification for these goons.

[–] mrmaplebar@fedia.io 3 points 7 hours ago (1 children)

I kind of disagree. How can you be certain a person is a certain age without determining who that person is?

The local AI concept is flawed, as is anything that relies on trusting the user.

If you want to be certain that someone is over 18 at some point you need a government ID or birth certificate, and at that point you know a hell of a lot more about them than their age.

This is identity verification.

[–] NateNate60@lemmy.world 1 points 2 hours ago

In general, we accept that the Government already knows who you are, how old you are, and where you live. That's already a given. The purpose of a zero-knowledge age verification scheme is to allow a third party (not the Government) to be confident that a person is an adult, without being given any additional information or being able to deduce any additional information from what they're given. So essentially, they get only 1 bit of information: whether the user is an adult (true/false). In practice, a perfect system is not possible, since the fact that you receive a response also means you get the answer to related questions, like whether the user possesses a Government-issued ID (obviously "true" if they can successfully complete the verification).

So, here's how such a scheme might work. There are many possible implementations.

In the United States, we have (optional) digital ID cards. These are added to one's digital wallet in a similar manner to payment cards and can be used for things like buying alcohol, getting through airport security, and driving. This digital infrastructure can be re-used.

  1. An organisation which wants to perform digital identity verification generates a cryptographic key pair and registers the public key with a Government server ahead of time. The public key is published to a Government-run public keyserver.
  2. A website who wants to verify a user's age sends a verification request to a Government server, digitally signed with their private key. The server responds with a request ID, which is a random, but unique, string of characters.
  3. The website provides this string to the user. The user copies the string.
  4. The user opens their digital wallet, selects their ID card, and then opens the age verification feature. The user pastes the request ID into their digital wallet, which fetches information about the request from the Government server. Because the request which the request ID is associated with was signed using the organisation's private key, the Government can tell the user who initiated the request.
  5. The user is asked to confirm/deny the age verification request. If the user confirms, a biometric check is required to unlock their private key (stored in the device's keystore), sign the approval response, and send that response to the Government server. The Government server checks that the signature is valid and tied to the key associated with that ID before marking the verification request as completed.
  6. After confirming, the user returns to the website and clicks a button which says "I've completed the verification." The website then queries the request ID with the Government server (again, signing the request with their private key). The Government server responds with "completed" if the user has accepted the request, or "not completed" if the user has either not yet accepted the request or denied it.
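The steps above can be sketched as a toy simulation. This is purely illustrative: the class, method, and site names are invented, and HMAC with pre-shared keys stands in for the real public-key signatures a production system would use.

```python
import hashlib
import hmac
import secrets

class GovServer:
    """Toy model of the Government server in the scheme above."""
    def __init__(self):
        self.site_keys = {}   # step 1: registered website -> verification key
        self.requests = {}    # request_id -> {"site": ..., "status": ...}

    def register_site(self, site, key):
        self.site_keys[site] = key

    def new_request(self, site, sig):
        # Step 2: the website's request must be signed with its registered key.
        expected = hmac.new(self.site_keys[site], b"new-request", hashlib.sha256).hexdigest()
        assert hmac.compare_digest(sig, expected), "bad site signature"
        rid = secrets.token_urlsafe(16)          # random, unique request ID
        self.requests[rid] = {"site": site, "status": "pending"}
        return rid

    def approve(self, rid, id_card_key, sig):
        # Step 5: accept only a valid signature from the key tied to the ID card.
        expected = hmac.new(id_card_key, rid.encode(), hashlib.sha256).hexdigest()
        if hmac.compare_digest(sig, expected):
            self.requests[rid]["status"] = "completed"

    def query(self, rid, site):
        # Step 6: the site learns exactly 1 bit -- completed or not. No identity.
        req = self.requests[rid]
        assert req["site"] == site, "only the requesting site may query"
        return req["status"]

# Wiring it together:
gov = GovServer()
site_key = secrets.token_bytes(32)
gov.register_site("example-site", site_key)

site_sig = hmac.new(site_key, b"new-request", hashlib.sha256).hexdigest()
rid = gov.new_request("example-site", site_sig)  # site hands rid to the user

id_card_key = secrets.token_bytes(32)            # key tied to the user's ID card
user_sig = hmac.new(id_card_key, rid.encode(), hashlib.sha256).hexdigest()
gov.approve(rid, id_card_key, user_sig)          # user confirms in their wallet

print(gov.query(rid, "example-site"))            # -> "completed"
```

Note that the website never sees the user's name, age, or key; it only learns whether its own request ID reached the "completed" state.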
[–] ayyy@sh.itjust.works 4 points 8 hours ago (1 children)

How do you positively confirm age without confirming identity and referencing it to an official birth certificate?

[–] Tryenjer@lemmy.world 11 points 8 hours ago* (last edited 8 hours ago) (1 children)

With something like a physical gift card.

Go to a store or kiosk, show them your ID card or driver's license, and they'll give you a card randomly chosen from the shelf with a code to activate the +18 version of any social network of your choice.

Each code could only be used once. People would have to buy more, at a symbolic cost, for each social network they wished to activate.

I would tend to be against this too, but at least it would be a proposal where the stated objective (protecting the children), however misguided, is actually the real one. Because to me it is clear that what is currently being promoted and proposed has nothing to do with age verification, but rather with mass surveillance, marketing and censorship.

[–] deliriousdreams@fedia.io 6 points 8 hours ago

That's not something I've seen suggested before but it is an interesting way to go about it.

[–] innermachine@lemmy.world 10 points 11 hours ago (1 children)

I cannot understand the online age verification crap. You need ID to buy alcohol, but once it's in your house your kid can drink it. If they're so worried, just make it so you have to be 18 to buy a computer. That way an age check was in place to get online, yet no identity is tied to the services, and anybody underage online is only there because an adult facilitated it (which is basically currently the case).

Shit, I had to show ID to get my phone contract and my internet, so an age verification check was already in place for all these kids' smartphones and wifi access that their PARENTS provided, and now the parents are mad the kids have access to adult content??? This is like being mad that your kid drank underage when you bought them the liquor. The fuck did they think would happen?

The real reason they want this stuff in place is to harvest more info for private gain.

[–] frongt@lemmy.zip 12 points 9 hours ago

It's because it's not about age verification, it's about surveillance.

[–] CosmicTurtle0@lemmy.dbzer0.com 40 points 15 hours ago (3 children)

There is no way to prove definitively that the person using the credential is the same person using the Internet. Hell there's barely enough of a way to prove that there is a human sitting behind your device.

This is why age/identity verification is pointless.

[–] fluffykittycat@slrpnk.net 9 points 19 hours ago (4 children)

Not really, no. Nor do we want that

[–] NateNate60@lemmy.world 38 points 18 hours ago* (last edited 18 hours ago) (1 children)

It is possible to construct a zero-knowledge proof using cryptography and adapting existing digital ID infrastructure. A user can prove that they have knowledge of a private key tied to an adult's identification card without having to reveal the key, or the associated public key.

But that being said, whether something is possible and whether it is a good idea are two different questions.

[–] fluffykittycat@slrpnk.net 17 points 18 hours ago (1 children)

I've never heard anyone explain how you can devise a system that is both anonymous and immune to somebody handing out their zero-knowledge proof tokens by the handful.

[–] pazuzuzu@lemmy.nz 19 points 17 hours ago (1 children)

Matthew Green, the CS/cryptography professor, is actively writing about this in fairly broad language: https://blog.cryptographyengineering.com/2026/03/02/anonymous-credentials-an-illustrated-primer/

[–] Tiresia@slrpnk.net 8 points 16 hours ago (1 children)

tl;dr: The "zero knowledge" proof could have a finite number of uses per block of time for each verifier, with each use represented by a unique single-use key. That way anyone sharing keys would be limited to that finite number of uses, and if the people sharing them aren't coordinated, they could end up re-using a single-use key.

If the keys were stolen without the user's consent, the resulting re-use could tip the user off, prompting them to invalidate the current set and get a new one. And if the verification is used to support a pseudonym, like an account on an online service, instances of re-use could be flagged for moderators.
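A minimal sketch of that rate-limiting idea, with invented class names and an arbitrary batch size. In a real scheme the tokens would be blind-signed so the issuer cannot link them back to the user; here they are just random strings.

```python
import secrets

class Issuer:
    """Hands each verified adult a small batch of single-use tokens per epoch."""
    def __init__(self, tokens_per_epoch=5):
        self.tokens_per_epoch = tokens_per_epoch
        self.valid = set()

    def issue_batch(self):
        # Finite supply per epoch: sharing tokens eats into your own quota.
        batch = [secrets.token_urlsafe(16) for _ in range(self.tokens_per_epoch)]
        self.valid.update(batch)
        return batch

class Verifier:
    def __init__(self, issuer):
        self.issuer = issuer
        self.seen = set()

    def check(self, token):
        if token in self.seen:
            return "reuse-flagged"    # uncoordinated sharers collide here
        if token not in self.issuer.valid:
            return "invalid"
        self.seen.add(token)          # each token is good exactly once
        return "ok"

issuer = Issuer()
tokens = issuer.issue_batch()
verifier = Verifier(issuer)
print(verifier.check(tokens[0]))   # -> "ok"
print(verifier.check(tokens[0]))   # -> "reuse-flagged"
```

The re-use flag is exactly the moderation signal described above: a token seen twice means it was either copied or stolen.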

[–] a_gee_dizzle@lemmy.ca 1 points 10 hours ago

This is interesting, thanks for sharing

[–] CosmoNova@lemmy.world 6 points 17 hours ago (2 children)

What do you mean? The system already exists in Germany, and whatever website or app asks whether you're an adult will only get exactly that information. Not your face, not your name, not even your age. They simply receive a verified Y/N from a government service. Needless to say, no US company implemented it because they would miss out on a lot of sweet sweet data.

[–] Infernal_pizza@lemmy.dbzer0.com 16 points 17 hours ago (1 children)

Surely that's not zero knowledge since the government can see every site you visit, which is the whole point of these laws anyway

[–] sbv@sh.itjust.works 2 points 11 hours ago (2 children)

I can't speak to Germany's system, but there's no need for a site to tell the verification service its identity. It can just ask "is the current session authenticated to someone over 16" and get an answer back. The identity of both parties remains secret.

[–] baltakatei@sopuli.xyz 3 points 6 hours ago

Theoretically, it's possible for the user to authenticate their age without either the site or service knowing the user's identity. Quick and dirty example:

There's a thing called a ring signature that allows one to prove that one of a large number of people digitally signed something. Let's say a million people all have private keys whose corresponding public keys are registered to a database after they flashed their state ID at a post office or something to prove they are ≥18 years of age. So, John Smith uses his private key plus all 1 million public keys to sign a statement that he sends to a server saying he's ≥18. The server then takes all 1 million public keys plus the signed message John provided and verifies that his signature is among the 1 million, but cannot calculate which exact public key belongs to John. The verification process requires all 1 million public keys as input; you cannot, for example, try omitting each public key one by one to see which omission causes verification to fail.
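To make the "valid member, but unidentifiable" mechanics concrete, here is a toy AOS-style (Abe-Ohkawa-Suzuki) ring signature over a tiny, completely insecure group. Real implementations use elliptic curves and cryptographically large parameters; the numbers here are for illustration only.

```python
import hashlib
import secrets

# Toy parameters (INSECURE): safe prime p = 2q + 1; g = 2 generates the
# subgroup of order q modulo p. Real systems use ~256-bit curve groups.
p, q, g = 2039, 1019, 2

def H(*parts):
    """Hash arbitrary values to a challenge in [0, q)."""
    data = "|".join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = 1 + secrets.randbelow(q - 1)       # private key
    return x, pow(g, x, p)                 # (private, public)

def ring_sign(msg, pubkeys, s, x_s):
    """Sign msg as ring member s (private key x_s) among all pubkeys."""
    n = len(pubkeys)
    c, r = [0] * n, [0] * n
    u = secrets.randbelow(q)
    c[(s + 1) % n] = H(msg, pow(g, u, p))  # start the ring at the signer
    i = (s + 1) % n
    while i != s:                          # fake every other member's step
        r[i] = secrets.randbelow(q)
        c[(i + 1) % n] = H(msg, pow(g, r[i], p) * pow(pubkeys[i], c[i], p) % p)
        i = (i + 1) % n
    r[s] = (u - x_s * c[s]) % q            # close the ring with the real key
    return c[0], r

def ring_verify(msg, pubkeys, sig):
    """Recompute the whole ring; valid iff it closes back on c0."""
    c0, r = sig
    c = c0
    for i in range(len(pubkeys)):
        c = H(msg, pow(g, r[i], p) * pow(pubkeys[i], c, p) % p)
    return c == c0

# John (index 2) signs among 4 registered adults; verification needs all
# four public keys but reveals nothing about which member signed.
members = [keygen() for _ in range(4)]
pubkeys = [y for _, y in members]
sig = ring_sign("I am over 18", pubkeys, 2, members[2][0])
print(ring_verify("I am over 18", pubkeys, sig))   # -> True
```

A tampered message fails verification (except with negligible probability, which in this toy group is about 1 in q). Note how signature size grows linearly with the ring: `r` has one entry per public key, which is exactly the compactness problem mentioned below.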

Currently, there is ongoing research into making ring signatures compact, since signature size grows with the number of public keys involved.

https://en.wikipedia.org/wiki/Ring_signature

That said, even if you had scalable compact ring signature technology, I'd be more worried about advertiser deänonymization efforts once a user has logged in that check browser canvas size, IP address, user agent, font availability, etc. See https://coveryourtracks.eff.org/

Also, ring signatures for age verification don't actually verify age, just that someone proved their age at some point in the past to the owner of the public key database; just like an adult can log into YouTube on behalf of their children and let the children go to town, John could give anyone access to his private key regardless of age.

[–] Infernal_pizza@lemmy.dbzer0.com 9 points 11 hours ago (1 children)

No need to, but no need for it not to either. And there's no way to verify it beyond "trust me bro", and I don't trust them.

[–] sbv@sh.itjust.works 1 points 8 hours ago

no way to verify it isn't beyond "trust me bro" and I don't trust them

If the verification service is structured like oauth, then the request could be passed through the browser as signed plaintext. You could verify that the requesting site is only passing a minimum age request to the service. That would be as straightforward as viewing the interaction in your browser's debug tooling.

If you say that you don't trust the signature, and that it could be used to smuggle identifying information across, there's a couple of ways to deal with that: open source and audited provider governed by legislation; information theory that would show personally identifying information wouldn't fit into a field of that size; and "personal auditing" where you can try throwing data at the service to see if you can trick it into accepting invalid input (that really goes with the previous point, because the only field you can usefully vary is the signature).
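The information-theory point can be made concrete with a hypothetical request of the kind you might inspect in the browser's network tab: the only claim field is the minimum age, and the signature is a fixed-size blob with no room for extra identifying fields. All field names here are invented for illustration.

```python
# Hypothetical age-check request as signed plaintext, visible to the user.
request = {
    "claim": {"min_age": 18},   # the ONLY assertion being requested
    "sig": "a" * 64,            # fixed-size signature stand-in (64 hex chars)
}

def audit(req):
    """Personal audit: reject any request asking for more than a min-age check."""
    if set(req) != {"claim", "sig"}:
        return False                       # extra top-level fields: rejected
    if set(req["claim"]) != {"min_age"}:
        return False                       # any claim beyond age: rejected
    return len(req["sig"]) == 64           # fixed size: no room to smuggle PII

print(audit(request))                                                   # -> True
print(audit({"claim": {"min_age": 18, "user": "x"}, "sig": "a" * 64}))  # -> False
```

The fixed signature length is what does the information-theoretic work: a 64-character field can't carry an arbitrary identity payload alongside the age claim without failing the size check.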

[–] baltakatei@sopuli.xyz 5 points 15 hours ago (1 children)

“Hey, Uncle, can I use your age tokens to do my research homework? The rich kids all get to use the entire Internet because their uncles let them ride their age credentials. Thanks!”

[–] pycorax@sh.itjust.works 1 points 14 hours ago

I don't see how other methods would be immune to this issue without asking you to do verification every day.

[–] raicon@lemmy.world 6 points 18 hours ago

Actually it's possible, but we do not want that.

[–] Treczoks@lemmy.world 16 points 18 hours ago

Age verification requirements for AI? As in "AI needs to be at least this mature before released to be used"?

[–] darklamer@feddit.org 18 points 19 hours ago

Shall I pretend to be surprised?