Hackers Expose The Massive Surveillance Stack Hiding Inside Your “Age Verification” Check

from the the-failure-is-the-system dept

We’ve been saying this for years now, and we’re going to keep saying it until the message finally sinks in: mandatory age verification creates massive, centralized honeypots of sensitive biometric data that will inevitably be breached. Every single time. And every single time it happens, the politicians who mandated these systems and the companies that built them act shocked—shocked!—that collecting enormous databases of government IDs, facial scans, and biometric data from millions of people turns out to be a security nightmare.

Well, here we go again.

A couple weeks ago, Discord announced it would launch “teen-by-default” settings for its global audience, meaning all users would be shunted into a restricted experience unless they verified their age through biometric scanning. The internet, predictably, was not thrilled. But while many users were busy venting their frustration, a group of security researchers decided to do something more useful: they took a look under the hood at Persona, one of the companies Discord was using for verification (specifically for users in the UK).

What they found, according to The Rage, was exactly what we would predict:

Together with two other researchers, they set out to look into Persona, the San Francisco-based startup that’s used by Discord for biometric identity verification – and found a Persona frontend exposed to the open internet on a US government authorized server.

In 2,456 publicly accessible files, the code revealed the extensive surveillance Persona software performs on its users, bundled in an interface that pairs facial recognition with financial reporting – and a parallel implementation that appears designed to serve federal agencies.

Let me say that again: 2,456 publicly accessible files sitting on a government-authorized server, exposed to the open internet. Files that revealed a system performing not a simple age check, but a ton of potentially intrusive checks:

Once a user verifies their identity with Persona, the software performs 269 distinct verification checks and scours the internet and government sources for potential matches, such as by matching your face to politically exposed persons (PEPs), and generating risk and similarity scores for each individual. IP addresses, browser fingerprints, device fingerprints, government ID numbers, phone numbers, names, faces, and even selfie backgrounds are analyzed and retained for up to three years.

The information the software evaluates on the images themselves includes “Selfie Suspicious Entity Detection,” a “Selfie Age Inconsistency Comparison,” similar background detection, which appears to be matched to other users in the database, and a “Selfie Pose Repeated Detection,” which seems to be used to determine whether you are using the same pose as in previous pictures.

This was the same company checking whether a teenager should be allowed to use voice chat on a gaming platform.

Beyond offering simple services to estimate your age, Persona’s exposed code compares your selfie to watchlist photos using facial recognition, screens you against 14 categories of adverse media from mentions of terrorism to espionage, and tags reports with codenames from active intelligence programs consisting of public-private partnerships to combat online child exploitative material, cannabis trafficking, fentanyl trafficking, romance fraud, money laundering, and illegal wildlife trade.

So you wanted to verify you’re old enough to use voice chat, and now there’s a permanent risk score somewhere documenting whether you might be involved in illegal wildlife trafficking.

What could go wrong?

As the researchers put it to The Rage:

“The internet was supposed to be the great equalizer. Information wants to be free, the network interprets censorship as damage and routes around it, all that beautiful optimism. And for a minute it was true.”

[…]

“The state wants to see everything. The corporations want to see everything. And they’ve learned to work together.”

Discord, to its credit, has now said it will not be proceeding with Persona for identity verification. And to be fair, Discord and similar internet companies are in an impossible position here—facing mounting regulatory pressure in multiple jurisdictions to verify ages while being handed a market of vendors who keep turning out to be security nightmares. But this is part of a pattern that should be deeply familiar by now.

Just last year, Discord’s previous third-party age verification partner suffered a breach that exposed 70,000 government ID photos, which were then held for ransom. Discord said it stopped using that vendor. Then it moved to Persona, which was already raising concerns due to connections to Peter Thiel. Now Persona’s frontend is found wide open on a government-authorized server, and Discord is dropping them too.

See the pattern? Discord keeps swapping vendors like someone frantically rotating buckets under a leaking roof, apparently hoping the next bucket won’t have a hole in it. But the problem was never the bucket. The problem is the hole in the roof — the never-ending stream of age-verification government mandates.

And this brings us to the bigger, more important point that almost nobody in the “protect the children” policy crowd seems willing to engage with honestly. Every single time you mandate age verification, you are mandating the creation of a centralized database of extraordinarily sensitive personal information. Government IDs. Biometric facial data. The kind of data that, once breached, cannot be “changed” like a password. You get one face. You get one government ID number. When those leak—and they will leak—the damage is permanent.

Even IEEE Spectrum is now publishing articles detailing how age verification undermines efforts to protect children by putting their privacy at risk.

These systems fail in predictable ways.

False positives are common: platforms misclassify adults as minors, whether because of youthful faces, shared family devices, or otherwise unusual usage patterns, and lock their accounts, sometimes for days. False negatives persist too: teenagers quickly learn to evade the checks by borrowing IDs, cycling accounts, or using VPNs.

The appeal process itself creates new privacy risks. Platforms must store biometric data, ID images, and verification logs long enough to defend their decisions to regulators. So if an adult who is tired of submitting selfies to verify their age finally uploads an ID, the system must now secure that stored ID. Each retained record becomes a potential breach target.

Scale that experience across millions of users, and you bake the privacy risk into how platforms work.

We have been cataloging these breaches for years. In 2024, Australia greenlit an age verification pilot, and hours later a mandated verification database for bars was breached. That same year, another ID verification service was breached, exposing private info collected on behalf of Uber, TikTok, and more. Then came the Discord vendor breach last year. And now Persona.

This keeps happening because it has to keep happening. It’s the inevitable result of a system designed to aggregate the exact kind of data that attackers most want to steal. Computer scientists and privacy experts have been sounding this alarm for years.

And what makes this even more galling is that these age verification systems don’t even accomplish what they claim to accomplish.

Take Australia’s infamous ban on social media for under-16s, the poster child for this approach. It’s been a complete failure on its own terms: plenty of kids have already figured out ways around the ban, while those who can’t—particularly kids with disabilities who relied on social platforms for community—are being actively harmed by their exclusion. As the security researcher who helped discover the Persona leak, Celeste, told The Rage:

“Normies won’t be able to bypass these,” while less benevolent people “will always find ways to exploit your system.”

So we’ve built a system that fails to keep out the people it’s supposedly targeting, while successfully creating permanent biometric dossiers on millions of law-abiding users. Not great!

Meanwhile, what’s happening at the legislative level is perhaps even more cynical. Governments around the world are pushing harder and harder for mandatory age verification online. And as these mandates create a captive market worth billions of dollars, a whole ecosystem of venture-backed “identity-as-a-service” startups has sprung up to serve it. Persona, valued at $2 billion and backed by Peter Thiel’s investment network, is just one of many. These companies make grand promises about privacy-preserving verification, get contracts with major platforms, and then — whoops — leave 2,456 files exposed on a government server.

And, of course, these very firms are now lobbying for stricter age verification mandates. They’ve positioned themselves as protectors of children while actively working to expand the legal requirements that guarantee their revenue stream.

Lawmakers mandate an impossible task, VC-backed startups pop up to sell a “solution,” those startups then lobby for even stricter mandates to protect their market, and the cycle repeats.

“Child safety” has simply become the marketing department for a rent-seeking surveillance industry.

As long as the law demands that these biometric gates exist, the “security” of the data they collect will always be a secondary concern to “compliance” with the mandate. Companies will keep rotating through vendors, each one promising that their system is the one that won’t leak, right up until it does. And the age verification industry will keep lobbying for stricter laws, because every new mandate is another guaranteed revenue stream.

The researchers who exposed Persona’s frontend hope their findings will serve as a wake-up call. Given the track record, it probably won’t be. Discord dropping Persona changes nothing—the next vendor will collect the same data, make the same promises, and eventually suffer the same breach. Because the problem was never which company holds your biometric data. The problem is that anyone is being forced to hand it over in the first place.

Companies: discord, persona


Comments on “Hackers Expose The Massive Surveillance Stack Hiding Inside Your “Age Verification” Check”

26 Comments
Anonymous Coward says:

Re:

Regarding Apple and Android phones: there are tested approaches that could verify a user's age without uploading IDs or personal data. The device would store an age attestation, e.g. "user X is over 18" or "user X is under 16," in a preinstalled app or a secure chip on the phone, keeping the age data safe. There would be no need to store a name or address; the website would just receive a token, e.g. "the user of device YY is over 18." No name or address would be uploaded to the web.

All this tech is available now. It's up to Apple and Google to activate such an app, or set up the required age token, on all devices as part of the normal update process. When an app or website asks for the user's age, the chip or installed app would send a token carrying only the age data ("user is over 18," "over 16," etc.). The app or website would not receive the user's name or address, just the required age verification result.

This could be set up or switched on by Apple and Google, in agreement with western and EU phone manufacturers, and activated on Apple and Android phones. The Apple and Google app stores are already preinstalled on roughly 99 percent of phones sold in the EU and America.

If we are going to have widespread age verification on the internet, is it not vital that it's safe, secure, and respects the privacy of all users?
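The token flow this comment describes can be illustrated with a toy signed attestation. This is a minimal sketch, not any real Apple or Google API: the function names are invented, and the HMAC shared key stands in for a hardware-backed attestation key that a real verifier would check with asymmetric signatures instead.

```python
import hashlib
import hmac
import json
import time

# Stand-in for a hardware-backed device key (an assumption for illustration;
# real attestation would use an asymmetric key held in a secure element).
DEVICE_KEY = b"demo-device-attestation-key"


def issue_age_token(over_18: bool) -> dict:
    """Device side: emit a signed claim containing only an age boolean."""
    claim = {"over_18": over_18, "device": "yy", "iat": int(time.time())}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}


def verify_age_token(token: dict) -> bool:
    """Website side: check the signature, then read only the age claim.
    No name, address, or government ID ever crosses the wire."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["over_18"]


token = issue_age_token(True)
print(verify_age_token(token))  # True
```

The point of the design is data minimization: the verifier learns one bit ("over 18 or not") plus a device identifier, and tampering with the claim invalidates the signature. Whether vendors or regulators would accept such a scheme is a separate question.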

Anonymous Canadian says:

Re: For what it's worth?

Discord have said that they plan only to use on-device age verification methods that do not send identifying information off-device.

And I hear they have a brand-spanking-new bridge to sell me, too!

Less tongue-in-cheek, I’m incredibly disappointed in Discord. I understand the conflicting pressures they’re dealing with, but I’m still disappointed that they haven’t decided to simply a) say no to the mandates, and if necessary, b) just close the service down. I’ll use it as long as it decides I’m still an adult, and when it prompts me to do age verification, I will on that day (sadly) leave the service.

Drew Wilson (user link) says:

When I was reporting that Australia's age verification has been a failure, I did have people tell me that I was being too presumptuous, and that I shouldn't be suggesting that all the failures of age verification up to that point are an indication that age verification itself is a failure. After all, they argued, it's too early to tell whether it is a success or failure. You can probably imagine my "lolwut?" response to that.

Arianity (profile) says:

Every single time you mandate age verification, you are mandating the creation of a centralized database of extraordinarily sensitive personal information. Government IDs. Biometric facial data.

These laws generally don’t mandate a centralized database of that type of data. However, they also don’t forbid it, with the expected trainwreck that implies.

It’s the inevitable result of a system designed to aggregate the exact kind of data that attackers most want to steal.

It’s inevitable that any system will have flaws, but it’s not inevitable that they have to be as laughable as they currently are.

And this brings us to the bigger, more important point that almost nobody in the “protect the children” policy crowd seems willing to engage with honestly.

There are people willing to engage on that point. The problem is that the answer they're going to give, that it's a cost they're willing to pay, is not one that is going to make privacy advocates happy. As long as it's all or nothing, there's no overlap and it's going to be two ships passing in the night.

Although, even that particular point is more nuanced than you’re making it to be, given that that data is often stored by the government itself to begin with, e.g. at the DMV. Which will…also leak.

You get one government ID number.

TBF, that is actually fixable, if we wanted it to be. And something that urgently should be addressed even outside age verification concerns, it’s long been a problem.

So we’ve built a system that fails to keep out the people it’s supposedly targeting

I mean, the law is supposed to target normies.

Arianity (profile) says:

Re: Re:

That sentence gave it all away.

That sentence was used by a researcher who is against the law, not a proponent of the law. That said:

It will not stop underage porn consumption. This is about preventing non tech-savvy law abiding citizens from viewing porn. Plain and simple.

A lot of kids stumbling on adult content would also be normies. They’re not necessarily tech-savvy.

Thad (profile) says:

Re:

The problem is the answer they’re going to give, that it’s a cost they’re willing to pay, is not one that is going to make privacy advocates happy.

Well yes, when you frame the issue dishonestly it tends to make people unhappy.

It’s not a question of the cost they’re willing to pay. It’s that they want to force everyone else to pay it too.

Arianity (profile) says:

Re: Re:

Well yes, when you frame the issue dishonestly it tends to make people unhappy.

What’s dishonest about it? There’s a lot of problems with these laws, but being secretive that it’s going to impact everyone is not one of them. They’re pretty upfront that everyone needs to comply with it, that’s kind of the whole point.

It’s not a question of the cost they’re willing to pay. It’s that they want to force everyone else to pay it too.

“That’s a price they’re willing to pay” is a turn of phrase that means they’re willing to inflict it on everyone. It doesn’t just mean they’re the only ones paying it.

Anonymous Coward says:

With the advent of Apple now rolling out its own AV API, what stands out to me is seeing people even in cybersecurity-aware circles going ‘oh that’s fine, I trust Apple – and they say it’ll be on-device (as if that means anything prima facie)!’ – have we forgotten the importance of auditing these things instead of just going ‘well they Said they wouldn’t do that?’ It’s like at some point everyone suddenly became completely tech illiterate and stopped realizing just how much metadata can tell someone about a user well before you start picking at their face.

Jason Johnson says:

It's not about verifying age.

If it was about verifying age there are far better ways to achieve this.

The regulators creating these laws, and the companies pushing regulators to make them, are more interested in making money, controlling populations, and, for the more conservative religious types, putting an end to the vices their religions say are bad.

Every age verification law passed in the US was created by religious conservative Republicans who are not interested in preventing children from seeing porn; they are interested in ending porn. They really do not want anyone looking at porn, because of their superstitions.

Anonymous Coward says:

The systems are not designed for optimal security (of your private data), nor for optimal safety (of children). They are optimized for accountability.

Platforms can show that they engaged an identity service provider. Politicians can show that they mandated age verification. That these services are bound to have harmful consequences is not their concern. They got their backsides covered.

The purpose of a system is what it does.

Whicker says:

A significant amount of support for such laws is pushed by non-religious Democrats.

Look – I’ve talked with my state lawmaker about the push. I was flat out told “there is appetite for legislation to protect children. The form that takes is up for debate.” He was open to the form it took being “taxpayer funded educational opportunities for concerned parents on what they can do to put barriers in place from their children seeing Internet pornography.”

