FTC Admits Age Verification Violates Children’s Privacy Law, Decides To Just Ignore That
from the the-law-breaking-admin dept
We’ve been pointing out the fundamental contradiction at the heart of mandatory age verification laws for years now. To verify someone’s age online, you have to collect personal data from them. If that someone turns out to be a child, congratulations: you’ve just collected personal data from a child without parental consent. Which is a direct violation of the Children’s Online Privacy Protection Act (COPPA)—the very law that’s supposed to be protecting kids.
So what happens when the agency charged with enforcing COPPA finally notices this obvious problem? If you guessed “they admit the conflict and then just promise not to enforce the law,” you’d be exactly right.
The FTC put out a policy statement last week that is remarkable in what it tacitly concedes:
The Federal Trade Commission issued a policy statement today announcing that the Commission will not bring an enforcement action under the Children’s Online Privacy Protection Rule (COPPA Rule) against certain website and online service operators that collect, use, and disclose personal information for the sole purpose of determining a user’s age via age verification technologies.
The FTC appears to be explicitly acknowledging that age verification technologies involve collecting personal information from users—including children—in a way that would otherwise trigger COPPA liability. If the technology didn’t create a COPPA problem, there would be no need for a policy statement promising non-enforcement. You don’t issue a formal announcement saying “we won’t sue you for this” unless “this” is something you could, in fact, sue people for.
The statement itself tries to dress this up by noting that age verification tech “may require the collection of personal information from children, prompting questions about whether such activities could violate the COPPA Rule.” But “prompting questions” is doing an awful lot of work in that sentence. The answer to those questions is pretty obviously “yes, collecting personal information from children without parental consent violates the rule that says you can’t collect personal information from children without parental consent.” The FTC just doesn’t want to say that part out loud, because then the follow-up question becomes: “so why are you encouraging companies to do it?”
Instead, they’ve decided to create an enforcement carve-out. Do the thing that violates the law, but pinky-promise you’ll only use the data to check the kid’s age, delete it afterward, and keep it secure. Then we won’t come after you. This is the FTC solving a legal contradiction not by asking Congress to fix the underlying law or admitting the technology is fundamentally flawed, but by deciding to selectively not enforce the law it’s supposed to be enforcing.
The honest approach would have been to tell Congress that age verification, as currently conceived, cannot be squared with existing privacy law—and that if lawmakers want it anyway, they need to resolve that conflict themselves rather than asking the FTC to pretend it doesn’t exist.
No such luck.
And boy, do they seem proud of themselves. Here’s Christopher Mufarrige, Director of the FTC’s Bureau of Consumer Protection:
“Age verification technologies are some of the most child-protective technologies to emerge in decades…. Our statement incentivizes operators to use these innovative tools, empowering parents to protect their children online.”
“The most child-protective technologies to emerge in decades.”
Excuse me, what?
This is the kind of statement that sounds authoritative right up until you spend thirty seconds thinking about it. Anyone with any knowledge of security and privacy knows that age verification is anything but “child protective.” It involves a huge invasion of privacy, for extremely faulty technology, that has all sorts of downstream effects that put kids at risk.
Oh, and the FTC seems proud that the vote for this was unanimous—though it’s worth noting that Donald Trump fired the two Democratic commissioners and has made no apparent effort to replace them, despite Congress having designed the FTC as a five-member commission with minority-party representation. A unanimous vote among the remaining Republican commissioners is a strange thing to brag about.
The FTC even posted about this on X, and the response was… well, let me just show you:

If you can’t see that, the main part to pay attention to is not the tweet from the FTC itself, but the Community Note appended to it (under the way Community Notes works, a note needs widespread consensus among users before it gets attached to a public tweet):
Readers added context they thought people might want to know
Contrary to their claim, using age verification has numerous issues, including but not limited to:
1. Easily bypassed
2. Risks of security data breach
3. Inaccuracies (Placing adults into underage groups, vice versa)
And many more… (sigh, I need a break).
Yeah, we all need a break.
That Community Note does a better job explaining the state of age verification technology than the FTC’s entire Bureau of Consumer Protection. It methodically lists out the problems: kids easily bypass these systems, the collected data creates massive security breach risks, and the technology produces wildly inaccurate results that lock adults out while letting kids through (and vice versa). When the consensus-driven crowdsourced fact-check on your own announcement is more informative than the announcement itself, maybe it’s time to reconsider the announcement.
But let’s say, for the sake of argument, that the technology worked perfectly. Would mandatory age verification still be a good idea?
Even then, no. A perfectly accurate system still wouldn’t solve the broader harm this approach does to kids. Even UNICEF (UNICEF!) has been warning that age restriction approaches can actively harm the children they’re supposed to protect. After Australia’s social media ban for under-16s went into effect, UNICEF put out a statement that could not have been more clear about the risks:
“While UNICEF welcomes the growing commitment to children’s online safety, social media bans come with their own risks, and they may even backfire,” the agency said in a statement.
For many children, particularly those who are isolated or marginalised, social media is a lifeline for learning, connection, play and self-expression, UNICEF explained.
Moreover, many will still access social media – for example, through workarounds, shared devices, or use of less regulated platforms – which will only make it harder to protect them.
So the actual child welfare experts are saying that age verification can backfire, push kids into less safe spaces, and should never be treated as a substitute for real safety measures. Meanwhile, the FTC is calling the same technology “the most child-protective” thing to come along in a generation and is waiving its own enforcement authority to encourage more of it.
What we have here is a federal agency that has identified a direct conflict between the law it enforces and the policy outcome it wants. Rather than grappling with what that conflict means—maybe age verification as currently conceived just doesn’t work within the existing legal framework, and for good reason—the FTC has chosen to simply look the other way. The message to companies is clear: go ahead and collect data from kids to figure out if they’re kids. We know that violates COPPA. We don’t care. We like age verification more than we like enforcing our own rules.
That’s a hell of a policy position for the agency that’s supposed to be the last line of defense for children’s privacy online.
Filed Under: age verification, coppa, ftc, think of the children


Comments on “FTC Admits Age Verification Violates Children’s Privacy Law, Decides To Just Ignore That”
To protect children from seeing websites that tell them gay and transgender people exist, we’ll expose them to privacy violations that allow corporations and bad actors who present actual (and statistically more likely) threats to harvest their data. You know: For the Children™!
Orwell has belatedly infected our entire government. “We’re protecting the children by not protecting the children.”
Data brokers are all over this
They desperately want age verification to be mandatory because it will identify children…and there are “people” out there, if I can dignify them with that term, who will pay exorbitant prices for that data. The data brokers of course don’t care: they’ll sell anything to anybody. But the rest of us should care because the verification data will end up in the hands of monsters.
You’re citing community notes, now? Really? Remember when you hated them? And I like CN, a lot, but it is ultimately just someone’s opinion.
At no point in your article do you quote somewhere that the FTC admits that.
Since you seem to be trying to “read between the lines” you are at best saying they are implicitly acknowledging something, but really you are trying to put words in their mouth.
Look, I HATE age verification. I think it’s almost always about de-anonymizing adults rather than protecting children.
So if you think they’re breaking the law, SUE THEM. Don’t depend on the FTC. While you’re at it sue CA over the stupid age-verification OS law.
Re:
When did you stop beating your wife?
Re:
There was no time that I have ever “hated” Community Notes. I’ve always supported it from Day One. Go find when I did. I’ll wait.
Actually, here let’s go through it. I wrote about it when old Twitter first came up with it (as Birdwatch), and I thought it was an interesting approach:
https://www.techdirt.com/2021/02/04/can-community-approach-to-disinformation-help-twitter/
Once Elon took over and massively expanded it, I wrote about how it was one of the good things Elon had done and why I thought other sites should do it too, though I thought he went too far in thinking it could fully replace other features: https://www.techdirt.com/2023/10/31/community-notes-is-a-useful-tool-for-some-things-but-not-as-a-full-replacement-for-trust-safety/
I did mock Elon when he pretended that Community Notes needed fixing when it… called out his bullshit, but that’s (again) a positive review of Community Notes.
https://www.techdirt.com/2023/12/12/community-notes-is-great-until-it-challenges-elon-and-then-its-being-manipulated-by-state-actors/
When the EU investigated X for how it implemented Community Notes, I said that the EU investigation was bullshit and said “Community Notes is a really unique and worthwhile experiment, and one I’d like to see other sites implement as well.”
So, come on, show me where I “hated” Community Notes.
Or admit you’re a fucking liar, like always.
Re: Re:
You said it was no replacement for Twitter censoring in the ways you would like it to. Which yes of course it was, especially in the case of censoring “misinformation”, which never should be a thing anyway.
But at least you admit you’re lying about what the FTC said, so that’s progress at least.
Re: Re: Re:
Ah, the irony.
In the very same thread you claim (1) that the FTC saying it won’t enforce COPPA violations against age verification systems is NOT an admission that it violates COPPA unless they use the specific magic words “this violates COPPA” but (2) that the only purpose of trust & safety was “censorship” even though it’s not even remotely about censorship and Twitter’s trust & safety team bent over backwards not to remove content, but to look for any other kind of intervention to limit malicious actors.
And, of course, even in my thread where I prove you’re a fucking liar, you can’t admit you fucking lied.
I get that you’re an extremely angry, bitter, bigot who will mindlessly repeat anything your god king tells you, but this is pathetic, dude.
Here, Matt, I’ll help you: “I’m sorry, Mike. In my incandescent rage where I have to criticize everything you say, I mistakenly thought you hated Community Notes because I incorrectly think you, like me, blindly attack anything on “the other side.” Now I see I was wrong to do so, and I will try to actually consider reality before lashing out at you blindly in the future.”
Lol, like you could ever do that. Ah well, maybe one day when you grow up, and mature past a 12 year old mindset.
Re:
I don’t usually kink shame, but this fetish you have for being publicly destroyed in the comments after you say the dumbest bullshit is kind of creepy and weird.
LLMs can fulfill your need for contrarian nonsense without involving other people. It’ll also be easier to pretend you won an argument!
The children learning to use p2p software to get the porn they want will be good for them.
Tor foundation came out with a full vpn now (in beta) so they may be able to torrent anonymously for free also. It will bring more Internet freedom movements among the youth.
Oh, create a carveout, will they?
Guess who is breaking the law now.
Wow, I’m glad a ton of child molesting murderers are deciding to just ignore the law.
We need to send all the right wingers along with their bombs.
So what I hear you saying is they could selectively sue any website they want for violating COPPA or for failing to comply with age verification mandates. Damned if you do and damned if you don’t.
Not necessarily true. It can be worth clarifying even if something didn’t, to avoid an unnecessary chilling effect.
I mean, part of grappling with it is that sometimes you decide the conflict isn’t there for good reason, or doesn’t match the law’s intent. Especially in this case, where COPPA can be somewhat self-contradictory.
The law also already has various carve-outs, such as 312.5(c)(5): “Where the purpose of collecting a child’s and a parent’s name and online contact information is to protect the safety of a child, and where such information is not used or disclosed for any purpose unrelated to the child’s safety,” etc.
It’s happening for a bad result, but a federal agency interpreting a law seems…fine? Are we going to start pretending stuff like Loper Bright or Major Questions doctrine were good, just because it gets the correct result?
That is how policy tradeoffs work, yes. The question is not whether it can backfire, but whether that backfiring is big enough and common enough to outweigh any potential benefits.
Re:
Uh, no, carve-outs are for the legislature to legislate into the legislation.
It’s not a regulatory power they get to exercise in their remit.
It became necessary to destroy the children’s privacy to save it.