Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. The Senate Commerce Committee recently held a hearing on “examining the effect of technology on America’s youth.” Witnesses warned about “addictive” online content, harms to mental health, and kids spending too much time buried in screens. At the center of the debate is a bill from Sens. Ted Cruz (R-TX) and Brian Schatz (D-HI) called the Kids Off Social Media Act (KOSMA), which they say will protect children and “empower parents.”
That’s a reasonable goal, especially at a time when many parents feel overwhelmed and nervous about how much time their kids spend on screens. But while the bill’s press release contains soothing language, KOSMA doesn’t actually give parents more control.
Instead of respecting how most parents guide their kids towards healthy and educational content, KOSMA hands the control panel to Big Tech. That’s right—this bill would take power away from parents, and hand it over to the companies that lawmakers say are the problem.
Kids Under 13 Are Already Banned From Social Media
One of the main promises of KOSMA is simple and dramatic: it would ban kids under 13 from social media. Based on the bill sponsors’ language, one might think that’s a big change, and that today’s rules let kids wander freely onto social media sites. But that’s not the case.
Every major platform already draws the same line: kids under 13 cannot have an account. Facebook, Instagram, TikTok, X, YouTube, Snapchat, Discord, Spotify, and even blogging platforms like WordPress all say essentially the same thing—if you’re under 13, you’re not allowed. That age line has been there for many years, mostly because of how online services comply with a federal privacy law called COPPA.
Of course, everyone knows many kids under 13 are on these sites anyway. The real question is how and why they get access.
Most Social Media Use By Younger Kids Is Family-Mediated
If lawmakers picture under-13 social media use as a bunch of kids lying about their age and sneaking onto apps behind their parents’ backs, they’ve got it wrong. Serious studies that have looked at this all find the opposite: most under-13 use is out in the open, with parents’ knowledge, and often with their direct help.
A large national study published last year in Academic Pediatrics found that 63.8% of under-13s have a social media account, but only 5.4% of them said they were keeping it secret from their parents. That means roughly 95% of kids under 13 who are on social media aren’t hiding it at all. Their parents know. (For kids aged 13 and over, the “secret account” number is almost as low, at 6.9%.)
Earlier research in the U.S. found the same pattern. In a well-known study of Facebook use by 10-to-14-year-olds, researchers found that about 70% of parents said they actually helped create their child’s account, and between 82% and 95% knew the account existed. Again, this wasn’t kids sneaking around. It was families making a decision together.
A 2022 study by the UK’s media regulator Ofcom points in the same direction, finding that up to two-thirds of social media users below the age of thirteen had direct help from a parent or guardian getting onto the platform.
The typical under-13 social media user is not a sneaky kid. It’s a family making a decision together.
KOSMA Forces Platforms To Override Families
This bill doesn’t just set an age rule. It creates a legal duty for platforms to police families.
Section 103(b) of the bill is blunt: if a platform knows a user is under 13, it “shall terminate any existing account or profile” belonging to that user. And “knows” doesn’t just mean someone admits their age. The bill defines knowledge to include what is “fairly implied on the basis of objective circumstances”—in other words, what a reasonable person would conclude from how the account is being used. How services would comply with KOSMA is easy to predict: rather than risk liability over what they “should have known” about a user’s age, they will simply require all users to prove their age, so they can be sure they are blocking anyone under 13.
KOSMA contains no exceptions for parental consent, family accounts, or educational or supervised use. The vast majority of people policed by this bill won’t be kids sneaking around—they will be kids following their parents’ guidance, and the parents themselves.
Imagine a child using their parent’s YouTube account to watch science videos about how a volcano works. If the child leaves a comment saying, “Cool video—I’ll show this to my 6th grade teacher!” and YouTube becomes aware of it, the platform now has a clear signal that a child is using the account. It doesn’t matter whether the parent gave permission. Under KOSMA, the company is legally required to act. To avoid violating the law, it would likely lock, suspend, or terminate the account, or demand proof that it belongs to an adult. That proof would likely mean a scan of a government ID, biometric data, or some other form of intrusive verification, all to keep what is essentially a “family” account from being shut down.
KOSMA would be enforced by the FTC and state attorneys general. That’s more than enough legal risk to make platforms err on the side of cutting people off.
Platforms have no way to remove “just the kid” from a shared account. Their tools are blunt: freeze it, verify it, or delete it. That means even when a parent has explicitly approved and supervised their child’s use, KOSMA forces Big Tech to override that family decision.
Your Family, Their Algorithms
KOSMA doesn’t appoint a neutral referee. Under the bill, companies like Google (YouTube), Meta (Facebook and Instagram), TikTok, Spotify, X, and Discord will be the ones deciding whose account survives, whose account gets locked, who has to upload ID, and whose family loses access altogether. They won’t be doing this because they want to, but because Congress is threatening them with legal liability if they don’t.
These companies don’t know your family or your rules. They only know what their algorithms infer. Under KOSMA, those inferences carry the force of law. Rather than parents or teachers, decisions about who can be online, and for what purpose, will be made by corporate compliance teams and automated detection systems.
What Families Lose
This debate isn’t really about TikTok trends or doomscrolling. It’s about all the ordinary, boring, parent-guided uses of the modern internet. It’s about a kid watching “How volcanoes work” on regular YouTube, instead of the stripped-down YouTube Kids. It’s about using a shared Spotify account to listen to music a parent already approves. It’s about piano lessons from a teacher who makes her living from YouTube ads.
These aren’t loopholes. They’re how parenting works in the digital age. Parents increasingly filter, supervise, and decide together with their kids. KOSMA will lead to more locked accounts and more parents submitting to face scans and ID checks. It will also lead to more power concentrated in the hands of the companies Congress claims to distrust.
What Can Be Done Instead
KOSMA also includes separate restrictions on how platforms can use algorithms for users aged 13 to 17. Those raise their own serious questions about speech, privacy, and how online services work, and need debate and scrutiny as well. But they don’t change the core problem here: this bill hands control over children’s online lives to Big Tech.
If Congress really wants to help families, it should start with something much simpler and much more effective: strong privacy protections for everyone. Limits on data collection, restrictions on behavioral tracking, and rules that apply to adults as well as kids would do far more to reduce harmful incentives than deputizing companies to guess how old your child is and shut them out.
But if lawmakers aren’t ready to do that, they should at least drop KOSMA and start over. A law that treats ordinary parenting as a compliance problem is not protecting families—it’s undermining them.
Parents don’t need Big Tech to replace them. They need laws that respect how families actually work.
Republished from the EFF’s Deeplinks blog.