Tim Wu Asks Why Congress Keeps Failing To Protect Kids Online. The Answer Is That He’s Asking The Wrong Question
from the wrong-question,-dumb-answer dept
While I often disagree with Tim Wu, I like and respect him, and always find it interesting to hear what he has to say. Wu was also one of the earliest folks to give me feedback on my Protocols not Platforms paper, when he attended a roundtable at Columbia University discussing early drafts of the papers in that series.
Yet, quite frequently, he perplexes me. The latest is that he’s written up a piece for The Atlantic wondering “Why Congress Keeps Failing to Protect Kids Online.”
The case for legislative action is overwhelming. It is insanity to imagine that platforms, which see children and teenagers as target markets, will fix these problems themselves. Teenagers often act self-assured, but their still-developing brains are bad at self-control and vulnerable to exploitation. Youth need stronger privacy protections against the collection and distribution of their personal information, which can be used for targeting. In addition, the platforms need to be pushed to do more to prevent young girls and boys from being connected to sexual predators, or served content promoting eating disorders, substance abuse, or suicide. And the sites need to hire more staff whose job it is to respond to families under attack.
Of course, let’s start there. The “case for legislative action” is not, in fact, overwhelming. Worse, the case that the internet harms kids is… far from overwhelming. The case that it helps many kids is, actually, pretty overwhelming. We keep highlighting this, but the actual evidence does not, in any way, suggest that social media and the internet are “harming” kids. It actually says the opposite.
Let’s go over this again, because so many people want to ignore the hard facts:
- Last fall, the widely respected Pew Research Center did a massive study on kids and the internet, and found that for a majority of teens, social media was way more helpful than harmful.
- This past May, the American Psychological Association (which has fallen for tech moral panics in the past, such as with video games) released a huge, incredibly detailed, and nuanced report going through all of the evidence, and finding no causal link between social media and harms to teens.
- Soon after that, the US Surgeon General (in the same White House where Wu worked for a while) came out with a report that was widely misrepresented in the press. Yet the details of that report also showed that no causal link could be found between social media and harms to teens. It did still recommend that we act as if there were a link, which was weird and explains the media coverage, but the actual report highlights no causal link, while also pointing out how much benefit teens receive from social media.
- A few months later, an Oxford University study came out covering nearly a million people across 72 countries, noting that it could find no evidence of social media leading to psychological harm.
- The Journal of Pediatrics just published a new study again noting that, after looking through decades of research, the mental health epidemic among young people appears to be largely due to the lack of open spaces where kids can be kids without parents hovering over them. That report notes that the researchers explored the idea that social media was part of the problem, but could find no data to support that claim.
What most of these reports did note is that there is some evidence that for some very small percentage of the populace who are already dealing with other issues, those issues can be exacerbated by social media. Often, this is because the internet and social media become a crutch for those who are not receiving the help that they need elsewhere. However, far more common is that the internet is tremendously helpful for kids trying to figure out their own identity, to find people they feel comfortable around, and to learn about and discover the wider world.
So, already, the premise that the internet is inherently harmful to children is problematic. The data simply does not support it.
None of this means that we shouldn’t be open to ways to help those who really need it. Or that we shouldn’t be exploring better regulations for privacy protection (not just for kids, but for everyone). But this narrative that the internet is inherently harmful is simply not supported by the data, even as Wu and others seem to pretend it’s clear.
Wu does mention the horror stories he heard from some parents while he was in the White House. And those horror stories do exist. But most of those horror stories are similar to the rare, but still very real, horror stories facing kids offline as well. We should be looking for ways to deal with those rare but awful stories, but that doesn’t mean destroying all of the benefits of online services in the meantime.
And that brings us to the second problem with Wu’s setup here. He then pulls out the “something must be done, this is something, we should do this” approach to solving the problem he’s already misstated. In particular, he suggests that Congress should support KOSA:
A bolder approach to protecting children online sought to require that social-media platforms be safer for children, similar to what we require of other products that children use. In 2022 the most important such bill was the Kid’s Online Safety Act (KOSA), co-sponsored by Senators Richard Blumenthal of Connecticut and Marcia Blackburn of Tennessee. KOSA came directly out of the Frances Haugen hearings in the summer of 2021, and particularly the revelation that social-media sites were serving content that promoted eating disorders, suicide, and substance abuse to teenagers. In an alarming demonstration, Blumenthal revealed that his office had created a test Instagram account for a 13-year-old girl, which was, within one day, served content promoting eating disorders. (Instagram has acknowledged that this is an ongoing issue on its site.)
The KOSA bill would have imposed a general duty on platforms to prevent and mitigate harms to children, specifically those stemming from self-harm, suicide, addictive behaviors, and eating disorders. It would have forced platforms to install safeguards to protect children and tools to enable parental supervision. In my view, the most important thing the bill would have done was simply force the platforms to spend more money and more ongoing attention on protecting children, or risk serious liability.
But KOSA became a casualty of the great American culture war. The law would give parents more control over what their children do and see online, which was enough for some groups to transform the whole thing into a fight over transgender issues. Some on the right, unhelpfully, argued that the law should be used to protect children from trans-related content. That triggered civil-rights groups, who took up the cause of teenage privacy and speech rights. A joint letter condemned KOSA for “enabl[ing] parental supervision of minors’ use of [platforms]” and “cutting off another vital avenue of access to information for vulnerable youth.”
It got ugly. I recall an angry meeting in which the Eating Disorders Coalition (in favor of the law) fought with LGBTQ groups (opposed to it) in what felt like a very dark Veep episode, except with real lives at stake. Critics like Evan Greer, a digital-rights advocate, charged that attorneys general in red states could attempt to use the law to target platforms as part of a broader agenda against trans rights. That risk is exaggerated. The bill’s list of harms is specific and discrete; it does not include, say, “learning about transgenderism,” but it does provide that “nothing shall be construed [to require a platform to prevent] any minor from deliberately and independently searching for, or specifically requesting, content.” Nonetheless, the charge had a powerful resonance and was widely disseminated.
This whole thing is quite incredible. KOSA did not become a “casualty of the great American culture war.” Wu, astoundingly, downplays that both the Heritage Foundation and the bill’s own Republican co-author, Marsha Blackburn, directly said that the bill would be helpful in censoring transgender information. For Wu to then claim that the bill’s own co-author is wrong about what her own bill does is remarkable.
He’s also wrong. While it is correct that the bill lists out six designated categories of harmful information that must be blocked, he leaves out that it’s the state attorneys general who get to decide how the law is enforced. And if you’ve paid attention to anything over the last decade, you know that state AGs are inherently political and are some of the most active “culture warriors” out there, quite willing to twist laws to their own interpretations to get headlines.
Also, even worse, as we’ve explained over and over again, laws like these that require “mitigation” of “harmful content” around things like “eating disorders” often fail to understand how that content works. As we’ve detailed, Instagram and Facebook made a big effort to block “eating disorder” content, and it backfired in a huge way. The issue wasn’t social media driving people to eating disorders, but people seeking out information on eating disorders (in other words, it was a demand-side, not a supply-side, problem).
So, when that content was removed, people with eating disorders still sought out the same content, and they still found it, either by using code words to get around the blocks or by moving to darker and even more problematic forums, where the people who ran them were way worse. And one result of this was that those kids lost the actually useful forms of mitigation, which included people talking to them and helping them get the help they needed.
So here we have Tim Wu misunderstanding the problem, misunderstanding the solution he’s supporting (to the point of backing an approach already shown to make the problem worse), and then asking why Congress isn’t doing anything. Really?
It doesn’t help that there has been no political accountability for the members of Congress who were happy to grandstand about children online and then do nothing. No one outside a tiny bubble knows that Wicker voted for KOSA in public but helped kill it in private, or that infighting between Cantwell and Pallone helped kill children’s privacy. I know this only because I had to for my job. The press loves to cover members of Congress yelling at tech executives. But its coverage of the killing of popular bills is rare to nonexistent, in part because Congress hides its tracks. Say what you want about the Supreme Court or the president, but at least their big decisions are directly attributable to the justices or the chief executive. Congressmen like Frank Pallone or Roger Wicker don’t want to be known as the men who killed Congress’s efforts to protect children online, so we rarely find out who actually fired the bullet.
Notice that Wu never considers that the bills might simply be bad and not deserve to move forward?
But Wu has continued to push this, including on exTwitter, where he attacks those who have laid out the many problematic aspects of KOSA, suggesting that they’re simply ignoring the harms. They’re not. Wu is the one ignoring the harms of what he supports.

Again, while I often disagree with Wu, I usually find he argues in good faith. The argument in this piece, though, is just utter nonsense. Do better.
Filed Under: children, congress, duty of care, eating disorders, kosa, privacy, social media, think of the children, tim wu