Jonathan Haidt’s incredibly well-timed decision to surf on the wave of a moral panic about kids and social media has made him a false hero for many parents and educators. In my review, I noted that his book, “The Anxious Generation,” is written in a way that makes adults struggling with the world today feel good, because it gives them something to blame for lots of really difficult things happening with kids today.
The fact that it’s wrong and the data don’t support the actual claims is of no matter. It feels like it could be right, and that’s much easier than doing the real and extremely difficult work of actually preparing kids for the modern world.
So what happens when an actual expert confronts Haidt on this?
Earlier this year, we had Dr. Candice Odgers on our podcast. Unlike Haidt, she is an actual expert in this field and has been doing research on the issue for years. The podcast was mostly about what the research actually shows, rather than just playing off Haidt’s misleading book. However, Odgers has become the go-to responder to Haidt’s misleading moral panic. She’s great at it (though there are a ton of other experts in the field who also point out that Haidt’s claims are not supported by evidence).
Still, Odgers keeps getting called on by publications to respond to Haidt’s claims. She’s done so in Nature, where she highlighted what the research actually shows, and in The Atlantic, where she explained how the proposals Haidt supports might actually cause real harm to kids.
Many people have been wondering if Haidt and Odgers (who were at UVA at the same time, Odgers as a grad student, Haidt as a professor) would have a chance to debate directly, and that finally happened recently during a session hosted by UVA. This gave them a chance to discuss what the research says directly. I recommend watching the whole discussion, which is an hour and a half long, though most of the discussion of the research comes in the first half.
What came across to me, and which Haidt admits at the very end, is that Odgers knows the research in this space better than anyone, and she wasn’t going to let Haidt get away with making broad generalizations not supported by the data. Here’s a snippet of her responding to Haidt insisting that the research supports his position, including that he was seeing the same thing across the Western world. But Odgers points out that’s not what the research shows:
Jon, I’m going to actually I’m going to follow up. So there’s a 2023 Lancet paper that came out. And all of the analyses there’s tremendous variability like you know in terms of mental health across countries. But if you both look at symptoms and you look at suicide, there’s been a general trend for declining rates across all European countries in Canada, Australia. You can pick out certain measures and certain time periods where there might be an increase, but I’ve always been curious of how these cross-country comparisons. So how that becomes evidence that this is somehow causing? Or if we see different trends and different countries at different times, that that creates the story? So… I don’t see what you’re seeing.
She also suggests that Haidt’s problem is that he has a story and went back searching for data to support it, rather than going in and seeing what the data actually say:
The cross-country comparisons, you know, they’re they’re often a starting point to see whether there might be something interesting correlationally going on, but it’s a very slippery place to start and I think you know, unless you start with the pretty clear hypothesis about what should explain those differences, if you’re just looking at trend lines and then going backwards and starting to fill in an explanation, it’s hard to follow where it goes and whether or not we’re just fitting these lines to our existing theories, but I’ll leave it.
Haidt jumps in to insist you don’t need a pre-existing hypothesis to find something. This is technically true, because of course you can sometimes find something that way. But it’s also kind of a big deal right now, given the replication crisis that started in Haidt’s own field of psychology. That crisis was brought on by researchers hunting through data to find something to prove, which is why pre-registered studies are increasingly important. So having Haidt act as if starting without a hypothesis is no problem at all seems pretty tone deaf.
Similarly, I’ll note that Haidt frequently jumps between arguments that aren’t directly connected. When asked about evidence on mental health, he talks instead about things like sextortion and catfishing. Obviously, being a victim of those kinds of attacks and abuses can impact mental health, but that’s still a much smaller part of the issue and isn’t directly related to the larger issue of scrolling on social media and how it impacts mental health.
There’s a lot more in the discussion, but I’m really hoping that more people recognize that Haidt’s position doesn’t seem to really be supported by the evidence. Watching Odgers confront him is enlightening, but too few people will see it. Instead, politicians, parents, and school administrators are all acting as though Haidt has it all figured out. Mostly because it absolves them of having to do the hard work of teaching kids how to use these tools appropriately.
What if the reason we’re so worried about teens on Instagram, TikTok, or Snapchat is because we’ve fundamentally misunderstood the nature of the digital world? What if we’re confusing the everyday risks of growing up online with the specter of unavoidable harm?
No one is better at covering the moral panic about “the kids these days and their social media” than danah boyd. She literally wrote the book on this a decade ago (a decade ago!) and every time she weighs in, it’s with something deeply insightful and enlightening.
Her latest is a must-read. It makes a very clear point on something that had been bothering me, but which I was unable to put into words: there’s a difference between risk and harm, and the people pushing the moral panic about social media harms are deliberately blurring the lines between those two things:
In short, “Does social media harm teenagers?” is not the same question as “Can social media be risky for teenagers?”
The language of “harm” in this question is causal in nature. It is also legalistic. Lawyers look for “harms” to place blame on or otherwise regulate actants. By and large, in legal contexts, we talk about Person A harming Person B. As such, Person A is to be held accountable. But when we get into product safety discussions, we also talk about how faulty design creates the conditions for people to be harmed due to intentional, malfeasant actions by the product designer. Making a product liability claim is much harder because it requires proving the link of harm and the intentionality to harm.
Risk is a different matter. Getting out of bed introduces risks into your life. Risk is something to identify and manage. Some environments introduce more potential risks and some actions reduce the risks. Risk management is a skill to develop. And while regulation can be used to reduce certain risks, it cannot eliminate them. And it can also backfire and create more risks. (This is the problem that María Angel and I have with techno-legal solutionism.)
This is a point I’ve tried (and failed) to get across for a while, so I greatly appreciate the way she put it here. No one is saying that social media is a riskless environment. But nothing is truly a riskless environment.
In the past, I’ve sometimes described this as one of the lessons I learned growing up. In the neighborhood where I grew up, there was a deli four blocks from my house. But to get there, you had to cross a pretty busy street. When I was little, I wasn’t allowed to go there alone. As I got older, my parents taught me how to cross that street safely, and later I was allowed to go with friends, and eventually, by myself.
There was still some risk involved, but we managed the risk by teaching me about it, and teaching me how to minimize the risk and to walk to the deli safely. There was always still the possibility that I wouldn’t be careful enough. Or that a car would be speeding much faster than it should have gone. Or a car could have gone out of control.
There’s still risk. That risk could lead to harm. But walking to the deli is not an inherently harmful activity.
I think about this a lot in relation to Jonathan Haidt and his books. In his earlier books (and even, to some extent, in The Anxious Generation), he’s a huge proponent of the “free range kids” movement, which is all about teaching kids how to move about in the world freely, without supervision. As with my parents and the deli, it’s about allowing kids to go into risky situations, but doing so in a way that gives them the tools to minimize those risks.
Yet now, in the virtual world, he acts as if risks can’t be managed and must instead be treated as harms (even though the data completely disagree with that).
danah’s piece (you really should read the whole thing) talks about risky activities, including crossing busy streets, but also activities like going skiing. Skiing is risky. I still do it (well, snowboarding), and I know there’s some risk in it, but I try to manage that risk as well. Still, every year, I see plenty of people (of all ages) end up hurting themselves on the mountain. There are risks. We know that. Yet many of us still get enjoyment out of it, and try our best to manage the risks.
This is the nature of living.
So why are we treating social media so differently?
As danah notes:
Can social media be risky for youth? Of course. So can school. So can friendship. So can the kitchen. So can navigating parents. Can social media be designed better? Absolutely. So can school. So can the kitchen. (So can parents?) Do we always know the best design interventions? No. Might those design interventions backfire? Yes.
Does that mean that we should give up trying to improve social media or other digital environments? Absolutely not. But we must also recognize that trying to cement design into law might backfire. And that, more generally, technologies’ risks cannot be managed by design alone.
Fixating on better urban design is pointless if we’re not doing the work to socialize and educate people into crossing digital streets responsibly. And when we age-gate and think that people can magically wake up on their 13th or 18th birthday and be suddenly able to navigate digital streets just because of how many cycles they took around the sun, we’re fools. Socialization and education are still essential, regardless of how old you are. (Psst to the old people: the September that never ended…)
This essay contains so much important information to understand, and it is (as usual) so clearly stated.
This paragraph, though, represents so much of what I feel and what all of the actual research seems to support:
Better design is warranted, but it is not enough if the goal is risk reduction. Risk reduction requires socialization, education, and enough agency to build experience. Moreover, if we think that people will still get hurt, we should be creating digital patrols who are there to pick people up when they are hurt. (This is why I’ve always argued that“digital street outreach”would be very valuable.)
Also, this:
Returning to our earlier note on product liability, it is reasonable to ask if specific design choices of social media create the conditions for certain kinds of harms to be more likely — and for certain risks to be increased. Researchers have consistently found that bullying is more frequent and more egregious at school than on social media, even if it is more visible on the latter. This makes me wary of a product liability claim regarding social media and bullying. Moreover, it’s important to notice what schools have done in response to this problem. They’ve invested in social-emotional learning programs to strengthen resilience, improve bystander approaches, and build empathy. These interventions are making a huge difference, far more than building design. (If someone wants to tax social media companies to scale these interventions, have a field day.)
There’s so much more in the essay, and I feel like it’s something I’m going to keep pointing people to for a long, long time. But if I keep quoting it, I’m just going to end up reposting the whole thing here. So I’ll just say go read the whole thing, as there’s plenty more in there that’s worth reading, thinking about, and understanding.
Last month, we shared the details of a really good “Dear Colleague” letter that Senator Rand Paul sent around urging other Senators not to vote for KOSA. While the letter did not work and the Senate overwhelmingly approved KOSA (only to now have it stuck in the House), Paul has now expanded upon that letter in an article at Reason.
It starts out by pointing out how much good the internet can do for families:
Today’s children live in a world far different from the one I grew up in and I’m the first in line to tell kids to go outside and “touch grass.”
With the internet, today’s children have the world at their fingertips. That can be a good thing—just about any question can be answered by finding a scholarly article or how-to video with a simple search.
While doctors’ and therapists’ offices close at night and on weekends, support groups are available 24 hours a day, 7 days a week, for people who share similar concerns or have had the same health problems. People can connect, share information, and help each other more easily than ever before. That is the beauty of technological progress.
He correctly acknowledges that the internet can also be misused, and that not all of it is appropriate for kids, but argues that’s no reason to overreact:
It is perhaps understandable that those in the Senate might seek a government solution to protect children from any harms that may result from spending too much time on the internet. But before we impose a drastic, first-of-its-kind legal duty on online platforms, we should ensure that the positive aspects of the internet are preserved. That means we have to ensure that First Amendment rights are protected and that these platforms are provided with clear rules so that they can comply with the law.
He points out that the law empowers the FTC to police content that could impact the mental health of children, but never clearly defines mental health disorders, and those definitions could change drastically with no input from Congress.
What he doesn’t mention is that we’re living in a time when some are trying to classify normal behavior as a mental health disorder, and thus this law could be weaponized.
From there, he talks about the “duty of care.” That’s a key part of both KOSA and other similar bills: it says that websites have a “duty of care” to make efforts to keep their sites from causing various problems. As we’ve explained for the better part of a decade, a “duty of care” turns into a demand for censorship, as censorship is the only way for companies to avoid costly litigation over whether or not they were careful enough.
Just last week, I got into a debate with a KOSA supporter on social media. They insisted that they’re not talking about content, but just about design features like “infinite scroll.” When asked about what kind of things they’re trying to solve for, I was told “eating disorders.” I pointed out that “infinite scroll” doesn’t lead to eating disorders. They’re clearly targeting the underlying content (and even that is way more complex than KOSA supporters realize).
Senator Paul makes a similar point in the other direction. Things like “infinite scroll” aren’t harmful if the underlying content isn’t harmful:
For example, if an online service uses infinite scrolling to promote Shakespeare’s works, or algebra problems, or the history of the Roman Empire, would any lawmaker consider that harmful?
I doubt it. And that is because website design does not cause harm. It is content, not design, that this bill will regulate.
As for stopping “anxiety,” Paul makes the very important point that there are legitimate and important reasons why kids may feel some anxiety today, and KOSA shouldn’t stop that information from being shared:
The world’s most well-known climate activist, Greta Thunberg, famously suffers from climate anxiety. Should platforms stop her from seeing climate-related content because of that?
Under this bill, Greta Thunberg would have been considered a minor and she could have been deprived from engaging online in the debates that made her famous.
Anxiety and eating disorders are two of the undefined harms that this bill expects internet platforms to prevent and mitigate. Are those sites going to allow discussion and debate about the climate? Are they even going to allow discussion about a person’s story overcoming an eating disorder? No. Instead, they are going to censor themselves, and users, rather than risk liability.
He also points out — as he did in his original letter — that the KOSA requirements to block certain kinds of ads make no sense in a world in which kids see those same ads elsewhere:
Those are not the only deficiencies of this bill. The bill seeks to protect minors from beer and gambling ads on certain online platforms, such as Facebook or Hulu. But if those same minors watch the Super Bowl or the PGA tour on TV, they would see those exact same ads.
Does that make any sense? Should we prevent online platforms from showing kids the same content they can and do see on TV every day? Should sports viewership be effectively relegated to the pre-internet age?
Even as I’ve quoted a bunch here, there’s way more in the article. It is, by far, one of the best explanations of the problems of KOSA and many other bills that use false claims of “regulating design” as an attempt to “protect the kids.” He also talks about the harms of age verification, how it will harm youth activism, and how the structure of the bill will create strong incentives for websites to pull down all sorts of controversial content.
There is evidence that kids face greater mental health challenges today than in the past. Some studies suggest this is more because of society’s openness to discussing and diagnosing mental health challenges. But there remains no compelling evidence that the internet and social media are causing it. Even worse, as Paul’s article makes abundantly clear, there is nothing out there suggesting that censoring the internet will magically fix those problems. Yet, that’s what KOSA and many other bills are designed to do.
It’s just like adults to be constantly diagnosing the wrong thing in trying to “save the children.” Over the last couple of years there’s been a mostly nonsense moral panic claiming that the teen mental health crisis must be due to social media. Of course, as we’ve detailed repeatedly, the actual research on this does not support that claim at all.
Instead, the evidence suggests that there is a ton of complexity happening here and no one factor. That said, two potentially big factors contributing to the teen mental health crisis are (1) the mental health challenges that their parents are facing, and (2) the lack of available help and resources for both kids and parents to deal with mental health issues.
When you combine that, it should be of little surprise that desperate teens are turning to AI for mental health support. That’s discussed in an excellent new article in The Mercury News’ Mosaic Journalism Program, which helps high school students learn how to do professional-level journalism.
For many teenagers, digital tools such as programs that use artificial intelligence, or AI, have become a go-to option for emotional support. As they learn to navigate and cope in a world where mental health care demands are high, AI is an easy and inexpensive choice.
Now, I know that some people’s immediate response is to be horrified by this, and it’s right to be concerned. But, given the situation teens find themselves in, this is not all that surprising.
Teens don’t have access to real mental health help. On our most recent podcast, we spoke to an expert in raising kids in a digital age, Devorah Heitner, who mentioned that making real, professional mental health support available in every high school would be so much more helpful than something silly like a “Surgeon General’s Warning” on social media.
Indeed, as another recent podcast guest, Candice Odgers, has noted, the evidence actually suggests that the reason kids with mental health issues spend so much time on social media might be because they are already having mental health issues, and the lack of resources to actually help them makes them turn to social media instead.
And now, it appears it may also make them turn to AI systems.
The details in the article aren’t as horrifying as they might otherwise be. It does note that there are ways that using AI can be helpful to some kids, which I’m sure is true:
Some students, like Brooke Joly, who will be a junior at Moreau Catholic High School in Hayward in the fall, say they value the bluntness of AI when seeking advice or mental health tips.
“I’ve asked AI for advice a few times because I just wanted an accurate answer rather than someone I know sugar-coating,” she said by text in an interview.
The privacy and consistency that AI promises its young users does make a compelling case for choosing mental health care delivered via app.
Venkatesh, who said she has struggled with depression, said she appreciates that ChatGPT has no judgmental bias. “I think the symptoms of depression are very stigmatized, and if you were to tell people what the reality of depression is like — skipping meals or skipping showers, for instance — people would judge you for that. I think in those instances, it’s easier to talk to someone who is not human because AI would never judge you for that.”
AI can provide a safe space for teens to be vulnerable at a point when the adults in their lives may not be supportive of mental health care.
That said, this is another area that is simply not well-studied at all (unlike social media and mental health, which now have tons of studies).
Hopefully, we can see some actual studies on whether or not AI can actually be helpful here. The article does note that there are some specialized apps focused on this market, but one would hope those would have some data to back up their approach. Relying on a general LLM like ChatGPT seems like… a much riskier proposition.
As one youth director in the article notes, one thing that using AI does for kids is that it puts them in control, at a time when they often feel they have control over so little. This brings us to yet another study that we’ve talked about in the past: one suggesting that another leading factor in kids’ mental health struggles has been the lack of spaces where parents aren’t hovering over them and making all the decisions.
Given that, you can understand why kids might seek out their own solutions. The lack of viable options that don’t involve, once again, parents or other authority figures hovering over them certainly makes tools like AI chatbots more appealing.
None of this is great, and (again) it would appear that any real solution should involve making mental health professionals more accessible to teens, such as in schools. But absent that, it’s understandable why they might turn to other types of tools. So, hopefully, there’s going to be a lot more research on how helpful (or unhelpful!) those tools actually are, or at least how to properly integrate them into a larger, more comprehensive, approach to improving mental health.
What if banning social media from schools actually put kids at even greater risk?
One of the more annoying things in talking about tech policy is how many people refuse to think one step ahead about how the world reacts to their policy proposals. We’ve talked about this in many contexts, but one that keeps coming up as illustrative is eating disorder content, where getting big social media companies to ban such content backfired.
That’s because the issue with eating disorder content online wasn’t a “supply side” problem (kids getting eating disorders because they stumbled upon such content online), but rather a “demand side” problem (kids with eating disorders seeking out such content). When social media sites banned that content, the kids still went looking for it, but often found it in less reputable places, and (even worse!) often in places that didn’t also try to provide resources or other community members to guide people towards recovery.
For every effort to “ban” something, we need to think about what impact it will actually have.
Lately, there’s been a lot of talk about “banning social media in schools.” And you can see why this feels like it makes sense intuitively. There are concerns about how much time kids spend on social media, what content they see, and certainly whether or not it’s relevant in schools. I mean, we already have school districts across the country filing ridiculous lawsuits against social media companies claiming that they’re melting kids’ brains.
So shouldn’t banning social media in schools be a no-brainer?
Turns out it’s a lot more complex than that. The always excellent reporter Emily Baker-White has a piece at Forbes that, among other things, looks at what happens in schools that have banned social media. And it’s not making kids any safer. If anything, the reverse is happening.
As with the eating disorder issue discussed above, it appears that the kids these days want their social media. And if schools try to ban social media, the kids are finding ways around those bans, often using questionable free VPNs to route around network level blocks. And those free VPNs are… pretty bad about the privacy of people using them:
It’s common practice among most school districts to restrict the internet access of their students to prevent them from browsing porn and “inappropriate” websites — from social media platforms to educational sites about racial identity, mental and reproductive health. And, increasingly, it’s common practice among many kids to use apps that bypass those restrictions so they can view those sites anyway.
Today, 1 in 4 American high school students now use workarounds to avoid schools’ internet restrictions, which, in addition to blocking websites, can also monitor their personal online lives, including their social media posts, emails, and browsing history. The most common of these workarounds is a VPN, or virtual private network, which obscures a user’s IP address from the websites they navigate to and the apps they use. But VPNs — especially the free types that teens are most likely to use — often collect sensitive personal information like location and browsing history.
Many unscrupulous free VPN companies then sell that information to data brokers. Some of them have ties to China, where the Chinese Communist Party has the authority to force any company to hand over such data. And others may contain malware that allows hackers to take control of devices on which they are installed.
Yeah. So, for all the hyped-up fear about TikTok supposedly shipping all our kids’ data to China, it appears a more effective way for China to get data on American kids is to… have more American schools ban social media, leading kids to use sketchy VPNs that suck up their data and send it to China.
And yes, this can lead to very real risks:
Just last week, the U.S. Justice Department indicted a Chinese national for allegedly using free VPNs to gain access to 19 million IP addresses, more than 600,000 of which were in the United States, and renting them out to criminals who used them to stalk and defraud people and engage in child exploitation.
Of course, rather than recognizing that maybe banning stuff outright might lead to worse results, I’m pretty sure this will just become the next mole to whack in this never-ending game of whac-a-mole, with politicians and schools looking for ways to… ban free VPNs.
The article has lots of concerned quotes from policymakers… focused on the problem of VPNs. But not so much on how their own moral panics drove up the usage of these VPNs.
Still, this is yet another example showing that when folks like Jonathan Haidt insist there are no real downsides to their policy proposals — which include banning social media for many kids — they may not understand at all what they’re talking about.
Just last week, we posted about a thorough debunking of the “mobile phones are bad for kids” argument making the rounds. We highlighted how banning phones can actually do significantly more harm than good. This was based on a detailed article in The Atlantic by UCI psychologist and researcher Candice Odgers, who actually studies this stuff.
As she’s highlighted multiple times, none of the research supports the idea that phones or social media are inherently harmful. In the very small number of cases where there’s a correlation, it often appears to be a reverse causal situation:
When associations are found, things seem to work in the opposite direction from what we’ve been told: Recent research among adolescents—including among young-adolescent girls, along with a large review of 24 studies that followed people over time—suggests that early mental-health symptoms may predict later social-media use, but not the other way around.
In other words, the kids who often have both mental health problems and difficulty putting down their phones appear to be turning to their phones because of their untreated mental health issues, and because they don’t have the resources necessary to help them.
Taking away their phones takes away their attempt to find help for themselves, and it also takes away a lifeline that many teens have used to actually help themselves: whether it’s in finding community, finding information they need, or otherwise communicating with friends and family. Cutting that off can cause real harm. Again, as Odgers notes:
We should not send the message to families—and to teens—that social-media use, which is common among adolescents and helpful in many cases, is inherently damaging, shameful, and harmful. It’s not. What my fellow researchers and I see when we connect with adolescents is young people going online to do regular adolescent stuff. They connect with peers from their offline life, consume music and media, and play games with friends. Spending time on YouTube remains the most frequent online activity for U.S. adolescents. Adolescents also go online to seek information about health, and this is especially true if they also report experiencing psychological distress themselves or encounter barriers to finding help offline. Many adolescents report finding spaces of refuge online, especially when they have marginalized identities or lack support in their family and school. Adolescents also report wanting, but often not being able to access, online mental-health services and supports.
All adolescents will eventually need to know how to safely navigate online spaces, so shutting off or restricting access to smartphones and social media is unlikely to work in the long term. In many instances, doing so could backfire: Teens will find creative ways to access these or even more unregulated spaces, and we should not give them additional reasons to feel alienated from the adults in their lives.
But still, when there’s a big moral panic to be had, politicians are quick to follow, so banning mobile phones for teens is on the table:
The committee says that without urgent action, more children could be put in harm’s way.
It recommended the next government should work with the regulator, Ofcom, to consult on additional measures, including the possibility of a total ban on smartphones for under-16s or having parental controls installed as a default.
The report notes that mobile phone use has gone up in recent years:
Committee chairman Robin Walker said its inquiry had heard “shocking statistics on the extent of the damage being done to under-18s”.
The report found there had been a significant rise in screen time in recent years, with one in four children now using their phone in a manner resembling behavioural addiction.
Again, most of those studies cover the time when kids were locked down due to COVID, so it’s not at all surprising that their phone usage went up. And, as Odgers has shown, there’s been no actual data suggesting any real or significant causal connection between phone use and mental health problems for kids.
Since this is happening in the UK, you’d think that maybe the MPs could wander over to Oxford (surely they’re aware of it?) and talk to Andrew Przybylski, who keeps releasing new studies, based on huge data sets, showing no link between phone/internet use and harm. He’s been pumping these out for years. Surely the MPs could be bothered to go take a look?
But, no, it’s easier to ignore the real problem (and the hard societal solutions it would entail) and instead play up the moral panic. Then, they can do something stupidly, dangerously counter-productive like banning phones… and claim victory. Then, when the mental health problems get worse, not better, they can find some other technology to blame, rather than taking a step back and wondering why they’re failing to provide resources to help those dealing with a mental health crisis.
Apparently, the world needs even more terrible bills that let ignorant senators grandstand to the media about how they’re “protecting the kids online.” There’s nothing more serious to work on than that. The latest bill comes from Senators Brian Schatz and Ted Cruz (with assists from Senators Chris Murphy, Katie Britt, Peter Welch, Ted Budd, John Fetterman, Angus King, and Mark Warner). This one is called the “Kids Off Social Media Act” (KOSMA) and it’s an unconstitutional mess built on a long list of debunked and faulty premises.
It’s especially disappointing to see this from Schatz. A few years back, I know his staffers would regularly reach out to smart people on tech policy issues in trying to understand the potential pitfalls of the regulations he was pushing. Either he’s no longer doing this, or he is deliberately ignoring their expert advice. I don’t know which one would be worse.
The crux of the bill is pretty straightforward: it would be an outright ban on social media accounts for anyone under the age of 13. As many people will recognize, we kinda already have a “soft” version of that because of COPPA, which puts much stricter rules on sites directed at those under 13. Because most sites don’t want to deal with those stricter rules, they officially limit account creation to those over the age of 13.
In practice, this has been a giant mess. Years and years ago, Danah Boyd pointed this out, talking about how the “age 13” bit is a disaster for kids, parents, and educators. Her research showed that all this generally did was to have parents teach kids that “it’s okay to lie,” as parents wanted kids to use social media tools to communicate with grandparents. Making that “soft” ban a hard ban is going to create a much bigger mess and prevent all sorts of useful and important communications (which, yeah, is a 1st Amendment issue).
The reasons Schatz puts forth for the bill are just… wrong.
No age demographic is more affected by the ongoing mental health crisis in the United States than kids, especially young girls. The Centers for Disease Control and Prevention’s Youth Risk Behavior Survey found that 57 percent of high school girls and 29 percent of high school boys felt persistently sad or hopeless in 2021, with 22 percent of all high school students—and nearly a third of high school girls—reporting they had seriously considered attempting suicide in the preceding year.
Gosh. What was happening in 2021 with kids that might have made them feel hopeless? Did Schatz and crew simply forget about the fact that most kids were under lockdown and physically isolated from friends for much of 2021? And that there were plenty of other stresses, including millions of people, including family members, dying? Noooooo. Must be social media!
Studies have shown a strong relationship between social media use and poor mental health, especially among children.
Note the careful word choice here: “strong relationship.” They won’t say a causal relationship because studies have not shown that. Indeed, as the leading researcher in the space has noted, there continues to be no real evidence of any causal relationship. The relationship appears to work the other way: kids who are dealing with poor mental health and who are desperate for help turn to the internet and social media because they’re not getting help elsewhere.
Maybe offer a bill that helps kids get access to more resources that help them with their mental health, rather than taking away the one place they feel comfortable going? Maybe?
From 2019 to 2021, overall screen use among teens and tweens (ages 8 to 12) increased by 17 percent, with tweens using screens for five hours and 33 minutes per day and teens using screens for eight hours and 39 minutes.
I mean, come on Schatz. Are you trolling everyone? Again, look at those dates. WHY DO YOU THINK that screen time might have increased 17% for kids from 2019 to 2021? COULD IT POSSIBLY BE that most kids had to do school via computers and devices at home, because there was a deadly pandemic making the rounds?
Maybe?
Did Schatz forget that? I recognize that lots of folks would like to forget the pandemic lockdowns, but this seems like a weird way to manifest that.
I mean, what a weird choice of dates to choose. I’m honestly kind of shocked that the increase was only 17%.
Also, note that the data presented here isn’t about an increase in social media use. It could very well be that the 17% increase was Zoom classes.
Based on the clear and growing evidence, the U.S. Surgeon General issued an advisory last year, calling for new policies to set and enforce age minimums and highlighting the importance of limiting the use of features, like algorithms, that attempt to maximize time, attention, and engagement.
Wait. You mean the same Surgeon General’s report that denied any causal link between social media and mental health (the very link you falsely claim has been proven) and noted just how useful and important social media is to many young people?
From that report, which Schatz misrepresents:
Social media can provide benefits for some youth by providing positive community and connection with others who share identities, abilities, and interests. It can provide access to important information and create a space for self-expression. The ability to form and maintain friendships online and develop social connections are among the positive effects of social media use for youth. These relationships can afford opportunities to have positive interactions with more diverse peer groups than are available to them offline and can provide important social support to youth. The buffering effects against stress that online social support from peers may provide can be especially important for youth who are often marginalized, including racial, ethnic, and sexual and gender minorities. For example, studies have shown that social media may support the mental health and well-being of lesbian, gay, bisexual, asexual, transgender, queer, intersex and other youths by enabling peer connection, identity development and management, and social support. Seven out of ten adolescent girls of color report encountering positive or identity-affirming content related to race across social media platforms. A majority of adolescents report that social media helps them feel more accepted (58%), like they have people who can support them through tough times (67%), like they have a place to show their creative side (71%), and more connected to what’s going on in their friends’ lives (80%). In addition, research suggests that social media-based and other digitally-based mental health interventions may also be helpful for some children and adolescents by promoting help-seeking behaviors and serving as a gateway to initiating mental health care.
Did Schatz’s staffers just, you know, skip over that part of the report or nah?
The bill also says that companies need to not allow algorithmic targeting of content to anyone under 17. This is also based on a widely believed myth that algorithmic content is somehow problematic. No studies have legitimately shown that of current algorithms. Indeed, a recent study showed that removing algorithmic targeting leads to people being exposed to more disinformation.
Is this bill designed to force more disinformation on kids? Why would that be a good idea?
Yes, some algorithms can be problematic! About a decade ago, algorithms that tried to optimize solely for “engagement” definitely created some bad outcomes. But it’s been a decade since most such algorithms have been designed that way. On most social media platforms, the algorithms are designed in other ways, taking into account a variety of different factors, because they know that optimizing just on engagement leads to bad outcomes.
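To make that distinction concrete, here's a minimal, purely hypothetical sketch of the two ranking approaches. Every signal name and weight below is invented for illustration; this is not drawn from any actual platform's code:

```python
# Purely illustrative sketch: not any real platform's ranking code.
# All signal names and weights below are invented for demonstration.

def engagement_only_score(post):
    """The old approach: rank purely on predicted engagement."""
    return post["predicted_clicks"] + post["predicted_comments"]

def multi_factor_score(post):
    """The newer approach: engagement is just one signal among several,
    with a penalty for signals associated with bad outcomes."""
    score = 0.4 * (post["predicted_clicks"] + post["predicted_comments"])
    score += 0.3 * post["predicted_long_term_satisfaction"]
    score += 0.2 * post["source_quality"]
    score -= 0.5 * post["predicted_report_rate"]  # e.g. likely abuse reports
    return score

# A hypothetical piece of outrage bait: highly engaging, but low
# satisfaction, low-quality source, and frequently reported.
post = {
    "predicted_clicks": 0.9,
    "predicted_comments": 0.8,
    "predicted_long_term_satisfaction": 0.1,
    "source_quality": 0.2,
    "predicted_report_rate": 0.6,
}

# Under engagement-only ranking this post scores near the top; once the
# other factors are weighed in, its score drops sharply.
```

The point of the sketch is just the structural difference: an engagement-only objective rewards whatever provokes clicks, while a multi-factor objective can down-rank the same content when other signals suggest it produces bad outcomes.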
Then the bill tacks on Cruz’s bill to require schools to block social media. There’s an amusing bit when reading the text of that part of the law. It says that you have to block social media on “federally funded networks and devices” but also notes that it does not prohibit “a teacher from using a social media platform in the classroom for educational purposes.”
But… how are they going to access those if the school is required by law to block access to such sites? Most schools are going to do a blanket ban, and teachers are going to be left to do what? Show kids useful YouTube science videos on their phones? Or maybe some schools will implement a special teacher code that lets them bypass the block. And by the end of the first week of school half the kids in the school will likely know that password.
What are we even doing here?
Schatz has a separate page hyping up the bill, and it’s even dumber than the first one above. It repeats some of the points above, though this time linking to Jonathan Haidt, whose work has been trashed left, right, and center by actual experts in this field. And then it gets even dumber:
Big Tech knows it’s complicit – but refuses to do anything about it…. Moreover, the platforms know about their central role in turbocharging the youth mental health crisis. According to Meta’s own internal study, “thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse.” It concluded, “teens blame Instagram for increases in the rate of anxiety and depression.”
This is not just misleading, it’s practically fraudulent misrepresentation. The study Schatz is citing is one that was revealed by Frances Haugen. As we’ve discussed, it was done because Meta was trying to understand how to do better. Indeed, the whole point of that study was to see how teens felt about using social media in 12 different categories. Meta found that most boys felt neutral or better about themselves in all 12 categories. For girls, it was 11 out of 12. It was only in one category, body image, where the split was more pronounced. 32% of girls said that it made them feel worse. Basically the same percentage said it had no impact, or that it made them feel better.
Also, look at that slide’s title. The whole point of this study was to figure out if they were making kids feel worse in order to look into how to stop doing that. And now, because grandstanders like Schatz are falsely claiming that this proves they were “complicit” and “refuse to do anything about it,” no social media company will ever do this kind of research again.
Because, rather than proactively looking to see if they’re creating any problems that they need to try to fix, Schatz and crew are saying “simply researching this is proof that you’re complicit and refuse to act.”
Statements like this basically ensure that social media companies stick their heads in the sand, rather than try to figure out where harm might be caused and take steps to stop that harm.
Why would Schatz want to do that?
That page then also falsely claims that the bill does not require age verification. This is a silly two-step that lying politicians trot out every time they do this. Does the bill directly mandate age verification? No. But by making the penalties for failing to keep kids off social media so serious and costly, it will obviously drive companies to introduce stronger age verification measures, which are inherently dangerous and an attack on privacy.
Perhaps Schatz doesn’t understand this, but it’s been widely discussed by many of the experts his staff used to talk to. So, really, he has no excuse.
The FAQ also claims that the bill will pass constitutional muster, while at the same time admitting that they know there will be lawsuits challenging it:
Yes. As, for example, First Amendment expert Neil Richards explains, “[i]nstead of censoring the protected expression present on these platforms, the act takes aim at the procedures and permissions that determine the time, place and manner of speech for underage consumers.” The Supreme Court has long held that the government has the right to regulate products to protect children, including by, for instance, restricting the sale of obscene content to minors. As Richards explains: “[i]n the same way a crowded bar or nightclub is no place for a child on their own”—or in the way every state in the country requires parental consent if it allows a minor to get a tattoo—“this rule would set a reasonable minimum age and maturity limitation for social media customers.”
While we expect legal challenges to any bill aimed at regulating social media companies, we are confident that this content-neutral bill will pass constitutional muster given the government interests at play.
There are many reasons why this is garbage under the law, but rather than breaking them all down (we’ll wait for judges to explain it in detail), I’ll just point out the major tell in the law itself. In the definition of what a “social media platform” is, there is a long list of exceptions for what the law does not cover. It includes a few “moral panics of yesteryear”: media that gullible politicians tried to ban, only to be found to have violated the First Amendment in the process.
It explicitly carves out video games and content that is professionally produced, rather than user-generated:
Remember the moral panics about video games and TV destroying kids’ minds? Yeah. So this child protection bill rushes to say “but we’re not banning that kind of content!” Because whoever drafted the bill recognized that the Supreme Court has already made it clear that politicians can’t do that for video games or TV.
So, instead, they have to pretend that social media content is somehow on a whole different level.
But it’s not. It’s still the government restricting access to content. They’re going to pretend that there’s something unique and different about social media, and that they’re not banning the “content” but rather the “place” and “manner” of accessing that content. Except that’s laughable on its face.
You can see that in the quote above, where Schatz does the fun dance of first saying “it’s okay to ban obscene content to minors” and then pretending that’s the same as restrictions on access to a bar (it’s not). One is about the content; the other is about a physical place. Social media is all about the content, and it’s not obscene content (which is already an exception to the First Amendment).
And the “parental consent” for tattoos… I mean, what the fuck? Literally four questions above in the FAQ, Schatz insists that his bill has nothing to do with parental consent. And then he tries to defend it by claiming it’s no different than parental consent laws?
The FAQ also claims this:
This bill does not prevent LGBTQ+ youth from accessing relevant resources online and we have worked closely with LGBTQ+ groups while crafting this legislation to ensure that this bill will not negatively impact that community.
I mean, it’s good you talked to some experts, but I note that most of the LGBTQ+ groups I’m aware of are not listed on your list of “groups supporting the bill” on the very same page. That absence stands out.
And, again, the Surgeon General’s report that you misleadingly cited elsewhere highlights how helpful social media can be to many LGBTQ+ youth. You can’t just say “nah, it won’t harm them” without explaining why all those benefits that have been shown in multiple studies, including the Surgeon General’s report, somehow don’t get impacted.
There’s a lot more, but this is just a terrible bill that would create a mess. And, I’m already hearing from folks in DC that Schatz is trying to get this bill added to the latest Christmas tree of a bill to reauthorize the FAA.
It would be nice if we had politicians looking to deal with the actual challenges facing kids these days, including the lack of mental health support for those who really need it. Instead, we get unconstitutional grandstanding nonsense bills like this.
Everyone associated with this bill should feel ashamed.
The Florida legislature (with the support of goon-in-chief Ron DeSantis) really has a thing for unconstitutional bills that piss on the 1st Amendment. I mean, the state is still in a 1st Amendment fight with Mickey Mouse, which should tell you something.
Some of those laws are about schools. Some are about libraries, and — yes — some are about social media. You’ll recall that Florida and DeSantis were among the first to pass a law trying to regulate social media editorial freedom, which went down in flames in both the district court and the 11th Circuit (the Supreme Court is about to hear the case).
The latest entrant into this unconstitutional morass is the very first bill of the 2024 session, HB 1, “Social Media Use for Minors.” The name is a misnomer. It should be “no social media use for minors” because it’s an out and out ban on kids using social media under the age of 16. It also requires age verification and some other things, all of which have already been found to be unconstitutional violations of the 1st Amendment elsewhere.
But this is Florida, where culture wars outrank constitutionality, apparently.
The bill’s sponsor, Florida Rep. Tyler Sirois, claims on his website that he believes in “the principals of limited government, individual responsibility, and constitutional liberty” which is fucking hilarious given that this bill literally violates all three of those principles.
The bill is an unconstitutional government intrusion in the ability of individuals to take responsibility for their own actions. Hear me out, but it’s possible that Tyler Sirois is a hypocrite.
Sirois has said the bill is necessary to keep kids safe. But, of course, as we keep pointing out, there remains an astounding lack of evidence of any inherent danger to kids from social media. Indeed, as the Journal of Pediatrics detailed in a big meta-study last fall, the lack of spaces where kids can be kids seems to be the driving factor in teen mental health issues, and taking social media away from them seems likely to make that worse. Most kids seem fine with social media, and many find it hugely beneficial. There is only a small group for whom it is problematic, and any real solutions should be focused on helping that small group, not banning all kids.
Second, it’s well established that kids have 1st Amendment rights to access information and both social media bans and age restrictions have been found to be obviously, and easily, unconstitutional in just the last year alone. Does Sirois not have a legislative director who understands these things, or does he just not care? You don’t get to pretend the 1st Amendment doesn’t exist to push your unconstitutional monstrosity.
So, the bill attacks a “problem” that the evidence says doesn’t exist, and does so in an unconstitutional way. And it does so by removing parental autonomy and responsibility.
So how the hell does Sirois claim that those are his three principles… and then push this monstrosity?
Okay, look, at this point, we need to start calling out those in positions of power who insist that it’s unquestionable that social media is harmful to kids when they don’t present any evidence at all to back up those assertions. Because as we’ve been documenting, every single study that comes out these days seems to say the exact opposite. I know that I’ve posted this a few times lately, but I’m going to do so again, because it’s important to understand just how the research consensus is shaping up these days:
In the fall of 2022, the widely respected Pew Research Center did a massive study on kids and the internet, and found that for a majority of teens, social media was way more helpful than harmful.
In May of 2023, the American Psychological Association (which has fallen for tech moral panics in the past, such as with video games) released a huge, incredibly detailed, and nuanced report going through all of the evidence, and finding no causal link between social media and harms to teens.
Soon after that, the US Surgeon General came out with a report which was misrepresented widely in the press. Yet, the details of that report also showed that no causal link could be found between social media and harms to teens. It did still recommend that we act as if there were a link, which was weird and explains the media coverage, but the actual report highlights no causal link, while also pointing out how much benefit teens receive from social media.
A few months later, an Oxford University study came out covering nearly a million people across 72 countries, noting that it could find no evidence of social media leading to psychological harm.
The Journal of Pediatrics published a new study in the fall of 2023 again noting that after looking through decades of research, the mental health epidemic faced among young people appears largely due to the lack of open spaces where kids can be kids without parents hovering over them. That report notes that they explored the idea that social media was a part of the problem, but could find no data to support that claim.
In November of 2023, Oxford University published yet another study, this one focused specifically on screen time and whether increased screen time damages kids, and found no data to support that contention.
That’s not to say there isn’t some sort of mental health crisis going on these days. Almost every expert believes there absolutely is. It’s just that the rush to blame it on social media is simply unsupported by the data. If anything, as the Journal of Pediatrics study shows, it’s the lack of open spaces where kids can be kids without parents watching their every move (which predates the rise of social media) that may contribute the most to the rise in mental health issues among children. Thus, the simplistic, and almost certainly wrong, argument that social media is to blame may even make the problem worse, because social media has become the one place left where kids often can just be kids without parents hovering over them.
Much of the research above — including the APA and Surgeon General’s report — also find that for many teens, social media is actually very useful and helpful for their mental health, in giving them a place to explore, figure out who they are as a person, and to interact with people beyond the narrow set of folks they might meet otherwise.
However, many of the studies also agree that for a small — but still important — group of teens, social media can exacerbate existing mental health problems, especially when they turn to it alone as a kind of self-medication, which can pull them in deeper. And it’s quite clear that we should be looking for, promoting, and encouraging efforts to help those at-risk teens, and providing better tools and resources for them.
But that’s very different from insisting (and regulating) social media as if it is universally bad for kids.
That’s all preamble to what this post is actually about. Utah Governor Spencer Cox has already made it clear he hates social media. He signed one of the first bills in the country that (unconstitutionally) tries to ban kids from social media, and mocked those who pointed out it was unconstitutional (we’ll soon find out, as Utah was just sued over that law).
“I think it’s obvious to anyone who spends any time on social media or has kids — I have four kids. I’ve seen what’s happened to them as they’ve spent time on social media, and their friends, that this is absolutely causing these terrible increases, these hockey stick-like increases that we are seeing in anxiety, depression, and self-harm amongst our youth,” Cox, the chair of National Governors Association, said during an interview on NBC’s “Meet the Press,” that aired Sunday.
Now, if Meet the Press were actually concerned about accuracy, its host might have, you know, pointed out all the studies that say otherwise, and questioned Cox on how his anecdotal insistence can possibly stand up to all of those studies. But, that’s not how the mainstream media acts these days (and especially not those that have a vested interest in slamming the internet).
Cox went even further, though, insisting not only does it harm kids (despite the evidence to the contrary), but also that big tech knows this and doesn’t care:
“They know this is harming our kids,” Cox said of big tech companies. “They’re covering it up. They’re doing everything possible to take advantage of our kids for their own gain. And we’re not going to stand for that. And so we’re still pushing forward.”
Now, it’s always possible that some companies are doing this, but from what I’ve seen, the opposite is true. The research that has come out to date has shown companies studying this stuff in order to figure out ways to minimize the harm.
Of course, with the spin on things like Meta’s internal research (which, again, was meant to look at ways to minimize any harm) being falsely portrayed as Meta “covering up” or “ignoring” harm caused to kids, it’s actually now going to drive these companies to do less research, and to do less to stop any harms. Because politicians like Cox, and media outlets like NBC, are still going to spin any such research as “proof” of “covering it up.”
The whole thing is stupid beyond belief. The evidence shows what the evidence shows, and it’s that right now there’s a giant moral panic going on. There is no evidence that social media is inherently bad for teenagers. There is a ton of research suggesting it’s helpful for most kids, and that any interventions should be clearly targeted to the small group of at-risk kids.
But, Spencer Cox is absolutely positive he’s right, apparently based on a sample size of his own kids. Maybe, given all of these studies, the real issue is that Cox has been spending so much time raging about culture war moral panics when he could have, you know, taught his kids how to use the internet properly.
Notably, the other guest on that episode of Meet the Press was the governor of Utah’s neighbor to the east, Governor Jared Polis from Colorado. And despite the GOP constantly insisting it’s the party of “parents’ rights” and “keeping government out” of everyone’s business, it’s Polis who argues that Cox is doing the opposite, and suggests he (correctly) thinks these are issues that parents themselves should deal with:
“I think the responsibility belongs with parents, not the government,” Polis, the vice chair of the NGA, said during the joint interview with Cox.
“I certainly agree with the diagnosis that Governor Cox did, and I have some sympathy for that approach. But I do think at the end of the day, the government can’t parent kids,” he added later.
Polis is still wrong regarding the diagnosis. The evidence pretty clearly says that. But he’s correct that this is an issue for parents and schools, not for the government to step in and effectively ban children from the very social media that many of them find so useful.
At this point, I really have to question the seriousness of anyone who claims that the evidence shows that social media is bad for kids. We’re now reaching a point where the research is increasingly overwhelmingly pointing in the other direction. I’ve posted it before, but I’ll post this list again:
Last fall, the widely respected Pew Research Center did a massive study on kids and the internet, and found that for a majority of teens, social media was way more helpful than harmful.
This past May, the American Psychological Association (which has fallen for tech moral panics in the past, such as with video games) released a huge, incredibly detailed and nuanced report going through all of the evidence, and finding no causal link between social media and harms to teens.
Soon after that, the US Surgeon General came out with a report which was misrepresented widely in the press. Yet, the details of that report also showed that no causal link could be found between social media and harms to teens. It did still recommend that we act as if there were a link, which was weird and explains the media coverage, but the actual report highlights no causal link, while also pointing out how much benefit teens receive from social media.
A few months later, an Oxford University study came out covering nearly a million people across 72 countries, noting that it could find no evidence of social media leading to psychological harm.
The Journal of Pediatrics recently published a new study again noting that after looking through decades of research, the mental health epidemic faced among young people appears largely due to the lack of open spaces where kids can be kids without parents hovering over them. That report notes that they explored the idea that social media was a part of the problem, but could find no data to support that claim.
In November a new study came out from Oxford showing no evidence whatsoever of increased screentime having any impact on the functioning of brain development in kids.
And we can go back further too. There was a study in 2019 that couldn’t find any evidence of social media being bad for kids.
But now we have yet another study to add to the list. And it’s a big one. It comes from the National Academies of Sciences, entitled Social Media and Adolescent Health. Eleven different academics helped put the paper together, along with another seven staff members who worked on it. This isn’t just some random report that a couple academics put together. It was a massive project. And it shows.
But the key finding:
The committee’s review of the literature did not support the conclusion that social media causes changes in adolescent health at the population level.
That’s not to say that everything is great. As we’ve detailed, and as many other studies have shown, there certainly are situations where some individuals who are already dealing with certain mental health issues may find them exacerbated on social media. And there are some reasonable concerns about some kids getting so focused on social media that it takes away from sleep or studying. And the report makes this clear as well.
As this (and many other) reports make clear, the issues here are more complex, and any focus on just banning social media outright would likely do more harm than good:
Studies looking at the association between social media use and feelings of sadness over time have largely found small to no effects, but people with clinically meaningful depression may engage with social media differently. Some research has proposed that this relation is circular, with people with more symptoms of depression spending more time using social media and social media use predicting risk of depression. At the same time, the relation between social media use and depression might vary among different demographic or identity groups. Among LGBTQ+ teens, for example, social media use is associated with fewer depressive symptoms but an increased risk of bullying.
The report notes that it would be useful to have access to more data, while also admitting (which unfortunately too many academics don’t) that data access questions also come with certain risks:
It is difficult to determine what effect social media has on well-being or the extent to which companies are doing due diligence to protect young people from the more habit-forming affordances of their platforms, as companies retain extremely tight control on their data and algorithms. A general lack of transparency regarding social media operations has bred public distrust of the platforms and the companies that run them. Yet some of the companies’ reluctance to share data is valid. Platform algorithms are proprietary, which can make obliging companies to share them seem unfair and uncompetitive. Social media companies also hold a great deal of information about ordinary people that could, in the wrong hands, be used for surveillance or blackmail. For these reasons, the development of technical standards to benchmark platform operations, transparency, and data use requires the coordination of a range of stakeholders.
The report then has a bunch of recommendations, and notably they do not include things like age verification, aggressive parental controls, or cutting off kids' access to social media (which are the main policies we see being proposed around the globe). Instead, the recommendations are much more reasonable and nuanced. They include things like much more digital media literacy training in schools, starting as early as kindergarten and running through all years of schooling.
It does suggest that social media companies should develop more standardized systems for reporting abuse and harassment, as well as for managing, adjudicating, and following up on those reports. It also suggests that social media companies should be more open to working with researchers to share data, but it doesn't seem to be suggesting mandated access, just "good faith efforts," which seems more reasonable than out-and-out mandates.
Overall, this is yet another study showing that these issues are complex and nuanced, and that much of the media reporting (and political messaging, including by the US Surgeon General) goes way beyond what the data actually show.
I'm also pleased that, unlike the misleading reports that chart an increase in teen suicide starting from the mid-2000s (which some academics have used to blame social media), this report's data goes back to the 1970s. That longer view (we published the identical chart last year) shows that teen suicide rates were much higher in the 90s, declined sharply in the early 2000s, and only started rising again over the last decade or so.
Anyway, as we’ve been saying for the longest time, the general idea that social media is inherently harmful to teens has been debunked so many times it’s simply malpractice for anyone — especially a policymaker or journalist — to say otherwise at this point. There are real concerns for some teens. But, at the same time, it’s pretty clear that social media is also helpful for many teens.
We should be looking at ways to help those who end up having problems with it, though that appears to be a very small percentage. Instead of looking for targeted treatments, however, we're seeing overblown nonsense suggesting it's harmful across the board.
Hopefully this study, like so many others, will finally get across the idea that it is not, in fact, inherently harmful.