Apparently, Congress only “listens to the children” when they agree with what the kids are saying. As soon as some kids oppose something like KOSA, their views no longer count.
It’s no surprise given the way things were going, but the Senate today overwhelmingly passed KOSA by a 91 to 3 vote. The three no votes were from Senators Ron Wyden, Rand Paul, and Mike Lee.
There are still big questions about whether the House will follow suit and, if so, how different its bill would be and how the two chambers' bills would be reconciled. But this is a step closer to KOSA becoming law, and to it creating all of the many problems people have been highlighting for years.
One thing I wanted to note, though, is how cynical the politicians supporting this have been. It's become pretty typical for senators to roll out “example kids” as props for why they have to pass these bills. They tell stories about horrible things that happened, with no clear explanation of how this bill would actually prevent those bad things, while totally ignoring the many other bad things the bill would cause.
In the case of KOSA, we've already highlighted how it would harm access to all sorts of information and tools that are used to help and protect kids. The most obvious example is LGBTQ+ kids, who often use the internet to explore their identity or to communicate with others who might feel isolated in their physical communities. Indeed, GOP support for KOSA was conditioned on the idea that the law would be used to suppress LGBTQ+ related content.
But I did find it notable how little attention was given last week, after all of the pro-KOSA team's use of kids as props for the bill, to the ACLU sending hundreds of students to Congress to tell lawmakers how much KOSA would harm them.
Last week, the American Civil Liberties Union sent 300 high school students to Capitol Hill to lobby against the Kids Online Safety Act, a bill meant to protect children online.
The teenagers told the staffs of 85 lawmakers that the legislation could censor important conversations, particularly among marginalized groups like L.G.B.T.Q. communities.
“We live on the internet, and we are afraid that important information we’ve accessed all our lives will no longer be available,” said Anjali Verma, a 17-year-old rising high school senior from Bucks County, Pa., who was part of the student lobbying campaign. “Regardless of your political perspective, this looks like a censorship bill.”
But somehow, that perspective gets mostly ignored in all of this.
It would have been nice to have had an actual discussion on the policy challenges here, but from the beginning, KOSA co-sponsors Richard Blumenthal and Marsha Blackburn refused to take any of the concerns about the bill seriously. They frequently insisted that any criticism of the bill was just “big tech” talking points.
And, while they made cosmetic changes to try to appease some critics, the bill does not (and cannot) fix its fundamental problems. The bill is, at its heart, a censorship bill. And, while it does not directly demand censorship, the easiest and safest way to comply with the law will be to take down whatever culture war hot topic content politicians don't like.
It's kind of incredible that many of those who voted for the bill today were big supporters of the Missouri case against the administration, including Missouri's former Attorney General Eric Schmitt, who brought that suit and who, now a senator, voted in favor of KOSA today. So, apparently, according to Schmitt, the government should never try to influence how social media companies decide to take down content, but the government should also have the power to take enforcement action against companies that don't take down content the FTC decides is harmful.
There is a tremendous amount of hypocrisy here. And it would be nice if someone asked the senators voting in favor of this law why they were going against the wishes of all the kids who visited the Hill last week. After all, that’s what the senators who trotted out kids on the other side tried to do to those few senators who pointed out the flaws in this terrible law.
We live in the age of performative lawmaking. Something must be done! This is something. We will do it. Who cares about the tradeoffs, nuances, or the evidence? Throw all that out the window and DO SOMETHING. And if you’re going to DO SOMETHING why not make it big, bold, and already proven ineffective? At least it will get you headlines.
The underlying concerns about kids and technology are often quite legitimate. It's reasonable to worry about kids being distracted or spending too much time on phones or social media. But the existence of concerns doesn't mean an outright ban is an effective or necessary policy. It would be nice if policymaking involved actually looking at the evidence rather than going on gut feelings.
But apparently, that’s not how it works.
Last month, we had an article about California Governor Gavin Newsom's wife pushing an evidence-free moral panic about kids and social media. The very next day, we had a story by two Australian professors who had reviewed all of the research on whether banning phones in school is effective. They found that the evidence simply did not support it, concluding that “the evidence for banning mobile phones in schools is weak and inconclusive.”
Certainly, some studies showed small positive benefits to removing phones, but many also showed negative effects. As we discussed on our most recent podcast with another researcher in the field, such bans can cause other problems as well.
Gov. Gavin Newsom called on Tuesday for a statewide ban on smartphone use in California schools, joining a growing national effort to curb cyberbullying and classroom distraction by limiting access to the devices.
Mr. Newsom, who has four school-age children, said he would work this summer with state lawmakers to dramatically restrict phone use during the school day in the nation’s most populous state.
Again, the actual evidence has shown that it's not at all clear that an outright ban is effective, and such bans have failed in many places. New York City tried to ban phones in schools a decade ago, and it failed miserably. It was enforced unequally, often targeting kids in low-income communities, and parents wanted to know that their kids could call them in an emergency. At the time, NYC's school chancellor said “lifting the cell phone ban is about common sense.”
Apparently, here in California, we no longer believe in common sense. Or evidence. We believe in the “feels” of the governor and his wife.
Of course, New York seems to be backsliding as well. Just a few weeks ago, New York’s Governor Kathy Hochul… also called for banning phones in schools, as if there wasn’t already evidence as to why such bans don’t work in her own state.
Again, I don't think anyone believes that kids should be on their phones all day. But an outright ban is a blunt instrument that hasn't worked all that well. Instead, it seems like there should be room for variability. Let parents, teachers, and school principals figure these things out on a more micro level, rather than implementing a flat-out statewide ban.
But, alas, when we’re living in an age of moral panics, apparently such nuances and more focused approaches aren’t allowed.
Mobile phones are currently banned in all Australian state schools and many Catholic and independent schools around the country. This is part of a global trend over more than a decade to restrict phone use in schools.
But previous research has shown there is little evidence on whether the bans actually achieve these aims.
Many places that restricted phones in schools before Australia did have now reversed their decisions. For example, several school districts in Canada implemented outright bans then revoked them as they were too hard to maintain. They now allow teachers to make decisions that suit their own classrooms.
A ban was similarly revoked in New York City, partly because bans made it harder for parents to stay in contact with their children.
What does recent research say about phone bans in schools?
Our study
We conducted a “scoping review” of all published and unpublished global evidence for and against banning mobile phones in schools.
Our review, which is pending publication, aims to shed light on whether mobile phones in schools impact academic achievement (including paying attention and distraction), students’ mental health and wellbeing, and the incidence of cyberbullying.
A scoping review is done when researchers know there aren’t many studies on a particular topic. This means researchers cast a very inclusive net, to gather as much evidence as possible.
Our team screened 1,317 articles and reports as well as dissertations from master's and PhD students. We identified 22 studies that examined schools before and after phone bans. There was a mix of study types. Some looked at multiple schools and jurisdictions, some looked at a small number of schools, some collected quantitative data, others sought qualitative views.
In a sign of just how little research there is on this topic, 12 of the studies we identified were done by master's and doctoral students. This means they are not peer-reviewed, but were done by research students under the supervision of an academic in the field.
But in a sign of how fresh this evidence is, almost half the studies we identified were published or completed since 2020.
The studies looked at schools in Bermuda, China, the Czech Republic, Ghana, Malawi, Norway, South Africa, Spain, Sweden, Thailand, the United Kingdom and the United States. None of them looked at schools in Australia.
Academic achievement
Our research found four studies that identified a slight improvement in academic achievement when phones were banned in schools. However, two of these studies found this improvement only applied to disadvantaged or low-achieving students.
Some studies compared schools with partial bans against schools with complete bans. This muddies the comparison, since neither group provides a true no-ban baseline.
But three studies found no differences in academic achievement, whether there were mobile phone bans or not. Two of these studies used very large samples. One master's thesis looked at 30% of all schools in Norway. Another study used a nationwide cohort in Sweden. This means we can be reasonably confident in these results.
Mental health and wellbeing
Two studies in our review, including this doctoral thesis, reported mobile phone bans had positive effects on students’ mental health. However, both studies used teachers’ and parents’ perceptions of students’ wellbeing (the students were not asked themselves).
Two other studies showed no differences in psychological wellbeing following mobile phone bans. However, three studies reported more harm to students’ mental health and wellbeing when they were subjected to phone bans.
The students reported they felt more anxious without being able to use their phone. This was especially evident in one doctoral thesis carried out when students were returning to school after the pandemic, having been very reliant on their devices during lockdown.
So the evidence for banning mobile phones for the mental health and wellbeing of students is inconclusive and based only on anecdotes or perceptions, rather than the recorded incidence of mental illness.
Bullying and cyberbullying
Four studies reported a small reduction in bullying in schools following phone bans, especially among older students. However, the studies did not specify whether or not they were talking about cyberbullying.
Teachers in two other studies, including this doctoral thesis, reported they believed having mobile phones in schools increased cyberbullying.
But two other studies showed the number of incidents of online victimisation and harassment was greater in schools with mobile phone bans compared with those without bans. These studies didn't collect data on whether the online harassment was happening inside or outside school hours.
The authors suggested this might be because students saw the phone bans as punitive, which made the school climate less egalitarian and less positive. Other research has linked a positive school climate with fewer incidents of bullying.
There is no research evidence that students do or don’t use other devices to bully each other if there are phone bans. But it is of course possible for students to use laptops, tablets, smartwatches or library computers to conduct cyberbullying.
Even if phone bans were effective, they would not address the bulk of school bullying. A 2019 Australian study found 99% of students who were cyberbullied were also bullied face-to-face.
What does this tell us?
Overall, our study suggests the evidence for banning mobile phones in schools is weak and inconclusive.
As Australian education academic Neil Selwyn argued in 2021, the impetus for mobile phone bans says more about MPs responding to community concerns than about research evidence.
Politicians should leave this decision to individual schools, which have direct experience of the pros or cons of a ban in their particular community. For example, a community in remote Queensland could have different needs and priorities from a school in central Brisbane.
Mobile phones are an integral part of our lives. We need to be teaching children about appropriate use of phones, rather than simply banning them. This will help students learn how to use their phones safely and responsibly at school, at home and beyond.
We keep pointing to research that suggests the narrative around “social media is bad for kids” is simply not supported by the data. Over and over again, we see studies that suggest that adults are overreacting to a few limited cases. Sometimes, problematic social media use seems to be due to a lack of systems in place to help with mental health issues, leading kids to spend more time on social media because they aren’t getting the support and help that they need.
One of the key claims in Jonathan Haidt's problematic recent book is that a real downside of social media is that it takes kids away from spending time with other kids. And this might feel reasonable. Lots of parents, certainly, have stories of kids staring at mobile device screens and seeming to have a less active social life than the parents had at the same age.
But is it actually accurate?
A fascinating new study out of Norway suggests that Haidt (and the conventional wisdom many people believe about this) could be absolutely wrong.
Using five waves of biennially collected data from a birth cohort assessed throughout age 10–18 years (n = 812), we found that increased social media use predicted more time with friends offline but was unrelated to future changes in social skills. Age and sex did not moderate these associations but increased social media use predicted declined social skills among those high in social anxiety symptoms. The findings suggest that social media use may neither harm nor benefit the development of social skills and may promote, rather than displace, offline interaction with friends during adolescence. However, increased social media use may pose a risk for reduced social skills in socially anxious individuals.
Again, this seems to support much of the previous data we’ve seen suggesting that social media use is not inherently harmful for kids, and could actually be helpful in creating a new avenue for socializing with other kids in some cases.
But the second part of the findings also confirms the other point we've raised here. Some adolescents, often those already struggling with their mental health, may end up turning to social media in response to a lack of other resources and help. This seems to be supported by the finding that social media could “pose a risk for reduced social skills in socially anxious individuals.”
Both of these findings support what we've been discussing all along: rather than focusing on outright bans of social media, the better approach is to provide more mental health resources for kids (perhaps even building some of those resources and tools into social media apps) and to work on better ways to identify the smaller cohort who are struggling.
Obviously, there may be limits to this research as well. It's based on an ongoing long-term study of children born in Trondheim, Norway, in 2003 and 2004. It is entirely possible that young people in Trondheim are not representative of the wider world (or even of young people in Norway). But, at the very least, the depth and detail of the Trondheim Early Secure Study (TESS) suggests that the researchers have pretty detailed data on this particular cohort.
Because the initial, overarching aim of TESS was to study mental health, we oversampled for children with emotional and behavioral problems, thus increasing variance and statistical power. More specifically, children were allocated to four strata according to their SDQ scores (cut-offs: 0–4, 5–8, 9–11, and 12–40), and the probability of selection increased with increasing SDQ scores (0.37, 0.48, 0.70, and 0.89 in the four strata, respectively (i.e., the higher SDQ scores, the higher odds for being drawn to the study)). This oversampling was corrected for in the analyses. In total, 1250 of those who consented were drawn to participate. From age 4 onwards (N = 1007) participants have been thoroughly assessed at the university clinic every second year, with 8 data waves completed, including information from the participant’s parents and teachers.
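The quoted description doesn't spell out how the correction was applied, but a standard way to correct for this kind of stratified oversampling is inverse-probability weighting, where each participant counts as 1 divided by their stratum's selection probability. Here is a minimal sketch of that idea, using the selection probabilities quoted above; the function names and toy data are mine, for illustration only, not the study's actual analysis code:

```python
# Sketch of inverse-probability weighting to correct for stratified oversampling.
# Selection probabilities per SDQ stratum come from the TESS description quoted
# above; the participants below are invented purely for illustration.

SELECTION_PROB = {"0-4": 0.37, "5-8": 0.48, "9-11": 0.70, "12-40": 0.89}

def stratum_weight(stratum: str) -> float:
    """Participants from oversampled (high-SDQ) strata get smaller weights."""
    return 1.0 / SELECTION_PROB[stratum]

def weighted_mean(values, strata):
    """Population-representative mean of an outcome after reweighting."""
    weights = [stratum_weight(s) for s in strata]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Toy example: an outcome score for five fictional participants.
outcome = [2.0, 3.5, 4.0, 5.0, 6.5]
strata = ["0-4", "0-4", "5-8", "9-11", "12-40"]
print(round(weighted_mean(outcome, strata), 2))
```

A plain unweighted mean of the same numbers would lean too heavily on the high-SDQ kids who were deliberately oversampled; the weights pull the estimate back toward what a representative sample would show.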
Also, the research on social media use was pretty in-depth and didn't just rely on kids checking a box or something.
Social media use was assessed by semi-structured interviews conducted by the same trained personnel at all measurement points. Participants were asked about platforms used, overall frequency of use, and specific social media behavior. The main outcome constitutes the monthly sum of liking, commenting, and posting, which captured the participants' responses to the following questions: 1) ‘How often do you like other’s updates?’; 2) ‘How often do you write comments to other’s updates or photos?’; 3) ‘How often do you post (written) updates on your own social media sites?’; 4) ‘How often do you post photos?’ At ages 16 and 18, we also asked 5) ‘How often do you post selfies?’ The questions were not specific to certain social media platforms, but as the participants were interviewed, the interviewers would provide examples of social media sites if needed, or in other ways facilitate a correct recall (e.g., ‘If you think about last week…’).
We also validated our main analysis and tested whether the results were replicated when using an alternative means of measuring the frequency of social media use, captured by interview at ages 10, 12, and 14 (total frequency of checking social media per day) and objectively measured at ages 16 and 18 (daily time spent on social media apps according to the phone’s screen time application).
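For concreteness, here is a small sketch of how the “monthly sum of liking, commenting, and posting” outcome described above could be computed from the interview items. The item names, and the assumption that each answer is coded as a times-per-month frequency, are mine for illustration; they are not the study's actual coding scheme.

```python
# Illustrative composite of monthly social media activity, loosely following the
# interview items quoted above. Item names and the times-per-month coding are
# assumptions for illustration, not the study's actual scheme.

from typing import Dict

ITEMS = [
    "liking_others_updates",
    "commenting_on_updates_or_photos",
    "posting_written_updates",
    "posting_photos",
    "posting_selfies",  # only asked at ages 16 and 18
]

def monthly_activity(responses: Dict[str, float]) -> float:
    """Sum the monthly frequencies of whichever items were asked at that wave."""
    return sum(responses.get(item, 0.0) for item in ITEMS)

# Example: a fictional 14-year-old's interview (the selfie item wasn't asked).
wave_14 = {
    "liking_others_updates": 60.0,            # roughly twice a day
    "commenting_on_updates_or_photos": 12.0,
    "posting_written_updates": 4.0,
    "posting_photos": 6.0,
}
print(monthly_activity(wave_14))  # 82.0 actions per month
```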
They also closely measured time spent with friends through structured interviews. The data here appears to be pretty robust, whether or not the sample is representative of a wider set of young people.
While this particular research is the first of its kind, it seems to align with some other previous research:
To the best of our knowledge, the present study is the first to examine the relation between social media use and time spent with offline friends at the within-person level and capturing the years from late childhood to emerging adulthood. Importantly, during adolescence the boundaries between offline and online peer interactions are blurred, with offline friends also being online friends (van Zalk et al., 2020) being the new norm.
Our results align with studies showing that connecting with others and maintaining relationships are important motivations for adolescents’ use of social media (Ellison et al., 2007; Kircaburun et al., 2020; Park et al., 2009), connecting with people known from offline contexts being of particular importance (Reich et al., 2012). Use of online resources is found to reinforce already existing friendships (Desjarlais & Willoughby, 2010), which may explain why social media use promotes more time spent with friends face-to-face. Although one hypothesized mechanism for the association between social media use and time spent with friends is increased closeness with friends, potentially due to more self-disclosure, neither friendship closeness nor social anxiety moderated effects in the current study. However, it should be noted that we assessed closeness to best friend, whereas the outcome measure (i.e., time spent with friends face-to-face) did not differentiate between best friend and other friends, possibly contributing to the null finding.
Online interactions not only fuel existing relationships, but also enhance the initiation of new ones (Koutamanis et al., 2013), with more than half of US adolescents having made new friends online (Lenhart, 2015). Thus, it might also be that the relationship between increased social media use and time spent with friends is partly due to new friendships.
At the very least, this brings us back to where we were before, noting that the issue of kids and mental health, especially as it relates to social media use, is complicated. It does not appear to be as simple as “good” or “bad.” It’s good for some people. It’s bad for some people.
But, the idea that it somehow replaces in-person interactions does not appear to be supported by this particular study of this particular group of kids. Instead, it suggests the opposite.
Apparently, the world needs even more terrible bills that let ignorant senators grandstand to the media about how they're “protecting the kids online.” There's nothing more serious to work on than that. The latest bill comes from Senators Brian Schatz and Ted Cruz (with assists from Senators Chris Murphy, Katie Britt, Peter Welch, Ted Budd, John Fetterman, Angus King, and Mark Warner). This one is called the “Kids Off Social Media Act” (KOSMA), and it's an unconstitutional mess built on a long list of debunked and faulty premises.
It's especially disappointing to see this from Schatz. A few years back, I know his staffers would regularly reach out to smart people on tech policy issues to try to understand the potential pitfalls of the regulations he was pushing. Either he's no longer doing this, or he is deliberately ignoring their expert advice. I don't know which would be worse.
The crux of the bill is pretty straightforward: it would be an outright ban on social media accounts for anyone under the age of 13. As many people will recognize, we kinda already have a “soft” version of that because of COPPA, which puts much stricter rules on sites directed at those under 13. Because most sites don’t want to deal with those stricter rules, they officially limit account creation to those over the age of 13.
In practice, this has been a giant mess. Years and years ago, Danah Boyd pointed this out, talking about how the “age 13” rule is a disaster for kids, parents, and educators. Her research showed that all it generally did was teach kids that “it's okay to lie,” as parents helped their kids lie about their age so they could use social media tools to communicate with grandparents. Making that “soft” ban a hard ban is going to create a much bigger mess and prevent all sorts of useful and important communications (which, yeah, is a 1st Amendment issue).
The reasons Schatz puts forth for the bill are just… wrong.
No age demographic is more affected by the ongoing mental health crisis in the United States than kids, especially young girls. The Centers for Disease Control and Prevention’s Youth Risk Behavior Survey found that 57 percent of high school girls and 29 percent of high school boys felt persistently sad or hopeless in 2021, with 22 percent of all high school students—and nearly a third of high school girls—reporting they had seriously considered attempting suicide in the preceding year.
Gosh. What was happening in 2021 with kids that might have made them feel hopeless? Did Schatz and crew simply forget that most kids were under lockdown and physically isolated from friends for much of 2021? And that there were plenty of other stresses, with millions of people, including family members, dying? Noooooo. Must be social media!
Studies have shown a strong relationship between social media use and poor mental health, especially among children.
Note the careful word choice here: “strong relationship.” They won’t say a causal relationship because studies have not shown that. Indeed, as the leading researcher in the space has noted, there continues to be no real evidence of any causal relationship. The relationship appears to work the other way: kids who are dealing with poor mental health and who are desperate for help turn to the internet and social media because they’re not getting help elsewhere.
Maybe offer a bill that helps kids get access to more resources that help them with their mental health, rather than taking away the one place they feel comfortable going? Maybe?
From 2019 to 2021, overall screen use among teens and tweens (ages 8 to 12) increased by 17 percent, with tweens using screens for five hours and 33 minutes per day and teens using screens for eight hours and 39 minutes.
I mean, come on Schatz. Are you trolling everyone? Again, look at those dates. WHY DO YOU THINK that screen time might have increased 17% for kids from 2019 to 2021? COULD IT POSSIBLY BE that most kids had to do school via computers and devices at home, because there was a deadly pandemic making the rounds?
Maybe?
Did Schatz forget that? I recognize that lots of folks would like to forget the pandemic lockdowns, but this seems like a weird way to manifest that.
I mean, what a weird choice of dates to choose. I’m honestly kind of shocked that the increase was only 17%.
Also, note that the data presented here isn’t about an increase in social media use. It could very well be that the 17% increase was Zoom classes.
Based on the clear and growing evidence, the U.S. Surgeon General issued an advisory last year, calling for new policies to set and enforce age minimums and highlighting the importance of limiting the use of features, like algorithms, that attempt to maximize time, attention, and engagement.
Wait. You mean the same Surgeon General’s report that denied any causal link between social media and mental health (which you falsely claim has been proved) and noted just how useful and important social media is to many young people?
From that report, which Schatz misrepresents:
Social media can provide benefits for some youth by providing positive community and connection with others who share identities, abilities, and interests. It can provide access to important information and create a space for self-expression. The ability to form and maintain friendships online and develop social connections are among the positive effects of social media use for youth. These relationships can afford opportunities to have positive interactions with more diverse peer groups than are available to them offline and can provide important social support to youth. The buffering effects against stress that online social support from peers may provide can be especially important for youth who are often marginalized, including racial, ethnic, and sexual and gender minorities. For example, studies have shown that social media may support the mental health and well-being of lesbian, gay, bisexual, asexual, transgender, queer, intersex and other youths by enabling peer connection, identity development and management, and social support. Seven out of ten adolescent girls of color report encountering positive or identity-affirming content related to race across social media platforms. A majority of adolescents report that social media helps them feel more accepted (58%), like they have people who can support them through tough times (67%), like they have a place to show their creative side (71%), and more connected to what's going on in their friends' lives (80%). In addition, research suggests that social media-based and other digitally-based mental health interventions may also be helpful for some children and adolescents by promoting help-seeking behaviors and serving as a gateway to initiating mental health care.
Did Schatz’s staffers just, you know, skip over that part of the report or nah?
The bill also says that companies must not allow algorithmic targeting of content to anyone under 17. This is also based on a widely believed myth that algorithmically recommended content is somehow inherently problematic. No studies have legitimately shown that about current algorithms. Indeed, a recent study showed that removing algorithmic targeting exposes people to more disinformation.
Is this bill designed to force more disinformation on kids? Why would that be a good idea?
Yes, some algorithms can be problematic! About a decade ago, algorithms that tried to optimize solely for “engagement” definitely created some bad outcomes. But most such algorithms haven't been designed that way for a decade now. On most social media platforms, the algorithms take into account a variety of different factors, because the companies know that optimizing only for engagement leads to bad outcomes.
Then the bill tacks on Cruz’s bill to require schools to block social media. There’s an amusing bit when reading the text of that part of the law. It says that you have to block social media on “federally funded networks and devices” but also notes that it does not prohibit “a teacher from using a social media platform in the classroom for educational purposes.”
But… how are teachers going to access those platforms if the school is required by law to block them? Most schools are going to impose a blanket ban, and teachers will be left to do what? Show kids useful YouTube science videos on their phones? Or maybe some schools will implement a special teacher code that lets them bypass the block, and by the end of the first week of school half the kids will know that code.
What are we even doing here?
Schatz has a separate page hyping up the bill, and it’s even dumber than the first one above. It repeats some of the points above, though this time linking to Jonathan Haidt, whose work has been trashed left, right, and center by actual experts in this field. And then it gets even dumber:
Big Tech knows it’s complicit – but refuses to do anything about it…. Moreover, the platforms know about their central role in turbocharging the youth mental health crisis. According to Meta’s own internal study, “thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse.” It concluded, “teens blame Instagram for increases in the rate of anxiety and depression.”
This is not just misleading, it’s practically fraudulent misrepresentation. The study Schatz is citing is one that was revealed by Frances Haugen. As we’ve discussed, it was done because Meta was trying to understand how to do better. Indeed, the whole point of that study was to see how teens felt about using social media in 12 different categories. Meta found that most boys felt neutral or better about themselves in all 12 categories. For girls, it was 11 out of 12. It was only in one category, body image, where the split was more pronounced. 32% of girls said that it made them feel worse. Basically the same percentage said it had no impact, or that it made them feel better.
Also, look at that slide’s title. The whole point of this study was to figure out if they were making kids feel worse in order to look into how to stop doing that. And now, because grandstanders like Schatz are falsely claiming that this proves they were “complicit” and “refuse to do anything about it,” no social media company will ever do this kind of research again.
Because, rather than proactively looking to see if they’re creating any problems that they need to try to fix, Schatz and crew are saying “simply researching this is proof that you’re complicit and refuse to act.”
Statements like this basically ensure that social media companies stick their heads in the sand, rather than try to figure out where harm might be caused and take steps to stop that harm.
Why would Schatz want to do that?
That page then also falsely claims that the bill does not require age verification. This is a silly two-step that politicians pull every time they do this. Does the bill directly mandate age verification? No. But by making the penalties for failing to keep kids off social media serious and costly, it will obviously drive companies to adopt stronger age verification measures, which are inherently dangerous and an attack on privacy.
Perhaps Schatz doesn’t understand this, but it’s been widely discussed by many of the experts his staff used to talk to. So, really, he has no excuse.
The FAQ also claims that the bill will pass constitutional muster, while at the same time admitting that they know there will be lawsuits challenging it:
Yes. As, for example, First Amendment expert Neil Richards explains, “[i]nstead of censoring the protected expression present on these platforms, the act takes aim at the procedures and permissions that determine the time, place and manner of speech for underage consumers.” The Supreme Court has long held that the government has the right to regulate products to protect children, including by, for instance, restricting the sale of obscene content to minors. As Richards explains: “[i]n the same way a crowded bar or nightclub is no place for a child on their own”—or in the way every state in the country requires parental consent if it allows a minor to get a tattoo—“this rule would set a reasonable minimum age and maturity limitation for social media customers.”
While we expect legal challenges to any bill aimed at regulating social media companies, we are confident that this content-neutral bill will pass constitutional muster given the government interests at play.
There are many reasons why this is garbage under the law, but rather than breaking them all down (we'll wait for judges to explain it in detail), I'll just point out that the major tell is in the bill itself. In the definition of a “social media platform,” there is a long list of exceptions the law does not cover. It includes a few “moral panics of yesteryear” that gullible politicians tried to ban, only to be found to have violated the First Amendment in the process.
It explicitly carves out video games and content that is professionally produced rather than user-generated.
Remember the moral panics about video games and TV destroying kids' minds? Yeah. So this child protection bill is quick to say “but we're not banning that kind of content!” Because whoever drafted the bill recognized that the Supreme Court has already made it clear that politicians can't do that for video games or TV.
So, instead, they have to pretend that social media content is somehow on a whole different level.
But it’s not. It’s still the government restricting access to content. They’re going to pretend that there’s something unique and different about social media, and that they’re not banning the “content” but rather the “place” and “manner” of accessing that content. Except that’s laughable on its face.
You can see that in the quote above, where Schatz does the fun dance of first saying “it's okay to ban obscene content to minors” and then pretending that's the same as restrictions on access to a bar (it's not). One is about the content, and one is about a physical place. Social media is all about the content, and it's not obscene content (which is already an exception to the First Amendment).
And the “parental consent” for tattoos… I mean, what the fuck? Literally four questions earlier in the same FAQ, Schatz insists that his bill has nothing to do with parental consent. And then he tries to defend it by claiming it's no different than parental consent laws?
The FAQ also claims this:
This bill does not prevent LGBTQ+ youth from accessing relevant resources online and we have worked closely with LGBTQ+ groups while crafting this legislation to ensure that this bill will not negatively impact that community.
I mean, it’s good you talked to some experts, but I note that most of the LGBTQ+ groups I’m aware of are not listed on your list of “groups supporting the bill” on the very same page. That absence stands out.
And, again, the Surgeon General’s report that you misleadingly cited elsewhere highlights how helpful social media can be to many LGBTQ+ youth. You can’t just say “nah, it won’t harm them” without explaining why all those benefits that have been shown in multiple studies, including the Surgeon General’s report, somehow don’t get impacted.
There’s a lot more, but this is just a terrible bill that would create a mess. And, I’m already hearing from folks in DC that Schatz is trying to get this bill added to the latest Christmas tree of a bill to reauthorize the FAA.
It would be nice if we had politicians looking to deal with the actual challenges facing kids these days, including the lack of mental health support for those who really need it. Instead, we get unconstitutional grandstanding nonsense bills like this.
Everyone associated with this bill should feel ashamed.
After demolishing the competition from 2020 through the first half of 2022, TikTok’s DAU growth rate has collapsed. In the fourth quarter of 2023, the video service lagged Snapchat, YouTube, Instagram, and Facebook. Yes, you read that right: The ancient big blue app grew faster than TikTok.
This reminds me of when Congress was super focused on regulating Facebook, even after it was shedding users rapidly.
That's not to say that any real evidence of dangers associated with TikTok (again, none has yet been shown) should be ignored. But it is a reminder that the internet space remains incredibly dynamic, even as the media and politicians act as if what's happening today will continue to be the way it is.
Social media sites come and go. They’re cool for kids until their parents get involved or until they go through the inevitable enshittification curve. There appear to be at least some signals that TikTok may have passed its prime and folks are starting to move on.
In short, the “problem” (if there is one) may solve itself through the simple fact that… TikTok might just not be all that cool anymore. Business Insider suggests that the original TikTok generation, who were teenagers when the app first became cool, may have since graduated and started having to live life and get a job and stuff, leaving less time for TikTok. Of course, that would ignore the fact that as younger kids age into being teens, they’re still likely to join. But, perhaps not at the same rate as before.
The young adults I spoke to have been on social media for a decade or more and didn’t question the impact it was having on them until recently. They started noticing that TikTok, in particular, got in the way of sleep, work, household chores and relationships. Some even say it has kept them from chasing their own creative dreams. They are now deleting the app in pursuit of more in-person experiences and tidier homes.
While that may be anecdotal, there is at least some data to back it up:
TikTok’s U.S. average monthly users between the ages of 18 and 24 declined by nearly 9% from 2022 to 2023, according to mobile analytics firm Data.ai.
In short, as it often does, Congress may be fighting (badly) the last battle, and not realizing that some of this stuff… takes care of itself.
It seems like the only “bipartisan” support around regulations and the internet these days is… over the false, widely debunked moral panic that the internet is inherently harmful to children. Study after study has said it’s simply not true. Here’s the latest list (and I have one more to write up soon):
Last fall, the widely respected Pew Research Center did a massive study on kids and the internet, and found that for a majority of teens, social media was way more helpful than harmful.
This past May, the American Psychological Association (which has fallen for tech moral panics in the past, such as with video games) released a huge, incredibly detailed, and nuanced report going through all of the evidence, and finding no causal link between social media and harms to teens.
Soon after that, the US Surgeon General came out with a report which was widely misrepresented in the press. Yet the details of that report also showed that no causal link could be found between social media and harms to teens. It did still recommend that we act as if there were a link, which was weird and explains the media coverage, but the actual report highlights no causal link, while also pointing out how much benefit teens receive from social media.
A few months later, an Oxford University study came out covering nearly a million people across 72 countries, noting that it could find no evidence of social media leading to psychological harm.
The Journal of Pediatrics recently published a new study again noting that after looking through decades of research, the mental health epidemic faced among young people appears largely due to the lack of open spaces where kids can be kids without parents hovering over them. That report notes that they explored the idea that social media was a part of the problem, but could find no data to support that claim.
In November, a new study came out from Oxford showing no evidence whatsoever that increased screen time has any impact on kids' brain function or development.
And yet, if you talk to politicians or the media, they insist that there’s overwhelming evidence showing the opposite, that social media is inherently dangerous to children.
The latest to fall for this false moral panic is the powerful Herb Conaway, who has been in the New Jersey Assembly since 1997. He has a bunch of moral panic related quotes. He's claimed that the mental health epidemic among children “can be laid at the feet of social media” (despite all the studies saying otherwise). He also has claimed (again, contrary to the actual evidence) that social media “really has been horrific on the mental health and the physical health of our young people, particularly teenagers and particularly young girls.”
This is not, in fact, what the evidence shows. But it is how the moral panic has been passed around.
And so the greatly misinformed Assemblymember has successfully been pushing Bill A5750, which requires age verification and parental consent for use of any social media platform with 5 million or more accounts worldwide. It has just passed out of committee and has a very real chance of becoming law in New Jersey (until a federal court throws it out as unconstitutional, but we'll get there).
Before we get to the legal problems with the bill, let’s talk about the fundamental problems.
Age verification is a privacy nightmare. This has been explained multiple times in great detail. There is no way to do age verification without putting everyone's privacy at great risk. You don't have to take my word for it: the French data protection agency, CNIL, studied every available age verification method and found that they are both unreliable and violate privacy rights.
Why would Assemblymember Conaway want to put his constituents’ privacy at risk?
Age verification only works by requiring someone to collect sensitive private data, and then hoping they can keep it safe. That’s… bad?
Next, the parental consent requirement is crazy dangerous. It can make sense in perfectly happy homes where parents have a good relationship with their children, but, tragically, that is not true of all homes. If, for example, there is an LGBTQ child in a home where the parents cannot accept their child's identity, imagine how well that will go over.
And that's especially true at a time when we're seeing social media operations created specifically to cater to marginalized groups. For example, the Trevor Project, the wonderful non-profit that helps LGBTQ youth, has its own social media network for those kids. Can you imagine how well that will work if those kids have to get their parents' permission before they can make use of that site?
This law would put the most marginalized kids in society at much greater risk and cut them off from the communities and services that have been repeatedly found to help them the most.
Why?
Because of a moral panic that is not backed by the actual evidence.
The fact that this bill applies to any social media platform with more than 5 million accounts means it would sweep in tons of smaller sites. Note that it's not even active accounts or monthly active users. And it's not just accounts in New Jersey. It's 5 million global accounts. There are many sites that would qualify but could never afford to put age verification or parental controls in place, and thus their only answer will be to cut off New Jersey entirely.
So, again, the end result is cutting off marginalized and at-risk kids from the services that have repeatedly been found to be helpful.
On the legal front, these provisions are also quite clearly unconstitutional, and have been found by multiple courts to be so. Just in the past few months, federal courts have rejected an Arkansas age verification bill and a California one. Neither of these was a surprising result, as similar restrictions were litigated before the Supreme Court decades ago.
The parental controls mandate is equally unconstitutional. In Brown v. EMA the Supreme Court noted that the 1st Amendment does not allow for the government “to prevent children from hearing or saying anything without their parents’ prior consent.” Children have 1st Amendment rights as well, and while they are somewhat more limited than for adults, the courts have found repeatedly that children have the right to access 1st Amendment-protected speech, and to do so without parental consent.
And, in cases like this, it’s even worse than in Brown, which was about a failed attempt by California to restrict access to violent video games. Here, the New Jersey bill attempts to limit access to all social media, not just specifically designated problematic ones. So it’s an even broader attack on the 1st Amendment rights of children than Brown was.
So, in the end, we have a terribly drafted bill that would sweep in a ton of companies, even ones with a limited presence in New Jersey, ordering them to invest in expensive and faulty features that have already been shown to put private info at risk, and to do so in a way that has been shown to put the most marginalized and at-risk children in even greater danger. And all of this has already been found to be unconstitutional.
All based on a moral panic that has been widely debunked by research.
Yet the bill is sailing through the New Jersey legislature, and almost guarantees that the state of New Jersey is going to have to spend millions in taxpayer funds to defend this law in court, only to be told exactly what I’m telling them for free.
Over the last few years, we've highlighted study after study after study showing that, contrary to the public narrative and claims by politicians, the media, and plaintiffs in many, many lawsuits, the actual evidence simply does not show that social media or the internet is damaging kids. In a recent post we highlighted just a few of the recent reports on this.
Last fall, the widely respected Pew Research Center did a massive study on kids and the internet, and found that for a majority of teens, social media was way more helpful than harmful.
This past May, the American Psychological Association (which has fallen for tech moral panics in the past, such as with video games) released a huge, incredibly detailed, and nuanced report going through all of the evidence, and finding no causal link between social media and harms to teens.
Soon after that, the US Surgeon General (in the same White House where Tim Wu worked for a while) came out with a report which was widely misrepresented in the press. Yet the details of that report also showed that no causal link could be found between social media and harms to teens. It did still recommend that we act as if there were a link, which was weird and explains the media coverage, but the actual report highlights no causal link, while also pointing out how much benefit teens receive from social media.
A few months later, an Oxford University study came out covering nearly a million people across 72 countries, noting that it could find no evidence of social media leading to psychological harm.
The Journal of Pediatrics just published a new study again noting that after looking through decades of research, the mental health epidemic faced among young people appears largely due to the lack of open spaces where kids can be kids without parents hovering over them. That report notes that they explored the idea that social media was a part of the problem, but could find no data to support that claim.
Now, the folks at Oxford University, who did one of those studies above, have released another study, this time looking at almost 12,000 kids in the US to determine whether “screen time” had an impact on their brain function or well-being. This is a pretty massive study, and the results are pretty damn clear:
Screen time activities included ‘traditional’ screen pursuits such as watching TV shows or movies and using digital platforms such as YouTube to watch videos, as well as interactive pursuits like playing video games. In addition, they were asked about connecting with others through apps, calls, video calls and social media.
Even with participants who had high rates of digital engagement, there was no evidence of impaired functioning in the brain development of the children.
The study appears pretty thorough:
Using data from the Adolescent Brain Cognitive Development (ABCD) Study, the largest long-term study of brain development and child health in the United States, researchers from Oxford Internet Institute, University of Oxford, University of Oregon, Tilburg University, and University of Cambridge analysed the cognitive function of 9-12 year old children alongside their self-reported screen time use.
[….]
In the ABCD study, the participants’ neurodevelopment was assessed through monitoring functional brain connectivity, which refers to how regions of the brain work together and includes emotional and physiological activities. This was done through MRI scans. Further to this, physical and mental health assessments and information from the child’s caregiver was provided.
When analysing the screen time use alongside the ABCD data, patterns of functional brain connectivity were related to patterns of screen engagement, but there was no meaningful association between screen time use and measures of cognitive and mental well-being, even when the evidential threshold was set very low.
The researchers behind this study were pretty clear about what they feel has been learned (and I'll note that Andrew Przybylski, in particular, is always extremely careful not to overclaim what his studies say, as you can hear from when we had him on the podcast recently):
Jack Miller, the first author who analysed the data as part of his thesis at the Oxford Internet Institute said: “If screen time had an impact on brain development and well-being, we expected to see a variety of cognitive and well-being outcomes that this comprehensive, representative, research did not show.”
Professor Andrew Przybylski, who supervised the work, added: “We know that children's brains are more susceptible to environmental influence than adults. As digital screen time is a relatively new phenomenon, it's important to question its impact.”
Professor Matti Vuorre from Tilburg University, a co-author observed: “One thing that makes this work stand out is our analysis plan was reviewed by experts before we saw the data; this adds rigour to our approach.” He added, “One also suggested we take a look at social media on its own because it’s a source of worry for many and we did not find anything special about this form of online engagement.”
Professor Przybylski concludes: “Our findings should help guide the heated debates about technology away from hyperbole and towards high-quality science. If researchers don’t improve their approach to studying tech, we’ll never learn what leads some young people to flounder and others to flourish in the digital age.”
All of this research is important, because there clearly does remain a mental health crisis, including among children. But we risk making things worse, not better, when we immediately insist that it must be because of the internet, or video games, or screen time, or whatever.
You can also read the full study yourself if you'd like to get into the details, since it was published as open access. And, because it's under a Creative Commons Attribution license, we can also post a copy here.
We've been through this a few times now: people start talking about a social media trend that actually only went viral because of the media coverage of the supposed (but not real) trend. And each time there's an outraged moral panic about how “social media” is destroying the children or whatever, when it's more often just adults freaking out over an overblown story.
That seems to have happened again this week. Did you hear the story about how the kids on TikTok were suddenly agreeing with Osama bin Laden, and saying that 9/11 was justified, because they read his mostly batshit crazy “Letter to America?” It got so crazy that the Guardian pulled down their version of the letter that people were pointing to.
So, look, were there a few naïve kids on TikTok who read the letter and talked about it? Yes. But did it really go viral? No. At least not until the media went crazy about it all. Thankfully, at least some in the media are calling out that the viral attention came after the panic. Drew Harwell and Victoria Bisset at the Washington Post have a good rundown:
TikTok spokesman Alex Haurek said Thursday that the company was “proactively and aggressively” removing videos promoting the letter for violating the company’s rules on “supporting any form of terrorism” and said it was “investigating” how the videos got onto its platform.
Haurek said that the #lettertoamerica hashtag had been attached to 274 videos that had garnered 1.8 million views on Tuesday and Wednesday, before “the tweets and media coverage drove people to the hashtag.” Other hashtags, for comparison, dwarfed discussion of the letter on the platform: During a recent 24-hour period, #travel videos had 137 million views, #skincare videos had 252 million views and #anime videos had 611 million views, Haurek said.
TikTok, for its part, banned promoting the letter pretty quickly, but that, combined with the media coverage of all of this, Streisanded the thing into getting a ton more attention.
A small number of TikTok users found a letter written by bin Laden and published by the Guardian in 2002 and thought that—despite it being full of anti-Semitic garbage and Islamic-fundamentalist nuttery—the late terrorist and al-Qaida leader made some good points in critiquing American foreign policy.
Sure, the sentiment these videos expressed is nauseating. But so is a lot of stuff on the internet, and to call these particular pieces of content a trend is to misunderstand TikTok and to grossly mischaracterize the chain of events that brought this phenomenon before a mass audience.
As Nover notes, a few videos on TikTok getting a bunch of views is not a “trend.” Given how quickly people flip through TikTok, tons of videos rack up a bunch of views, but to really be trending they need many, many millions of views. Nover calls out CNN and the NY Times for claiming that these videos were somehow newsworthy for the number of views they got:
Less than 300 videos? 14 million views total? On TikTok, those are paltry numbers. The other day, I watched a video of a guy trying Indian food for the first time that—by itself—got 18.6 million views on TikTok. In May, I tweeted a video of a guy unclogging a sewer drain and that got 26 million views. At any moment online, things are going viral before TikTok’s massive audience. To call these pro-Osama videos viral, however, would be a stretch. TikTok’s algorithm is known for supersizing virality, often lifting seemingly random content to ungodly heights quicker than any other platform.
Reporter Ryan Broderick also highlighted how there is no evidence at all that this was trending in any way, and that much of the outrage comes from people misunderstanding TikTok’s scale and how it counts views. Any sliver of a video watched counts as a view, and people flip through tons of videos, often swiping away after a split second, with each of those still counting as a view:
So, right now, the popular wisdom is that TikTok counts a single second as a single view. That said, a developer I work with named Morry Kolman has done some tests on this and it may be even less than a second. A view is very possibly just every unique open or autoplay of a video. Meanwhile, a view on YouTube is around 30 seconds and a view on Facebook is around three seconds. So if we were to take one of the most viral videos about Bin Laden’s letter, which had around two million views, and applied Facebook’s view metric, it would have around 660,000 views. And if it was a YouTube video, it would have around 65,000. Now, let’s ask ourselves. Would you, in 2023, give a shit if someone was saying something unhinged and offensive in a YouTube video with 65,000 views? If so, good news, I can send you millions of videos that match that exact description! Just go search “The Marvels” and have a blast.
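To make that back-of-envelope math concrete, here’s a tiny sketch of the normalization Broderick is doing. The thresholds (roughly one second or less for TikTok, about three seconds for Facebook, about thirty for YouTube) are the approximations from his newsletter, and the inverse scaling is a deliberately crude assumption for illustration, not a real model of how long people actually watch:

```python
# Rough sketch of the back-of-envelope view-count normalization Broderick describes.
# Assumes (crudely) that reported views scale inversely with each platform's
# minimum watch-time threshold -- a simplification, not a real audience model.

THRESHOLD_SECONDS = {
    "tiktok": 1,     # roughly: any open/autoplay counts
    "facebook": 3,   # roughly three seconds per view
    "youtube": 30,   # roughly thirty seconds per view
}

def normalize_views(views: int, from_platform: str, to_platform: str) -> int:
    """Re-express a view count under another platform's counting threshold."""
    ratio = THRESHOLD_SECONDS[from_platform] / THRESHOLD_SECONDS[to_platform]
    return round(views * ratio)

if __name__ == "__main__":
    tiktok_views = 2_000_000  # the "most viral" letter video Broderick cites
    print(normalize_views(tiktok_views, "tiktok", "facebook"))  # ~666,667 (he says ~660,000)
    print(normalize_views(tiktok_views, "tiktok", "youtube"))   # ~66,667 (he says ~65,000)
```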
Of course, what did go viral was old people in the media freaking out about this. Nover again:
One prominent Twitter figure’s outraged post about the videos, which included a supercut of them, racked up 32 million views on X. Flipping through TV channels on Thursday, I noticed the story on several different news broadcasts—every anchor and reporter was disgusted and wanted to say so. The Biden administration even responded Thursday. “No one should ever insult the 2,977 American families still mourning loved ones by associating themselves with the vile words of Osama bin Laden,” White House spokesperson Andrew Bates wrote on X while sharing the CNN article.
Broderick did his best to track down what was actually happening, and, again, it appears to have been a tiny group of mostly nobodies plus a few bots, with TikTok taking it all down before it went particularly viral:
I spent all day yesterday looking for the “thousands” of videos. Of course, TikTok was already in the process of scrubbing them, but according to screenshots and cached Google data, I’m comfortable saying there were likely around 300-500 unique videos about the letter and, once again, around 25% of what I personally saw were bots or automated accounts or duets. And the largest comment section I’ve seen underneath one of these videos had around 5,000 comments. Now, let’s do this again. Would you, in 2023, give a shit if there were around 5,000 people being offensive on the internet? Well, if you would, I, once again, have some good news for you. Type “X.com” into your URL bar and make sure to follow the site’s CEO, he has a lot of interesting ideas about race and gender.
Look, if you want to find naïve people saying stupid shit online, it’s not difficult. There are a lot of naïve people (of any age), and they have plenty of places to say stupid, ignorant shit. The media doesn’t need to go crazy about all of them. The White House doesn’t need to put out a statement about all of them (though it did here).
Before assuming that this is somehow “trending” or something that’s catching on, learn to take a breath and look at what’s actually happening. This isn’t a story about a few naïve folks saying dumb shit. This is a story about the media, yet again, blowing a small sample size way out of proportion to make a misleading point about “the kids and social media these days.”
I really wish we could fast forward a few decades to the point where we look back on the moral panic over kids and social media and laugh about it, the same way we now laugh about similar moral panics regarding television, Dungeons & Dragons, rock & roll music, comic books, pinball, chess, novels, and the waltz. But, at the moment, basically everyone is losing their minds over the still totally unproven claim that social media is bad for kids’ mental health.
This is despite multiple massive studies highlighting no evidence of any actual causal connection. Earlier this year, the American Psychological Association released a detailed study on the matter that reviewed basically all of the research literature out there and found no evidence of a causal link, stating that social media is not inherently beneficial or harmful to kids. And the APA has been known in the past to fall for bogus moral panics, so when even it is saying “there’s just no evidence,” perhaps there’s really no evidence.
Even the much publicized Surgeon General’s report on social media and kids admitted (in the fine print) that there appeared to be no evidence of harm, but said that we should act as if there was (which is a very odd recommendation).
And then there have been a few massive studies recently, including a giant one from the well-respected Pew Research Center, which found that most teenagers find social media helpful. Just recently we pointed to another study, this one out of Oxford University, which looked at over one million people across 72 countries and found no evidence of social media increasing psychological harm.
But, still, the media (and especially politicians) keep insisting that it’s true.
Much of the (heavily redacted) complaint seems to be based on a full-on belief in the moral panic about social media and harms to kids. It takes a bunch of things completely out of context, treating the fact that Meta, like any company, keeps trying to grow its business as some sort of proof of nefarious intent. Unless these states are trying to argue that economic growth is illegal, many of these arguments seem pretty weak.
There are so many strange passages in the complaint that present things with an interpretation not supported by reality. For example, it claims that the move away from chronological feeds to algorithmic feeds was somehow a nefarious attempt to “attract young users to the platform and keep them engaged on its Social Media Platforms for as long as possible.”
Meta had originally displayed content on a user’s “Feed” chronologically, i.e., in the order the content was posted by people the user elected to follow. Meta moved from chronological Feeds to engagement-based Feeds in 2009 (for Facebook) and 2016 (for Instagram).
The engagement-based Feed is different and alters the users’ experience. It algorithmically presents material to users based on several engagement components: posts with more “Likes,” comments, and other indicia of user engagement are displayed to users first.
This change was designed to prioritize material most likely to engage users for longer periods of time.
Are they… going to sue TV stations for putting popular shows in prime time next? I mean, of course Facebook is trying to increase engagement. What website isn’t?
Also, studies have repeatedly shown that users like the algorithmic feed and most hate the chronological feed (yes, I know there are exceptions). Recent research found that with chronological feeds, users get a lot more misinformation and junk they don’t want to see.
Is it the states’ position that everyone should be forced to get more junk and misinformation in their feeds?
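For what it’s worth, the scary-sounding “engagement-based Feed” the complaint describes boils down to a very ordinary sorting choice. Here’s a minimal, hypothetical sketch of the difference (the Post fields and weights are invented for illustration; Meta’s actual ranking system is far more complex and not public):

```python
# Minimal, hypothetical illustration of the two feed-ordering approaches the
# complaint contrasts. This is NOT Meta's actual ranking code; the Post fields
# and weights are invented for illustration only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    likes: int
    comments: int

def chronological_feed(posts: list[Post]) -> list[Post]:
    # Newest posts from followed accounts first.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def engagement_ranked_feed(posts: list[Post]) -> list[Post]:
    # Posts with more likes/comments ("indicia of user engagement") first.
    return sorted(posts, key=lambda p: p.likes + 2 * p.comments, reverse=True)
```

That’s the whole “nefarious” design choice: show people the stuff they seem to care about first, which is the same call those TV stations make when they schedule prime time.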
By algorithmically serving content to young users according to variable reward schedules, Meta manipulates dopamine releases in its young users, inducing them to engage repeatedly with its Platforms—much like a gambler at a slot machine.
I know this is a popular claim, but it’s nonsense, not supported by any actual science.
I mean, the weird thing in all of this, which no one will admit, is that, yes, pointing people to more relevant information might make them use your product more. But, that’s because it’s providing more relevant information.
The complaint points a few times to Frances Haugen’s leaked documents, but again, those were massively misrepresented in the media. As we highlighted multiple times, the research showed that in 23 of 24 areas studied (all 12 for boys, and 11 of 12 for girls), more kids said the platform made them feel better about those topics, rather than worse. There was only one area, “body image,” where the number of girls who felt worse outweighed those who felt better. And Facebook’s researchers called out that fact in the research precisely to flag it as an issue the company should look at dealing with.
I mean, one could just as easily accuse fashion magazines, teen magazines, not to mention television and movies, of making teen girls feel bad about their own “body image,” but did the states sue over that? Of course not. And I’ll bet that literally none of them had an internal research group studying the matter and calling for the company to try to fix it.
The complaint also spends a lot of time arguing that Meta “promotes harmful content, such as content promoting eating disorders to youth.” But, again, as we’ve detailed, multiple studies found that when Instagram tried to block that content, teens quickly found ways around it by adjusting their language. And when Instagram went even further in trying to block it, the teens who wanted to engage with eating disorder content just moved elsewhere (to TikTok and to specialized eating disorder forums, which had even less control). Even worse, by moving those discussions off of Meta, many of those conversations lost the powerful responses from people who had recovered from eating disorders, who were participating in the discussions on Instagram and trying to guide people to more helpful resources.
In other words, this shit is complicated, and Instagram’s attempts to stop the sharing of eating disorder content almost certainly made the eating disorder problem worse, not better. That’s not because of any nefarious plan on Meta’s part, but because there’s a demand problem: a bunch of teens were looking for those conversations and were going to have them with or without Meta’s assistance.
So, basically, the states are suing Meta for surfacing the larger societal problems regarding teenage eating disorders that the states themselves have failed to deal with.
Then there’s a large part (again heavily redacted) of the complaint alleging COPPA violations. When I saw that section, at first I thought it sounded like the more serious part of the complaint. At least there’s a clear law there, and violating it can get you into trouble.
But, reading through it, I’m again left confused. COPPA has some specific rules regarding how sites targeted at kids under 13 handle the collection of data about those kids. Like many websites, Meta’s solution is to say that no one under the age of 13 is allowed on the site at all. In practice, this just means that kids are taught by their parents to lie.
But, rather than recognize that maybe that’s the problem, the states are blaming Meta for not magically figuring out that kids (often with their parents’ help) are lying. The complaint makes a big deal of the fact that Instagram didn’t even start asking people their ages until 2019, but again, the law does not require that at all. It applies to sites that deliberately target children under that age, not to sites that fail to magically keep every such kid out. But the state AGs act as if the law requires some sort of age verification scheme:
Eventually, in response to pressure from regulators and the public, Meta purported to implement an age gate as part of Instagram’s account registration process—but the term “gate” was a misnomer because it did not prevent under-13 users from creating and using Instagram accounts.
Indeed, the complaint seems to argue that age verification is required. Which is just flat out false:
Meta has access to, and chooses not to use, feasible alternative age verification methods that would significantly reduce or eliminate the number of underage users on Meta’s Social Media Platforms, for example, by requiring young users to submit student IDs upon registration.
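To be clear about the terminology the complaint is blurring: an “age gate,” on basically every site that has one, is just a self-declared birthdate check, something like the hypothetical sketch below (not Meta’s actual registration code). It trusts whatever date the user types, which is exactly why it doesn’t “prevent” anything. Anything stronger than that is age verification, which, again, COPPA does not require.

```python
# Hypothetical sketch of a typical self-declared "age gate" -- not Meta's actual
# registration flow. It simply trusts whatever birthdate the user enters,
# which is why an "age gate" is not the same thing as age verification.
from datetime import date

MINIMUM_AGE = 13  # the COPPA-relevant cutoff

def age_from_birthdate(birthdate: date, today: date | None = None) -> int:
    today = today or date.today()
    years = today.year - birthdate.year
    # Subtract one if the birthday hasn't happened yet this year.
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

def passes_age_gate(claimed_birthdate: date) -> bool:
    """Returns True if the self-reported birthdate says the user is 13 or older."""
    return age_from_birthdate(claimed_birthdate) >= MINIMUM_AGE
```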
There is, as noted, a lot in the complaint that is redacted. So perhaps there are some important nefarious details hidden under all that black ink. I wouldn’t put it past Meta to be lying about stuff. And maybe some of it rises to the level of an issue for which it’s reasonable for the company to face a crackdown from the states.
But, from what’s public, this lawsuit seems like a joke, driven by grandstanding AGs who have bought into the current moral panic and need some headlines.
Of course, Meta being Meta, the company’s response to this lawsuit is… also less than convincing. The comments it’s released to the press were more or less “well, kids use YouTube and TikTok more than Facebook/Instagram, so why are the states picking on us?” Which is not the most compelling of responses.