It shouldn’t be news to any regular readers here that Warner Bros. has been a ridiculously jealous protector of all things intellectual property when it comes to the Harry Potter franchise. Harry Potter themed fan festivals? That’s banned magic, according to Warner Bros. Want to make a parody condom called “Harry Poppers”? Here comes Warner Bros. to kill the mood. A non-profit dinner with a Harry Potter theme, mostly to make a mother’s daughter happy? Warner Bros. did its dementor thing to shut down all that joy.
Now, what should be obvious in all of those examples is that this all works against the interests of Warner Bros., the publishers of the books, and J.K. Rowling. After all, does anyone really believe that fans showing off their fandom and gathering to celebrate the Harry Potter franchise are in any way a threat to sales of the books and movies? Of course not! If anything, they build up the Potter community and serve as an interest multiplier that can’t possibly do anything but drive more interest in the books and films.
It’s a sad day for little witches and wizards in Jackson Hole. The Teton County Library’s (TCL) slate of Harry Potter programming has been canceled due to copyright infringement. TCL announced the news on Wednesday, Oct. 2. TCL said it had received a cease-and-desist letter from Warner Bros. Entertainment Inc., which owns and controls all things Potter.
“Prior to receiving the letter, Library staff was unaware that this free educational event was a copyright infringement,” TCL’s announcement reads. “In the past, libraries had been encouraged to hold Harry Potter-themed events to promote the books as they were released.”
While other festivals have attempted to rebrand to generic names and themes to get around all of this, the library in this case isn’t bothering. It’s just shutting the whole thing down. And while the library is making conciliatory noises about respecting the intellectual property of others, this is completely idiotic.
Precisely what about a library building some programming around children’s love for Harry Potter represents any kind of threat whatsoever to Warner Bros.? I’ll wait while someone desperately grasps at enough straws to formulate an argument for this. But really, don’t bother. This is protectionism for the sake of protectionism.
And it’s incredibly shortsighted to boot. The fans who grew to love the Harry Potter universe have grown up and now have children of their own. And that new generation could be loyal Potter fans too, if Warner Bros. would let them. Instead, the company appears far more interested in shutting down what are essentially entry points of interest for an entire new generation of potential fans and customers.
It appears Harry Potter will no longer have a nose, with Warner Bros. having cut it off to spite his face.
Over the last few years, politicians in Utah have been itching to pass terrible internet legislation. Some of you may forget that in the earlier part of the century, Utah became somewhat famous for passing absolutely terrible internet laws that the courts then had to clean up. In the last few years, it’s felt like other states have passed Utah, and maybe its lawmakers were getting a bit jealous in losing their “we pass batshit crazy unconstitutional internet laws” crown.
So, two years ago, they started pushing a new round of such laws. Even as they were slamming social media as dangerous and evil, Utah Governor Spencer Cox proudly signed the new law, streaming the signing on all the social media sites he insisted were dangerous. When Utah was sued by NetChoice, the state realized that the original law was going to get laughed out of court and asked for a do-over, promising to repeal and replace the law with something better. The new law changed basically nothing, though, and an updated lawsuit (again by NetChoice) was filed.
The law required social media companies to engage in “age assurance” (which is just a friendlier name for age verification, but still a privacy nightmare) and then restrict access to certain types of content and features for “minor accounts.”
Cox also somewhat famously got into a fight on ExTwitter with First Amendment lawyer Ari Cohn. When Cohn pointed out that the law clearly violates the First Amendment, Cox insisted: “Can’t wait to fight this lawsuit. You are wrong and I’m excited to prove it.” When Cohn continued to point out the law’s flaws, Cox responded “See you in court.”
In case you’re wondering how the lawsuit is going, last night Ari got to post an update:
The law is enjoined. The court found it to likely be unconstitutional, just as Ari and plenty of other First Amendment experts expected. This case has been a bit of a roller coaster, though. A month and a half ago, the court said that Section 230 preemption did not apply to the case. The analysis on that made no sense. As we just saw, a court in Texas threw out a very similar law and said that since it tried to limit how sites could moderate content, it was preempted by Section 230. But, for a bunch of dumb reasons, the judge here, Robert Shelby, argued that the law wasn’t actually trying to impact content moderation (even though it clearly was).
But, that was only part of the case. The latest ruling found that the law almost certainly violates the First Amendment anyway:
NetChoice’s argument is persuasive. As a preliminary matter, there is no dispute the Act implicates social media companies’ First Amendment rights. The speech at issue in this case— the speech social media companies engage in when they make decisions about how to construct and operate their platforms—is protected speech. The Supreme Court has long held that “[a]n entity ‘exercis[ing] editorial discretion in the selection and presentation’ of content is ‘engage[d] in speech activity’” protected by the First Amendment. And this July, in Moody v. NetChoice, LLC, the Court affirmed these First Amendment principles “do not go on leave when social media are involved.” Indeed, the Court reasoned that in “making millions of . . . decisions each day” about “what third-party speech to display and how to display it,” social media companies “produce their own distinctive compilations of expression.”
Furthermore, following on the Supreme Court’s ruling earlier this year in Moody about whether or not the entire law can be struck down on a “facial” challenge, the court says “yes” (this issue has recently limited similar rulings in Texas and California):
NetChoice has shown it is substantially likely to succeed on its claim the Act has “no constitutionally permissible application” because it imposes content-based restrictions on social media companies’ speech, such restrictions require Defendants to show the Act satisfies strict scrutiny, and Defendants have failed to do so.
Utah tries to argue that this law is not about speech and content, but rather about conduct and “structure,” as California did in challenges to its “kids code” law. The court is not buying it:
Defendants respond that the Definition contemplates a social media service’s “structure, not subject matter.” However, Defendants’ argument emphasizes the elements of the Central Coverage Definition that relate to “registering accounts, connecting accounts, [and] displaying user-generated content” while ignoring the “interact socially” requirement. And unlike the premises-based distinction at issue in City of Austin, the social interaction-based distinction does not appear designed to inform the application of otherwise content-neutral restrictions. It is a distinction that singles out social media companies based on the “social” subject matter “of the material [they] disseminate[].” Or as Defendants put it, companies offering services “where interactive, immersive, social interaction is the whole point.”
The court notes that Utah seems to misunderstand the issue, and finds the idea that this law is content neutral to be laughable:
Defendants also respond that the Central Coverage Definition is content neutral because it does not prevent “minor account holders and other users they connect with [from] discuss[ing] any topic they wish.” But in this respect, Defendants appear to misunderstand the essential nature of NetChoice’s position. The foundation of NetChoice’s First Amendment challenge is not that the Central Coverage Definition restricts minor social media users’ ability to, for example, share political opinions. Rather, the focus of NetChoice’s challenge is that the Central Coverage Definition restricts social media companies’ abilities to collage user-generated speech into their “own distinctive compilation[s] of expression.”
Moreover, because NetChoice has shown the Central Coverage Definition facially distinguishes between “social” speech and other forms of speech, it is substantially likely the Definition is content based and the court need not consider whether NetChoice has “point[ed] to any message with which the State has expressed disagreement through enactment of the Act.”
Given all that, strict scrutiny applies, and there’s no way this law passes strict scrutiny. The first prong of the test is whether or not there’s a compelling state interest in passing such a law. And even though it’s about the moral panic of kids on the internet, the court says there’s a higher bar here. Because we’ve done this before, with California trying to regulate video games, which the Supreme Court struck down fourteen years ago:
To satisfy this exacting standard, Defendants must “specifically identify an ‘actual problem’ in need of solving.” In Brown v. Entertainment Merchants Association, for example, the Supreme Court held California failed to demonstrate a compelling government interest in protecting minors from violent video games because it lacked evidence showing a causal “connection between exposure to violent video games and harmful effects on children.” Reviewing psychological studies California cited in defense of its position, the Court reasoned research “show[ed] at best some correlation between exposure to violent entertainment” and “real-world effects.” This “ambiguous proof” did not establish violent videogames were such a problem that it was appropriate for California to infringe on its citizens’ First Amendment rights. Likewise, the Court rejected the notion that California had a compelling interest in “aiding parental authority.” The Court reasoned the state’s assertion ran contrary to the “rule that ‘only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to [minors].’”
While there’s lots of screaming and yelling about how social media is bad for kids’ mental health, as we directly told Governor Cox, the evidence just doesn’t support the claim. The court seems to recognize that the claims are a lot of hot air as well. Indeed, Utah submitted the Surgeon General’s report as “proof,” a report the state apparently didn’t even read. As we noted, contrary to the media reporting on that report, it contained a very nuanced analysis that does not show any causal harms to kids from social media.
The judge absolutely noticed that.
First, though the court is sensitive to the mental health challenges many young people face, Defendants have not provided evidence establishing a clear, causal relationship between minors’ social media use and negative mental health impacts. It may very well be the case, as Defendants allege, that social media use is associated with serious mental health concerns including depression, anxiety, eating disorders, poor sleep, online harassment, low self-esteem, feelings of exclusion, and attention issues. But the record before the court contains only one report to that effect, and that report—a 2023 United States Surgeon General Advisory titled Social Media and Youth Mental Health—offers a much more nuanced view of the link between social media use and negative mental health impacts than that advanced by Defendants. For example, the Advisory affirms there are “ample indicators that social media can . . . have a profound risk of harm to the mental health and well-being of children and adolescents,” while emphasizing “robust independent safety analyses of the impact of social media on youth have not yet been conducted.” Likewise, the Advisory observes there is “broad agreement among the scientific community that social media has the potential to both benefit and harm children and adolescents,” depending on “their individual strengths and vulnerabilities, and . . . cultural, historical, and socio-economic factors.” The Advisory suggests social media can benefit minors by “providing positive community and connection with others who share identities, abilities, and interest,” “provid[ing] access to important information and creat[ing] a space for self-expression,” “promoting help-seeking behaviors[,] and serving as a gateway to initiating mental health care.”
The court is also not at all impressed by a declaration Utah provided by Jean Twenge, who is Jonathan Haidt’s partner-in-crime in pushing the baseless moral panic narrative about kids and social media.
Moreover, a review of Dr. Twenge’s Declaration suggests the majority of the reports she cites show only a correlative relationship between social media use and negative mental health impacts. Insofar as those reports support a causal relationship, Dr. Twenge’s Declaration suggests the nature of that relationship is limited to certain populations, such as teen girls, or certain mental health concerns, such as body image.
Then the court points out (thank you!) that kids have First Amendment rights too:
Second, Defendants’ position that the Act serves to protect uninformed minors from the “risks involved in providing personal information to social media companies and other users” ignores the basic First Amendment principle that “minors are entitled to a significant measure of First Amendment Protection.” The personal information a minor might choose to share on a social media service—the content they generate—is fundamentally their speech. And the Defendants may not justify an intrusion on the First Amendment rights of NetChoice’s members with, what amounts to, an intrusion on the constitutional rights of its members’ users…
Furthermore, Utah fails to meet the second prong of strict scrutiny, that the law be “narrowly tailored.” Because it’s not:
To begin, Defendants have not shown the Act is the least restrictive option for the State to accomplish its goals because they have not shown existing parental controls are an inadequate alternative to the Act. While Defendants present evidence suggesting parental controls are not in widespread use, their evidence does not establish parental tools are deficient. It only demonstrates parents are unaware of parental controls, do not know how to use parental controls, or simply do not care to use parental controls. Moreover, Defendants do not indicate the State has tried, or even considered, promoting “the diverse supervisory technologies that are widely available” as an alternative to the Act. The court is not unaware of young people’s technological prowess and potential to circumvent parental controls. But parents “control[] whether their minor children have access to Internet-connected devices in the first place,” and Defendants have not shown minors are so capable of evading parental controls that they are an insufficient alternative to the State infringing on protected speech.
Also, this:
Defendants do not offer any evidence that requiring social media companies to compel minors to push “play,” hit “next,” and log in for updates will meaningfully reduce the amount of time they spend on social media platforms. Nor do Defendants offer any evidence that these specific measures will alter the status quo to such an extent that mental health outcomes will improve and personal privacy risks will decrease
The court also points out that the law targets only social media, not streaming or sports apps, but if the features at issue were truly harmful, then the law would have to target all of those other apps as well. Utah tried to claim that social media is somehow special and different from those other apps, but the judge notes that the state provides no actual evidence in support of this claim.
But Defendants simply do not offer any evidence to support this distinction, and they only compare social media services to “entertainment services.” They do not account for the wider universe of platforms that utilize the features they take issue with, such as news sites and search engines. Accordingly, the Act’s regulatory scope “raises serious doubts” about whether the Act actually advances the State’s purported interests.
The court also calls out that NetChoice member Dreamwidth, run by the trust & safety expert known best online as @rahaeli, proves how stupid and mistargeted this law is:
Finally, Defendants have not shown the Act is not seriously overinclusive, restricting more constitutionally protected speech than necessary to achieve the State’s goals. Specifically, Defendants have not identified why the Act’s scope is not constrained to social media platforms with significant populations of minor users, or social media platforms that use the addictive features fundamental to Defendants’ well-being and privacy concerns. NetChoice member Dreamwidth, “an open source social networking, content management, and personal publishing website,” provides a useful illustration of this disconnect. Although Dreamwidth fits the Central Coverage Definition’s concept of a “social media service,” Dreamwidth is distinguishable in form and purpose from the likes of traditional social media platforms—say, Facebook and X. Additionally, Dreamwidth does not actively promote its service to minors and does not use features such as seamless pagination and push notification.
The court then also notes that if the law went into effect, companies would face irreparable injury, given the potential fines in the law.
This harm is particularly concerning given the high cost of violating the Act—$2,500 per offense—and the State’s failure to promulgate administrative rules enabling social media companies to avail themselves of the Act’s safe harbor provision before it takes effect on October 1, 2024.
Some users also sued to block the law, but the court rejected that request: those plaintiffs have no clear redressable injury yet, and thus no standing to sue at this point. That could change once the law was enforced, but thanks to the injunction in the NetChoice portion of the case, the law is not going into effect.
Utah will undoubtedly waste more taxpayer money and appeal the case. But, so far, these laws keep failing in court across the country. And that’s great to see. Kids have First Amendment rights too, and one day, our lawmakers should start to recognize that fact.
Over the last few years, it’s felt like the age verification debate has gotten progressively stupider. People keep insisting that it must be necessary, and when others point out that there are serious privacy and security concerns that will likely make things worse, not better, we’re told that we have to do it anyway.
Let’s head down under for just one example. Almost exactly a year ago, the Australian government released a report on age verification, noting that the technology was simply a privacy and security nightmare. At the time, the government felt that mandating such a technology was too dangerous:
“It is clear from the roadmap at present, each type of age verification or age assurance technology comes with its own privacy, security, effectiveness or implementation issues,” the government’s response to the roadmap said.
The technology must work effectively without circumvention, must be able to be applied to pornography hosted outside Australia, and not introduce the risk to personal information for adults who choose to access legal pornography, the government stated.
“The roadmap makes clear that a decision to mandate age assurance is not yet ready to be taken.”
That’s why we were a bit surprised earlier this year when the government announced a plan to run a pilot program for age verification. However, as we pointed out at the time, just hours after the announcement of that pilot program, it was revealed that a mandated verification database used for bars and clubs in Australia was breached, revealing sensitive data on over 1 million people.
You would think that might make the government pause and think more deeply about this. But apparently that’s not the way they work down under. The government is now exploring plans to officially age-gate social media.
The federal government could soon have the power to ban children from social media platforms, promising legislation to impose an age limit before the next election.
But the government will not reveal any age limit for social media until a trial of age-verification technology is complete.
The article is full of extremely silly quotes:
Prime Minister Anthony Albanese said social media was taking children away from real-life experiences with friends and family.
“Parents are worried sick about this,” he said.
“We know they’re working without a map. No generation has faced this challenge before.
“The safety and mental and physical health of our young people is paramount.
“Parents want their kids off their phones and on the footy field. So do I.”
This is ridiculous on all sorts of levels. Many families stay in touch via social media, so taking kids away from it may actually cut off their ability to connect with “friends and family.”
Yes, there are cases where some kids cannot put down phones and where obvious issues must be dealt with, as we’ve discussed before. But the idea that this is a universal, across-the-board problem is nonsense.
Hell, a recent study found that more people appear to be heading into the great outdoors because of seeing it glorified on social media. In fact, some now worry that people are too focused on the great outdoors precisely because of how it’s glorified there.
Again, there’s a lot of nuance in the research suggesting this is not a simple matter of “if we cut kids off of social media, they’ll spend more time outside.” Some kids use social media to build up their social lives, which can lead to more outdoor activity, while others don’t. It’s not nearly as simple as saying they’ll magically go outdoors and play sports if they don’t have social media.
Then you combine that with the fact that the Australian government knows that age verification is inherently unsafe, and this whole plan seems especially dangerous.
But, of course, politicians love to play into the latest moral panic.
South Australian Premier Peter Malinauskas said getting kids off social media required urgent leadership.
“The evidence shows early access to addictive social media is causing our kids harm,” he said.
“This is no different to cigarettes or alcohol. When a product or service hurts children, governments must act.”
Except, it’s extremely different from cigarettes and alcohol, both of which are actually consumed by the body and introduce literal toxins into the bloodstream. Social media is speech. Speech can influence, but it is not inherently a toxin, nor inherently good or bad.
The statement that “addictive social media is causing our kids harm” is literally false. The evidence is way more nuanced, and there remain no studies showing an actual causal relationship here. As we’ve discussed at length (backed up by multiple studies), if anything the relationship may go the other way, with kids who are already dealing with mental health problems resorting to spending more time on social media because of failures by the government to provide resources to help.
In other words, this rush to ban social media for kids is, in effect, an attempt by government officials to cover up their own failures.
The government could be doing all sorts of things to actually help kids. It could invest in better digital literacy, training kids how to use the technology more appropriately. It could provide better mental health resources for people of all ages. It could provide more space and opportunities for kids to freely spend time outdoors. These are all good uses of the government’s powers that tackle the issues they claim matter here.
Surveilling kids and collecting private data on them which everyone knows will eventually leak, and then banning them from spaces that many, many kids have said make their lives and mental health better, seems unlikely to help.
Of course, it’s only at the very end of the article linked above that the reporters include a few quotes from academics pointing out that age verification could create privacy and security problems, and that such laws could backfire. But the article never even mentions that the claims made by politicians are also full of shit.
Apparently, Congress only “listens to the children” when they agree with what the kids are saying. As soon as some kids oppose something like KOSA, their views no longer count.
It’s no surprise given the way things were going, but the Senate today overwhelmingly passed KOSA by a 91 to 3 vote. The three no votes were from Senators Ron Wyden, Rand Paul, and Mike Lee.
There are still big questions about whether the House will follow suit, and, if so, how different their bill would be, and how the bills from the two chambers would be reconciled, but this is a step closer to KOSA becoming law, and creating all of the many problems people have been highlighting about it for years.
One thing I wanted to note, though, is how cynical the politicians supporting this have been. It’s become pretty typical for senators to roll out “example kids” as props to justify passing these bills. They will tell stories about horrible things that happened, with no clear explanation of how the bill would actually prevent those bad things, while totally ignoring the many other bad things the bill would cause.
In the case of KOSA, we’ve already highlighted how it would harm all sorts of information and tools used to help and protect kids. The most obvious example is LGBTQ+ kids, who often use the internet to help find their identity or to communicate with others when they feel isolated in their physical communities. Indeed, GOP support for KOSA was conditioned on the idea that the law would be used to suppress LGBTQ+ related content.
But, given how eagerly the pro-KOSA team used kids as props to push the bill, I did find it notable how little attention was given last week to the ACLU sending hundreds of students to Congress to tell lawmakers how much KOSA would harm them.
Last week, the American Civil Liberties Union sent 300 high school students to Capitol Hill to lobby against the Kids Online Safety Act, a bill meant to protect children online.
The teenagers told the staffs of 85 lawmakers that the legislation could censor important conversations, particularly among marginalized groups like L.G.B.T.Q. communities.
“We live on the internet, and we are afraid that important information we’ve accessed all our lives will no longer be available,” said Anjali Verma, a 17-year-old rising high school senior from Bucks County, Pa., who was part of the student lobbying campaign. “Regardless of your political perspective, this looks like a censorship bill.”
But somehow, that perspective gets mostly ignored in all of this.
It would have been nice to have had an actual discussion on the policy challenges here, but from the beginning, KOSA co-sponsors Richard Blumenthal and Marsha Blackburn refused to take any of the concerns about the bill seriously. They frequently insisted that any criticism of the bill was just “big tech” talking points.
And, while they made cosmetic changes to try to appease some critics, the bill does not (and cannot) fix its fundamental problems. The bill is, fundamentally at its heart, a bill that is about censorship. And, while it does not directly demand censorship, the easiest and safest way to comply with the law will be to take down whatever culture war hot topic politicians don’t like.
It’s kind of incredible that many of those who voted for the bill today were big supporters of the Missouri case against the administration (including Missouri’s Attorney General who brought that suit, Eric Schmitt, who voted in favor of KOSA today). So, apparently, according to Schmitt, governments should never try to influence how social media companies decide to take down content, but the government should also have the power to take enforcement action against companies that don’t take down content the FTC decides is harmful.
There is a tremendous amount of hypocrisy here. And it would be nice if someone asked the senators voting in favor of this law why they were going against the wishes of all the kids who visited the Hill last week. After all, that’s what the senators who trotted out kids on the other side tried to do to those few senators who pointed out the flaws in this terrible law.
We live in the age of performative lawmaking. Something must be done! This is something. We will do it. Who cares about the tradeoffs, nuances, or the evidence? Throw all that out the window and DO SOMETHING. And if you’re going to DO SOMETHING why not make it big, bold, and already proven ineffective? At least it will get you headlines.
The underlying concerns about kids and technology are often quite legitimate. It’s reasonable to worry about kids being distracted or spending too much time on phones or social media. But the existence of concerns doesn’t mean an outright ban is an effective or necessary policy. It would be nice if policymaking involved actually looking at the evidence rather than making calls based on gut feelings.
But apparently, that’s not how it works.
Last month, we had an article about California Governor Gavin Newsom’s wife pushing an evidence-free moral panic about kids and social media. The very next day, we ran a story by two Australian professors who had reviewed all the research on whether banning phones in school is effective. They found the evidence simply did not support such bans, concluding “the evidence for banning mobile phones in schools is weak and inconclusive.”
Certainly, some studies showed small positive benefits to removing phones, but many also showed negative effects. As we discussed on our most recent podcast with another researcher in the field, such bans can cause other problems as well.
Gov. Gavin Newsom called on Tuesday for a statewide ban on smartphone use in California schools, joining a growing national effort to curb cyberbullying and classroom distraction by limiting access to the devices.
Mr. Newsom, who has four school-age children, said he would work this summer with state lawmakers to dramatically restrict phone use during the school day in the nation’s most populous state.
Again, the actual evidence shows it’s not at all clear that an outright ban is effective, and such bans have failed in many places. New York City tried to ban phones in schools a decade ago and it failed miserably. The ban was enforced unequally, often targeting kids in low-income communities, and parents wanted to know their kids could call them in an emergency. At the time, NYC’s school chancellor said “lifting the cell phone ban is about common sense.”
Apparently, here in California, we no longer believe in common sense. Or evidence. We believe in the “feels” of the governor and his wife.
Of course, New York seems to be backsliding as well. Just a few weeks ago, New York’s Governor Kathy Hochul… also called for banning phones in schools, as if there wasn’t already evidence as to why such bans don’t work in her own state.
Again, I don’t think anyone believes that kids should be on their phones all day. But an outright ban is a blunt instrument that hasn’t worked all that well. Instead, it seems like there should be room for variability. Let parents, teachers, and school principals figure these things out on a more micro level, rather than implementing a flat out statewide ban.
But, alas, when we’re living in an age of moral panics, apparently such nuances and more focused approaches aren’t allowed.
Mobile phones are currently banned in all Australian state schools and many Catholic and independent schools around the country. This is part of a global trend over more than a decade to restrict phone use in schools.
But previous research has shown there is little evidence on whether the bans actually achieve their aims.
Many places that restricted phones in schools before Australia did so have now reversed their decisions. For example, several school districts in Canada implemented outright bans, then revoked them as they were too hard to maintain. They now allow teachers to make decisions that suit their own classrooms.
A ban was similarly revoked in New York City, partly because bans made it harder for parents to stay in contact with their children.
What does recent research say about phone bans in schools?
Our study
We conducted a “scoping review” of all published and unpublished global evidence for and against banning mobile phones in schools.
Our review, which is pending publication, aims to shed light on whether mobile phones in schools impact academic achievement (including paying attention and distraction), students’ mental health and wellbeing, and the incidence of cyberbullying.
A scoping review is done when researchers know there aren’t many studies on a particular topic. This means researchers cast a very inclusive net, to gather as much evidence as possible.
Our team screened 1,317 articles and reports as well as dissertations from master’s and PhD students. We identified 22 studies that examined schools before and after phone bans. There was a mix of study types. Some looked at multiple schools and jurisdictions, some looked at a small number of schools, some collected quantitative data, others sought qualitative views.
In a sign of just how little research there is on this topic, 12 of the studies we identified were done by master’s and doctoral students. This means they are not peer-reviewed, but were done by research students under the supervision of an academic in the field.
But in a sign of how fresh this evidence is, almost half the studies we identified were published or completed since 2020.
The studies looked at schools in Bermuda, China, the Czech Republic, Ghana, Malawi, Norway, South Africa, Spain, Sweden, Thailand, the United Kingdom and the United States. None of them looked at schools in Australia.
Academic achievement
Our research found four studies that identified a slight improvement in academic achievement when phones were banned in schools. However, two of these studies found this improvement only applied to disadvantaged or low-achieving students.
Some studies compared schools with partial bans against schools with complete bans. This is a problem because it muddies the comparison between banning phones and allowing them.
But three studies found no differences in academic achievement, whether there were mobile phone bans or not. Two of these studies used very large samples. This master’s thesis looked at 30% of all schools in Norway. Another study used a nationwide cohort in Sweden. This means we can be reasonably confident in these results.
Mental health and wellbeing
Two studies in our review, including this doctoral thesis, reported mobile phone bans had positive effects on students’ mental health. However, both studies used teachers’ and parents’ perceptions of students’ wellbeing (the students were not asked themselves).
Two other studies showed no differences in psychological wellbeing following mobile phone bans. However, three studies reported more harm to students’ mental health and wellbeing when they were subjected to phone bans.
The students reported they felt more anxious without being able to use their phone. This was especially evident in one doctoral thesis carried out when students were returning to school after the pandemic, having been very reliant on their devices during lockdown.
So the evidence for banning mobile phones for the mental health and wellbeing of students is inconclusive, and based only on anecdotes or perceptions rather than the recorded incidence of mental illness.
Bullying and cyberbullying
Four studies reported a small reduction in bullying in schools following phone bans, especially among older students. However, the studies did not specify whether or not they were talking about cyberbullying.
Teachers in two other studies, including this doctoral thesis, reported they believed having mobile phones in schools increased cyberbullying.
But two other studies showed the number of incidents of online victimisation and harassment was greater in schools with mobile phone bans compared with those without bans. These studies didn’t collect data on whether the online harassment was happening inside or outside school hours.
The authors suggested this might be because students saw the phone bans as punitive, which made the school climate less egalitarian and less positive. Other research has linked a positive school climate with fewer incidents of bullying.
There is no research evidence that students do or don’t use other devices to bully each other if there are phone bans. But it is of course possible for students to use laptops, tablets, smartwatches or library computers to conduct cyberbullying.
Even if phone bans were effective, they would not address the bulk of school bullying. A 2019 Australian study found 99% of students who were cyberbullied were also bullied face-to-face.
What does this tell us?
Overall, our study suggests the evidence for banning mobile phones in schools is weak and inconclusive.
As Australian education academic Neil Selwyn argued in 2021, the impetus for mobile phone bans says more about MPs responding to community concerns than about research evidence.
Politicians should leave this decision to individual schools, which have direct experience of the pros or cons of a ban in their particular community. For example, a community in remote Queensland could have different needs and priorities from a school in central Brisbane.
Mobile phones are an integral part of our lives. We need to be teaching children about appropriate use of phones, rather than simply banning them. This will help students learn how to use their phones safely and responsibly at school, at home and beyond.
We keep pointing to research that suggests the narrative around “social media is bad for kids” is simply not supported by the data. Over and over again, we see studies that suggest that adults are overreacting to a few limited cases. Sometimes, problematic social media use seems to be due to a lack of systems in place to help with mental health issues, leading kids to spend more time on social media because they aren’t getting the support and help that they need.
One of the key points in Jonathan Haidt’s problematic recent book was his claim that one of the real downsides to social media is that it takes kids away from spending time with other kids. And this might feel reasonable. Lots of parents, certainly, have stories of kids staring at mobile device screens and seeming to have a less active social life than the parents had at the same age.
But is it actually accurate?
A fascinating new study out of Norway suggests that Haidt (and the conventional wisdom many people believe about this) could be absolutely wrong.
Using five waves of biennially collected data from a birth cohort assessed throughout age 10–18 years (n = 812), we found that increased social media use predicted more time with friends offline but was unrelated to future changes in social skills. Age and sex did not moderate these associations but increased social media use predicted declined social skills among those high in social anxiety symptoms. The findings suggest that social media use may neither harm nor benefit the development of social skills and may promote, rather than displace, offline interaction with friends during adolescence. However, increased social media use may pose a risk for reduced social skills in socially anxious individuals.
Again, this seems to support much of the previous data we’ve seen suggesting that social media use is not inherently harmful for kids, and could actually be helpful in creating a new avenue for socializing with other kids in some cases.
But the second part of the findings also confirms the other point we’ve raised here: some adolescents, often those who are already struggling with their mental health, may end up turning to social media in response to a lack of other resources and help. This seems to be supported by the finding that social media could “pose a risk of reduced social skills in socially anxious individuals.”
Both of these findings support what we’ve been discussing all along: rather than focusing on outright bans for social media, the better focus should be on providing more mental health resources for kids (perhaps even inserting some of those resources and tools into social media apps) and working on better ways to determine that small cohort who are struggling.
Obviously, there may also be limits to this research. It’s based on an ongoing longitudinal project in Trondheim, Norway, following children born there in 2003 and 2004. It is entirely possible that young people in Trondheim are not representative of the wider world (or even just of young people in Norway). But, at the very least, the depth and detail of the Trondheim Early Secure Study (TESS) suggests that the researchers have pretty detailed data on this particular cohort.
Because the initial, overarching aim of TESS was to study mental health, we oversampled for children with emotional and behavioral problems, thus increasing variance and statistical power. More specifically, children were allocated to four strata according to their SDQ scores (cut-offs: 0–4, 5–8, 9–11, and 12–40), and the probability of selection increased with increasing SDQ scores (0.37, 0.48, 0.70, and 0.89 in the four strata, respectively (i.e., the higher SDQ scores, the higher odds for being drawn to the study)). This oversampling was corrected for in the analyses. In total, 1250 of those who consented were drawn to participate. From age 4 onwards (N = 1007) participants have been thoroughly assessed at the university clinic every second year, with 8 data waves completed, including information from the participant’s parents and teachers.
Also, the research on social media use was pretty in-depth as well and didn’t just rely on kids checking a box or something.
Social media use was assessed by semi-structured interviews conducted by the same trained personnel at all measurement points. Participants were asked about platforms used, overall frequency of use, and specific social media behavior. The main outcome constitutes the monthly sum of liking, commenting, and posting, which captured the participants’ responses to the following questions: 1)‘How often do you like other’s updates?‘; 2) ‘How often do you write comments to other’s updates or photos?‘; 3) ‘How often do you post (written) updates on your own social media sites?‘; 4) ‘How often do you post photos’? At ages 16 and 18, we also asked 5) ‘How often do you post selfies?‘. The questions were not specific to certain social media platforms, but as the participants were interviewed, the interviewers would provide examples of social media sites if needed, or in other ways facilitate a correct recall (e.g., ‘If you think about last week … ‘).
We also validated our main analysis and tested whether the results were replicated when using an alternative means of measuring the frequency of social media use, captured by interview at ages 10, 12, and 14 (total frequency of checking social media per day) and objectively measured at ages 16 and 18 (daily time spent on social media apps according to the phone’s screen time application).
They also closely measured time spent with friends through structured interviews. The data here appears to be pretty robust, whether or not the sample is representative of a wider set of young people.
While this particular research is the first of its kind, it seems to align with previous research:
To the best of our knowledge, the present study is the first to examine the relation between social media use and time spent with offline friends at the within-person level and capturing the years from late childhood to emerging adulthood. Importantly, during adolescence the boundaries between offline and online peer interactions are blurred, with offline friends also being online friends (van Zalk et al., 2020) being the new norm.
Our results align with studies showing that connecting with others and maintaining relationships are important motivations for adolescents’ use of social media (Ellison et al., 2007; Kircaburun et al., 2020; Park et al., 2009), connecting with people known from offline contexts being of particular importance (Reich et al., 2012). Use of online resources is found to reinforce already existing friendships (Desjarlais & Willoughby, 2010), which may explain why social media use promotes more time spent with friends face-to-face. Although one hypothesized mechanism for the association between social media use and time spent with friends is increased closeness with friends, potentially due to more self-disclosure, neither friendship closeness nor social anxiety moderated effects in the current study. However, it should be noted that we assessed closeness to best friend, whereas the outcome measure (i.e., time spent with friends face-to-face) did not differentiate between best friend and other friends, possibly contributing to the null finding.
Online interactions not only fuel existing relationships, but also enhance the initiation of new ones (Koutamanis et al., 2013), with more than half of US adolescents having made new friends online (Lenhart, 2015). Thus, it might also be that the relationship between increased social media use and time spent with friends is partly due to new friendships.
At the very least, this brings us back to where we were before, noting that the issue of kids and mental health, especially as it relates to social media use, is complicated. It does not appear to be as simple as “good” or “bad.” It’s good for some people. It’s bad for some people.
But, the idea that it somehow replaces in-person interactions does not appear to be supported by this particular study of this particular group of kids. Instead, it suggests the opposite.
Apparently, the world needs even more terrible bills that let ignorant senators grandstand to the media about how they’re “protecting the kids online.” There’s nothing more serious to work on than that. The latest bill comes from Senators Brian Schatz and Ted Cruz (with assists from Senators Chris Murphy, Katie Britt, Peter Welch, Ted Budd, John Fetterman, Angus King, and Mark Warner). This one is called the “Kids Off Social Media Act” (KOSMA) and it’s an unconstitutional mess built on a long list of debunked and faulty premises.
It’s especially disappointing to see this from Schatz. A few years back, I know his staffers would regularly reach out to smart people on tech policy issues in trying to understand the potential pitfalls of the regulations he was pushing. Either he’s no longer doing this, or he is deliberately ignoring their expert advice. I don’t know which one would be worse.
The crux of the bill is pretty straightforward: it would be an outright ban on social media accounts for anyone under the age of 13. As many people will recognize, we kinda already have a “soft” version of that because of COPPA, which puts much stricter rules on sites directed at those under 13. Because most sites don’t want to deal with those stricter rules, they officially limit account creation to those over the age of 13.
In practice, this has been a giant mess. Years and years ago, Danah Boyd pointed this out, talking about how the “age 13” bit is a disaster for kids, parents, and educators. Her research showed that all this generally did was to have parents teach kids that “it’s okay to lie,” as parents wanted kids to use social media tools to communicate with grandparents. Making that “soft” ban a hard ban is going to create a much bigger mess and prevent all sorts of useful and important communications (which, yeah, is a 1st Amendment issue).
The reasons Schatz puts forth for the bill are just… wrong.
No age demographic is more affected by the ongoing mental health crisis in the United States than kids, especially young girls. The Centers for Disease Control and Prevention’s Youth Risk Behavior Survey found that 57 percent of high school girls and 29 percent of high school boys felt persistently sad or hopeless in 2021, with 22 percent of all high school students—and nearly a third of high school girls—reporting they had seriously considered attempting suicide in the preceding year.
Gosh. What was happening in 2021 with kids that might have made them feel hopeless? Did Schatz and crew simply forget about the fact that most kids were under lockdown and physically isolated from friends for much of 2021? And that there were plenty of other stresses, including millions of people, including family members, dying? Noooooo. Must be social media!
Studies have shown a strong relationship between social media use and poor mental health, especially among children.
Note the careful word choice here: “strong relationship.” They won’t say a causal relationship because studies have not shown that. Indeed, as the leading researcher in the space has noted, there continues to be no real evidence of any causal relationship. The relationship appears to work the other way: kids who are dealing with poor mental health and who are desperate for help turn to the internet and social media because they’re not getting help elsewhere.
Maybe offer a bill that helps kids get access to more resources that help them with their mental health, rather than taking away the one place they feel comfortable going? Maybe?
From 2019 to 2021, overall screen use among teens and tweens (ages 8 to 12) increased by 17 percent, with tweens using screens for five hours and 33 minutes per day and teens using screens for eight hours and 39 minutes.
I mean, come on Schatz. Are you trolling everyone? Again, look at those dates. WHY DO YOU THINK that screen time might have increased 17% for kids from 2019 to 2021? COULD IT POSSIBLY BE that most kids had to do school via computers and devices at home, because there was a deadly pandemic making the rounds?
Maybe?
Did Schatz forget that? I recognize that lots of folks would like to forget the pandemic lockdowns, but this seems like a weird way to manifest that.
I mean, what a weird choice of dates to choose. I’m honestly kind of shocked that the increase was only 17%.
Also, note that the data presented here isn’t about an increase in social media use. It could very well be that the 17% increase was Zoom classes.
Based on the clear and growing evidence, the U.S. Surgeon General issued an advisory last year, calling for new policies to set and enforce age minimums and highlighting the importance of limiting the use of features, like algorithms, that attempt to maximize time, attention, and engagement.
Wait. You mean the same Surgeon General’s report that denied any causal link between social media and mental health (which you falsely claim has been proved) and noted just how useful and important social media is to many young people?
From that report, which Schatz misrepresents:
Social media can provide benefits for some youth by providing positive community and connection with others who share identities, abilities, and interests. It can provide access to important information and create a space for self-expression. The ability to form and maintain friendships online and develop social connections are among the positive effects of social media use for youth. These relationships can afford opportunities to have positive interactions with more diverse peer groups than are available to them offline and can provide important social support to youth. The buffering effects against stress that online social support from peers may provide can be especially important for youth who are often marginalized, including racial, ethnic, and sexual and gender minorities. For example, studies have shown that social media may support the mental health and well-being of lesbian, gay, bisexual, asexual, transgender, queer, intersex and other youths by enabling peer connection, identity development and management, and social support. Seven out of ten adolescent girls of color report encountering positive or identity-affirming content related to race across social media platforms. A majority of adolescents report that social media helps them feel more accepted (58%), like they have people who can support them through tough times (67%), like they have a place to show their creative side (71%), and more connected to what’s going on in their friends’ lives (80%). In addition, research suggests that social media-based and other digitally-based mental health interventions may also be helpful for some children and adolescents by promoting help-seeking behaviors and serving as a gateway to initiating mental health care.
Did Schatz’s staffers just, you know, skip over that part of the report or nah?
The bill also says that companies need to not allow algorithmic targeting of content to anyone under 17. This is also based on a widely believed myth that algorithmic content is somehow problematic. No studies have legitimately shown that of current algorithms. Indeed, a recent study showed that removing algorithmic targeting leads to people being exposed to more disinformation.
Is this bill designed to force more disinformation on kids? Why would that be a good idea?
Yes, some algorithms can be problematic! About a decade ago, algorithms that tried to optimize solely for “engagement” definitely created some bad outcomes. But it’s been a decade since most such algorithms have been designed that way. On most social media platforms, the algorithms are designed in other ways, taking into account a variety of different factors, because they know that optimizing just on engagement leads to bad outcomes.
Then the bill tacks on Cruz’s bill to require schools to block social media. There’s an amusing bit when reading the text of that part of the law. It says that you have to block social media on “federally funded networks and devices” but also notes that it does not prohibit “a teacher from using a social media platform in the classroom for educational purposes.”
But… how are they going to access those if the school is required by law to block access to such sites? Most schools are going to do a blanket ban, and teachers are going to be left to do what? Show kids useful YouTube science videos on their phones? Or maybe some schools will implement a special teacher code that lets them bypass the block. And by the end of the first week of school half the kids in the school will likely know that password.
What are we even doing here?
Schatz has a separate page hyping up the bill, and it’s even dumber than the first one above. It repeats some of the points above, though this time linking to Jonathan Haidt, whose work has been trashed left, right, and center by actual experts in this field. And then it gets even dumber:
Big Tech knows it’s complicit – but refuses to do anything about it…. Moreover, the platforms know about their central role in turbocharging the youth mental health crisis. According to Meta’s own internal study, “thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse.” It concluded, “teens blame Instagram for increases in the rate of anxiety and depression.”
This is not just misleading, it’s practically fraudulent misrepresentation. The study Schatz is citing is one that was revealed by Frances Haugen. As we’ve discussed, it was done because Meta was trying to understand how to do better. Indeed, the whole point of that study was to see how teens felt about using social media in 12 different categories. Meta found that most boys felt neutral or better about themselves in all 12 categories. For girls, it was 11 out of 12. It was only in one category, body image, where the split was more pronounced. 32% of girls said that it made them feel worse. Basically the same percentage said it had no impact, or that it made them feel better.
Also, look at that slide’s title. The whole point of this study was to figure out if they were making kids feel worse in order to look into how to stop doing that. And now, because grandstanders like Schatz are falsely claiming that this proves they were “complicit” and “refuse to do anything about it,” no social media company will ever do this kind of research again.
Because, rather than proactively looking to see if they’re creating any problems that they need to try to fix, Schatz and crew are saying “simply researching this is proof that you’re complicit and refuse to act.”
Statements like this basically ensure that social media companies stick their heads in the sand, rather than try to figure out where harm might be caused and take steps to stop that harm.
Why would Schatz want to do that?
That page then also falsely claims that the bill does not require age verification. This is a silly two-step that lying politicians claim every time they do this. Does it directly mandate age verification? No. But by making the penalties super serious and costly for failing to stop kids from accessing social media, it will obviously drive companies to introduce stronger age verification measures that are inherently dangerous and an attack on privacy.
Perhaps Schatz doesn’t understand this, but it’s been widely discussed by many of the experts his staff used to talk to. So, really, he has no excuse.
The FAQ also claims that the bill will pass constitutional muster, while at the same time admitting that they know there will be lawsuits challenging it:
Yes. As, for example, First Amendment expert Neil Richards explains, “[i]nstead of censoring the protected expression present on these platforms, the act takes aim at the procedures and permissions that determine the time, place and manner of speech for underage consumers.” The Supreme Court has long held that the government has the right to regulate products to protect children, including by, for instance, restricting the sale of obscene content to minors. As Richards explains: “[i]n the same way a crowded bar or nightclub is no place for a child on their own”—or in the way every state in the country requires parental consent if it allows a minor to get a tattoo—“this rule would set a reasonable minimum age and maturity limitation for social media customers.”
While we expect legal challenges to any bill aimed at regulating social media companies, we are confident that this content-neutral bill will pass constitutional muster given the government interests at play.
There are many reasons why this is garbage under the law, but rather than breaking them all down (we’ll wait for judges to explain it in detail), I’ll just point out that the major tell is in the law itself. In the definition of what a “social media platform” is, there is a long list of exceptions of what the law does not cover. It includes a few “moral panics of yesteryear” that gullible politicians tried to ban, only to be found to have violated the First Amendment in the process.
It explicitly carves out video games and content that is professionally produced, rather than user-generated.
Remember the moral panics about video games and TV destroying kids’ minds? Yeah. So this child protection bill is quick to say “but we’re not banning that kind of content!” Because whoever drafted the bill recognized that the Supreme Court has already made it clear that politicians can’t do that for video games or TV.
So, instead, they have to pretend that social media content is somehow on a whole different level.
But it’s not. It’s still the government restricting access to content. They’re going to pretend that there’s something unique and different about social media, and that they’re not banning the “content” but rather the “place” and “manner” of accessing that content. Except that’s laughable on its face.
You can see that in the quote above where Schatz does the fun dance where he first says “it’s okay to ban obscene content to minors” and then pretends that’s the same as restrictions on access to a bar (it’s not). One is about the content, and one is about a physical place. Social media is all about the content, and it’s not obscene content (which is already an exception to the First Amendment).
And the “parental consent” for tattoos… I mean, what the fuck? Literally four questions above where that appears in the FAQ, Schatz insists that his bill has nothing about parental consent. And then he tries to defend it by claiming it’s no different than parental consent laws?
The FAQ also claims this:
This bill does not prevent LGBTQ+ youth from accessing relevant resources online and we have worked closely with LGBTQ+ groups while crafting this legislation to ensure that this bill will not negatively impact that community.
I mean, it’s good you talked to some experts, but I note that most of the LGBTQ+ groups I’m aware of are not listed on your list of “groups supporting the bill” on the very same page. That absence stands out.
And, again, the Surgeon General’s report that you misleadingly cited elsewhere highlights how helpful social media can be to many LGBTQ+ youth. You can’t just say “nah, it won’t harm them” without explaining why all those benefits that have been shown in multiple studies, including the Surgeon General’s report, somehow don’t get impacted.
There’s a lot more, but this is just a terrible bill that would create a mess. And, I’m already hearing from folks in DC that Schatz is trying to get this bill added to the latest Christmas tree of a bill to reauthorize the FAA.
It would be nice if we had politicians looking to deal with the actual challenges facing kids these days, including the lack of mental health support for those who really need it. Instead, we get unconstitutional grandstanding nonsense bills like this.
Everyone associated with this bill should feel ashamed.
After demolishing the competition from 2020 through the first half of 2022, TikTok’s DAU growth rate has collapsed. In the fourth quarter of 2023, the video service lagged Snapchat, YouTube, Instagram, and Facebook. Yes, you read that right: The ancient big blue app grew faster than TikTok.
This reminds me of when Congress was super focused on regulating Facebook, even after it was shedding users rapidly.
That’s not to say that if there is any real evidence (again, none of which has yet been shown) of dangers associated with TikTok they should be ignored. But, it is a reminder that the internet space remains incredibly dynamic, even as the media and politicians act as if what’s happening today will continue to be the way it is.
Social media sites come and go. They’re cool for kids until their parents get involved or until they go through the inevitable enshittification curve. There appear to be at least some signals that TikTok may have passed its prime and folks are starting to move on.
In short, the “problem” (if there is one) may solve itself through the simple fact that… TikTok might just not be all that cool anymore. Business Insider suggests that the original TikTok generation, who were teenagers when the app first became cool, may have since graduated and started having to live life and get a job and stuff, leaving less time for TikTok. Of course, that would ignore the fact that as younger kids age into being teens, they’re still likely to join. But, perhaps not at the same rate as before.
The young adults I spoke to have been on social media for a decade or more and didn’t question the impact it was having on them until recently. They started noticing that TikTok, in particular, got in the way of sleep, work, household chores and relationships. Some even say it has kept them from chasing their own creative dreams. They are now deleting the app in pursuit of more in-person experiences and tidier homes.
While that may be anecdotal, there is at least some data to back it up:
TikTok’s U.S. average monthly users between the ages of 18 and 24 declined by nearly 9% from 2022 to 2023, according to mobile analytics firm Data.ai.
In short, as it often does, Congress may be fighting (badly) the last battle, and not realizing that some of this stuff… takes care of itself.
It seems like the only “bipartisan” support around regulations and the internet these days is… over the false, widely debunked moral panic that the internet is inherently harmful to children. Study after study has said it’s simply not true. Here’s the latest list (and I have one more to write up soon):
Last fall, the widely respected Pew Research Center did a massive study on kids and the internet, and found that for a majority of teens, social media was way more helpful than harmful.
This past May, the American Psychological Association (which has fallen for tech moral panics in the past, such as with video games) released a huge, incredibly detailed, and nuanced report going through all of the evidence, and finding no causal link between social media and harms to teens.
Soon after that, the US Surgeon General came out with a report that was widely misrepresented in the press. Yet the details of that report also showed that no causal link could be found between social media and harms to teens. It did still recommend that we act as if there were a link, which was weird and explains the media coverage, but the actual report highlights no causal link, while also pointing out how much benefit teens receive from social media.
A few months later, an Oxford University study came out covering nearly a million people across 72 countries, noting that it could find no evidence of social media leading to psychological harm.
The Journal of Pediatrics recently published a new study again noting that after looking through decades of research, the mental health epidemic faced among young people appears largely due to the lack of open spaces where kids can be kids without parents hovering over them. That report notes that they explored the idea that social media was a part of the problem, but could find no data to support that claim.
In November, a new study came out from Oxford showing no evidence whatsoever that increased screentime has any impact on brain development in kids.
And yet, if you talk to politicians or the media, they insist that there’s overwhelming evidence showing the opposite, that social media is inherently dangerous to children.
The latest to fall for this false moral panic is the powerful Herb Conaway, a New Jersey state legislator who has been in the New Jersey Assembly since 1997. He has offered up a bunch of moral-panic quotes. He’s claimed that the mental health epidemic among children “can be laid at the feet of social media” (despite all the studies saying otherwise). He has also claimed (again, contrary to the actual evidence) that social media “really has been horrific on the mental health and the physical health of our young people, particularly teenagers and particularly young girls.”
This is not, in fact, what the evidence shows. But it is how the moral panic has been passed around.
And so, the greatly misinformed Assemblymember has successfully been pushing Bill A5750, which requires age verification and parental consent for use of any social media platform with 5 million or more accounts worldwide. It has just passed out of committee and has a very real chance of becoming law in New Jersey (until a federal court throws it out as unconstitutional, but we’ll get there).
Before we get to the legal problems with the bill, let’s talk about the fundamental problems.
Age verification is a privacy nightmare. This has been explained multiple times in great detail. There is no way to do age verification without putting everyone’s privacy at great risk. You don’t have to take my word for it: the French data protection agency, CNIL, studied every available age verification method and found that all of them are unreliable and violate privacy rights.
Why would Assemblymember Conaway want to put his constituents’ privacy at risk?
Age verification only works by requiring someone to collect sensitive private data, and then hoping they can keep it safe. That’s… bad?
Next, parental verification is crazy dangerous. It can make sense in perfectly happy homes with parents who have a good relationship with their children but, tragically, that is not all homes. And if you have situations where (for example) there is an LGBTQ child in a home where the parents cannot accept their child’s identity, imagine how well that will go over.
And that’s especially true at a time when we’re seeing social media operations being created specifically to cater to marginalized groups. For example, the Trevor Project, the wonderful non-profit that helps LGBTQ youth, runs its own social media network for those kids. Can you imagine how well that would work if the parents of those kids had to grant permission before the kids could make use of the site?
This law would put the most marginalized kids in society at much greater risk and cut them off from the communities and services that have been repeatedly found to help them the most.
Why?
Because of a moral panic that is not backed by the actual evidence.
The fact that this bill applies to any social media with more than 5 million accounts means it would sweep in tons of smaller sites. Note that it’s not even limited to active accounts or active monthly users. And it’s not just accounts in New Jersey. It’s 5 million global accounts. There are many sites that would qualify that simply could never afford to put in place age verification or parental controls, and thus the only answer will be to cut off New Jersey entirely.
So, again, the end result is cutting off marginalized and at-risk kids from the services that have repeatedly been found to be helpful.
On the legal front, these provisions are also quite clearly unconstitutional, and multiple courts have found them to be so. Just in the past few months, federal courts have rejected an Arkansas age verification bill and a California one. Neither result was surprising, as these issues had been litigated in front of the Supreme Court decades ago.
The parental controls mandate is equally unconstitutional. In Brown v. EMA the Supreme Court noted that the 1st Amendment does not allow for the government “to prevent children from hearing or saying anything without their parents’ prior consent.” Children have 1st Amendment rights as well, and while they are somewhat more limited than for adults, the courts have found repeatedly that children have the right to access 1st Amendment-protected speech, and to do so without parental consent.
And, in cases like this, it’s even worse than in Brown, which was about a failed attempt by California to restrict access to violent video games. Here, the New Jersey bill attempts to limit access to all social media, not just specifically designated problematic ones. So it’s an even broader attack on the 1st Amendment rights of children than Brown was.
So, in the end we have a terribly drafted bill that will sweep in a ton of companies, even ones with limited presence in New Jersey, ordering them to invest in expensive and faulty features that have already been shown to put private info at risk, while doing so in a way that has also been shown to put the most marginalized and at-risk children at much greater risk. And all of this has already been found to be unconstitutional.
All based on a moral panic that has been widely debunked by research.
Yet the bill is sailing through the New Jersey legislature, and almost guarantees that the state of New Jersey is going to have to spend millions in taxpayer funds to defend this law in court, only to be told exactly what I’m telling them for free.