When a school district sues social media companies claiming they can’t educate kids because Instagram filters exist, that district is announcing to the world that it has fundamentally failed at its core mission. That’s exactly what New York City just did with its latest lawsuit against Meta, TikTok, and other platforms.
The message is unmistakable: “We run the largest school system in America with nearly a million students, but we’re unable to teach children that filtered photos aren’t real or help them develop the critical thinking skills needed to navigate the modern world. So we’re suing someone else to fix our incompetence.”
This is what institutional failure looks like in 2025.
NYC first got taken in by this nonsense last year, when Mayor Adams declared all social media a health hazard and toxic waste. That lawsuit, however, was rolled into the sprawling, nearly impossible-to-follow consolidated case in California, which currently has over 2,300 filings on the docket. So, apparently, NYC dropped out of that version and has now elected to sue, sue again, with the same damn law firm, Keller Rohrback, that kicked off this trend and is behind a big chunk of these lawsuits.
The actual complaint is bad, and everyone behind it should feel bad. It’s also 327 pages, and there’s no fucking way I’m going to waste my time going through all of it, watching my blood pressure rise as I have to keep yelling at my screen “that’s not how any of this works.”
The complaint leads with what should be Exhibit A for why NYC schools are failing their students—a detailed explanation of adolescent brain development that perfectly illustrates why education matters:
Children and adolescents are especially vulnerable to developing harmful behaviors because their prefrontal cortex is not fully developed. Indeed, it is one of the last regions of the brain to mature. In the images below, the blue color depicts brain development.
Because the prefrontal cortex develops later than other areas of the brain, children and adolescents, as compared with adults, have less impulse control and less ability to evaluate risks, regulate emotions and regulate their responses to social rewards.
Stop right there. NYC just laid out the neurological case for why education exists. Kids have underdeveloped prefrontal cortexes? They struggle with impulse control, risk evaluation, and emotional regulation? THAT’S LITERALLY WHY WE HAVE SCHOOLS.
The entire premise of public education is that we can help children develop these exact cognitive and social skills. We teach them math because their brains can learn mathematical reasoning. We teach them history so they can evaluate evidence and understand cause and effect. We teach them literature so they can develop empathy and critical thinking.
But apparently, when it comes to digital literacy—arguably one of the most important skills for navigating modern life—NYC throws up its hands and sues instead of teaches.
This lawsuit is a 327-page confession of educational malpractice.
The crux of the lawsuit is, effectively, “kids like social media, and teachers just can’t compete with that shit.”
In short, children find it particularly difficult to exercise the self-control required to regulate their use of Defendants’ platforms, given the stimuli and rewards embedded in those platforms, and as a foreseeable and probable consequence of Defendants’ design choices tend to engage in addictive and compulsive use. Defendants engaged in this conduct even though they knew or should have known that their design choices would have a detrimental effect on youth, including those in NYC Plaintiffs’ community, leading to serious problems in schools and the community.
By this logic, basically any product that children like is somehow a public nuisance.
This lawsuit is embarrassing to the lawyers who brought it and to the NYC school system.
Take the complaint’s hysterical reaction to Instagram filters, which perfectly captures the educational opportunity NYC is missing:
Defendants’ image-altering filters cause mental health harms in multiple ways. First, because of the popularity of these editing tools, many of the images teenagers see have been edited by filters, and it can be difficult for teenagers to remain cognizant of the use of filters. This creates a false reality wherein all other users on the platforms appear better looking than they actually are, often in an artificial way. As children and teens compare their actual appearances to the edited appearances of themselves and others online, their perception of their own physical features grows increasingly negative. Second, Defendants’ platforms tend to reward edited photos, through an increase in interaction and positive responses, causing young users to prefer the way they look using filters. Many young users believe they are only attractive when their images are edited, not as they appear naturally. Third, the specific changes filters make to individuals’ appearances can cause negative obsession or self-hatred surrounding particular aspects of their appearance. The filters alter specific facial features such as eyes, lips, jaw, face shape, and face slimness—features that often require medical intervention to alter in real life
Read that again. The complaint admits that “it can be difficult for teenagers to remain cognizant of the use of filters” and that kids struggle to distinguish between edited and authentic images.
That’s not a legal problem. That’s a curriculum problem.
A competent school system would read that paragraph and immediately start developing age-appropriate digital literacy programs. Media literacy classes. Critical thinking exercises about online authenticity. Discussions about self-image and social comparison that have been relevant since long before Instagram existed.
Instead, NYC read that paragraph and decided the solution is to sue the companies rather than teach the kids.
This is educational malpractice masquerading as child protection. If you run a million-student school system and your response to kids struggling with digital literacy is litigation rather than education, you should resign and let someone competent take over.
The platforms are also getting sued for… not providing certain features, like age verification. Even though, as we keep pointing out, age verification is (1) likely unconstitutional outside of the narrow realm of pornographic content, and (2) a privacy and security nightmare for kids.
The broader tragedy here extends beyond one terrible lawsuit. NYC is participating in a nationwide trend of school districts abandoning their educational mission in favor of legal buck-passing. These districts, often working with the same handful of contingency-fee law firms, have decided it’s easier to blame social media companies than to do the hard work of preparing students for digital citizenship.
This represents a fundamental misunderstanding of what schools are supposed to do. We don’t shut down the world to protect children from it—we prepare children to navigate the world as it exists. That means teaching them to think critically about online content, understand privacy and security, develop healthy relationships with technology, and build the cognitive skills to resist manipulation.
Every generation gets a moral panic or two, and apparently “social media is destroying kids’ brains” is our version of moral panics of years past. We’ve seen this movie before: the waltz would corrupt young women’s morals, chess would stop kids from going outdoors, novels would rot their brains on useless fiction, bicycles would cause moral decay, radio would destroy family conversation, pinball machines would turn kids into delinquents, television would make them violent, comic books would corrupt their minds, and Dungeons & Dragons would lead them to Satan worship.
Society eventually calmed down after each of those, and we now look back on those moral panics as silly, hysterical overreactions. You would hope that a modern education system would take note and recognize that these new forms of media present a teaching opportunity.
But faced with social media, America’s school districts have largely given up on education and embraced litigation. That should terrify every parent more than any Instagram filter ever could.
The real scandal isn’t that social media exists. It’s that our schools have become so risk-averse and educationally bankrupt that they’ve forgotten their core purpose: preparing young people to be thoughtful, capable adults in the world they’ll actually inherit.
It is a measure of how fast the field of AI has developed in the three years since Walled Culture the book (free digital versions available) was published that the issue of using copyright material for training AI systems, briefly mentioned in the book, has become one of the hottest topics in the copyright world, as numerous posts on this blog attest.
The current situation sees the copyright industry pitted against the generative AI companies. The former wants to limit how copyright material can be used, while the latter want a free-for-all. But that crude characterization does not mean that the AI companies can be regarded as on the side of the angels when it comes to broadening access to online material. They may want unfettered access for themselves, but it is becoming increasingly clear that as more companies rush to harvest key online resources for AI training purposes, they risk hobbling access for everyone else, and even threaten the very nature of the open Web.
The problem is particularly acute for non-commercial sites offering access to material for free, because they tend to be run on a shoestring, and are thus unable to cope easily with the extra demand placed on their servers by AI companies downloading holdings en masse. Even huge sites like the Wikimedia projects, which describe themselves as “the largest collection of open knowledge in the world”, are struggling with the rise of AI bots:
We are observing a significant increase in request volume, with most of this traffic being driven by scraping bots collecting training data for large language models (LLMs) and other use cases. Automated requests for our content have grown exponentially, alongside the broader technology economy, via mechanisms including scraping, APIs, and bulk downloads. This expansion happened largely without sufficient attribution, which is key to drive new users to participate in the movement, and is causing a significant load on the underlying infrastructure that keeps our sites available for everyone.
Specifically:
Since January 2024, we have seen the bandwidth used for downloading multimedia content grow by 50%. This increase is not coming from human readers, but largely from automated programs that scrape the Wikimedia Commons image catalog of openly licensed images to feed images to AI models. Our infrastructure is built to sustain sudden traffic spikes from humans during high-interest events, but the amount of traffic generated by scraper bots is unprecedented and presents growing risks and costs.
A valuable new report from the GLAM-E Lab explores how widespread this problem is in the world of GLAMs – galleries, libraries, archives, and museums. Here’s the main result:
Bots are widespread, although not universal. Of 43 respondents, 39 had experienced a recent increase in traffic. Twenty-seven of the 39 respondents experiencing an increase in traffic attributed it to AI training data bots, with an additional seven believing that bots could be contributing to the traffic.
Although the sites that responded to the survey were generally keen for their holdings to be accessed, there comes a point where AI bots are degrading the service to human visitors. The question then becomes: what can be done about it?
There is already a tried and tested way to block bots, using robots.txt, a tool that “allows websites to signal to bots which parts of the site the bots should not visit. Its most widely adopted use is to indicate which parts of sites should not be indexed by search engines,” as the report explains. However, there is no mechanism for enforcing the robots.txt rules, which often leads to problems:
Respondents reported that robots.txt is being ignored by many (although not necessarily all) AI scraping bots. This was widely viewed as breaking the norms of the internet, and not playing fair online.
Reports of these types of bots ignoring robots.txt are widespread, even beyond respondents. So widespread, in fact, that there are currently a number of efforts to develop new or updated robots.txt-style protocols to specifically govern AI-related bot behavior online.
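To make concrete why robots.txt can only ask, not enforce, here is a minimal sketch using Python’s standard-library parser. The rules and the “ExampleAIBot” user agent below are invented for illustration; the key point is that the answer a bot gets back is purely advisory.

```python
# A minimal sketch of how robots.txt works, using Python's standard library.
# The rules and the "ExampleAIBot" user agent below are made up for illustration.
import urllib.robotparser

ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /images/

User-agent: *
Disallow: /admin/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant bot asks before fetching; the answer is only advisory.
print(parser.can_fetch("ExampleAIBot", "/images/photo.jpg"))  # False: asked not to crawl
print(parser.can_fetch("ExampleAIBot", "/catalog/item1"))     # True: allowed
print(parser.can_fetch("SomeOtherBot", "/catalog/item1"))     # True: allowed

# Nothing stops a non-compliant scraper from ignoring the file entirely and
# requesting /images/ anyway -- which is exactly the problem the report describes.
```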
One solution is to use a firewall to block traffic according to certain rules. For example, to block by IP addresses, by geography, or by particular domains. Another is to offload the task of blocking to a third party. The most popular among survey respondents is Cloudflare:
One [respondent] noted that, although they can still see the bot traffic spikes in their Cloudflare dashboard, since implementing protections, none of those spikes had managed to negatively impact the system. Others appreciated the effectiveness of Cloudflare but worried that an environment of persistent bot traffic would mean they would have to rely on Cloudflare in perpetuity.
And that means paying Cloudflare in perpetuity, which for many non-profit sites is a challenge, as is simply increasing server capacity or moving to a cloud-based system – other ways of coping with surges in demand. A radically different approach to tackling AI bots is to move collections behind a login. But for many in the GLAM world, there is a big problem with this kind of shift:
the larger objection to moving works behind a login screen was philosophical. Respondents expressed concern that moving work behind a login screen, even if creating an account was free, ran counter to their collection’s mission to make their collections broadly available online. Their goal was to create an accessible collection, and adding barriers made that collection less available.
More generally, this would be a terrible move for the open Web, which has frictionless access to knowledge at its heart. Locking things down simply to keep out the AI bots would go against that core philosophy completely. It would also bolster arguments frequently made by the copyright industry that access to everything online should by default require permission.
It seems unfair that groups working for the common good are forced by the onslaught of AI bots to constantly reconfigure firewalls, pay for extra services, or undermine the openness that lies at the heart of their missions. An article on the University of North Carolina website discussing how the university’s library tackled the problem of AI bots describes an interesting alternative approach that could offer a general solution. Faced with a changing pattern of access by huge numbers of AI bots, the library brought in local tech experts:
[Associate University Librarian for Digital Strategies & Information Technology] Shearer turned to the University’s Information Technology Services, which serves the entire campus. They had never encountered an attack quite like this either, and they readily brought their security and networking teams to the table. By mid-January a powerful AI-based firewall was in place, blocking the bots while permitting legitimate searches.
Stopping just the AI bots requires spotting patterns in access traffic that distinguish them from human visitors, so that the latter can continue with their visits unimpeded. Finding patterns quickly in large quantities of data is something that modern AI is good at, so using it to filter out the constantly shifting patterns of AI bot access, by tweaking the site’s firewall rules in real time, is an effective solution. It’s also an apt one: it means that the problems AI is creating can be solved by AI itself.
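For a flavor of what that pattern-spotting looks like, here is a toy sketch of the general idea: score each client’s recent requests for bot-like behavior and turn the worst offenders into firewall deny rules. Every feature, weight, and threshold here is a made-up assumption, not a description of UNC’s system or any real product; a production version would learn those weights from labeled traffic and retrain as the bots shift tactics.

```python
# A toy sketch of the general idea: score recent requests per client for
# bot-like patterns and emit firewall block rules for the worst offenders.
# The log format, thresholds, and feature choices here are illustrative
# assumptions, not a description of any real deployment.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Request:
    client_ip: str
    path: str
    user_agent: str

def score_clients(requests, window_seconds=60):
    """Return a bot-likelihood score per client IP for one time window."""
    per_ip = defaultdict(list)
    for r in requests:
        per_ip[r.client_ip].append(r)

    scores = {}
    for ip, reqs in per_ip.items():
        rate = len(reqs) / window_seconds           # requests per second
        distinct_paths = len({r.path for r in reqs})
        breadth = distinct_paths / len(reqs)        # crawlers touch many distinct URLs
        declared_bot = any("bot" in r.user_agent.lower() for r in reqs)
        # Crude weighted combination; a real system would learn these weights
        # from labeled traffic and re-train as bot behavior shifts.
        scores[ip] = 2.0 * rate + 3.0 * breadth + (1.0 if declared_bot else 0.0)
    return scores

def block_rules(scores, threshold=5.0):
    """Turn high scores into (illustrative) firewall deny rules."""
    return [f"deny from {ip}" for ip, s in scores.items() if s >= threshold]
```

Feed it a minute’s worth of access-log entries and you get back a deny list you could push to whatever firewall you already run; the hard part, as the UNC example suggests, is updating the scoring fast enough to keep up with the bots.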
Such an AI-driven firewall management system needs to be created and then kept up to date to stay ahead of the rapidly evolving AI bot landscape. It would make a great open source project for coders and non-profits around the world to work on together, since the latter face a common problem and many have too few resources to tackle it on their own. Open source applications of the latest AI technologies are rather thin on the ground, even though most generative AI systems are based on open source code. An AI-driven firewall management system optimized for the GLAM sector would be a great place for the free software world to start remedying that.
A bill purporting to target the issue of misinformation and defamation caused by generative AI has mutated into something that could change the internet forever, harming speech and innovation from here on out.
The Nurture Originals, Foster Art and Keep Entertainment Safe (NO FAKES) Act aims to address understandable concerns about generative AI-created “replicas” by creating a broad new intellectual property right. That approach was the first mistake: rather than giving people targeted tools to protect against harmful misrepresentations—balanced against the need to protect legitimate speech such as parodies and satires—the original NO FAKES just federalized an image-licensing system.
The updated bill doubles down on that initial mistaken approach by mandating a whole new censorship infrastructure for that system, encompassing not just images but the products and services used to create them, with few safeguards against abuse.
The new version of NO FAKES requires almost every internet gatekeeper to create a system that will a) take down speech upon receipt of a notice; b) keep down any recurring instance—meaning, adopt inevitably overbroad replica filters on top of the already deeply flawed copyright filters; c) take down and filter tools that might have been used to make the image; and d) unmask the user who uploaded the material based on nothing more than the say-so of the person who was allegedly “replicated.”
This bill would be a disaster for internet speech and innovation.
Targeting Tools
The first version of NO FAKES focused on digital replicas. The new version goes further, targeting tools that can be used to produce images that aren’t authorized by the individual, anyone who owns the rights in that individual’s image, or the law. Anyone who makes, markets, or hosts such tools is on the hook. There are some limits—the tools must be primarily designed for, or have only limited commercial uses other than making unauthorized images—but those limits will offer cold comfort to developers given that they can be targeted based on nothing more than a bare allegation. These provisions effectively give rights-holders the veto power on innovation they’ve long sought in the copyright wars, based on the same tech panics.
Takedown Notices and Filter Mandate
The first version of NO FAKES set up a notice and takedown system patterned on the DMCA, with even fewer safeguards. NO FAKES expands it to cover more service providers and require those providers to not only take down targeted materials (or tools) but keep them from being uploaded in the future. In other words, adopt broad filters or lose the safe harbor.
Filters are already a huge problem when it comes to copyright, and at least in that context all a filter should be doing is flagging an upload for human review when it appears to be a wholesale copy of a work. The reality is that these systems often flag things that are similar but not the same (like two different people playing the same piece of public domain music). They also flag things as infringing based on mere seconds of a match, and they frequently fail to take into account context that would make the use authorized by law.
But copyright filters are not yet required by law. NO FAKES would create a legal mandate that will inevitably lead to hecklers’ vetoes and other forms of over-censorship.
The bill does contain carve outs for parody, satire, and commentary, but those will also be cold comfort for those who cannot afford to litigate the question.
Threats to Anonymous Speech
As currently written, NO FAKES also allows anyone to get a subpoena from a court clerk—not a judge, and without any form of proof—forcing a service to hand over identifying information about a user.
We’ve already seen abuse of a similar system in action. In copyright cases, those unhappy with the criticisms being made against them get such subpoenas to silence critics. Often the criticism includes the complainant’s own words as evidence, an ur-example of fair use. But the subpoena is issued anyway and, unless the service is incredibly on the ball, the user can be unmasked.
Not only does this chill further speech, but the unmasking itself can harm users, whether reputationally or in their personal lives.
Threats to Innovation
Most of us are very unhappy with the state of Big Tech. It seems like not only are we increasingly forced to use the tech giants, but that the quality of their services is actively degrading. By increasing the sheer amount of infrastructure a new service would need to comply with the law, NO FAKES makes it harder for any new service to challenge Big Tech. It is probably not a coincidence that some of these very giants are okay with this new version of NO FAKES.
Requiring removal of tools, apps, and services could likewise stymie innovation. For one, it would harm people using such services for otherwise lawful creativity. For another, it would discourage innovators from developing new tools. Who wants to invest in a tool or service that can be forced offline by nothing more than an allegation?
This bill is a solution in search of a problem. Just a few months ago, Congress passed Take It Down, which targeted images involving intimate or sexual content. That deeply flawed bill pressures platforms to actively monitor online speech, including speech that is presently encrypted. But if Congress is really worried about privacy harms, it should at least wait to see the effects of that last piece of internet regulation before layering on a new one. Its failure to do so makes clear that this is not about protecting victims of harmful digital replicas.
NO FAKES is designed to consolidate control over the commercial exploitation of digital images, not prevent it. Along the way, it will cause collateral damage to all of us.
Originally posted to the EFF’s Deeplinks blog, with a link to EFF’s Take Action page on the NO FAKES bill, which helps you tell your elected officials not to support this bill.
This is a combo piece: the first half is written by law student Elizabeth Grossman, giving her take on the FTC’s recent moral panic about the internet, and the second half is additional commentary and notes from her professor, Jess Miers.
The FTC is fanning the flames of a moral panic. On June 4, 2025, the Commission held a workshop called The Attention Economy: How Big Tech Firms Exploit Children and Hurt Families. I attended virtually from the second panel until the end of the day. Panelists discussed how the FTC could “help” parents, age verification as the “future,” and “what can be done outside of Washington DC.” But the workshop’s true goal was to reduce the Internet to only content approved by the Christian Right, regardless of the Constitution—or the citizens of the United States.
Claim #1: The FTC Should Prevent Minors From Using App Stores and Support Age Verification Laws
FTC panelists argued that because minors lack the legal capacity to contract, app stores must obtain parental consent before allowing them to create accounts or access services. That, in turn, requires age verification to determine who is eligible. This contractual framing isn’t new—but it attempts to sidestep a well-established constitutional concern: that mandatory age verification can burden access to lawful speech. In Brown v. Entertainment Merchants Association, the Supreme Court reaffirmed minors’ rights to access protected content, while Reno v. ACLU struck down ID requirements that chilled adult access to speech. Today, state-level attempts to mandate age verification across the Internet have repeatedly failed on First Amendment grounds.
But by recasting the issue as a matter of contract formation rather than speech, proponents seek to sidestep those constitutional questions. This is the same argument at the heart of Free Speech Coalition v. Paxton, a case the FTC appears to be watching closely. FTC staff repeatedly described a ruling in favor of Texas as a “good ruling,” while suggesting a decision siding with the Free Speech Coalition would run “against” the agency’s interests. The case challenges Texas’ H.B. 1181, which mandates age verification for adult content sites.
The FTC now insists that age verification isn’t about restricting access to content, but about ensuring platforms only contract with legal adults. But this rationale collapses under scrutiny. Minors can enter into contracts—the legal question is whether and when they can disaffirm them. The broader fallacy about minors’ contractual incapacity aside, courts have repeatedly rejected similar logic. Most recently, NetChoice v. Yost reaffirmed that age verification mandates can still violate the First Amendment, no matter how creatively they’re framed. In other words, there is no contract law exception to the First Amendment.
Claim #2: Chatbots Are Dangerous To Minors
The panel’s concerns over minors using chatbots to access adult content felt like a reboot of the violent video game panic. Jake Denton, Chief Technology Officer of the FTC, delivered an unsubstantiated tirade about an Elsa-themed chatbot allegedly engaging in sexual conversations with children, but offered no evidence to support the claim. In practice, inappropriate outputs from chatbots like those on Character.AI generally occur only when users—minors or adults—intentionally steer the conversation in that direction. Even then, the platform enforces clear usage policies and deploys guardrails to keep bots within fictional contexts and prevent unintended interactions.
Yes, teens will test boundaries, as they always have, but that doesn’t eliminate their constitutional rights. As the Supreme Court held in Brown v. Entertainment Merchants Association, minors have a protected right to access legal expressive content. Then, it was video games. Today, it’s chatbots.
FTC Commissioner Melissa Holyoak adopted a more cautious tone, suggesting further study before regulation. But even then, the agency failed to offer meaningful evidence that chatbots pose widespread or novel harm to justify sweeping intervention.
Claim #3: Pornography is Not Protected Speech
Several panelists called for pornography to be stripped of First Amendment protection and for online pornography providers to be denied Section 230 immunity. Joseph Kohm, of Family Policy Alliance, in particular, delivered a barrage of inflammatory claims, including: “No one can tell me with any seriousness that the Founders had pornography in mind […] those cases were wrongly decided. We can chip away […] it is harmful.” He added that “right-minded people have been looking for pushback against the influence of technology and pornography,” and went so far as to accuse unnamed “elites” of wanting children to access pornography, without offering a shred of evidence.
Of course, pornography predates the Constitution, and the Founders drafted the First Amendment to forbid the government from regulating speech, not just the speech it finds moral or comfortable. Courts have consistently held that pornography, including online adult content, is protected expression under the First Amendment. Whether panelists find that inconvenient or not, it is not the FTC’s role to re-litigate settled constitutional precedent, much less redraw the boundaries of our most fundamental rights.
During the final panel, Dr. Mehan said that pornography “is nothing to do with the glorious right of speech and we have to get the slowest of us, i.e. judges to see it as well.” He manages to disrespect a profession he is not a part of and misunderstand the law in one fell swoop. He also said “boys are lustful” because of pornography and “girls are vain” because of social media. Blatant misogyny aside, it’s absurd to blame social media for “lust” and “vanity” (after all, Shakespeare was writing about both long before XXX videos and Instagram), and even if it weren’t, teenage lust is not a problem for the government to solve.
Panelist Terry Schilling from the American Principles Project—known for his vehemently anti-LGBT positions—called for stripping Section 230 protections from pornography sites that fail to implement age verification. As discussed, the proposal not only contradicts longstanding First Amendment precedent but also reveals a fundamental misunderstanding of what Section 230 does and whom it protects.
Bizarrely, Dr. Mehan also claimed that “Powerpoints are ruining the humanities.” And he compared online or text communication to home invasion: if his daughter were talking on the phone to a boy at 11 o’clock at night, he said, that boy would be invading his home.
This alarmist narrative ignores both the many benefits of Internet access for minors and the real harms of cutting them off. For young people, especially LGBTQ youth in unsupportive environments or those with niche interests, online spaces can be essential sources of community, affirmation, and safety. Just as importantly, not all parents share the same values or concerns as the government (or Dr. Mehan). It is the role of parents, not the government, to decide when and how their children engage with the Internet.
In the same vein, the Court in NetChoice v. Uthmeier rejected the idea that minors are just “mere people-in-waiting,” affirming their full participation in democracy as “citizens-in-training.” The ruling makes clear that social media access is a constitutional right, and attempts to strip minors of First Amendment protections are nothing more than censorship disguised as “safety.”
Conclusion
The rhetoric at this event mirrored the early pages of Project 2025, pushing for the outright criminalization of pornography and a fundamental rewrite of Section 230. Speakers wrapped their agenda in the familiar slogan of “protecting the kids,” bringing up big right-wing talking points like transgender youth in sports and harping on good old family values—all while advocating for sweeping government control over the Internet.
This movement is not about safety. It is about power. It seeks to dictate who can speak, what information is accessible, and whose identities are deemed acceptable online. The push for broad government oversight and censorship undercuts constitutional protections not just for adults, but for minors seeking autonomy in digital spaces. These policies could strip LGBTQ youth in restrictive households of the only communities where they feel safe, understood, and free to exist as themselves.
This campaign is insidious. If successful, it won’t just reshape the Internet. It will undermine free speech, strip away digital anonymity, and force every American to comply with a singular, state-approved version of “family values.”
The First Amendment exists to prevent exactly this kind of authoritarian overreach. The FTC should remember that.
Elizabeth Grossman is a first-year law student in the Intellectual Property program at the University of Akron School of Law, with the goal of working in tech policy.
Prof. Jess Miers’ Comments
Elizabeth’s summary makes it painfully clear: this wasn’t a serious workshop run by credible experts in technology law or policy. The title alone, “How Big Tech Firms Exploit Children and Hurt Families,” telegraphed the FTC’s predetermined stance and signaled a disinterest in genuine academic inquiry. More tellingly, the invocation of “families” serves as a dog whistle, gesturing toward the narrow, heteronormative ideals typically championed by the religious Right: white, patriarchal, Christian, and straight. The FTC may not say the quiet part out loud, but it doesn’t have to.
Worse still, most of the invited speakers weren’t experts in the topics they were pontificating on. At best, they’re activists. At worst, they’re ideologues—people with deeply partisan agendas who have no business advising a federal agency, let alone shaping national tech policy.
Just a few additional observations from me.
Chair Ferguson opened by claiming the Internet was a “fundamentally different place” 25 years ago, reminiscing about AOL Instant Messenger, Myspace Tom, and using a family computer his parents could monitor. The implication: the Internet was safer back then, and parents had more control. As someone who also grew up in that era, I can’t relate.
I, too, had a family computer in the living room and tech-savvy parents. It didn’t stop me from stumbling into adult AOL chatrooms, graphic porn, or violent videos, often unintentionally. I remember the pings of AIM just as vividly as the cyberbullying on Myspace and anonymous cruelty on Formspring. Parental controls were flimsy, easy to bypass, and rarely effective. My parents tried, but the tools of the time simply weren’t up to the task. The battle over my Internet use was constant, and my experience was hardly unique.
Still, even then, the Internet offered real value, especially for a queer kid who moved often and struggled to make “IRL” friends. But it also forced me to grow up fast in ways today’s youth are better shielded from. Parents now have far more effective tools to manage what their kids see and who they interact with. And online services have a robust toolbox for handling harmful content, not just because advertisers demand it, but thanks to Section 230, a uniquely forward-thinking law that encourages cleanup efforts. It built safety into the system before “trust and safety” became a buzzword. Contrary to Mark Meador’s baseless claims, that result was precisely its authors’ intent.
A more serious conversation would focus on what we’ve learned and how the FTC can build on that progress to support a safer Internet for everyone, rather than undermining it.
That aside, what baffles me most about these “protect the kids” conversations, which almost always turn out to be about restricting adults’ access to disfavored content, is how the supposed solution is more surveillance of children. The very services the FTC loves to criticize are being told to collect more sensitive information about minors—biometrics, ID verification, detailed behavioral tracking—to keep them “safe.” But as Eric Goldman and many other scholars who were notably absent from the workshop have extensively documented, there is no current method of age verification that doesn’t come at the expense of privacy, security, and anonymity for both youth and adults.
A discussion that ignores these documented harms, that fails to engage with the actual expert consensus around digital safety and privacy, is not a serious discussion about protecting kids.
Which is why I find it especially troubling that groups positioning themselves as privacy champions are treating this workshop as credible. In particular, IAPP’s suggestion that the FTC laid the groundwork for “improving” youth safety online is deeply disappointing. Even setting aside the numerous privacy issues associated with age verification, does the IAPP really believe that a digital ecosystem shaped by the ideological goals of these panelists will be an improvement for kids, especially those most in need of support? For queer youth, for kids in intolerant households, for those seeking information about reproductive health or gender-affirming care?
This workshop made the FTC’s agenda unmistakable. They’re not pursuing a safer Internet for kids. As Elizabeth said, the FTC is pushing a Christian nationalist vision of the web, built on censorship and surveillance, with children as the excuse and the collateral.
In 1872 California enacted a law declaring that “every one who offers to the public to carry persons, property, or messages, excepting only telegraphic messages, is a common carrier of whatever he thus offers to carry.” In 2022 the Republican National Committee sued Google, alleging that, by shunting GOP fundraising emails into Gmail spam folders, it had violated this 150-year-old common-carrier law. A federal district court dismissed the complaint. The RNC took the case to the U.S. Court of Appeals for the Ninth Circuit, where last month it submitted its opening brief.
I’m a firm believer in the value of “show, don’t tell” as a principle of writing, and I will address the RNC’s legal arguments in due course. But this is a rare instance where it’s probably best simply to announce up front that what’s happening here is stupid and insane. I’m never happy when lawyers try to redesign via lawsuit complex systems they don’t understand. But this one is jaw-dropping. Why not let some law that governed nineteenth-century blacksmiths dictate how we build rockets? Can someone dig up a decree setting standards for sixteenth-century door locks? Might be useful in a suit against a cybersecurity firm.
“It is revolting,” Oliver Wendell Holmes wrote, “to have no better reason for a rule of law than that so it was laid down in the time of Henry IV.” And “it is still more revolting,” he went on, “if the grounds upon which it was laid down have vanished long since.” You could object that Holmes over-rotated on his disdain for tradition, and I would agree with you. But the GOP’s attempt to make email conform to a statute crafted for coaches, trains, and ferry boats could indeed be called “revolting”—I might go with “demented”—and for essentially the reason Holmes cites: the grounds upon which California’s ancient common-carrier law was laid down have vanished.
There is a reason why digital technologies are thought to have brought on an “Information Age.” When a pair of researchers at UC Berkeley tried twenty-five years ago to measure all the information in the world, they estimated that “printed documents of all kinds comprised only .003% of the total.” That trend has only accelerated. Something like ninety percent of the data that exists today was created in the last three or four years.
To send a letter in California in 1872, you had to buy paper and ink, write words on the paper by hand or print them with a machine, and pay for a massive postal apparatus—clerks, conductors, drivers, engines, cars, coaches, horses, mules, and more—to carry the paper from one place to another. Today, by contrast, the marginal cost of distributing information is nearly zero. Anyone with a computer and an internet connection can create and send virtually unlimited copies of an email free of charge. (Because partisan fundraising emails are full of formulaic slop, not even content creation ought to cost the GOP much.)
“Mail” and “email” sound like they must be very similar, but they aren’t. Mail has a cost; email essentially does not; and that makes all the difference. This has been clear from the start. In 1984 a computer scientist named Jacob Palme noticed that “an electronic mail system can, if used by many people, cause severe information overload.” The “cause of this problem,” he explained, is that “it is so easy to send a message to a large number of people”—the sender has “too much control of the communication process.” As a result, “people get too many messages” and “the really important messages are difficult to find in a large flow of less important messages.” Palme proceeded to sketch a system of recipient-side message controls that looks remarkably like contemporary spam-filtering.
Another distinction is that an email service, unlike a mail service, does not “carry” messages for you. Your missives travel through an internet service provider, a domain-name-system server, and internet backbone providers, then into a recipient email service. Your email service is just an internet edge provider. It’s not like the stage company in the 1870s, carrying your letter from station to station; it’s like a secretary in the 1940s, making sure your letter goes into the right outbox. The RNC responds that California’s 1872 law reaches anyone who “offers” to carry things. The GOP sees Gmail as a carrier, the argument effectively runs, so a court should overlook the GOP’s ignorance of (and their lawyers’ refusal to accept) how the internet actually works. But this just brings us back to Jacob Palme’s prescient concern. Google “offers” not to carry stuff for you, but to sort it for you. It offers to separate the really important messages from the less important ones. Filtering spam is the heart of its service, as shown by its boast that Gmail “blocks 99.9%” of it.
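To see the handoff step concretely, here is a minimal sketch, assuming the third-party dnspython package: the sending side asks public DNS which host accepts mail for the recipient’s domain, and SMTP relaying does the carrying from there. The sorting of what arrives into inbox or spam is a separate, later step, and that is the part the mailbox provider actually “offers.”

```python
# A minimal sketch of the "who actually carries the message" point, assuming the
# third-party dnspython package (pip install dnspython). The sending side asks
# public DNS which host accepts mail for the recipient's domain; the transport
# from there is SMTP relaying across networks the mailbox provider doesn't run.
import dns.resolver

def mail_handoff_hosts(domain: str) -> list[str]:
    """Look up the MX records that say where mail for `domain` gets handed off."""
    answers = dns.resolver.resolve(domain, "MX")
    return [str(r.exchange).rstrip(".") for r in sorted(answers, key=lambda r: r.preference)]

# e.g. mail_handoff_hosts("example.com") might return something like ["mail.example.com"].
# The mailbox provider's distinctive work begins only after the handoff:
# deciding which folder (inbox, promotions, spam) a delivered message lands in.
```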
The 1872 law does not cover telegraphy, the most advanced communication method of the time. Nor did California simply invoke the 1872 law when it recently decided to impose common-carrier mandates on ISPs; it passed a distinct law instead. Contrary to the RNC’s claims, therefore, the 1872 law is not some ever-evolving statute, always rushing out in front of great leaps in technology. That Gmail does not “carry” messages is no trifling detail. That spam-filtering is integral to what Google “offers” is not a mere technicality. These are material facts that take Gmail outside the scope of California’s nineteenth-century common-carrier law. (When they saw how much that law cares about things like “schedule[d] time[s] for the starting of trains or vessel[s] from their respective stations or wharves,” the RNC’s lawyers should have admitted defeat and shelved their complaint. But here we are.)
Not surprisingly, the RNC wants to duck responsibility for trying to break your spam filter. Its solution is to contend that common carriers are allowed to filter spam, but that the RNC’s emails are not spam, and that Google has treated them as spam in bad faith.
The RNC’s emails are spam. Their tone would make a used-car salesman blush—“URGENT . . . Patriot, 20X matching EXPIRING SOON”—and they’ve been known to swarm inboxes by the dozen each day. I’ve written elsewhere about what rotten spammy spam they are; I won’t rehash here how they look like legalized elder abuse. The RNC’s lawsuit was dismissed on the pleadings, so we’re stuck, for now, having to take the accusation of bad faith more or less at face value.
But that still leaves the RNC’s assumption that a common carrier is allowed to “filter some . . . spam-related expression.” The RNC plucks those words from Judge Andy Oldham’s opinion in NetChoice v. Paxton (5th Cir. 2022), without acknowledging that, in the part of the opinion they’re quoting, Judge Oldham was writing for himself alone. (Never mind that the whole opinion was also blown to pieces and vacated by the Supreme Court.) Go to that part of the opinion, moreover, and you will find no citation, drawn from the hoary common-carrier cases, for this supposed rule about common carriers and spam—an unavoidable omission, since spam came into its own only with recent technological developments. The 1872 law has nothing to say about spam: it demands that a common carrier “accept and carry whatever is offered to him . . . of a kind that he is accustomed to carry.” Maybe a court could cram a spam exception into that “accustomed to carry” bit, but that new rule would bear no connection to what the common carriers of old did (they’d never heard of “spam”). Rather, it would be cut by judges from whole cloth, and it would place on them the task of drawing from scratch a comprehensive set of lines separating “spam” and “non-spam.” Judges would be anointing themselves the arbiters of Jacob Palme’s distinction between important and unimportant messages.
Google has no magic spam sorting wand. For that matter, email does not arrive in neat “spam” and “non-spam” categories. Email comes in a terrific array of gradations between those two poles, and Google uses a variety of signals—e.g., the sender’s message cadence, the recipient’s reading habits, the presence of certain trigger words—to determine which emails cross the line and fall into the spam folder. It’s a game of cat and mouse, with spammers constantly deploying new strategies to evade Google’s filters, and Google constantly adjusting and filling gaps in its process. The RNC’s new strategy is to file a lawsuit in hopes of evading Google’s filters with the help of a court. It’s a strategy with immense upside potential: if the RNC succeeds, Google will be unable to adjust; the RNC will possess a ticket to pass through Google’s spam defenses indefinitely. This is a great prize the RNC covets, and it is important for the Ninth Circuit to understand that many other entities, too, would go to great lengths to win it. If the RNC succeeds, things will not end there. A mob of other spammers will pile into the litigation strategy of spam-filter evasion.
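To get a feel for why spam classification is a matter of weighing gradations rather than applying a bright-line rule, here is an illustrative sketch of a multi-signal scorer. It is emphatically not Google’s system; every signal, weight, and threshold below is a made-up assumption. The point is that the score and the cutoff have to keep moving as senders adapt, which is exactly what a court-ordered pass would freeze in place.

```python
# An illustrative sketch (not Google's actual system) of how several soft
# signals can combine into a graded spam score. Every weight, signal, and
# threshold below is a made-up assumption.
from dataclasses import dataclass

@dataclass
class EmailFeatures:
    trigger_word_hits: int      # e.g. "URGENT", "20X match", "EXPIRING SOON"
    sender_daily_volume: int    # messages this sender blasts to this recipient per day
    recipient_open_rate: float  # fraction of this sender's past mail the recipient opened

def spam_score(f: EmailFeatures) -> float:
    """Higher means more spam-like; there is no bright line, only gradations."""
    score = 0.0
    score += 0.8 * f.trigger_word_hits
    score += 0.3 * f.sender_daily_volume
    score += 2.0 * (1.0 - f.recipient_open_rate)   # ignored senders look spammier
    return score

def folder_for(f: EmailFeatures, threshold: float = 5.0) -> str:
    # The threshold itself has to keep moving as senders adapt; freezing it by
    # court order is what hands spammers a permanent pass.
    return "spam" if spam_score(f) >= threshold else "inbox"

# A dozen daily fundraising blasts stuffed with trigger words, rarely opened:
print(folder_for(EmailFeatures(trigger_word_hits=4, sender_daily_volume=12, recipient_open_rate=0.02)))  # spam
# An occasional newsletter the recipient usually reads:
print(folder_for(EmailFeatures(trigger_word_hits=0, sender_daily_volume=1, recipient_open_rate=0.9)))    # inbox
```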
The issue here is not only that judges would be no good at second-guessing email services’ spam-filtering decisions—though that is of course true. It is also that, faced with the burden and expense of litigating their spam-filtering decisions, email services would likely opt simply to block much less spam.
Don’t take my word for it: the FCC has said as much with regard to text-messaging. Various groups for years urged the FCC to subject text-messaging services to common-carrier rules under the Communications Act of 1934. In 2018 the agency, at the time—and I cannot stress this enough—under Republican control, issued an order declining to do so. Although it had to start by explaining why text-messaging services aren’t common carriers under the somewhat arcane standards set forth in the Communications Act, the FCC devoted most of its energy to protesting that common-carrier rules for text-messaging are just a dumb idea. Why? Because they’d stop text-messaging services from blocking spam.
The FCC “disagree[d] with commenters that [common-carrier rules] would not limit providers’ ability to prevent spam . . . from reaching customers.” Tellingly, some of those commenters were purveyors of “mass-text[s],” who were seeking “to leverage the common carriage [rules] to stop wireless providers from . . . incorporating robotext-blocking, anti-spoofing measures, and other anti-spam features into their offerings.” With common-carrier rules in place, those “spammers” would be free, the agency concluded (quoting a trade group), to “bring endless challenges to filtering practices” and destroy services’ ability to “address evolving threats.” Ultimately, common-carrier rules would “open the floodgates to unwanted messages—drowning consumers in spam at precisely the moment when their tolerance for such messages is at an all-time low.”
The FCC’s 2018 order knocks down two of the main points raised by the RNC today. First, the RNC claims, as we’ve seen, that common-carrier requirements and spam-filtering policies are compatible. Looking, however, at telephone services—quintessential common carriers—the FCC concluded otherwise. The agency had “generally found call blocking by providers to be unlawful, and typically permit[ted] it only in specific, well-defined circumstances.” Hence the FCC’s belief that common-carriage status for text messages would lead to a flood of spam.
Second, the RNC treats Gmail as a “market-dominant” service capable of “systematically chok[ing] off one major political party’s” fundraising emails. But as the FCC observed, communications “providers have every incentive to ensure the delivery of messages that customers want to receive in order to . . . retain consumer loyalty.” Services that over-filter messages “risk losing th[eir] customers” to competitors. This market mechanism is, if anything, stronger in the context of email than in the context of text messages, as it is far easier to set up an email service than to enter the wireless industry. As Justice Clarence Thomas notes, “No small group of people controls e-mail”—its “protocol” is “decentralized.” (That’s right: Thomas is an outspoken proponent of common-carrier rules for social media, and even he seems to understand that such rules make no sense for email.)
By far the most plausible explanation for why the GOP’s emails landed in Gmail spam folders is that the GOP dishes out tons of spam. It would be nice if the Ninth Circuit could cut to the chase and say so. (This would have the added benefit of cleanly slicing through other legal arguments the RNC raises, in addition to its common-carrier argument.) Given the case’s posture (again, the lawsuit was dismissed on the pleadings), the court probably won’t do that. Google will have to retreat to the more subtle, but no less critical, matter of who is to judge what qualifies as spam. Should we leave it to competing email services to make these calls? Or are we better off if any disgruntled third party can throw such decisions into the courts? This is not a hard one. The Ninth Circuit should make clear that it wants nothing to do with email product design and managing your inbox. Along the way, maybe it can pause to mock the RNC’s revolting use, in a case about the internet, of a law fit for horses and steam engines.
Corbin K. Barthold is Internet Policy Counsel at TechFreedom.
We keep pointing out that, contrary to the uninformed opinion of lawmakers across both major parties, laws that require age verification are clearly unconstitutional.
Such laws have been tossed out everywhere as unconstitutional, except in Texas (and even then, the district court got it right, and only the 5th Circuit is confused). And yet, we hear about another state passing an age verification law basically every week. And this isn’t a partisan/culture war thing, either. Red states, blue states, purple states: doesn’t matter. All seem to be exploring unconstitutional age verification laws.
Indiana came up with one last year, which targeted adult content sites specifically. And, yes, there are perfectly good arguments that kids should not have access to pornographic content. However, the Constitution does not allow for any such restriction to be done in a sloppy manner that is both ineffective at stopping kids and likely to block protected speech. And yet, that’s what every age-gating law does. The key point is that there are other ways to restrict kids’ access to porn, rather than age-gating everything. But they often involve this thing called parenting.
The court starts out by highlighting that geolocation is an extraordinarily inexact science, which is a problem given that the law requires adult content sites to determine when visitors are from Indiana and to age-verify them.
But there is a problem: a computer’s IP address is not like a return address on an envelope because an IP address is not inherently tied to any location in the real world but consists of a unique string of numbers written by the Internet Service Provider for a large geographic area. (See id. ¶¶ 12–13). This means that when a user connects to a website, the website will only know the user is in a circle with a radius of 60 miles. (Id. ¶ 14). Thus, if a user near Springfield, Massachusetts, were to connect to a website, the user might be appearing to connect from neighboring New York, Connecticut, Rhode Island, New Hampshire, or Vermont. (Id.). And a user from Evansville, Indiana, may appear to be connecting from Illinois or Kentucky. The ability to determine where a user is connecting from is even weaker when using a phone with a large phone carrier such as Verizon with error margins up to 1,420 miles. (Id. ¶¶ 16, 19). Companies specializing in IP address geolocation explain the accuracy of determining someone’s state from their IP address is between 55% and 80%. (Id. ¶ 17). Internet Service Providers also continually change a user’s IP address over the course of the day, which can make a user appear from different states at random.
Also, users can hide their real IP address in various ways:
Even when the tracking of an IP address is accurate, however, internet users have myriad ways to disguise their IP address to appear as if they are located in another state. (Id. ¶ B (“Website users can appear to be anywhere in the world they would like to be.”)). For example, when a user connects to a proxy server, they can use the proxy server’s IP address instead of their own (somewhat like having a PO box in another state). (Id. ¶ 22). ProxyScrape, a free service, allows users to pretend to be in 129 different countries for no charge. (Id.). Virtual Private Network (“VPN”) technology allows something similar by hiding the user’s IP address to replace it with a fake one from somewhere else.
All these methods are free or cheap and easy to use. (Id. ¶¶ 21–28). Some even allow users to access the dark web with just a download. (Id. ¶ 21). One program, TOR, is specifically designed to be as easy to use as possible to ensure as many people can be as anonymous as possible. (Id.). It is so powerful that it can circumvent Chinese censors.
The reference to “Chinese censors” is a bit weird, but okay, point made: if people don’t want to appear to be connecting from Indiana, they can easily avoid it.
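If you want to see just how little an IP address tells you near a state line, here is a back-of-the-envelope check using approximate coordinates and the standard haversine distance: with the roughly 60-mile error radius the court describes, a connection geolocated to Evansville is indistinguishable from one in Kentucky or Illinois.

```python
# A back-of-the-envelope check of the court's point: with a ~60-mile accuracy
# radius, an IP geolocated to Evansville, Indiana could just as easily be a
# user in Kentucky or Illinois. Coordinates are approximate.
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(a))   # 3959 ~= Earth's radius in miles

estimate = ("Evansville, IN", 37.97, -87.57)   # where the IP appears to be
candidates = [
    ("Henderson, KY",    37.84, -87.59),
    ("Mount Carmel, IL", 38.41, -87.76),
]

ACCURACY_RADIUS_MILES = 60
for name, lat, lon in candidates:
    d = haversine_miles(estimate[1], estimate[2], lat, lon)
    print(f"{name}: {d:.0f} miles away -> {'inside' if d <= ACCURACY_RADIUS_MILES else 'outside'} the error circle")
# Both neighbors fall well inside the circle, so the IP alone cannot say
# which state's law applies.
```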
The court also realizes that just blocking adult content websites won’t block access to other sources of porn. The ruling probably violates a bunch of proposed laws against content that is “harmful to minors” by telling kids how to find porn:
Other workarounds include torrents, where someone can connect directly to another computer—rather than interacting with a website—to download pornography. (Id. ¶ 29). As before, this is free. (Id.). Minors could also just search terms like “hot sex” on search engines like Bing or Google without verifying their age. (Id. ¶ 32–33). While these engines automatically blur content to start, (Glogoza Decl. ¶¶ 5–6), users can simply click a button turning off “safe search” to reveal pornographic images, (Sonnier Decl. ¶ 32). Or a minor could make use of mixed content websites below the 1/3 mark like Reddit and Facebook
And thus, problem number one with age verification: it’s not going to be even remotely effective for achieving the policy goals being sought here.
With this background, it is easy to see why age verification requirements are ineffective at preventing minors from viewing obscene content. (See id. ¶¶ 14–34 (discussing all the ways minors could bypass age verification requirements)). The Attorney General submits no evidence suggesting that age verification is effective at preventing minors from accessing obscene content; one source submitted by the Attorney General suggests there must be an “investigation” into the effectiveness of preventive methods, “such as age verification tools.”
And that matters. Again, even if you agree with the policy goals, you should recognize that putting in place an ineffective regulatory regime that is easily bypassed is not at all helpful, especially given that it might also restrict speech for non-minors.
Unlike the 5th Circuit, this district court in Indiana understands the precedents related to this issue and knows that Ashcroft v. ACLU already dealt with the main issue at play in this case:
In the case most like the one here, the Supreme Court affirmed the preliminary enjoinment of the Child Online Protection Act. See Ashcroft II, 542 U.S. at 660–61. That statute imposed penalties on websites that posted content that was “harmful to minors” for “commercial purposes” unless those websites “requir[ed the] use of a credit card” or “any other reasonable measures that are feasible under available technology” to restrict the prohibited materials to adults. 47 U.S.C. § 231(a)(1). The Supreme Court noted that such a scheme failed to clear the applicable strict scrutiny bar. Ashcroft II, 542 U.S. at 665–66 (applying strict scrutiny test). That was because the regulations were not particularly effective as it was easy for minors to get around the requirements, id. at 667–68, and failed to consider less restrictive alternatives that would have been equally effective such as filtering and blocking software, id. at 668–69 (discussing filtering and blocking software). All of that is equally true here, which is sufficient to resolve this case against the Attorney General.
Indiana’s Attorney General points to the 5th Circuit ruling that tries to ignore Ashcroft, but the judge here is too smart for that. He knows he’s bound by the Supreme Court, not whatever version of Calvinball the 5th Circuit is playing:
Instead of applying strict scrutiny as directed by the Supreme Court, the Fifth Circuit applied rational basis scrutiny under Ginsberg v. New York, 390 U.S. 629 (1968), even though the Supreme Court explained how Ginsberg was inapplicable to these types of cases in Reno, 521 U.S. at 865–66. The Attorney General argues this court should follow that analysis and apply rational basis scrutiny under Ginsberg.
However, this court is bound by Ashcroft II. See Agostini v. Felton, 521 U.S. 203, 237–38 (1997) (explaining lower courts “should follow the case which directly controls”). To be sure, Ashcroft II involved using credit cards, and Indiana’s statute requires using a driver’s license or third-party identification software. But as discussed below, this is not sufficient to take the Act beyond the strictures of strict scrutiny, nor enough to materially advance Indiana’s compelling interest, nor adequate to tailor the Act to the least restrictive means.
And thus, strict scrutiny must apply, unlike in the 5th Circuit, and this law can’t pass that bar.
Among other things, the age verification in this law doesn’t just apply to material that is obscene to minors:
The age verification requirements do not just apply to obscene content and also burden a significant amount of protected speech for two reasons. First, Indiana’s statute slips from the constitutional definition of obscenity and covers more material than considered by the Miller test. This issue occurs with the third prong of Indiana’s “material harmful to minors” definition, where it describes the harmful material as “patently offensive” based on “what is suitable matter for . . . minors.” Ind. Code § 35-49-2-2. It is well established that what may be acceptable for adults may still be deleterious (and subject to restriction) to minors. Ginsberg, 390 U.S. at 637 (holding that minors “have a more restricted right than that assured to adults to judge and determine for themselves what sex material they may read or see”); cf. ACLU v. Ashcroft, 322 F.3d 240, 268 (3d Cir. 2003) (explaining the offensiveness of materials to minors changes based on their age such that “sex education materials may have ‘serious value’ for . . . sixteen-year-olds” but be “without ‘serious value’ for children aged, say, ten to thirteen”), aff’d sub nom. in relevant part, 542 U.S. 656 (2004). Put differently, materials unsuitable for minors may not be obscene under the strictures of Miller, meaning the statute places burdens on speech that is constitutionally protected but not appropriate for children.
Also, even if the government has a compelling interest in protecting kids from adult content, this law doesn’t actually do a good job of that:
To be sure, protecting minors from viewing obscene material is a compelling interest; the Act just fails to further that interest in the constitutionally required way because it is wildly underinclusive when judged against that interest. “[A] law cannot be regarded as protecting an interest ‘of the highest order’ . . . when it leaves appreciable damage to that supposedly vital interest unprohibited.” …
The court makes it clear how feeble this law is:
To Indiana’s legislature, the materials harmful to minors are not so rugged that the State believes they should be unavailable to adults, nor so mentally debilitating to a child’s mind that they should be completely inaccessible to children. The Act does not function as a blanket ban of these materials, nor ban minors from accessing these materials, nor impose identification requirements on everybody displaying obscene content. Instead, it only circumscribes the conduct of websites who have a critical mass of adult material, whether they are currently displaying that content to a minor or not. Indeed, minors can freely access obscene material simply by searching that material in a search engine and turning off the blur feature. (Id. ¶¶ 31–33). Indiana’s legislature is perfectly willing “to leave this dangerous, mind-altering material in the hands of children” so long as the children receive that content from Google, Bing, any newspaper, Facebook, Reddit, or the multitude of other websites not covered.
The court also points out how silly it is that the law only applies to sites with a high enough share (33%) of adult content. If the goal is to block kids’ access to porn, that’s a stupid way to go about it. Indeed, the court effectively notes that a website could get around the ban just by adding a bunch of non-adult images.
The Attorney General has not even attempted to meet its burden to explain why this speaker discrimination is necessary to or supportive of its compelling interest; why is it that a website that contains 32% pornographic material is not as deleterious to a minor as a website that contains 33% pornographic material? And why does publishing news allow a website to display as many adult-images as it desires without needing to verify the user is an adult? Indeed, the Attorney General has not submitted any evidence suggesting age verification would prohibit a single minor from viewing harmful materials, even though he bears the burden of demonstrating the effectiveness of the statute. Ultimately, the Act favors certain speakers over others by selectively imposing the age verification burdens. “This the State cannot do.” Sorrell v. IMS Health Inc., 564 U.S. 552, 580 (2011). The Act is likely unconstitutional.
In a footnote, the judge highlights an even dumber part of the law: the 33% threshold is based on the percentage of images, not a site’s overall content. He gives a hypothetical of a site that would be required to age-gate:
Consider a blog that discusses new legislation the author would like to see passed. It contains hundreds of posts discussing these proposals. The blog does not include images save one exception: attached to a proposal suggesting the legislature should provide better sexual health resources to adult-entertainment performers is a picture of an adult-entertainer striking a raunchy pose. Even though 99% of the blog is core political speech, adults would be unable to access the website unless they provide identification because the age verification provisions do not trigger based on the amount of total adult content on the website, but rather based on the percentage of images (no matter how much text content there is) that contain material harmful to minors.
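To spell out the arithmetic in that hypothetical (a rough sketch with made-up numbers, nothing from the opinion or the statute itself): because the trigger is computed over images only, one raunchy picture on an otherwise text-only blog clears the bar.

```python
# Rough illustration of the court's footnote hypothetical, with made-up numbers.
# The one-third trigger is (as the court reads the Act) computed over images,
# not over the site's content as a whole.

text_posts = 300     # hundreds of image-free political posts (hypothetical figure)
total_images = 1     # the blog's single image...
adult_images = 1     # ...which happens to be "material harmful to minors"

share_of_images = adult_images / total_images                      # 1.0 -> 100%
share_of_all_content = adult_images / (text_posts + total_images)  # ~0.33%

must_age_gate = share_of_images >= 1 / 3
print(f"adult share of images: {share_of_images:.0%}")             # 100%
print(f"adult share of all content: {share_of_all_content:.2%}")   # 0.33%
print(f"age verification required: {must_age_gate}")               # True
```

One image out of one is 100% of the site’s imagery, so the whole blog, core political speech and all, goes behind an ID check.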
The court suggests some alternatives to this law, from requiring age verification for accessing any adult content (though it notes that’s also probably unconstitutional, even if less restrictive) to having the state offer free filtering and blocking tools that parents can use for their kids:
Indiana could make freely available and/or require the use of filtering and blocking technology on minors’ devices. This is a superior alternative. (Sonnier Decl. ¶ 47 (“Internet content filtering is a superior alternative to Internet age verification.”); see also Allen Decl. ¶¶ 38–39 (not disputing that content filtering is superior to age verification as “[t]he Plaintiff’s claim makes a number of correct positive assertions about content filtering technology” but noting “[t]here is no reason why both content filtering and age verification could not be deployed either consecutively or concurrently”)). That is true for the reasons discussed in the background section: filtering and blocking software is more accurate in identifying and blocking adult content, more difficult to circumvent, allows parents a place to participate in the rearing of their children, and imposes fewer costs on third-party websites.
And thus, because the law is pretty obviously unconstitutional, the judge grants the injunction, blocking the law from going into effect. Indiana will almost certainly appeal, and we’ll have to keep going through this nonsense over and over again.
Thankfully, Indiana is in the 7th Circuit, not the 5th, so there’s at least somewhat less of a chance for pure nuttery on appeal.
Various states and the federal government are proposing and passing a wide variety of “kid safety” laws. Almost all of them pretend they’re about the conduct of social media sites and not about the content on them, but when you boil down the underlying concerns, they all end up actually being about the content.
There are demands for age verification and for blocking certain kinds of “harmful” content. This content often includes things like pornography or other sexual content, as well as content about self-harm or eating disorders.
We keep trying to explain to people who support these laws that stopping such content is not as easy as they think. Indeed, attempts at removing eating disorder content have often resulted in more harm, not less. That’s because there is user demand for the content, and users start seeking it out in darker corners of the internet, rather than on the major sites, where other users and the sites themselves are more likely to intervene and guide people toward resources that help with recovery.
A new article from The Markup highlights how schools are discovering just how difficult it is to stop “dangerous” content online, and how their default is to just completely block sites — including tons of sites that are actually important and useful in helping kids. In some cases, the blocks stem from schools trying to comply with CIPA, the Children’s Internet Protection Act of 2000.
A middle school student in Missouri had trouble collecting images of people’s eyes for an art project. An elementary schooler in the same district couldn’t access a picture of record-breaking sprinter Florence Griffith Joyner to add to a writing assignment. A high school junior couldn’t read analyses of the Greek classic “The Odyssey” for her language arts class. An eighth grader was blocked repeatedly while researching trans rights.
All of these students saw the same message in their web browsers as they tried to complete their work: “The site you have requested has been blocked because it does not comply with the filtering requirements as described by the Children’s Internet Protection Act (CIPA) or Rockwood School District.”
CIPA, a federal law passed in 2000, requires schools seeking subsidized internet access to keep students from seeing obscene or harmful images online—especially porn.
None of this should be a surprise. After all, the American Library Association rightly challenged the law after it passed, and the district court agreed that because filtering technology sucks, the law would block constitutionally protected speech. Unfortunately, the Supreme Court eventually said the law was fine.
Of course, the law was never fine, and it does not appear that the filtering technology has gotten much better in the two decades since that ruling came out.
The Markup obtained filtering records from a bunch of schools and found that they block content aggressively, possibly in a manner that is unconstitutional. And, tellingly, a lot of the blocked material is the kind of thing that might help LGBTQ youth:
But the Rockwood web filter blocks The Trevor Project for middle schoolers, meaning that Steldt couldn’t have accessed it on the school network. Same for It Gets Better, a global nonprofit that aims to uplift and empower LGBTQ+ youth, and The LGBTQ+ Victory Fund, which supports openly LGBTQ+ candidates for public office nationwide. At the same time, the filter allows Rockwood students to see anti-LGBTQ+ information online from fundamentalist Christian group Focus on the Family and the Alliance Defending Freedom, a legal nonprofit the Southern Poverty Law Center labeled an anti-LGBTQ+ hate group in 2016.
According to the article, the school district’s CIO believes that they should block first, and then only unblock if someone makes “a compelling case” for why that content should be unblocked.
And it’s not just info for LGBTQ youth either. Information on sex education and abortion was also blocked in many schools, making it difficult for students trying to research those topics.
Maya Perez, a senior in Fort Worth, Texas, is the president of her high school’s Feminist Club, and she and her peers create presentations to drive their discussions. But research often proves nearly impossible on her school computer. She recently sought out information for a presentation about health care disparities and abortion access.
“Page after page was just blocked, blocked, blocked,” Perez said. “It’s challenging to find accurate information a lot of times.”
[….]
Alison Macklin spent almost 20 years as a sex educator in Colorado; at the end of her lessons she would tell students that they could find more information and resources on plannedparenthood.org. “Kids would say, ‘No, I can’t, miss,’” she remembered. She now serves as the policy and advocacy director for SIECUS, a national nonprofit advocating for sex education.
Only 29 states and the District of Columbia require sex education, according to SIECUS’ legislative tracking. Missouri is not one of them. The Rockwood and Wentzville school districts in Missouri were among those The Markup found to be blocking sex education websites. The Markup also identified blocks to sex education websites, including Planned Parenthood, in Florida, Utah, Texas, and South Carolina.
In Manatee County, Florida, students aren’t the only ones who can’t access these sites — district records show teachers are blocked from sex education websites too.
There’s a lot more in the article, but it’s a preview of the kind of thing that will happen at a much larger scale if things like “Age Appropriate Design” or “Kids Online Safety” bills keep passing. These bills seek to hold companies liable if kids access any content that adults or law enforcement deem “harmful.” We can see from just this report that, today, that category already includes a ton of very helpful information for kids.
And this is already causing real harm in schools:
In the Center for Democracy and Technology’s survey, nearly three-quarters of students said web filters make it hard to complete assignments. Even accounting for youthful exaggeration, 57 percent of teachers said the same was true for their students.
Kristin Woelfel, a policy counsel at CDT, said she and her colleagues started to think of the web filters as a “digital book ban,” an act of censorship that’s as troubling as a physical book ban but far less visible. “You can see whether a book is on a shelf,” she said. By contrast, decisions about which websites or categories to block happen under the radar.
But at least, right now, under CIPA, those blocks are limited to computers on school campuses. If some of these other laws pass, we’ll see internet-wide blocking of this kind of content as service providers seek to avoid any potential liability.
This is why opposing these laws is so important. Having them in place will do real harm, censoring all sorts of useful and important content, all in the false belief that the laws are “protecting” children.
Lawmakers in the Alabama state legislature have voted for a bill that would require parental controls and NSFW content filters to be enabled on every phone and tablet sold in the state. House Bill (HB) 298, the Protection of Minors from Unfiltered Devices Act, cleared the state House last week with an overwhelming 70-8 vote, with two dozen members abstaining. Now in the Senate, HB 298 is gaining traction after its sole sponsor, state Rep. Chris Sells, failed to push this legislation through in previous sessions.
If it becomes law, the bill would make Alabama only the second state in the union to have such a law on the books. So far, only Utah (surprise, surprise) requires porn filtering software to be enabled on all mobile devices sold. Passed in 2021, that law was the product of state Rep. Susan Pulsipher and the anti-porn lobbying group NCOSE (the National Center on Sexual Exploitation, formerly the right-wing religious group Morality in Media). Utah Gov. Spencer Cox gave the green light to Pulsipher’s House Bill 72, which has a caveat built into the legalese: Utah’s porn filtering law remains dormant until five other states adopt similar laws. That would seem like a tall order, but, let’s be real.
The moral panic surrounding pornography has grown so toxic that anything sexual is considered “porn” in the eyes of the contemporary Republican Party. It is important to remember that content filtering laws, no matter what is being filtered, are largely ineffective and lack common sense. I wrote about content filtering for Techdirt in March, pointing to research conducted by the Oxford Internet Institute in mid-2018.
Victoria Nash and Andrew Przybylski, the researchers in question, analyzed the impact a national porn filter would have on content access and free speech in the United Kingdom. They found that content filters are inconsistent: they can under-block content that actually meets the criteria of being obscene to minors, and over-block non-obscene content like comprehensive sexual health material or LGBTQ subject matter. Content filters have evolved since then, but the consensus, based on case law, is that blocking material isn’t necessarily censorship.
In United States v. American Library Association, a plurality of the Supreme Court upheld a law requiring public libraries and schools to have internet content filters in place. However, that was not a universal mandate: the filters were merely a condition of eligibility for certain federal grants. A full mandate for everyone might well lead to a different outcome. While the American Civil Liberties Union condemned the ruling as a violation of the First Amendment, the high court tried to minimize the harm to adults as much as possible. The American Library Association is right to maintain its opposition to filters that block content for both adults and minors, since much of that content is likely constitutionally protected speech. Applying a legal content-filter mandate to personal devices opens a Pandora’s box of issues that no state or local government is equipped to address.
For starters, there are clear interstate commerce issues. If Alabama becomes the next U.S. state to require porn filters to be enabled on mobile devices at the point of purchase, and enforces the law without a caveat like the one in Utah’s filtering law, it runs headlong into the Commerce Clause. Only Congress and the federal government can regulate interstate commerce and foreign trade.
Since mobile device manufacturers rely on labor in other countries, like China, and on global supply and value chains to bring a single device to consumers, added requirements for one relatively small slice of the North American market (Alabama has a population of over 5.04 million) could incentivize a complete withdrawal from the state or a scaled-back presence. Alabama would also lack the constitutional authority to enforce such a law through a consumer protection framework. This is exactly what mobile service providers argued about HB 298 during the committee phase a few weeks ago. Five lobbyists spoke in opposition during the committee hearing, according to AL.com, including Knox Argo, a lobbyist representing the Motion Picture Association and the Entertainment Software Association, who pointed out that the bill is “blatantly unconstitutional.”
CTIA, the trade association representing wireless providers like AT&T and T-Mobile, was represented by lobbyist Jake Lestock. “Mandating Alabama-specific technical requirements on devices sold nationally is unworkable,” he said. “Operating systems and other functionalities are not designed on a state-by-state basis.” Beyond the wireless industry’s clear opposition, the other glaring problem with the bill is that it is religious do-gooderism run amok. Rep. Sells is a self-described social conservative who is on record endorsing prayer in schools. Like Rep. Pulsipher in Utah, he’s the type to see porn as a crisis.
In fact, Alabama declared porn a public health crisis in 2020. There is no evidence of a public health crisis related to pornography consumption, and there is no medical or scientific evidence supporting the idea of pornography addiction. But bills like HB 298 are presented as public safety and health interventions because of the clear bias of their proponents. No, filtering content will not improve public health. While no minors should ever see porn, bills like these attempt to strip away the civil liberties of everyone — youth and adults alike.
On top of all the issues described above, other landmark Supreme Court cases like Reno v. American Civil Liberties Union, Ashcroft v. American Civil Liberties Union, and Ashcroft v. Free Speech Coalition further suggest that onerous laws limiting free speech on the internet are unconstitutional. As a reminder, Reno struck down the anti-indecency provisions of the Communications Decency Act of 1996 because they chilled online speech, while leaving intact the Section 230 safe harbor that serves as the de facto “First Amendment of the internet.”
I’ll keep you all posted as to the latest developments on content filtering proposals moving forward. It’s high time people realize much more is at stake than someone’s ability to wank to their iPhone.
Michael McGrady is a journalist and commentator focusing on the tech side of the online porn business, among other things.

Disclosure: The author is a member of the Free Speech Coalition. He wrote this column without compensation from the coalition, its officers, or its member firms.
A couple of years ago, Utah became the first state in the union to mandate that content filters be enabled on all mobile devices sold by manufacturers like Samsung, Apple, Lenovo, or TCL. The measure was a hit among the anti-porn crowd because it created a precedent for other states that want to curtail minors’ viewing of otherwise legal, consensual, regulated pornography. The Utah law, House Bill 72, was passed by a religious-conservative state legislature and signed into law by Republican Gov. Spencer Cox back in 2021. Since then, the law has sat dormant.
That’s because House Bill 72 states that the content filter mandates can’t enter into force until five other states have similar laws in place. That means anti-porn activists in Utah are watching closely for other state legislators to fall for the fear that just because a thirteen-year-old has a cell phone, they’re all of a sudden going to look up hardcore porn rather than waste hundreds of dollars on Robux.
There’s a (super unsurprisingly) widely held belief that no kid under the age of 18 should watch porn. But if some do, we, as a rational society, can adopt non-punitive measures focused on education and prevention. Content filtering mandates don’t do that. At all.
Lawmakers in eight other states have proposed laws that would require all mobile devices and tablets sold in the state to have the operating system’s built-in content filters enabled or, alternatively, “anti-porn” filtering software pre-installed at the point of purchase. Based on model legislation pushed by the somewhat obnoxious National Center on Sexual Exploitation, the bills are intended to prevent minors from viewing age-restricted materials, like pornographic videos.
The intent may be commendable, but the outcome is a complete mess. First, there are clear constitutional issues. Second, such laws are nearly impossible to enforce. Last, and most importantly, there is effectively no scientific or legal evidence suggesting that online content filters actually work. Content filters appear to have little bearing on whether minors view age-restricted content.
NBC News recently reported on the legislative drive in these eight states to implement laws that are similar to Utah House Bill 72. Lawmakers in Florida, South Carolina, Maryland, Tennessee, Iowa, Idaho, Texas, and Montana have introduced a variety of content filtering mandates with varying degrees of severity and punishment for manufacturers and users who fail to comply.
Of the eight states, Montana and Idaho are the only ones whose bills have seen any traction to date. The Montana proposal highlights how little knowledge or concern goes into any mandate that restricts protected forms of expression on the internet.
A cursory review of the language in the Montana bill, House Bill 349, reveals a regulatory scheme that would require every “electronic device” sold in Montana to come with an “obscenity filter” installed. The only way to disable the obscenity filter would be a passcode provided by the manufacturer to an adult or to a minor’s parents.
According to the bill, an “obscenity filter” is defined as “software installed on an electronic device that is capable of preventing the electronic device from accessing or displaying obscenity… through the internet or any applications owned and controlled by the manufacturer and installed on the device.” Obscenity is defined by reference to existing Montana statute. And it’s important to note that this definition of “obscene” is quite a bit broader than federal law, including a “description of normal ultimate sexual acts” that “appeals to the prurient interest in sex.”
The bill’s filter requirement compels a device manufacturer to “manufacture an electronic device that, when activated in the state, automatically enables an obscenity filter that:
(1) prevents the user from accessing or downloading material that is obscene to minors on mobile data networks, applications owned and controlled by the manufacturer, and wired or wireless internet networks;
(2) notifies a user of the electronic device when the obscenity filter blocks the device from downloading an application or accessing a website;
(3) gives a user with a passcode the opportunity to unblock a filtered application or website; and
(4) reasonably precludes a user other than a user with a passcode the opportunity to deactivate, modify, or uninstall the obscenity filter.”
To remove the filter, the user needs the passcode the manufacturer provides. Per the bill, manufacturers cannot knowingly or with “reckless disregard” give a passcode to a minor or sell a device without the content filtering software enabled at the point of sale. Violators would face civil action and could be subject to a new tort or class action. A standard like that could lead device manufacturers to stop selling their products in Montana entirely because of the regulatory burden, or to overhaul their marketing and sales approach at great cost not only to the device maker, but to retailers, cell phone service providers, ISPs, and consumers. It also intrudes on the federal government’s exclusive authority to regulate commerce among the states and with foreign nations. But that would be the least of their concerns.
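For the sake of concreteness, here is a minimal, hypothetical sketch of the filter behavior those four quoted requirements seem to describe; the class, method names, and passcode handling are my own illustration, not anything from the bill’s text or any real device OS.

```python
# Hypothetical sketch only: one way to model the filter behavior HB 349 appears
# to mandate. Nothing here comes from the bill itself or any real operating system.
from dataclasses import dataclass

@dataclass
class ObscenityFilter:
    passcode: str          # per the bill, handed out by the manufacturer to an adult
    enabled: bool = True   # (1) automatically enabled when the device is activated in-state

    def request(self, url: str, flagged_obscene: bool, passcode: str | None = None) -> str:
        """Decide what happens when the device tries to load a site or app."""
        if not self.enabled or not flagged_obscene:
            return "allow"
        if passcode == self.passcode:
            return "allow"                # (3) a user with the passcode may unblock
        return "blocked (user notified)"  # (1) block it and (2) tell the user why

    def disable(self, passcode: str) -> bool:
        # (4) only a user with the passcode may deactivate, modify, or uninstall
        if passcode == self.passcode:
            self.enabled = False
            return True
        return False

f = ObscenityFilter(passcode="0000")
print(f.request("example.test", flagged_obscene=True))                   # blocked (user notified)
print(f.request("example.test", flagged_obscene=True, passcode="0000"))  # allow
```

Even in this toy version, the hard part isn’t the blocking logic. It’s deciding who gets the passcode, which is exactly where the next problem comes in.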
As currently drafted, Montana House Bill 349 could morph into a de facto age verification mandate. Samir Jain, VP of policy at the Center for Democracy and Technology, pointed this out when NBC News asked him about the language: it could lead device manufacturers to collect age and identity data from potential customers just to unlock the pre-installed content filter, which could mean requiring a form of government identification or a valid credit card number.
“There are no restrictions as such on how providers can then use this data for other purposes. So even the sort of age verification aspect of this, I think, both creates burdens and gives rise to privacy concerns,” Jain told NBC News, adding that the policy structures in the Montana bill are simply “crude mandatory filtering.”
Crude, indeed.
Perhaps even more important than any of the above concerns: there’s no successful implementation of this sort of policy anywhere in the U.S. or Western Europe. Even if the whole scheme were to become law, the chances of it working as intended are slim to zero. In 2018, researchers at the Oxford Internet Institute found that content filters are inconsistent and can lead to the “under-” and “over-blocking” of content that is either legitimately obscene to minors or is actually information about health education, LGBTQ subject matter, or adolescent romantic relationships. That research was conducted at a time when the United Kingdom was considering a national porn filter.
“Although this position might make intuitive sense, there is little empirical evidence that Internet filters provide an effective means to limit children’s and adolescents’ exposure to online sexual material,” write the researchers, Oxford’s Victoria Nash and Andrew Przybylski. It’s wrongheaded to assume that a mandate on content filtering serves as a child protection measure.
Simply put: these porn filtering mandates won’t work. Instead, these proposals will serve as vehicles for special interest groups that want to censor forms of speech, pornographic or not, that they don’t like.
Michael McGrady is a journalist and commentator focusing on the tech side of the online porn business, among other things.
We’ve written a number of posts about the problems with KOSA, the Kids Online Safety Act from Senators Richard Blumenthal and Marsha Blackburn (both of whom have fairly long and detailed histories of pushing anti-internet legislation). As with many “protect the children” or “but think of the children!” kinds of legislation, KOSA is built around moral panics and nonsense, blaming the internet any time anything bad happens, and insisting that if only this bill were in place, somehow, magically, internet companies would stop bad stuff from happening. It’s fantasyland thinking, and we need to stop electing politicians who live in fantasyland.
KOSA itself has not had any serious debate in Congress, nor been voted out of committee. And yet, there was Blumenthal, admitting he was actively seeking to get it included in one of the “must pass” year-end omnibus bills. When pressed about this, Senate staffers told us they hadn’t heard much “opposition” to the bill, so they figured there was no reason to stop it from moving forward. Of course, that leaves out the reality: the opposition wasn’t that loud because there hadn’t been any real public opportunity to debate the bill, and since, until a few weeks ago, it didn’t appear to be moving forward, everyone was spending their time trying to fend off other awful bills.
But if supporters insist there’s no opposition, well, now they need to contend with this. A coalition of over 90 organizations sent a letter to Congress this morning explaining that KOSA is not just half-baked and not ready for prime time, but so poorly thought out and drafted that it will be actively harmful to many children.
Notably, the signatories include our own Copia Institute, along with the ACLU, EFF, the American Library Association, and many more. They also include many organizations that do tremendous work actually protecting children, rather than pushing showboating legislation that pretends to help children while doing tremendous harm.
I actually think the letter pulls some punches and doesn’t go far enough in explaining just how dangerous KOSA can be for kids, but it does include some hints of how bad it can be. For example, KOSA mandates parental controls, which may be reasonable in some circumstances for younger kids; but KOSA covers teenagers as well, where this becomes a lot more problematic:
While parental control tools can be important safeguards for helping young children learn to navigate the Internet, KOSA would cover older minors as well, and would have the practical effect of enabling parental surveillance of 15- and 16-year-olds. Older minors have their own independent rights to privacy and access to information, and not every parent-child dynamic is healthy or constructive. KOSA risks subjecting teens who are experiencing domestic violence and parental abuse to additional forms of digital surveillance and control that could prevent these vulnerable youth from reaching out for help or support. And by creating strong incentives to filter and enable parental control over the content minors can access, KOSA could also jeopardize young people’s access to end-to-end encrypted technologies, which they depend on to access resources related to mental health and to keep their data safe from bad actors.
The letter further highlights how the vague “duty of care” standard in the bill will be read to require filters for most online services, but we all know how filters work out in practice. And it’s not good:
KOSA establishes a burdensome, vague “duty of care” to prevent harms to minors for a broad range of online services that are reasonably likely to be used by a person under the age of 17. While KOSA’s aims of preventing harassment, exploitation, and mental health trauma for minors are laudable, the legislation is unfortunately likely to have damaging unintended consequences for young people. KOSA would require online services to “prevent” a set of harms to minors, which is effectively an instruction to employ broad content filtering to limit minors’ access to certain online content. Content filtering is notoriously imprecise; filtering used by schools and libraries in response to the Children’s Internet Protection Act has curtailed access to critical information such as sex education or resources for LGBTQ+ youth. Online services would face substantial pressure to over-moderate, including from state Attorneys General seeking to make political points about what kind of information is appropriate for young people. At a time when books with LGBTQ+ themes are being banned from school libraries and people providing healthcare to trans children are being falsely accused of “grooming,” KOSA would cut off another vital avenue of access to information for vulnerable youth.
And we haven’t even gotten to the normalizing-surveillance and diminishing-privacy aspects of KOSA:
Moreover, KOSA would counter-intuitively encourage platforms to collect more personal information about all users. KOSA would require platforms “reasonably likely to be used” by anyone under the age of 17—in practice, virtually all online services—to place some stringent limits on minors’ use of their service, including restricting the ability of other users to find a minor’s account and limiting features such as notifications that could increase the minor’s use of the service. However sensible these features might be for young children, they would also fundamentally undermine the utility of messaging apps, social media, dating apps, and other communications services used by adults. Service providers will thus face strong incentives to employ age verification techniques to distinguish adult from minor users, in order to apply these strict limits only to young people’s accounts. Age verification may require users to provide platforms with personally identifiable information such as date of birth and government-issued identification documents, which can threaten users’ privacy, including through the risk of data breaches, and chill their willingness to access sensitive information online because they cannot do so anonymously. Rather than age-gating privacy settings and safety tools to apply only to minors, Congress should focus on ensuring that all users, regardless of age, benefit from strong privacy protections by passing comprehensive privacy legislation.
There’s even more in the letter, and Congress can no longer say there’s no opposition to the bill. At the very least, the bill’s sponsors (hey, Senator Blumenthal!) should be forced to respond to these many issues, rather than just spouting silly platitudes about how we “must protect the children” when their bill will do the exact opposite.