Elizabeth Grossman's Techdirt Profile

Posted on Techdirt - 17 June 2025 @ 03:41pm

Yes, The FTC Wants You To Think The Internet Is The Enemy To The Great American Family

This is a combo piece: the first half is law student Elizabeth Grossman's take on the recent FTC moral panic about the internet, and the second is additional commentary and notes from her professor, Jess Miers.

The FTC is fanning the flames of a moral panic. On June 4, 2025, the Commission held a workshop called The Attention Economy: How Big Tech Firms Exploit Children and Hurt Families. I attended virtually from the second panel until the end of the day. Panelists discussed how the FTC could “help” parents, age verification as the “future,” and “what can be done outside of Washington DC.” But the workshop’s true goal was to reduce the Internet to only content approved by the Christian Right, regardless of the Constitution—or the citizens of the United States.

Claim #1: The FTC Should Prevent Minors From Using App Stores and Support Age Verification Laws

FTC panelists argued that because minors lack the legal capacity to contract, app stores must obtain parental consent before allowing them to create accounts or access services. That, in turn, requires age verification to determine who is eligible. This contractual framing isn’t new—but it attempts to sidestep a well-established constitutional concern: that mandatory age verification can burden access to lawful speech. In Brown v. Entertainment Merchants Association, the Supreme Court reaffirmed minors’ rights to access protected content, while Reno v. ACLU struck down ID requirements that chilled adult access to speech. Today, state-level attempts to mandate age verification across the Internet have repeatedly failed on First Amendment grounds.

But by recasting the issue as a matter of contract formation rather than speech, proponents hope to dodge those constitutional questions. This is the same argument at the heart of Free Speech Coalition v. Paxton, a case the FTC appears to be watching closely. FTC staff repeatedly described a ruling in favor of Texas as a “good ruling,” while suggesting a decision siding with the Free Speech Coalition would run “against” the agency’s interests. The case challenges Texas’ H.B. 1181, which mandates age verification for adult content sites.

The FTC now insists that age verification isn’t about restricting access to content, but about ensuring platforms only contract with legal adults. That rationale collapses under scrutiny. Minors can enter into contracts—the legal question is whether and when they can disaffirm them. The broader fallacy about minors’ contractual incapacity aside, courts have repeatedly rejected similar logic. Most recently, NetChoice v. Yost reaffirmed that age verification mandates can still violate the First Amendment, no matter how creatively they’re framed. In other words, there is no contract law exception to the First Amendment.

Claim #2: Chatbots Are Dangerous To Minors

The panel’s concerns over minors using chatbots to access adult content felt like a reboot of the violent video game panic. Jake Denton, the FTC’s Chief Technology Officer, delivered a tirade about an Elsa-themed chatbot allegedly engaging in sexual conversations with children, but offered no evidence to support the claim. In practice, inappropriate outputs from chatbots like those on Character.AI generally occur only when users—minors or adults—intentionally steer the conversation in that direction. Even then, the platform enforces clear usage policies and deploys guardrails to keep bots within fictional contexts and prevent unintended interactions.

Yes, teens will test boundaries, as they always have, but that doesn’t eliminate their constitutional rights. As the Supreme Court held in Brown v. Entertainment Merchants Association, minors have a protected right to access legal expressive content. Then, it was video games. Today, it’s chatbots. 

FTC Commissioner Melissa Holyoak adopted a more cautious tone, suggesting further study before regulation. But even then, the agency failed to offer meaningful evidence that chatbots pose widespread or novel harm to justify sweeping intervention.

Claim #3: Pornography Is Not Protected Speech

Several panelists called for pornography to be stripped of First Amendment protection and for online pornography providers to be denied Section 230 immunity. Joseph Kohm of the Family Policy Alliance in particular delivered a barrage of inflammatory claims, including: “No one can tell me with any seriousness that the Founders had pornography in mind […] those cases were wrongly decided. We can chip away […] it is harmful.” He added that “right-minded people have been looking for pushback against the influence of technology and pornography,” and went so far as to accuse unnamed “elites” of wanting children to access pornography, without offering a shred of evidence.

Of course, pornography predates the Constitution, and the Founders drafted the First Amendment to forbid the government from regulating speech, not just the speech it finds moral or comfortable. Courts have consistently held that pornography, including online adult content, is protected expression under the First Amendment. Whether panelists find that inconvenient or not, it is not the FTC’s role to re-litigate settled constitutional precedent, much less redraw the boundaries of our most fundamental rights.

During the final panel, Dr. Mehan said that pornography “is nothing to do with the glorious right of speech and we have to get the slowest of us, i.e. judges to see it as well.” He succeeds in disrespecting a profession he is not a part of and misunderstanding the law in one fell swoop. He also said “boys are lustful” because of pornography and “girls are vain” because of social media. Blatant misogyny aside, it’s absurd to blame social media for “lust” and “vanity”—after all, Shakespeare was writing about them long before XXX videos and Instagram—and even if it weren’t, teenage lust is not a problem for the government to solve.

Panelist Terry Schilling from the American Principles Project—known for his vehemently anti-LGBT positions—called for stripping Section 230 protections from pornography sites that fail to implement age verification. As discussed, the proposal not only contradicts longstanding First Amendment precedent but also reveals a fundamental misunderstanding of what Section 230 does and whom it protects.

Claim #4: The Internet Is Bad For Minors

FTC Commissioner Mark Meador compared Big Tech to Big Tobacco and said that letting children on the Internet is like dropping them off in a red light district. “This is not what Congress envisioned,” he said, “when enacting Section 230.” Commissioner Melissa Holyoak similarly blamed social media for the rise in depression and anxiety diagnoses in minors. Yet, as numerous studies on social media and mental health have consistently demonstrated, that rise stems from a complex mix of factors—not social media.

Bizarrely, Dr. Mehan also declared that “Powerpoints are ruining the humanities.” And he compared online or text communication to home invasion: if his daughter were talking on the phone with a boy at 11 o’clock at night, he said, that boy would be invading his home.

This alarmist narrative ignores both the many benefits of Internet access for minors and the real harms of cutting them off. For young people, especially LGBTQ youth in unsupportive environments or those with niche interests, online spaces can be essential sources of community, affirmation, and safety. Just as importantly, not all parents share the same values or concerns as the government (or Dr. Mehan). It is the role of parents, not the government, to decide when and how their children engage with the Internet.

In the same vein, the court in NetChoice v. Uthmeier rejected the idea that minors are just “mere people-in-waiting,” affirming their full participation in democracy as “citizens-in-training.” The ruling makes clear that access to social media is constitutionally protected, and attempts to strip minors of First Amendment protections are nothing more than censorship disguised as “safety.”

Conclusion

The rhetoric at this event mirrored the early pages of Project 2025, pushing for the outright criminalization of pornography and a fundamental rewrite of Section 230. Speakers wrapped their agenda in the familiar slogan of “protecting the kids,” trotting out right-wing talking points like transgender youth in sports and harping on good old family values—all while advocating for sweeping government control over the Internet.

This movement is not about safety. It is about power. It seeks to dictate who can speak, what information is accessible, and whose identities are deemed acceptable online. The push for broad government oversight and censorship undercuts constitutional protections not just for adults, but for minors seeking autonomy in digital spaces. These policies could strip LGBTQ youth in restrictive households of the only communities where they feel safe, understood, and free to exist as themselves.

This campaign is insidious. If successful, it won’t just reshape the Internet. It will undermine free speech, strip away digital anonymity, and force every American to comply with a singular, state-approved version of “family values.”

The First Amendment exists to prevent exactly this kind of authoritarian overreach. The FTC should remember that.

Elizabeth Grossman is a first-year law student in the Intellectual Property program at the University of Akron School of Law, with the goal of working in tech policy.

Prof. Jess Miers’ Comments

Elizabeth’s summary makes it painfully clear: this wasn’t a serious workshop run by credible experts in technology law or policy. The title alone, “How Big Tech Firms Exploit Children and Hurt Families,” telegraphed the FTC’s predetermined stance and signaled a disinterest in genuine academic inquiry. More tellingly, the invocation of “families” serves as a dog whistle, gesturing toward the narrow, heteronormative ideals typically championed by the religious Right: white, patriarchal, Christian, and straight. The FTC may not say the quiet part out loud, but it doesn’t have to.

Worse still, most of the invited speakers weren’t experts in the topics they were pontificating on. At best, they’re activists. At worst, they’re ideologues—people with deeply partisan agendas who have no business advising a federal agency, let alone shaping national tech policy.

Just a few additional observations from me.

Chair Ferguson opened by claiming the Internet was a “fundamentally different place” 25 years ago, reminiscing about AOL Instant Messenger, Myspace Tom, and using a family computer his parents could monitor. The implication: the Internet was safer back then, and parents had more control. As someone who also grew up in that era, I can’t relate.

I, too, had a family computer in the living room and tech-savvy parents. It didn’t stop me from stumbling into adult AOL chatrooms, graphic porn, or violent videos, often unintentionally. I remember the pings of AIM just as vividly as the cyberbullying on Myspace and anonymous cruelty on Formspring. Parental controls were flimsy, easy to bypass, and rarely effective. My parents tried, but the tools of the time simply weren’t up to the task. The battle over my Internet use was constant, and my experience was hardly unique.

Still, even then, the Internet offered real value, especially for a queer kid who moved often and struggled to make “IRL” friends. But it also forced me to grow up fast in ways today’s youth are better shielded from. Parents now have far more effective tools to manage what their kids see and who they interact with. And online services have a robust toolbox for handling harmful content, not just because advertisers demand it, but thanks to Section 230, a uniquely forward-thinking law that encourages cleanup efforts. It built safety into the system before “trust and safety” became a buzzword. Contrary to Mark Meador’s baseless claims, that result was precisely its authors’ intent. 

A more serious conversation would focus on what we’ve learned and how the FTC can build on that progress to support a safer Internet for everyone, rather than undermining it. 

That aside, what baffles me most about these “protect the kids” conversations, which almost always turn out to be about restricting adults’ access to disfavored content, is how the supposed solution is more surveillance of children. The very services the FTC loves to criticize are being told to collect more sensitive information about minors—biometrics, ID verification, detailed behavioral tracking—to keep them “safe.” But as Eric Goldman and many other scholars who were notably absent from the workshop have extensively documented, there is no current method of age verification that doesn’t come at the expense of privacy, security, and anonymity for both youth and adults.

A discussion that ignores these documented harms, that fails to engage with the actual expert consensus around digital safety and privacy, is not a serious discussion about protecting kids. 

Which is why I find it especially troubling that groups positioning themselves as privacy champions are treating this workshop as credible. In particular, IAPP’s suggestion that the FTC laid the groundwork for “improving” youth safety online is deeply disappointing. Even setting aside the numerous privacy issues associated with age verification, does the IAPP really believe that a digital ecosystem shaped by the ideological goals of these panelists will be an improvement for kids, especially those most in need of support? For queer youth, for kids in intolerant households, for those seeking information about reproductive health or gender-affirming care? 

This workshop made the FTC’s agenda unmistakable. They’re not pursuing a safer Internet for kids. As Elizabeth said, the FTC is pushing a Christian nationalist vision of the web, built on censorship and surveillance, with children as the excuse and the collateral. 

Just as the playbook commands. 

Jess Miers is an Assistant Professor of Law at the University of Akron School of Law.

Posted on Techdirt - 28 February 2025 @ 02:57pm

The ADL’s Misguided Attack On Steam

Last November, the Anti-Defamation League (ADL) released Steam-Powered Hate, accusing Valve’s game launcher, Steam, of fostering extremism. The report dropped just before Senator Mark Warner, a SAFE TECH Act proponent, threatened Steam’s owner, raising concerns about the political motivations behind the ADL’s claims.

The ADL analyzed over one billion data points, flagging just 0.5% as “hateful.” Yet they misrepresent Steam—primarily a game marketplace—as a social media hub overrun with extremism, despite having no real expertise in online content moderation or gaming culture. Meanwhile, they give powerful figures like Elon Musk a pass while pushing for government intervention in digital spaces they don’t understand.

This isn’t new—the ADL has a history of advocating speech restrictions, from social media to video games. As an American Jew, I find their big-government approach to content moderation alarming. Regulators must reject pressure from advocacy groups that misrepresent online communities and threaten free expression in the name of fighting extremism.

The ADL Misunderstands Gaming’s Complex and Notoriously Edgy Environment

Gaming communities operate on a different wavelength than typical online spaces. Gamers are notorious for their dark humor, edgier memes, and a communication style that can seem alien to outsiders. The ADL, in its attempt to analyze a platform central to gaming culture, failed to grasp this, making sweeping generalizations about a community it clearly doesn’t understand.

Take their report’s biggest claim: the vast majority of so-called “hateful content” was Pepe the Frog—a meme that, while hijacked by extremists in recent years, remains widely used in mainstream gaming culture. Even the meme’s creator was outraged by its association with hate groups. Yet the ADL doesn’t distinguish between an actual extremist Pepe and a harmless, widely used gaming meme. Instead, they lump them together, inflating their numbers.

Their AI system, “HateVision,” identified nearly one million extremist symbols—over half of which were Pepe. The AI was trained on a limited dataset of images and keywords the ADL pre-selected as hateful, but it failed to differentiate between legitimate extremism and gaming’s irreverent meme culture. Worse, it didn’t distinguish between U.S.-based and international users, ignoring the fact that gaming communities operate under different cultural norms worldwide.

The AI’s failures didn’t stop at images. It also couldn’t tell the difference between actual hate speech and the tongue-in-cheek, often provocative style of gaming communities. While gaming culture can be abrasive, the vast majority of players understand the difference between in-game trash talk and real-world hostility. The ADL? Not so much.

The ADL also went after copypastas—blocks of text copied and pasted to provoke reactions—identifying 1.83 million “potentially harmful” ones without bothering to check context. Their keyword-based approach flagged terms like “boogaloo” and “amerikaner” without acknowledging their multiple meanings. “Boogaloo” does carry alt-right connotations, but in gaming it is mostly a Gen-Z meme, not a secret extremist code word. “Amerikaner” can refer to a cookie, the German word for “American,” or even a famous YouTuber’s username. They also flagged “Goyim” as a slur, even though it is a common term that Jewish people themselves often use neutrally or even affectionately. Antisemites can certainly wield the word offensively, but the ADL made no such distinctions.
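To see why this kind of context-free keyword matching inflates the numbers, consider a minimal sketch of the approach the report appears to describe. HateVision itself is not public, so the term list and matcher below are purely illustrative assumptions:

```python
import re

# Hypothetical flag list; the ADL's actual term list is not public.
FLAGGED_TERMS = {"boogaloo", "amerikaner", "goyim"}

def naive_flag(post: str) -> set[str]:
    """Flag a post if it contains any listed term, ignoring all context."""
    words = set(re.findall(r"[a-zäöü]+", post.lower()))
    return words & FLAGGED_TERMS

posts = [
    "electric boogaloo 2: the sequel nobody asked for",    # dance-movie meme
    "Der Amerikaner rusht B schon wieder ohne Plan",       # ordinary German
    "my bubbe calls the neighbors goyim, affectionately",  # in-group, non-hostile usage
]

for post in posts:
    hits = naive_flag(post)
    if hits:
        print(f"flagged {sorted(hits)}: {post!r}")

# All three benign posts get flagged, so each one adds to the
# "potentially harmful" tally even though none is extremist.
```

A matcher like this scores the dance-movie meme, ordinary German, and in-group Yiddish exactly the same as genuine extremist usage, which is how a “potentially harmful” count can balloon into the millions on mostly harmless text.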

Curious, I did a Steam keyword search for “Amerikaner.” The first result was a left-winger calling out racism. The second was someone mocking Americans in Counter-Strike. The third was a non-English post. None of the results, in my opinion, rose to the level of extremism. I also searched “Boogaloo” and found references to the classic “electric boogaloo” meme, a non-English speaker using the term, and a gaming forum name. The ADL didn’t bother with this level of nuance—they just scraped forums, pulled words out of context, and called it a day.

The ADL also attacked Garry’s Mod (G-Mod), a sandbox game known for its anything-goes creativity. They focused on one mod featuring maps of real-life mass shootings, citing comments with words like “based,” “Sigma,” and even “Subscribe to PewDiePie” as signs of extremism. But these are common “chronically online” phrases with broad uses. “Based” is Gen-Z slang used by individuals on both the left and right. “Sigma” is a meme mocking “alpha male” tropes. And while the Christchurch shooter did invoke PewDiePie, it isn’t a stretch to say the ADL is unfairly targeting him here. Yes, PewDiePie has had controversies, but painting him as a hate symbol is a major leap.

The report wraps up with the tragic white supremacist attack in Turkey, where the ADL notes that while there were red flags on the attacker’s Steam profile, there’s “no evidence” he was directly inspired by extremist content on the platform. Still, they use this tragedy to argue Steam isn’t doing enough to moderate content. But even their own research found Steam actively filters swastikas into hearts—identifying only 11 profiles where this workaround failed. Eleven profiles. Out of millions. That’s an edge case, not a crisis.

To be fair, the study did identify a small number of fringe groups glorifying hate and violence. But the bigger question is whether the ADL’s findings actually reflect a serious problem—or whether they simply misread an edgy, chaotic, but largely non-extremist gaming culture. And given how little extremist content the ADL found worldwide, it looks like Steam is actually doing its job.

The ADL’s Steam Comparison is Hypocritical and Misguided

Still, the ADL takes issue with Steam’s so-called “ad hoc” approach to content moderation, claiming that despite Valve’s removal efforts, the platform still “fails to systematically address the issue of extremism and hate.” But this critique ignores the reality of gaming culture and Steam’s own policies.

Steam’s moderation reflects the nature of its community. Its content rules fall into two categories: one for games—allowing all titles except those that are illegal or blatant trolling—and another for user-generated content, which bans unlawful activity, harassment, IP violations, and commercial exploitation. The ADL criticizes Steam for not taking a stricter stance like Microsoft and Roblox, but that comparison is misleading at best.

Microsoft’s gaming history isn’t exactly a beacon of virtue. Xbox 360 live chats were infamous for racist slurs, and Call of Duty’s lobbies remain a toxic free-for-all. Meanwhile, Minecraft—the game the ADL seems to hold in high regard—was created by someone with a history of antisemitic remarks, and Microsoft itself has faced accusations of workplace discrimination. Yet, the ADL doesn’t seem nearly as concerned about these issues.

As for Roblox, while it does enforce stricter content moderation, it’s far from an extremist-free utopia. The Australian Federal Police have warned about the platform’s potential for radicalization, and NBC has reported extremist content explicitly targeting children. If anything, this suggests that heavy-handed moderation doesn’t necessarily eliminate bad actors—it just pushes them to adapt.

Steam’s approach may not align with the ADL’s ideal vision of content moderation, but pretending that Microsoft and Roblox represent the gold standard ignores their own deep-seated issues. It also makes little sense for Steam to adopt policies identical or similar to Xbox’s or Roblox’s. Both of those are fully live-service platforms, whereas Steam is primarily a storefront where users consume games rather than a space where they constantly interact with one another in-game. That difference is market differentiation at work: a platform’s policies reflect the services it offers, and if users find those policies problematic, they can jump ship to another provider.


Regulators Must Beware of Overreach from Non-Trust & Safety Experts Like the ADL

In its report, the ADL calls for a national gaming safety task force, urging policymakers to create a federally backed group to “combat this pervasive issue” through a multi-stakeholder approach. On paper, this sounds like a noble goal. In practice, it’s a recipe for government overreach that could stifle the gaming industry’s creative and independent spirit.

Gaming has thrived because of its grassroots nature—built by passionate developers and players, not by bureaucrats or advocacy groups with no real understanding of gaming culture, online community norms, or trust and safety. A federal task force risks imposing rigid, top-down regulations that don’t fit the dynamic and ever-evolving gaming world. Worse, it could open the door to politically motivated interventions that prioritize appearances over real solutions.

The ADL also suggests Steam engage in multi-stakeholder moderation efforts. But who controls the conversation? When powerful corporations and activist organizations dominate these discussions, smaller developers and gaming communities get sidelined. That’s how you end up with policies shaped by corporate interests and advocacy agendas rather than solutions that actually work for gamers. And let’s be blunt—the ADL has no business dictating content moderation policies for gaming platforms.

The ADL is not an expert on content moderation, online community dynamics, or trust and safety. It has no meaningful experience navigating the complexities of digital spaces, algorithmic content regulation, or the unique cultural norms that define gaming communities. Instead, their report relies on anecdotal evidence, an oversimplified AI model, and out-of-context symbols, all of which lead to flawed conclusions and misleading claims.

Steam isn’t Microsoft or Disney. It’s run by Valve, a privately owned company headed by Gabe Newell, without the vast political and financial clout of the industry giants. Forcing broad content moderation mandates onto platforms like Steam sets a dangerous precedent, burdening smaller businesses that lack the infrastructure of the major tech companies. And let’s be clear: Steam’s primary function is to sell video games, not to serve as a social media watchdog.

The ADL’s concerns about extremism may be well-intended, but their lack of expertise, misinterpretation of gaming culture, and one-size-fits-all approach make them uniquely unqualified to weigh in on this issue. Their push for federal intervention dovetails with the SAFE TECH Act and its concerning political and financial motivations, which could disproportionately harm platforms that aren’t backed by corporate lobbying power.

Yes, online extremism is a problem—but handing control to out-of-touch regulators and advocacy groups that don’t understand the space isn’t the answer. The gaming industry must stay free, innovative, and independent—not bogged down by heavy-handed government oversight that threatens to erase the very culture that makes online gaming communities thrive.

Elizabeth Grossman is a first-year law student in the Intellectual Property program at the University of Akron School of Law, with the goal of working in tech policy.