Molly Buckley's Techdirt Profile


Posted on Techdirt - 20 March 2026 @ 03:28pm

Rep. Finke Was Right: Age-Gating Isn’t About Kids, It’s About Control

When Rep. Leigh Finke spoke last month before the Minnesota House Commerce Finance and Policy Committee to testify against HF1434, a broad-sweeping proposal to age-gate the internet, she began with something disarming: agreement.

“I want to support the basic part of this,” she said, referring to the shared goal of protecting young people online. Because that is not controversial: everyone wants kids to be safe. But HF1434, Minnesota’s proposed age-verification bill, simply won’t “protect children.” It would mandate that websites hosting speech that is protected by the First Amendment for both adults and young people verify users’ identities, often through government IDs or biometric data. As we’ve discussed before, the bill’s definition of speech that lawmakers deem “harmful to minors” is notoriously broad—broad enough to sweep in lawful, non-pornographic speech about sexual orientation, sexual health, and gender identity.

Rep. Finke, an openly transgender lawmaker, next raised a point that her critics have since tried to distort: age-verification laws like the Minnesota bill are already being used to block young LGBTQ+ people from exercising their First Amendment rights to access information that may be educational, affirming, or life-saving. Referencing the Supreme Court case Free Speech Coalition v. Paxton, she noted that state attorneys general have been “almost jubilant” about the ability to use these laws to restrict queer youth from accessing content. “We know that ‘prurient interest’ could be, for many people, the very existence of transgender kids,” she added, referring to the malleable legal standard that would govern what content must be age-gated under the law.

But despite years’ worth of evidence to back her up, Finke has faced a wave of attacks from countless media outlets and religious advocacy groups for her statements. Rep. Finke’s testimony was repeatedly mischaracterized as not having young people’s best interests in mind, when really she was accurately describing the lived reality of LGBTQ+ youth and advocating in support of their access to vital resources and community.

In fact, this backlash proves her point. Beyond attempting to silence queer voices and to scare other legislators from speaking up against these laws, it reveals how age-verification mandates are part of a larger effort to give the government much greater control of what young people are allowed to say, read, or see online. 

Rep. Finke was also right that these proposals are bad policy: they prevent all young people from finding community online, and they violate young people’s and adults’ First Amendment rights.

Why FSC v. Paxton Matters

Rep. Finke was similarly right to bring up the Paxton case, because beyond the troubling Supreme Court precedent it produced, Texas’s age-verification law also drew an extraordinary number of supporting amicus briefs from anti-LGBTQ organizations (some even designated hate groups by the Southern Poverty Law Center).

In FSC v. Paxton, the Supreme Court gave Texas the green light to require age verification for sites where at least one-third of the content is sexual material deemed “harmful to minors,” which generally means explicit sexual content. This ruling, based on how young people do not have a First Amendment right to access explicit sexual content, allows states to enact onerous age-verification rules that will block adults from accessing lawful speech, curtail their ability to be anonymous, and jeopardize their data security and privacy. These are real and immense burdens on adults, and the Court was wrong to ignore them in upholding Texas’ law. 

But laws enacted by other states and Minnesota HF 1434 go further than the Texas statute. Rather than restricting young people from accessing sexual content, these proposals expand what the state deems “harmful to minors” to include any speech that may reference sex, sexuality, gender, and reproductive health. But young people have a First Amendment right to both speak on those topics and to access information online about them.

We will continue to fight against all online age restrictions, but bills like Minnesota’s HF 1434, which seek to restrict young people from accessing speech about their bodies, sexuality, and other truthful information, are especially pernicious.

EFF and Rep. Finke are on the same page here: age-verification mandates create immense harm to our First Amendment rights, our right to privacy, and our online safety and security. These proposals also fully ignore the reality that LGBTQ young people often rely on the internet for information they cannot get elsewhere.

But the Paxton case, and the coalition behind it, illustrates exactly how these laws can be weaponized. These groups weren’t there just to stand up for young people’s privacy online—they were there to argue that the state has a compelling interest in shielding minors from material that, in practice, often includes LGBTQ content. Ultimately, they would like to age-gate not just porn sites, but also any content that might discuss sex, sexuality, gender, reproductive health, abortion, and more.

Using Children as Props to Enact Censorship 

The coalition of organizations that filed amicus briefs in support of Texas’s age verification law tells us everything we need to know about the true intentions behind legislating access to information online: censorship, surveillance, and control. After all, if the race to age-gate the internet were purely about child safety, we would expect its strongest supporters to be child-development experts or privacy advocates. Instead, the loudest advocates are organizations dedicated to policing sexuality, attacking LGBTQ+ folks and reproductive rights, and censoring anything that doesn’t fit within their worldview.

Below are some of the harmful agendas that the organizations behind the age-gating movement are advancing, and how their arguments echo in the attacks on Rep. Finke today:

Policing sexuality, bodily autonomy, and reproductive rights

Many of the organizations backing age-verification laws have spent decades trying to restrict access to accurate sexual health information and reproductive care.

Groups like Exodus Cry, for example, which filed a brief in support of the Texas AG in the SCOTUS case, frame pornography as part of a broader moral crisis. Founded by a “Christian dominionist” activist, Exodus Cry advocates for the criminalization of porn and sex work, and promotes a worldview that defines “sexual immorality” as any sexual activity outside marriage between one man and one woman. Its leadership describes the internet as a battleground in a “pornified world” that has to be reclaimed. Another brief in support of the age-verification law was filed by a group of organizations including the Public Advocate of the United States (an SPLC-designated hate group) and America’s Future, an organization that was formed to “revitalize the role of faith in our society” and fiercely advocates for trans sports bans.

These groups see age-verification laws as attractive solutions because they create a legal mechanism to wall off large swaths of content that merely mentions sex from not only young people but millions of adults, too.

Attacking LGBTQ+ Rights

Several of the most prominent legal advocates behind age-verification laws have also led the crusade against LGBTQ+ equality. The internet that these groups envision is one that heavily censors critical and even life-saving LGBTQ+ resources, community, and information. 

The Alliance Defending Freedom (ADF), for instance (another SPLC-designated hate group), built its reputation on litigation aimed at rolling back LGBTQ+ protections—including allowing businesses to refuse service to same-sex couples, criminalizing same-sex relationships abroad, and restricting transgender rights.

Then there are other groups, like Them Before Us and the Women’s Liberation Front, both of which submitted amicus briefs in support of the Texas Attorney General and are devoted to upending LGBTQ+ rights in the United States. Them Before Us says it’s “committed to putting the rights and well-being of children ahead of the desires and agendas of adults.” But it’s also running a campaign to “End Obergefell,” the 2015 Supreme Court case that recognized the right to same-sex marriage, and has been on the cutting edge of transphobic campaigning and pseudoscientific fearmongering about IVF and surrogacy. The Women’s Liberation Front, meanwhile, has a long track record of supporting transphobic policies such as bathroom bills, bans on gender-affirming healthcare, and efforts to define “sex” strictly as the biological sex assigned at birth.

Through cases like FSC v. Paxton, groups like these three continue to advance a vision of society that creates government mandates to enforce their worldviews over personal freedom, while hiding behind a shroud of concern for children’s safety. But when they also describe LGBTQ+ people as “evil” threats to children and run countless campaigns against their human rights, they are being clear about their intentions. This is why we continue to say: the impact of age verification measures goes beyond porn sites.

Expanding censorship beyond the internet into real-life public spaces

As we’ve said for years now, the push to age-gate the internet is part of a broader campaign to control what information people can access in public life both on- and offline. Many of the same organizations advancing these proposals claim to be acting on behalf of young people, but their arguments consistently use children as props to justify giving the government more control over speech and information.

Many of the organizations advocating for online age verification have also supported book bans, attacks on DEI policies and education, and efforts to remove LGBTQ+ materials from schools and libraries. Two of the organizations that supported the Texas Attorney General, Citizens Defending Freedom and the Manhattan Institute, have led campaigns around the country to “abolish DEI” and to ban classic books like “The Bluest Eye” by Toni Morrison from school libraries. These efforts are no different from the efforts to restrict access to the internet—they reflect a broader strategy to restrict access to ideas and information that these groups find objectionable. And they discourage free thought, inquiry, and people’s ability to decide how to live their lives.

These campaigns rely on the same core argument: that certain ideas are inherently dangerous to young people and must therefore be restricted. Once that premise is accepted, the internet, long a symbol of free expression, connection, creativity, and innovation, becomes the next logical target. But that framing misrepresents an important reality: if lawmakers genuinely want to address harms that young people experience online, they should start by listening to young people themselves. When EFF spoke directly with young people about their online experiences, they overwhelmingly rejected restrictions on their access to the internet and offered nuanced, diverse perspectives of their own.

This also wouldn’t be the first time a vulnerable group was used as a prop to advance internet censorship laws. We saw this playbook during the debate over FOSTA/SESTA, when many of the same advocates claimed to speak for trafficking victims/survivors and sex workers while pushing legislation that ultimately censored online speech and harmed the very communities it invoked. It’s a familiar pattern: invoke a vulnerable group, frame certain speech as a threat, and use that threat to expand government control over the flow of information. And as we said in the fight against FOSTA: if lawmakers are serious about addressing harms to particular communities, they should start by talking to those communities. Lawmakers seeking to address online harms to young people should be talking to young people, not to groups that claim to represent their interests.

Rep. Finke Was Not Radical. She Was Right.

The Paxton case, and the coalition backing age-verification laws in the U.S., shows us exactly how the child-safety messaging around these laws wins superficial support from parents and lawmakers. But we’ve heard the quiet part said out loud before. Marsha Blackburn, a sponsor of the federal Kids Online Safety Act, has said that her goal with the legislation was to address what she called “the transgender” in society. When lawmakers and advocacy groups frame queer existence itself as a threat to young people, age-verification laws become ideological enforcement instead of regulatory policy.

In defending free speech, privacy, and the right of young people to access truthful information about themselves, Rep. Leigh Finke was not radical—she was right. She was warning that broad, ideologically driven laws will be used to erase, silence, and isolate young people under the banner of child protection.

What’s at stake in the fight against age verification is not just a single bill in a single state, or even multiple states, for that matter. It’s about whether “protecting children” becomes a legal pretext for embedding government control over the internet to enforce specific moral and religious judgments—judgments that deny marginalized people access to speech, community, history, and truth—into law. 

And more people in public office need the courage of Rep. Finke to call this out.

Republished from the EFF’s Deeplinks blog.

Posted on Techdirt - 4 December 2025 @ 12:01pm

A Surveillance Mandate Disguised As Child Safety: Why The GUARD Act Won’t Keep Us Safe

A new bill sponsored by Sen. Hawley (R-MO), Sen. Blumenthal (D-CT), Sen. Britt (R-AL), Sen. Warner (D-VA), and Sen. Murphy (D-CT) would require AI chatbots to verify all users’ ages, prohibit minors from using AI tools, and implement steep criminal penalties for chatbots that promote or solicit certain harms. That might sound reasonable at first, but behind those talking points lies a sprawling surveillance and censorship regime that would reshape how people of all ages use the internet.

The GUARD Act may look like a child-safety bill, but in practice it’s an age-gating mandate that could be imposed on nearly every public-facing AI chatbot—from customer-service bots to search-engine assistants. The GUARD Act could force countless AI companies to collect sensitive identity data, chill online speech, and block teens from using the digital tools that they rely on every day.

EFF has warned for years that age-verification laws endanger free expression, privacy, and competition. There are legitimate concerns about transparency and accountability in AI, but the GUARD Act’s sweeping mandates are not the solution.

Young People’s Access to Legitimate AI Tools Could Be Cut Off Entirely. 

The GUARD Act doesn’t give parents a choice—it simply blocks minors from AI companions altogether. If a chat system’s age-verification process determines that a user is under 18, that user must then be locked out completely. The GUARD Act contains no parental consent mechanism, no appeal process for errors in age estimation, and no flexibility for any other context.

The bill’s definition of an AI “companion” is ambiguous enough that it could easily be interpreted to extend beyond general-use LLMs like ChatGPT, causing overcautious companies to block young people from other kinds of AI services too. In practice, this means that under the GUARD Act, teenagers may not be able to use chatbots to get help with homework, seek customer service assistance for a product they bought, or even ask a search engine a question. It could also cut off all young people’s access to educational and creative tools that have quickly become a part of everyday learning and life online.

By treating all young people—whether seven or seventeen—the same, the GUARD Act threatens their ability to explore their identities, get answers to questions free from shame or stigma, and gradually develop a sense of autonomy as they mature into adults. Denying teens access to online spaces doesn’t make them safer; it just keeps them uninformed and unprepared for adult life.

The GUARD Act’s sponsors claim these rules will keep our children safe, but that’s not true. Instead, the bill will undermine both safety and autonomy, replacing parental guidance with government mandates and building mass surveillance infrastructure instead of privacy controls.

All Age Verification Systems Are Dangerous. This Is No Different. 

Teens aren’t the only ones who lose out under the GUARD Act. The bill would require platforms to confirm the ages of all users—young and old—before allowing them to speak, learn, or engage with their AI tools.

Under the GUARD Act, platforms can’t rely on a simple “I’m over 18” checkbox or self-attested birthdate. Instead, they must build or buy a “commercially reasonable” age-verification system that collects identifying information (like a government ID, credit record, or biometric data) from every user before granting them access to the AI service. Though the GUARD Act does contain some data minimization language, its mandate to periodically re-verify users means that platforms must either retain or re-collect that sensitive user data as needed. Both of those options come with major privacy risks.  

EFF has long documented the dangers of age-verification systems:

  • They create attractive targets for hackers. Third-party services that collect users’ sensitive ID and biometric data for the purpose of age verification have been repeatedly breached, exposing millions to identity theft and other harms.
  • They implement mass surveillance systems and ruin anonymity. To verify your age, a system must determine and record who you are. That means every chatbot interaction could feasibly be linked to your verified identity.
  • They disproportionately harm vulnerable groups. Many people—especially activists and dissidents, trans and gender-nonconforming folks, undocumented people, and survivors of abuse—avoid systems that force identity disclosure. The GUARD Act would entirely cut off their ability to use these public AI tools.
  • They entrench Big Tech. Only the biggest companies can afford the compliance and liability burden of mass identity verification. Smaller, privacy-respecting developers simply can’t compete.

As we’ve said repeatedly, there’s no such thing as “safe” age verification. Every approach—whether it’s facial or biometric scans, government ID uploads, or behavioral or account analysis—creates new privacy, security, and expressive harms.

Vagueness + Steep Fines = Censorship. Full Stop. 

Though mandatory age-gates provide reason enough to oppose the GUARD Act, the definitions of “AI chatbot” and “AI companion” are also vague and broad enough to raise alarms. In a nutshell, the Act’s definitions of these two terms are so expansive that they could cover nearly any system capable of generating “human-like” responses, including not just general-purpose LLMs like ChatGPT, but also more tailored services like those used for customer service interactions, search-engine summaries, and subject-specific research tools.

The bill defines an “AI chatbot” as any service that produces “adaptive” or “context-responsive” outputs that aren’t fully predetermined by a developer or operator. That could include Google’s search summaries, research tools like Perplexity, or any AI-powered Q&A tool—all of which respond to natural language prompts and dynamically generate conversational text.

Meanwhile, the GUARD Act’s definition of an “AI companion”—a system that both produces “adaptive” or “context-responsive” outputs and encourages or simulates “interpersonal or emotional interaction”—will easily sweep in general-purpose tools like ChatGPT. Courts around the country are already seeing claims that conversational AI tools manipulate users’ emotions to increase engagement. Under this bill, that’s enough to trigger the “AI companion” label, putting AI developers at risk even when they do not intend to cause harm.

Both of these definitions are imprecise and unconstitutionally overbroad. And, when combined with the GUARD Act’s incredibly steep fines (up to $100,000 per violation, enforceable by the federal Attorney General and every state AG), companies worried about their legal liability will inevitably err on the side of prohibiting minors from accessing their chat systems. The GUARD Act leaves them these options: censor certain topics en masse, entirely block users under 18 from accessing their services, or implement broad-sweeping surveillance systems as a prerequisite to access. No matter which way platforms choose to go, the inevitable result for users is less speech, less privacy, and less access to genuinely helpful tools.

How You Can Help

There may be legitimate problems with AI chatbots, but young people’s safety is an incredibly complex social issue both on- and offline. The GUARD Act tries to solve this complex problem with a blunt, dangerous solution.

In other words, protecting young people’s online safety is incredibly important, but forcing invasive ID checks, criminalizing AI tools, and banning teens from legitimate digital spaces is not the way to do it.

The GUARD Act would make the internet less free, less private, and less safe for everyone. It would further consolidate power and resources in the hands of the bigger AI companies, crush smaller developers, and chill innovation under the threat of massive fines. And it would cut off vulnerable groups’ ability to use helpful everyday AI tools, further stratifying the internet we know and love.

Lawmakers should reject the GUARD Act and focus instead on policies that provide transparency, more options for users, and comprehensive privacy for all. Help us tell Congress to oppose the GUARD Act today.

Originally posted to the EFF’s Deeplinks blog.

Posted on Techdirt - 18 December 2024 @ 01:31pm

ExTwitter’s Last-Minute Update To Kids Online Safety Act Still Fails To Protect Kids—Or Adults—Online

Last week, the Senate released yet another version of the Kids Online Safety Act, written, reportedly, with the assistance of X CEO Linda Yaccarino in a flawed attempt to address the critical free speech issues inherent in the bill. This last-minute draft remains, at its core, an unconstitutional censorship bill that threatens the online speech and privacy rights of all internet users.

Update Fails to Protect Users from Censorship or Platforms from Liability

The most important update, according to its authors, supposedly minimizes the impact of the bill on free speech. As we’ve said before, KOSA’s “duty of care” section is its biggest problem, as it would force a broad swath of online services to make policy changes based on the content of online speech. Though the bill’s authors inaccurately claim KOSA only regulates the design of platforms, not speech, the harms it enumerates—eating disorders, substance use disorders, and suicidal behaviors, for example—are not caused by the design of a platform.

KOSA is likely to actually increase the risks to children, because it will prevent them from accessing online resources about topics like addiction, eating disorders, and bullying. It will result in services imposing age verification requirements and content restrictions, and it will stifle minors from finding or accessing their own supportive communities online. For these reasons, we’ve been critical of KOSA since it was introduced in 2022. 

This updated bill adds just one sentence to the “duty of care” requirement: “Nothing in this section shall be construed to allow a government entity to enforce subsection a [the duty of care] based upon the viewpoint of users expressed by or through any speech, expression, or information protected by the First Amendment to the Constitution of the United States.” But the viewpoint of users was never impacted by KOSA’s duty of care in the first place. The duty of care is a duty imposed on platforms, not users. Platforms must mitigate the harms listed in the bill, not users, and the platform’s ability to share users’ views is what’s at risk—not the ability of users to express those views. Adding that the bill doesn’t impose liability based on user expression doesn’t change how the bill would be interpreted or enforced. The FTC could still hold a platform liable for the speech it contains.

Let’s say, for example, that a covered platform like reddit hosts a forum created and maintained by users for discussion of overcoming eating disorders. Even though the speech contained in that forum is entirely legal, often helpful, and possibly even life-saving, the FTC could still hold reddit liable for violating the duty of care by allowing young people to view it. The same could be true of a Facebook group about LGBTQ issues, or for a post about drug use that X showed a user through its algorithm. If a platform’s defense were that this information is protected expression, the FTC could simply say that they aren’t enforcing it based on the expression of any individual viewpoint, but based on the fact that the platform allowed a design feature—a subreddit, Facebook group, or algorithm—to distribute that expression to minors. It’s a superfluous carveout for user speech and expression that KOSA never penalized in the first place, but which the platform would still be penalized for distributing. 

It’s particularly disappointing that those in charge of X—likely a covered platform under the law—had any role in writing this language, as the authors have failed to grasp the world of difference between immunizing individual expression, and protecting their own platform from the liability that KOSA would place on it.  

Compulsive Usage Doesn’t Narrow KOSA’s Scope 

Another of KOSA’s issues has been its vague list of harms, which have remained broad enough that platforms have no clear guidance on what is likely to cross the line. This update requires that the harms of “depressive disorders and anxiety disorders” have “objectively verifiable and clinically diagnosable symptoms that are related to compulsive usage.” The latest text’s definition of compulsive usage, however, is equally vague: “a persistent and repetitive use of a covered platform that significantly impacts one or more major life activities, including socializing, sleeping, eating, learning, reading, concentrating, communicating, or working.” This doesn’t narrow the scope of the bill. 

It should be noted that there is no clinical definition of “compulsive usage” of online services. As in past versions of KOSA, this update cobbles together a definition that sounds just medical, or just legal, enough to appear legitimate—when in fact the definition is devoid of specific legal meaning, and dangerously vague to boot.

How could the persistent use of social media not significantly impact the way someone socializes or communicates? The bill doesn’t even require that the impact be a negative one. Comments on an Instagram photo from a potential partner may make it hard to sleep for several nights in a row; a lengthy new YouTube video may impact someone’s workday. Opening a Snapchat account might significantly impact how a teenager keeps in touch with her friends, but that doesn’t mean her preference for that over text messages is “compulsive” and therefore necessarily harmful. 

Nonetheless, an FTC weaponizing KOSA could still hold platforms liable for showing content to minors that they believe results in depression or anxiety, so long as they can claim the anxiety or depression disrupted someone’s sleep, or even just changed how someone socializes or communicates. These so-called “harms” could still encompass a huge swathe of entirely legal (and helpful) content about everything from abortion access and gender-affirming care to drug use, school shootings, and tackle football. 

Dangerous Censorship Bills Do Not Belong in Must-Pass Legislation

The latest KOSA draft comes as the incoming nominee for FTC Chair, Andrew Ferguson—who would be empowered to enforce the law, if passed—has reportedly vowed to protect free speech by “fighting back against the trans agenda,” among other things. As we’ve said for years (and about every version of the bill), KOSA would give the FTC under this or any future administration wide latitude to decide what sort of content platforms must prevent young people from seeing. Just passing KOSA would likely result in platforms taking down protected speech and implementing age-verification requirements, even if it’s never enforced; the FTC could simply announce the types of content it believes harm children and use the mere threat of enforcement to force platforms to comply.

No representative should consider shoehorning this controversial and unconstitutional bill into a continuing resolution. A law that forces platforms to censor truthful online content should not be part of a last-minute funding bill.

Republished from the EFF’s Deeplinks blog.

Posted on Techdirt - 6 May 2024 @ 12:57pm

The U.S. House Version Of KOSA: Still A Censorship Bill

A companion bill to the Kids Online Safety Act (KOSA) was introduced in the House last month. Despite minor changes, it suffers from the same fundamental flaws as its Senate counterpart. At its core, this bill is still an unconstitutional censorship bill that restricts protected online speech and gives the government the power to target services and content it finds objectionable. Here, we break down why the House version of KOSA is just as dangerous as the Senate version, and why it’s crucial to continue opposing it.

Core First Amendment Problems Persist

EFF has consistently opposed KOSA because, through several iterations of the Senate bill, it continues to open the door to government control over what speech can be shared and accessed online. Our concern, which we share with others, is that the bill’s broad and vague provisions will force platforms to censor legally protected content and impose age-verification requirements. Those requirements will drive away both minors and adults who either lack the proper ID or value their privacy and anonymity.

The House version of KOSA fails to resolve these fundamental censorship problems.

Dangers for Everyone, Especially Young People

One of the key concerns with KOSA has been its potential to harm the very population it aims to protect—young people. KOSA’s broad censorship requirements would limit minors’ access to critical information and resources, including educational content, social support groups, and other forms of legitimate speech. This version does not alleviate that concern. For example, this version of KOSA could still: 

  • Suppress search results for young people seeking sexual health and reproductive rights information; 
  • Block content relevant to the history of oppressed groups, such as the history of slavery in the U.S; 
  • Stifle youth activists across the political spectrum by preventing them from connecting and advocating on their platforms; and 
  • Block young people seeking help for mental health or addiction problems from accessing resources and support. 

As thousands of young people have told us, these concerns are just the tip of the iceberg. Under the guise of protecting them, KOSA will limit minors’ ability to self-explore, to develop new ideas and interests, to become civically engaged citizens, and to seek community and support for the very harms KOSA ostensibly aims to prevent. 

What’s Different About the House Version?

Although there are some changes in the House version of KOSA, they do little to address the fundamental First Amendment problems with the bill. We review the key changes here.

1. Duty of Care Provision   

We’ve been vocal about our opposition to KOSA’s “duty of care” censorship provision. This section outlines a wide collection of harms to minors that platforms have a duty to prevent and mitigate by exercising “reasonable care in the creation and implementation of any design feature” of their product. The list includes self-harm, suicide, eating disorders, substance abuse, depression, anxiety, and bullying, among others. As we’ve explained before, this provision would cause platforms to broadly over-censor the internet so they don’t get sued for hosting otherwise legal content that the government—in this case the FTC—claims is harmful.

The House version of KOSA retains this chilling effect, but limits the “duty of care” requirement to what it calls “high impact online companies,” or those with at least $2.5 billion in annual revenue or more than 150 million global monthly active users. So while the Senate version requires all “covered platforms” to exercise reasonable care to prevent the specific harms to minors, the House version only assigns that duty of care to the biggest platforms.

While this is a small improvement, its protective effect is ultimately insignificant. After all, the vast majority of online speech happens on just a handful of platforms, and those platforms—including Meta, Snap, X, WhatsApp, and TikTok—will still have to uphold the duty of care under this version of KOSA. Smaller platforms, meanwhile, still face demanding obligations under KOSA’s other sections. When government enforcers want to control content on smaller websites or apps, they can just use another provision of KOSA—such as one that allows them to file suits based on failures in a platform’s design—to target the same protected content.

2. Tiered Knowledge Standard 

Because KOSA’s obligations apply specifically to users who are minors, there are open questions as to how enforcement would work. How certain would a platform need to be that a user is, in fact, a minor before KOSA liability attaches? The Senate version of the bill has one answer for all covered platforms: obligations attach when a platform has “actual knowledge” or “knowledge fairly implied on the basis of objective circumstances” that a user is a minor. This is a broad, vague standard that would not require evidence that a platform actually knows a user is a minor for it to be subject to liability. 

The House version of KOSA limits this slightly by creating a tiered knowledge standard under which platforms are required to have different levels of knowledge based on the platform’s size. Under this new standard, the largest platforms—or “high impact online companies”—are required to carry out KOSA’s provisions with respect to users they “knew or should have known” are minors. This, like the Senate version’s standard, would not require proof that a platform actually knows a user is a minor for it to be held liable. Mid-sized platforms would be held to a slightly less stringent standard, and the smallest platforms would only be liable where they have actual knowledge that a user was under 17 years old. 

While, again, this change is a slight improvement over the Senate’s version, the narrowing effect is small. The knowledge standard is still problematically vague, for one, and where platforms cannot clearly decipher when they will be liable, they are likely to implement dangerous age verification measures anyway to avoid KOSA’s punitive effects.

Most importantly, even if the House’s tinkering slightly reduces liability for the smallest platforms, this version of the bill still incentivizes large and mid-size platforms—which, again, host the vast majority of all online speech—to implement age verification systems that will threaten the right to anonymity and create serious privacy and security risks for all users.

3. Exclusion for Non-Interactive Platforms

The House bill excludes online platforms where chat, comments, or interactivity is not the predominant purpose of the service. This could potentially narrow the number of platforms subject to KOSA’s enforcement by reducing some of the burden on websites that aren’t primarily focused on interaction.

However, this exclusion is legally problematic because its unclear language will again leave platforms guessing as to whether it applies to them. For instance, does Instagram fall into this category, or would image-sharing be its predominant purpose? What about TikTok, which has a mix of content-sharing and interactivity? This ambiguity could lead to inconsistent enforcement and legal challenges—the mere threat of which tends to chill online speech.

4. Definition of Compulsive Usage 

Finally, the House version of KOSA also updates the definition of “compulsive usage” from any “repetitive behavior reasonably likely to cause psychological distress” to any “repetitive behavior reasonably likely to cause a mental health disorder,” which the bill defines as anything listed in the Diagnostic and Statistical Manual of Mental Disorders, or DSM. This change pays lip service to concerns we and many others have expressed that KOSA is overbroad, and will be used by state attorneys general to prosecute platforms for hosting any speech they deem harmful to minors. 

However, simply invoking the name of the healthcare professionals’ handbook does not make up for the lack of scientific evidence that minors’ technology use causes mental health disorders. This definition of compulsive usage still leaves the door open for states to go after any platform that is claimed to have been a factor in any child’s anxiety or depression diagnosis. 

KOSA Remains a Censorship Threat 

Despite some changes, the House version of KOSA retains its fundamental constitutional flaws. It encourages government-directed censorship, dangerous digital age verification, and overbroad content restrictions on all internet users, and further harms young people by limiting their access to critical information and resources.

Lawmakers know this bill is controversial. Some of its proponents have recently taken steps to attach KOSA as an amendment to the five-year reauthorization of the Federal Aviation Administration, the last “must-pass” legislation until the fall. This would effectively bypass public discussion of the House version. Just last month Congress attached another contentious, potentially unconstitutional bill to unrelated legislation, by including a bill banning TikTok inside of a foreign aid package. Legislation of this magnitude deserves to pass—or fail—on its own merits. 

We continue to oppose KOSA—in its House and Senate forms—and urge legislators to instead seek alternatives, such as a comprehensive federal privacy law, that protect young people without infringing on the First Amendment rights of everyone who relies on the internet.

Originally posted to the EFF Deeplinks Blog.