Don’t Ban Kids From Using Chatbots

from the the-first-amendment-still-matters dept

Laws prohibiting minors from accessing AI-powered chatbots like ChatGPT would violate the First Amendment. But that’s not stopping lawmakers from trying.

Senator Josh Hawley has introduced the Guidelines for User Age-verification and Responsible Dialogue Act of 2025 (GUARD Act), which would require AI companies to “prohibit” minors under “18 years of age” from “accessing or using” AI chatbots that “produce[] new expressive content” in response to “open-ended natural-language or multimodal user input.” Earlier this year, Virginia and Oklahoma introduced similar bills, as did California last September. The crux is the same: to prohibit minors from accessing chatbots capable of producing human-like speech.

If passed, these bills will get struck down in court for violating the First Amendment, which prohibits laws “abridging the freedom of speech.” Specifically, minors have a First Amendment right to receive information. The Supreme Court has explained, “minors are entitled to a significant measure of First Amendment protection, and only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to them.” This right applies to the Internet with full force.

When analyzing these laws under the First Amendment, a court would start by asking whether the government is regulating speech. Speech is a broad concept, including written and spoken words, photos, music, and other forms of expression like computer code and video games. Chatbot outputs are speech; they comprise all these forms of expression. Laws prohibiting minors from accessing chatbots regulate speech by cutting off young users from the ideas and information communicated in outputs.

Next, a court would assess whether minor chatbot bans regulate protected or unprotected speech. The vast majority of outputs are protected speech: teens use chatbots to search for information, to get help with schoolwork, for fun or entertainment, and to get news. Here, the only relevant category of unprotected speech is content that is obscene to minors. The GUARD Act, for example, states that “chatbots can generate and disseminate harmful or sexually explicit content to children,” and the Virginia bill would block chatbots “capable of … [e]ngaging in erotic or sexually explicit interactions with the minor user.” Sexually explicit outputs to minors are likely unprotected speech, but the bills go much further by blocking all youth access to chatbots.

Because these bills regulate a mix of protected and unprotected speech, the court would then assess whether the prohibition on teen usage is content-based or content-neutral. Content-based restrictions target speech based on its viewpoint, subject matter, topic, or substantive message. On the other hand, content-neutral laws regulate nonsubstantive aspects of speech, like its time, place, or manner.

These bills are content-based because they prohibit access based on the subject matter of chatbot outputs. The GUARD Act would prohibit minors from accessing chatbots capable of “interpersonal or emotional interaction, friendship, companionship, or therapeutic communication.” The Oklahoma bill would block chatbots that “express[] or invit[e] emotional attachment” or “form ongoing social or emotional bonds with users, whether or not such systems also provide information.” Similarly, the Virginia bill would ban minors from accessing chatbots “capable of … offering mental health therapy.” Regardless of the pros and cons of minors accessing such information, the prohibitions are based on the content of the outputs — not on merely nonsubstantive aspects of the speech.

Because these bills are content-based, the court would apply strict scrutiny. The government would have to prove the bills are narrowly tailored to advance a compelling governmental interest and that they are the least restrictive means of serving that interest. Banning minors from accessing chatbots arguably advances “a compelling interest in protecting the physical and psychological well-being of minors” by “shielding minors from the influence of” obscene outputs.

Strict scrutiny, however, requires lawmakers to use a less restrictive means than bans to protect minors. Lawmakers could, for example, require AI companies to provide parental controls or strict safeguards preventing their models from engaging in sexually explicit conversations with young users. In fact, AI companies already have policies and features to protect minor users. Because these bills aren’t narrowly tailored, a court would strike them down for violating the First Amendment.

Banning minors from using chatbots is also bad policy. Last October, California Governor Gavin Newsom vetoed the state’s proposed ban, stating, “AI is already shaping the world, and it is imperative that adolescents learn how to safely interact with AI systems … We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether.”

Most U.S. teens use AI chatbots. These young users have a First Amendment right to receive the information the AIs output, which is generally protected speech. Prohibiting access to chatbots would violate minors’ constitutional rights and deprive them of the vast benefits of AI.

Andy Jung is associate counsel at TechFreedom, a nonprofit, nonpartisan think tank focused on technology law and policy.



Comments on “Don’t Ban Kids From Using Chatbots”

Stephen T. Stone (profile) says:

Re: Re:

The use of AI chatbots to “solve” the “problem” of basic human socialization isn’t exactly something I’m willing to defend. We would do well to have some kind of guardrails on that, especially when it comes to children. Age verification probably isn’t (and shouldn’t be) the right approach, but letting chatbots remain near-entirely unregulated also isn’t the right approach.

Bloof (profile) says:

Re: Re:

I have a natural distrust of any group funded by the same people funding the likes of Josh Hawley, and it’s entirely possible for both sides to be bad, or for people to be on the right side of an issue for the wrong reason. It is important to know who people are and who else shares their bed before hopping in with them.

Thad (profile) says:

Re: Re: Re:

Yeah, but “Koch-funded organization opposes government regulation” isn’t some kind of shocking revelation.

The details of the specific regulation they’re opposed to aren’t incidental. In this case they’re on the right side of the issue.

We’re not talking about opposing oil industry regulation here, we’re talking about opposing government programs to track what websites people are visiting.

Arianity (profile) says:

Re: Re: Re:2

but “Koch-funded organization opposes government regulation” isn’t some kind of shocking revelation.

The revelation is that it’s Koch funded in the first place, since the only label it has is nonpartisan. It’s not surprising if you already knew it was Koch funded, but it’s conveniently not mentioned.

Bloof (profile) says:

Re: Re: Re:4

https://www.influencewatch.org/non-profit/techfreedom/

Alongside the Kochs, they take money from Google and Facebook to help lobby against antitrust efforts, and while the dollar amount isn’t given, I find it hard to believe a K-Street lobby group is taking in huge amounts of small-dollar donations as part of a grassroots movement to make the internet better.

Arianity (profile) says:

Re: Re:

I don’t trust Josh Hawley, but I also don’t trust a Koch-backed “nonprofit” talking its book to tell me what’s in the bill either. Kind of a “two snakes don’t make a right” situation.

Especially when it comes to Hawley, it’s important to get it right. He has a history of proposing “good” (or at least populist) bills he knows will die to get positive press.

Anonymous Coward says:

Similarly, the Virginia bill would ban minors from accessing chatbots “capable of … offering mental health therapy.”

Whoa, whoa, whoa there. I’d like to ban such chatbots for everyone until we have sufficient data that a bot is a safe and effective provider of proper psychological therapy.

What the actual ever-loving fuck are we doing here, people? They can’t even help edit Mike’s articles without him holding their hand. That is not a technology ready to offer mental health care to anyone.

Anonymous Coward says:

How unsurprising...

…to find lobbyists paid to lie about AI…lying about AI. I’m sure the money’s good — there’s certainly enough of it being thrown around — and well, if another kid or two or fifty kill themselves because the AI/LLM companies can’t be bothered to build in any guardrails, welllllll, that’s just the price of progress.

Epic_Null (profile) says:

One thing I am always aware of about age gating is that there is never an opt-in for adults. It’s always “YOU ARE A MINOR OR YOU CONSENT TO THE THINGS WE DO NOT ALLOW COMPANIES TO DO TO MINORS”.

When you are that wrapped up in age verification, you miss the more obvious and less creepy ways to handle the problem.

Which is probably because it was never about the problem itself.

Anonymous Coward says:

Chatbot outputs are speech; they comprise all these forms of expression.

Is it really speech, though? If so, who created that speech?

It’s not the authors of the training material that the AI company used, otherwise the company would be on the hook for massive copyright violation.

It’s not the AI system itself. That’s just a machine that basically puts words through a blender. The Constitution does not prohibit restricting a machine’s “speech”.

So who, exactly, is speaking??

Anonymous Coward says:

Re:

“before the ‘market’ solves it”

Let’s put it this way, if anyone doubts that: lead was already known to be toxic when it was made a gasoline additive in the 1920s.

There were scientific reports about the damage lead from leaded gasoline was doing in the 1960s, and it started getting regulated in the 1970s.
(The market itself wasn’t doing jack shit about it; leaded gas was still popular at the time.)

Toxic leaded gasoline wasn’t banned, and thus forcibly removed from the US marketplace, until the 1990s.

Invisible hand of the marketplace, everybody.
Our air would probably STILL be full of lead if not for regulation.

Arianity (profile) says:

Sexually explicit outputs to minors are likely unprotected speech, but the bills go much further by blocking all youth access to chatbots.

The inability to control LLMs seems like a significant confounding factor (to say nothing of the intentionally “spicy” LLMs aimed at kids from companies like Meta).

Lawmakers could, for example, require AI companies to provide parental controls or strict safeguards preventing their models from engaging in sexually explicit conversations with young users. In fact, AI companies already have policies and features to protect minor users.

The problem is that all of these have failed. The ability to bypass safeguards seems baked into how they work, at least right now. To say nothing of e.g. privacy issues if they’re monitored.

Because these bills are content-based, the court would apply strict scrutiny.

This doesn’t seem consistent with Paxton (which, mind you, should’ve been strict scrutiny… but wasn’t).

AI is going to be an invaluable and necessary tool for kids. But the way AI companies are handling it now seems pretty irresponsible, and it would be difficult to put the genie back in the bottle even if they were handling it well. Literally the reason we’re where we are is that companies like OpenAI wanted to release stuff fast and break things while slower companies like Google were working out the safety implications ahead of release.

Anonymous Coward says:

The biggest problem with banning kids from using chatbots (aside from the obvious one of having them grow up with no knowledge or experience of a widely used technology) is that politicians will just think “job done” and not bother even thinking about regulating the more problematic aspects of AI technology, which would be more disastrous in the long run.

I would prefer a more liberal and pragmatic approach of sensible regulation and education to help kids grow up with a healthier, more positive relationship with the technology. I say the same about smartphones and social media. Banning it only harms them more in the long term.

Epic_Null (profile) says:

As they are part of a corporate entity selling access to their words, I say hold them to professional standards.

They misidentify a mushroom in a mushroom-identifying app? The company is liable as if the chatbot could complete the task, with only impersonation of an expert as the out.

Canning advice gives botulism? Poison.

Medical advice was wrong? Malpractice. Should not be giving medical advice.
