Don’t Ban Kids From Using Chatbots
from the the-first-amendment-still-matters dept
Laws prohibiting minors from accessing AI-powered chatbots like ChatGPT would violate the First Amendment. But that’s not stopping lawmakers from trying.
Senator Josh Hawley has introduced the Guidelines for User Age-verification and Responsible Dialogue Act of 2025 (GUARD Act), which would require AI companies to “prohibit” minors under “18 years of age” from “accessing or using” AI chatbots that “produce[] new expressive content” in response to “open-ended natural-language or multimodal user input.” Earlier this year, Virginia and Oklahoma introduced similar bills, as did California last September. The crux is the same: to prohibit minors from accessing chatbots capable of producing human-like speech.
If passed, these bills will get struck down in court for violating the First Amendment, which prohibits laws “abridging the freedom of speech.” Specifically, minors have a First Amendment right to receive information. The Supreme Court has explained, “minors are entitled to a significant measure of First Amendment protection, and only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to them.” This right applies to the Internet with full force.
When analyzing these laws under the First Amendment, a court would start by asking whether the government is regulating speech. Speech is a broad concept, including written and spoken words, photos, music, and other forms of expression like computer code and video games. Chatbot outputs are speech; they comprise all these forms of expression. Laws prohibiting minors from accessing chatbots regulate speech by cutting off young users from the ideas and information communicated in outputs.
Next, a court would assess whether minor chatbot bans regulate protected or unprotected speech. The vast majority of outputs are protected speech: Teens use chatbots to search for information, get help with schoolwork, entertain themselves, and get news. Here, the only relevant category of unprotected speech is content that is obscene to minors. The GUARD Act, for example, states that “chatbots can generate and disseminate harmful or sexually explicit content to children,” and the Virginia bill would block chatbots “capable of … [e]ngaging in erotic or sexually explicit interactions with the minor user.” Sexually explicit outputs to minors are likely unprotected speech, but the bills go much further by blocking all youth access to chatbots.
Because these bills regulate a mix of protected and unprotected speech, the court would then assess whether the prohibition on teen usage is content-based or content-neutral. Content-based restrictions target speech based on its viewpoint, subject matter, topic, or substantive message. On the other hand, content-neutral laws regulate nonsubstantive aspects of speech, like its time, place, or manner.
These bills are content-based because they prohibit access based on the subject matter of chatbot outputs. The GUARD Act would prohibit minors from accessing chatbots capable of “interpersonal or emotional interaction, friendship, companionship, or therapeutic communication.” The Oklahoma bill would block chatbots that “express[] or invit[e] emotional attachment” or “form ongoing social or emotional bonds with users, whether or not such systems also provide information.” Similarly, the Virginia bill would ban minors from accessing chatbots “capable of … offering mental health therapy.” Regardless of the pros and cons of minors accessing such information, the prohibitions are based on the content of the outputs — not on merely nonsubstantive aspects of the speech.
Because these bills are content-based, the court would apply strict scrutiny. The government would have to prove the bills are narrowly tailored to advance a compelling governmental interest and that they are the least restrictive means of serving that interest. Banning minors from accessing chatbots arguably advances “a compelling interest in protecting the physical and psychological well-being of minors” by “shielding minors from the influence of” obscene outputs.
Strict scrutiny, however, requires lawmakers to use a less restrictive means than bans to protect minors. Lawmakers could, for example, require AI companies to provide parental controls or strict safeguards preventing their models from engaging in sexually explicit conversations with young users. In fact, AI companies already have policies and features to protect minor users. Because these bills aren’t narrowly tailored, a court would strike them down for violating the First Amendment.
Banning minors from using chatbots is also bad policy. Last October, California Governor Gavin Newsom vetoed the state’s proposed ban, stating, “AI is already shaping the world, and it is imperative that adolescents learn how to safely interact with AI systems … We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether.”
Most U.S. teens use AI chatbots. These young users have a First Amendment right to receive the information the AIs output, which is generally protected speech. Prohibiting access to chatbots would violate minors’ constitutional rights and deprive them of the vast benefits of AI.
Andy Jung is associate counsel at TechFreedom, a nonprofit, nonpartisan think tank focused on technology law and policy.
Filed Under: 1st amendment, ai, chatbots, josh hawley, kids, strict scrutiny


Comments on “Don’t Ban Kids From Using Chatbots”
Sounds like a rather conservative, libertarian thing to say.
I think there are important things to fight for in this day and age – securing access to LLM chatbots for kids is not one of them.
Another piece from a Koch backed, K-street lobby group campaigning against AI regulation? Is it Friday already?
Re:
Just because we’re talking about AI doesn’t mean I’m suddenly in favor of “age verification” schemes promoted by Josh Hawley.
Re: Re:
The use of AI chatbots to “solve” the “problem” of basic human socialization isn’t exactly something I’m willing to defend. We would do well to have some kind of guardrails on that, especially when it comes to children. Age verification probably isn’t (and shouldn’t be) the right approach, but letting chatbots remain near-entirely unregulated also isn’t the right approach.
Re: Re: Re:
I suspect that this is one of those cases where the market really will correct itself, in that the bubble is going to burst and most corps are going to be a lot less inclined to ram AI down our throat.
But in case I’m wrong, yeah, I’m not averse to regulations or safeguards on principle. But this ain’t it.
Re: Re: Re:
I’m frankly in favor of just generally banning ‘chatbot services’ as a thing. It doesn’t seem likely to go anywhere good.
Besides, giving corporations human rights was already the biggest mistake we’ve ever made. Imagine if we also gave human rights to machines.
Re: Re: Re:2
Don’t ban ’em, make ’em liable for everything they say.
Let’s see how long they last if they’re beholden to the consequences of their actions.
Re: Re:
I have a natural distrust of any group funded by the same people funding the likes of Josh Hawley, and it’s entirely possible for both sides to be bad, or for people to be on the right side of an issue for the wrong reason. It is important to know who people are and who else shares their bed before hopping in with them.
Re: Re: Re:
Yeah, but “Koch-funded organization opposes government regulation” isn’t some kind of shocking revelation.
The details of the specific regulation they’re opposed to aren’t incidental. In this case they’re on the right side of the issue.
We’re not talking about opposing oil industry regulation here, we’re talking about opposing government programs to track what websites people are visiting.
Re: Re: Re:2
The revelation is that it’s Koch funded in the first place, since the only label it has is nonpartisan. It’s not surprising if you already knew it was Koch funded, but it’s conveniently not mentioned.
Re: Re: Re:3
Yeah, that’s a fair point, I’ll buy that. I’d like to see a more precise disclosure of just what exactly TechFreedom is and who’s funding it.
(Though any org with “Freedom” or “Liberty” in the name already sets off my “approach with caution” sensors. I’ve read Orwell.)
Re: Re: Re:4
https://www.influencewatch.org/non-profit/techfreedom/
Alongside the Kochs, they take money from Google and Facebook to help lobby against antitrust efforts, and while the dollar amount isn’t given, I find it hard to believe a K-street lobby group is taking in huge amounts of small-dollar donations as part of a grassroots movement to make the internet better.
Re: Re: Re:5
Guys, no need to be conspiracy theorists. TechFreedom is a well known quantity. They’re a think tank. They’re not a lobbying shop. They don’t do lobbying. I’ve disagreed with them on some policies (net neutrality! copyright!) and agreed with them on many others (section 230, age verification, etc.), but one thing you can never say about the folks there is that they aren’t principled in their beliefs; they’re very honest brokers.
Even on the things I disagree with them about I can have reasonable conversations with them. They’re not a shill factory at all.
Also, lately, they’ve been one of the strongest players pushing BACK on Brendan Carr’s FCC and Andrew Ferguson’s FTC, doing real work to stop both those agencies from attacking the internet.
And while it’s easy for those with no knowledge or experience to pretend that “oh, they’re just anti-regulation libertarians,” they’ve actually been super supportive of the EU’s DSA approach to regulation (more willing to embrace it than I am).
The team there is smart and principled and speak honestly on the issues they believe in.
And, on this issue, Andy’s point is exactly right. Banning chatbots would be a clear First Amendment violation.
I’m really kinda sick of dipshits jumping to attacking people based on pretend beliefs about who they are, rather than dealing with the substance of what they say.
TechFreedom is a small think tank that does great work. Even when I disagree with them, I respect their approach and their team.
Re: Re: Re:6
A hit dog will holler
Re: Re: Re:7
Um. Besides that being a stupid saying, you’re not even using it accurately.
Re: Re: Re:8
Would you prefer “the lady doth protest too much”? How about “My ‘Not Bought and Paid for by the AI Lobby’ shirt has people asking a lot of questions already answered by my shirt”?
Re: Re: Re:9
I mean, look, if you want to appear stupid and ignorant of how things actually work, be my guest. I know you think you look savvy by being a cynical angry guy. But, in reality, you just look stupid and ignorant of how things work.
I’m quick to call out actual shilling behavior. This ain’t that.
When you grow up, maybe you’ll learn something.
Re: Re: Re:10
I’ll believe a word you say when it’s written on a note pinned to your shirt.
Re: Re:
I don’t trust Josh Hawley, but I also don’t trust a Koch-backed “nonprofit” talking its book to tell me what’s in the bill either. Kind of a two-snakes-don’t-make-a-right situation.
Especially when it comes to Hawley, it’s important to get it right. He has a history of proposing “good” (or at least populist) bills he knows will die to get positive press.
Re:
It is here in Australia.
Whoa whoa whoa there, I’d like to ban such chatbots for everyone until we have sufficient data that a bot is a safe and effective provider of proper psychological therapy.
What the actual ever-loving fuck are we doing here, people? They can’t even help edit Mike’s articles without him holding its hand. That is not a technology ready to offer mental health care to anyone.
Re:
But you don’t understand, we’re going to make so much money selling knock-off therapy to the kids whose lives we ruin by replacing their parents with barely functional chatbots
Ban them? No.
But you might as well let them go to 4chan and a maga rally.
Just please tattoo them as well so we all know your kid is full of misinformation.
In this administration, children are expected to suffer in silence
Re:
Yeah man that’s just conservatism.
Don’t question, don’t complain, don’t ask for help, don’t agitate for rights or improvement.
Just STFU and grab your bootstraps, little Billy. If you yank hard enough, you can fly up into heaven.
I SAID DON’T QUESTION
Re: In silence?
No, this administration simply doesn’t care whether they suffer in silence or not, provided they suffer.
OTOH they might well prefer to hear the screams as that’ll indicate that suffering is happening. Which is what they want.
How unsurprising...
…to find lobbyists paid to lie about AI…lying about AI. I’m sure the money’s good — there’s certainly enough of it being thrown around — and well, if another kid or two or fifty kill themselves because the AI/LLM companies can’t be bothered to build in any guardrails, welllllll, that’s just the price of progress.
One thing I am always aware of about age gating is that there is never an opt-in for adults. It’s always “YOU ARE A MINOR OR YOU CONSENT TO THE THINGS WE DO NOT ALLOW COMPANIES TO DO TO MINORS”.
When you are that wrapped up in age verification, you miss the more obvious and less creepy ways to handle the problem.
Which is probably because it was never about the problem itself.
But think of the parents though
Anything to protect parents from having to talk to their kids. Or worse, take away their iphone.
The horror! The horror!
Is it really speech, though? If so, who created that speech?
It’s not the authors of the training material that the AI company used, otherwise the company would be on the hook for massive copyright violation.
It’s not the AI system itself. That’s just a machine that basically puts words through a blender. The Constitution does not prohibit restricting a machine’s “speech”.
So who, exactly, is speaking??
Re:
I think this article might have just inspired Stephen Thaler’s next case….
Re:
Even if we treat it as mathematics, it is just as protected as speech if not more so. Didn’t we already go over that with ‘illegal numbers’?
Chat bots telling kids to kill themselves is not protected speech
You think user-generated content is hard to moderate? Some regulation is required here. I’m not sure if it’s age verification or not, but these greedy billionaires will kill thousands of kids before the “market” solves it.
Re:
“before the “market” solves it”
Let’s put it this way, if anyone doubts that:
Lead was already known to be toxic when it was made a gasoline additive in the 1920s.
There were scientific reports about the damage lead from leaded gasoline was doing in the 1960s, and it started getting regulated in the 1970s.
(The market itself wasn’t doing jack shit about it; leaded gas was still popular at the time.)
Toxic leaded gasoline wasn’t banned, and thus forcibly removed from the US marketplace, until the 1990s.
Invisible hand of the marketplace, everybody.
Our air would probably STILL be full of lead if not for regulation.
Re:
Doubt it. They can’t make money from customers that are dead.
The inability to control LLMs seems like it’s a significant confounding factor (to say nothing of the intentionally “spicy” LLMs aimed at kids from companies like Meta).
The problem is that all of these have failed. The ability to bypass safeguards seems baked into how they work, at least right now. To say nothing of e.g. privacy issues if they’re monitored.
This doesn’t seem consistent with Paxton (which, mind you, should’ve been strict scrutiny… but wasn’t).
AI is going to be an invaluable and necessary tool for kids. But the way AI companies are handling it now seems pretty irresponsible, and it seems difficult to put the genie back in the bottle even if they were. Literally the reason we’re where we are is because companies like OpenAI wanted to release stuff faster and break things while slower companies like Google were working out the safety implications ahead of release.
Of course not!
Just nuke all commercial AI.
I’d rather they banned people with the demonstrated mental capacities of toddlers from selling chatbots.
The biggest problem with banning kids from using chatbots (aside from the obvious one of having them grow up with no knowledge or experience of a widely used technology) is that politicians will just think “job done” and not bother even thinking about regulating the more problematic aspects of AI technology, which would be more disastrous in the long run.
I would prefer a more liberal and pragmatic approach of sensible regulation and education to help kids grow up with a healthier more positive relationship with the technology. I say the same about smartphones and social media. Banning it only harms them more long term.
Sure, let’s just ignore the cognitive atrophy and hasten the idiocracy because it’s good for Jensen Huang or Sam Altman. Fuck off, Andy.
Re:
By all means, post some studies that show that’s an issue. Otherwise I’m not going to oppose a labor-saving set of innovations.
As they are part of a corporate entity selling access to their words, I say hold them to professional standards.
They misidentify a mushroom in a mushroom-identifying app? The company is liable as if the chatbot could complete the task, with only impersonation of an expert as the out.
Canning advice gives botulism? Poison.
Medical advice was wrong? Malpractice. Should not be giving medical advice.