Hawley, Blumenthal Team Up To Push Nonsensical AI/230 Bill
from the did-an-ai-write-this? dept
There are some questions about whether Section 230 protects AI companies from being liable for the output of their generative AI tools. Matt Perrault published a thought-provoking piece arguing that 230 probably does not protect generative AI companies. Jess Miers, writing here at Techdirt, argued the opposite point of view (which I found convincing). Somewhat surprisingly, Senator Ron Wyden and former Rep. Chris Cox, the authors of 230, have agreed with Perrault’s argument.
The Wyden/Cox (Perrault) argument is summed up in this quote from Cox:
“To be entitled to immunity, a provider of an interactive computer service must not have contributed to the creation or development of the content at issue,” he told me. “So when ChatGPT creates content that is later challenged as illegal, Section 230 will not be a defense.”
At a first pass, that may sound compelling. But, as Miers noted in her piece, the details get a lot trickier once you start looking at them. As she points out, it’s already well established that 230 protects algorithmic curation and promotion (this was sorta, partly, at issue in the Gonzalez case, though by the time the Supreme Court heard the case, it was mostly dropped, in part because the lawyers backing Gonzalez realized that their initial argument probably would make search engines illegal).
Further, Miers notes that courts have already found 230 to protect algorithmically generated snippets that summarize content found elsewhere, even though those snippets are “created” by Google, based on (1) the search input “prompt” from the user, and (2) the giant database of content that Google has scanned.
And, that’s where the issue really gets tricky, and where those insisting that generative AI companies are clearly outside the scope of 230 feel like they haven’t quite thought through all of this: where is the line that you can draw between these two things? At what point do we go from one tool, Google, that scraped a bunch of content and creates a summary in response to input, to another tool, AI, that scrapes a bunch of content and creates “whatever” in response to input?
Well, the two Senators who hate the internet more than anyone else, the bipartisan “destroy the internet, and who cares what damage it does” buddies, Senator Richard Blumenthal and insurrectionist-supporting Senator Josh Hawley, have teamed up to introduce a bill that explicitly says AI companies get no 230 protection. Leaving aside the question of why any Democrat would be willing to team up with Hawley on literally anything at this moment, this bill is… well… weird.
First, just the fact that they had to write this bill suggests (perhaps surprisingly?) that Hawley and Blumenthal agree with Miers more than they agree with Wyden, Cox, or Perrault. If 230 didn’t apply to AI companies, why would they need to write this bill?
But, if you look at the text of the bill, you quickly realize that Hawley and Blumenthal (this part is not surprising) have no clue how to draft a bill that wouldn’t suck in a ton of other services, and strip them of 230 protections (perhaps that’s their real goal, as both have tried to destroy Section 230 going back many years).
The definition of “Generative Artificial Intelligence” is, well, a problem:
GENERATIVE ARTIFICIAL INTELLIGENCE.—The term ‘generative artificial intelligence’ means an artificial intelligence system that is capable of generating novel text, video, images, audio, and other media based on prompts or other forms of data provided by a person.’
First off, AI is quickly getting built into basically everything these days, so this definition is going to capture much of the internet within a few years. But go back to the search example discussed above, where courts had said that 230 protected Google’s algorithmically generated summaries.
With this bill in place, that’s likely no longer true.
Or… as social media tools build in AI (which is absolutely coming) to help you craft better content, do all of those services then lose 230 protection? Just for helping users create better content?
And, of course, all of this confuses the point of Section 230, which, as we keep explaining, is just a procedural fast pass to get frivolous cases tossed out.
Just to make this point clear, let’s look at what happens should this bill become law. Say someone does a Google search on something, and finds that the automatically generated summary is written in a way that they feel is defamatory, even though it’s just a computerized attempt to summarize what others have written, in response to a prompt. The person sues Google, which is no longer protected by 230.
With Section 230, Google would be able to get the case kicked out with minimal hassle, as they’d file a relatively straightforward motion to dismiss pointing to 230 and get the case dismissed. Without that, they can still argue that the case is bad because, as an algorithm, Google could not have had the requisite knowledge to say anything defamatory. But, this is a more complicated (and more expensive) legal argument to make, and one that might not get tossed out on a motion to dismiss, but which would have to go through discovery, and to the more involved summary judgment stage, if not go all the way to trial.
In the end, it’s likely that Google still wins the case, because it had no knowledge at all as to whether the content was false, but now the process is expensive and wasteful. And, maybe it doesn’t matter for Google, which has buildings full of lawyers.
But, it does matter for basically every AI startup out there, or any other company making use of AI to make their products better and more useful. If those products spew out some nonsense, even if no one believes it, do we really have to fight a court battle over it?
Think back to the case we just recently spoke about regarding OpenAI being sued for defamation. Yes, ChatGPT appeared to make up some nonsense, but there remains no indication that anyone believed the nonsense. Only the one reporter saw it, and seemed to recognize it was fake. If he had then published the content, perhaps he would be liable for spreading something he knew was fake. But if it’s just ChatGPT writing it in response to that guy’s prompts, where is the harm?
In other words, even in the world of generative AI, there are still humans in the loop, and thus there can still be liability placed on the party responsible for (1) creating, via their prompts, and (2) spreading (if they publish it more widely) the violative content.
It still makes sense, then, for 230 to protect the AI tools.
Without that, what would AI developers do? How do you train an AI tool to never get anything wrong in producing content? And, even if you had some way to do that, wouldn’t that ruin many uses of AI? Lots of people use AI to deliberately generate fiction. I keep hearing about writers using it as a brainstorming tool. But if 230 doesn’t protect AI, then it would be way too risky for any AI tool to even offer to create “fiction.”
Yes, generative AI feels new and scary. But again, this all feels like an overreaction. The legal system today, including Section 230, seems pretty well equipped to handle specific scenarios that people seem most concerned about.
Filed Under: ai, algorithms, josh hawley, liability, richard blumenthal, section 230


Comments on “Hawley, Blumenthal Team Up To Push Nonsensical AI/230 Bill”
I’ll take Quantum Crypto AI for $1000, Alex
If AI companies could be sued for AI generated content… who do I sue if /dev/random has something offensive? AMD/Intel (assuming it’s an x86)?
/s
Re:
To get an answer, I’d fire off a letter to my lawyer, Dave Null. … he might be a while getting back to you, though.
I would not be surprised if, once Microsoft makes their Windows assistant/copilot powered by OpenAI available to general end users in Windows 11 and the 365 suite of software, there would be at least some scenarios in some jurisdictions in which Microsoft ends up liable for the content, particularly if anything is generated without an explicit instruction from a human user, as you would expect from “assistant” software.
Re:
You know what Microsoft’s Copilot is liable for? Trademark issues. GitHub has their own Copilot, and when they see Microsoft copy their trademark, GitHub’s gonna sue Microsoft for trademark infringement.
Re: Re:
Um. Github is owned by Microsoft? So I don’t think Github is going to sue Microsoft.
Great, I’ll just ask ChatGPT to create a defamatory press release about me and then sue OpenAI. Easy money.
This comment has been flagged by the community.
Re:
Lawyers have been doing that for years, only they crowdfund losing cases which they engineer.
Someone defamed doesn’t even have to engineer a lawsuit; they can just introduce themselves anywhere they go and wait until some idiot Googles them.
Re: Re:
…hallucinates Jhon Smith.
Re: Re: Live boi or dead girl?
Hey Jhon just what did you get caught doing that you are so worried everyone will find out about?
This comment has been flagged by the community.
Re: Re: Re:
No need for 230 if people are “finding out” the truth.
How petty is someone who thinks misnaming someone demonstrates any type of value…so fixated on someone so irrelevant.
Debating the larger issue of not protecting reputations against psychos (who can and have gone to prison when defamation becomes harassment, btw) isn’t going to work, so out come the personal attacks and implied threats of blackmail (reported to law enforcement) should anyone sign their name, on a site with links to known cyberterrorists and now a foreign, state-sponsored hacking group.
Keep seething. It’s all you can do when handcuffed.
Manick runs this show anyway. Let’s see how he deals with a real investigative journalist in the mainstream media given a map with all connected dots. Something tells me he won’t handle the intensity very well.
Re: Re: Re:2
Jhon pls
Karl is one of them
Did your latest scam fail again
Re: Re: Re:2
You misnamed yourself, then chose to jump whenever the misname gets brought up like a trained Pavlovian dog.
So what kind of hack have you been working with for the last five years to get no results? NOW you’re starting to be serious? I thought the pirates took away all your mailing list money?
Re: Re: Re:2
Said the chump who spelled Mike’s name as “Manick”.
But suppose for the sake of argument that the misspelling of someone’s name, intentional or otherwise, was grounds for a defamation lawsuit… What do you think your angry, spiteful rants over the years are going to get you?
Don’t file checks your ass can’t cash, John. Prenda couldn’t do it, neither can you.
Re: Re: Re:2
Your self-awareness is refreshing, but that’s really about all the value you bring to the table.
I’m sure you’re just dying to go to a courtroom to have this all out in the open, including your full name and affiliations.
Apparently you’re fine with defamation and harassment when it’s not directed at you. But I’ll save you the trouble, your threats are as toothless now as they were against Otis Wright when he nailed your copyright heroes Prenda Law to the wall.
For what it’s worth, I don’t need a cyberterrorist organization or state-sponsored hacking group, I can make fun of you all the same without anyone’s help.
Re: Re:
Isn’t that exactly what Milorad Truklja did? Sued Google for pointing people to search results, instead of the news outlet that happened to accidentally photograph him in the same shot as a gangster?
Oh, but wait, you spent weeks, months and years trying to portray his ability to sue Google as a good thing. So what the hell are you complaining about now? I’m still waiting on that rape you threatened me with five years ago, though.
This comment has been flagged by the community.
Re: Re: Re:
Go get fucked, PaulT. Why you and Leigh haven’t thrown yourselves off a cliff is beyond me.
I strongly recommend you do so before the police come knocking. At least that way you can save yourself some anal virginity when I’m done with you.
Re: Re: Re:2
You know, I really hope you do actually submit that lawsuit you keep threatening. You’ll have to finally put a name to the mystery Prenda Law fanboy.
Re: Re: Re:2
Jhon pls
Jhon pls
Did you trade your ass for privileges in prison before you seem to know all about that
This comment has been flagged by the community.
Re: Re: Re:3
Paul Hansmeier will appeal. And he will win. And when he wins, your ass is grass.
Re: Re: Re:4
Jhon pls
Prenda Law is toast and not the delicious kind
Le Paul Hansmeier can’t file any more lawsuits that challenge the constitutionality of the federal mail fraud, wire fraud, and extortion statutes
Are you suffering from sunk cost fallacy because you clearly are
I guess you also like mail fraud, wire fraud and extortion since you had a scam too
Amazon remembers your scheme
Re: Re: Re:4
You can keep repeating your own legal fantasies, the same way you insisted that Otis Wright would be punished for going after Prenda Law. And just like before, your fervent repetitions won’t suddenly make your fantasies true.
Actually, had Paul Hansmeier not gone for repeated bites at the appeal apple, or immediately dove headfirst into the ADA trolling business after his copyright cases got canned, he’d probably not be as neck-deep in shit as he is now. Heck, he’d probably get time off for good behavior like his buddy John Steele did, or like Martin Shkreli did. But because he chose to double down repeatedly eventually the courts got tired of his shit. And that’s on top of the fact that he repeatedly tried to get the courts to be the engine for funneling his money.
You think the judges are going to go after Mike because he mocked your heroes of copyright? Judges hate it even more when you treat them like tech support scammers treat their money mules in the US. But you’d know a thing or two about scams, of course, with your white elephant merchandise for your mailing list subscribers.
Five years and all you have is insults and threats. The backing of all of Hollywood and the police force, and you still can’t bring one “small fry” like Masnick to task. You don’t have anything more than blowing smoke up asses after your best and brightest in copyright law fucked themselves over and everyone else in the IP enforcement industry, and the only thing you know to do is take your anger out on victims who can’t fight back. But that’s copyright enforcement in a nutshell, isn’t it? Which is why your team got angry when Malibu Media was told that the fines of several hundred thousand dollars had to be divided up between each John Doe, so they’d only have to pay several hundred dollars instead. So your team turned tail and ran away. And got themselves into even more hot water trying to hide assets from the judge.
You’re nothing, John Smith, and the fact that you have to skulk away on this site you claim nobody reads, just to shake your fist at Internet randos making fun of your own typographic errors, is proof.
Search results should be treated as if they were credit reports.
That actually sounds pretty rad. Clarkesworld and the WGA would probably love it if you couldn’t use AI to spew out cheap garbage.
Re:
Writers in the WGA also write shows that aren’t cheap garbage, such as Ted Lasso and Reservation Dogs (do watch them if you haven’t done so already).
Re: Re:
I agree. I wasn’t implying that the WGA puts out cheap garbage. They do good work for the most part, and the studios want to turn them into underpaid script editors for AI-generated garbage.
Re: Re: Re:
As a longtime movie fan with a fascination for behind-the-scenes stuff, it’s always striking to me how badly writers are treated, when without writing you don’t have a movie. I understand in theory why they place more stock in movie stars who attract people to the box office, but why would you spend hundreds of millions on a movie without a blueprint, and why would you abuse the people creating that blueprint?
Obviously, there are examples where creatives have been given free rein and delivered flops or bad work, but there are also examples where they invented franchises or even genres that made billions. Why you’d turn that over to something that by definition lacks creativity is beyond me, but I can also name numerous movies where studio interference destroyed a movie’s chances, only for it to be hailed as a classic when the original version was made available.
Re: Re: Re:2
Because someone is always at the bottom, and the fewer people at the top need more acclaim and money. See every business ever.
If Hawley and Blumenthal are behind it, it’s bad for the internet.
Conflict of interest, what’s that? :p
Blumenthal: “Happy Pride Month!”
Also Blumenthal: Is a sponsor and/or cosponsor, alongside the most extreme, delusional, and xenophobic bigots like Josh Hawley, Marsha Blackburn, and Lindsey Graham and extremist groups like NCOSE, of bills that would potentially break the internet and make it a worse place for minorities like the Queer community.
Re:
Never expect consistency from politicians—you’ll frustrate yourself into despair and they’ll never apologize for their hypocrisy.
Re: Re:
They’ve made talking out of both sides of their mouth into an artform.
Re: Re: Re: Pet Detective
I can talk outta mah ass!!!
Re: Re: 'Consistently self-centered' is still consistent
Depends on what you expect them to be consistent on, they’re absolutely consistent in saying whatever they think will benefit themself the most at any given moment.
Missing a word in the article
Senator Richard Blumenthal and insurrectionist supporting Senator Josh Hawley have teamed up to introduce a bill that explicitly says AI companies get to 230 protection.
You probably meant: “…explicitly says AI companies don’t get 230 protection.”
“In other words, even in the world of generative AI, there are still humans in the loop”
This is something people seem to miss. There’s someone at the front end (supplying the prompts) and someone at the back end (receiving the content). The expectation should be that the trade-off for the AI doing the work is that its results are checked before publishing. Obviously there are ways to automate things so that’s no longer true, but I think that check should be expected as the cost of “saving” time by passing the work off.
In my work, I’ve sometimes used ChatGPT to help write or debug things, and the results can be impressive. But I’m not going to just deploy to production blindly; I’m going to check the code and go through the usual testing/approval phases, and it’s still my ass on the line if something goes wrong. I think that if you’re going to try to replace a writer with AI, you should still be required to have an editor.
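A minimal sketch of what that review gate looks like in practice. Everything here is hypothetical for illustration: pretend the function body came from an AI assistant, and the human-written checks below it have to pass before anything ships.

```python
# Hypothetical example: suppose an AI assistant suggested this helper.
# Nothing gets deployed until it passes checks a human wrote.

def word_count(text: str) -> int:
    """Count whitespace-separated words (AI-suggested implementation)."""
    return len(text.split())

def review_suggestion() -> str:
    # Human-written checks, including the edge cases a reviewer
    # would think of even if the AI didn't.
    assert word_count("hello world") == 2
    assert word_count("") == 0                  # empty input
    assert word_count("  spaced   out  ") == 2  # odd whitespace
    return "approved"

print(review_suggestion())  # only reached if every check passed
```

The point isn’t the specific checks; it’s that a human, not the model, is the last gate before production, which is exactly where the accountability sits.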
How do you train an AI tool to never get anything wrong in producing content?
Next, do humans.
Re:
I am reminded of a short story, wherein the protagonist finds that he’s been given a “world processor”. The comedy begins with the typo “the quick red fox jumped over the lazy brown fog”, and Our Hero sees, yes, a fox jumping over a small brown cloud.
The punchline was Our Hero typing, “let there be pease on earth” … “And the porridge rains began.”
Re:
“How do you train an AI tool to never get anything wrong in producing content?”
Not possible.
Communication, education, critical thinking and an open mind are a good start. Politics gets in the way.
Start with Josh Hawley.
Oh, wait.
Why should someone be stalked by lies that anyone they encounter will inevitably find, and often act upon?
Might as well repeal all libel laws and let anyone say anything, but oddly, some seem to value reputation. Or we could just do that.
Re:
This “stalking” would be worth taking note of if you ever cited an example worth a damn.
Except that even your biggest move in Google v Gonzalez failed.
Sucks, don’t it?
Re:
Jhon pls
you just mad you can’t gather private info via Amazon to sell to databrokers
I bet you also hate journalism
Your argument more or less works well at the moment, when ChatGPT is limited to private, prompted responses. Or even private, unprompted responses if it’s, say, implemented inside a word processor to provide suggestions just like current grammar and spell checkers do. And that kind of use will certainly continue to exist. But it works only because the use is entirely private; the only way that defamatory content could become public (and therefore become illegal) is if a human reads it and decides to publish it.
But what happens a little bit down the road, when its successor is inevitably implemented in actual public-facing uses: as a bot to pretend to be Gandalf in r/LOTRmemes, or to provide product summaries in r/BIFL, or issue summaries in r/politics, or any of the other dozens of uses that will certainly be tried? (Assuming reddit survives that long…)
There is one human deliberately designing the bot activity… but they did not actually write the prompt it was given (the commenter did), nor did they generate the response (the AI company did).
There may be one human deliberately involved in calling the bot in their comment, but there also may not be if the bot is designed to appear everywhere or substantially everywhere as part of the sub organization.
Regardless, nobody has any idea what the bot has to say until the words have already been published. It seems, short of one of the above deliberately designing their prompt to request defamatory responses, that there is nobody responsible at all when defamatory statements are made.
And that produces essentially infinite leeway for easy defamation of anyone, anytime. After all, nobody at the AI company could possibly know how or why or what the AI responds with. The commenter has an even better claim to that ignorance, plus a claim to not having even intended to provide a prompt to said AI at all. And the one who wrote the API linking code was involved in neither generation of the prompt nor the response.
As for the comparison to Google snippet generation, it seems not too hard to differentiate between the two. Google intends for and purports that their snippet merely re-words information found in the specific source material it is linking to. It is in that sense more akin to sharing or re-tweeting or stitching another post, than it is to creating that content itself. I would hope that if Google began for some reason generating snippet information that was not in that source material, Google would be responsible for its content.
Generative AI in the sense of ChatGPT, is not intended to source specific information from anywhere at all, or to provide that information in its response. It is intended merely to write things that look like something humans might say or do in response to a given prompt. That could include accurate information that people have said before, inaccurate information that people have said before, or just completely made-up information that nobody has said before. Most often a combination of all 3.
Re:
You fail to grasp the very simple fact that there is always someone responsible for publishing content. Whether a user clicks “publish” or someone has set up an automated publishing process, they are responsible for any content their actions made public.
Whatever you think, there’s always someone who initially pressed a button somewhere who will be considered the publisher of the content, regardless of how it was created or generated. So no, there won’t be “infinite leeway for easy defamation” in the way you think.
I can only conclude that you think that AI is some magic machine that does whatever it wants of its own volition. It isn’t, it’s just another tool used by people and AFAIK people are still accountable for their actions.
Re: Re:
Sure, and that should be the person who asked ChatGPT for the content, not OpenAI, the creators of a machine that just did what it was told, as all machines are supposed to.
Re: Re:
Perhaps a more concrete example then.
Let’s imagine that a redditor on r/politics creates a bot which has ChatGPT generate a tldr on any comment of sufficient length in the sub.
This person is, as is argued in the article, immune via Section 230 due to their tool being intended to summarize the words of others. And even if it “spews out some nonsense,” the article argues it should still be protected because it’s merely an algorithm and the writer of the bot had no requisite knowledge of the content in question. This is the Google snippet-creation tool, merely using ChatGPT instead of whatever algorithm Google uses.
Similarly, ChatGPT, the source of the generative algorithm, is immune because its creators also had no requisite knowledge.
The person who posted is also immune: they had no choice in the matter, the bot operates outside of their control, and if the person who commented isn’t a regular on r/politics they may not even be aware the bot exists, and couldn’t reasonably be expected to artificially shorten their comment to avoid triggering it.
Now this works fine… so long as the bot doesn’t “spew out some nonsense”, as the article put it. If it actually summarizes things that the commenter said, then the commenter can be gone after for what they said.
But when it inevitably does, in fact, spew out nonsense that has little meaningful relationship to what the commenter said… then nobody is liable.
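To make the three-party split concrete, here’s a rough sketch of that hypothetical tldr bot’s pipeline. Every name and the stubbed model call are illustrative only (a real bot would call an actual API); the point is that each step belongs to a different party, and no human sees the output before it’s published.

```python
# Illustrative sketch of the hypothetical r/politics tldr-bot pipeline.
# Three different parties touch the content; none reviews the result.

def commenter_posts() -> str:
    # Party 1: the commenter writes an ordinary comment,
    # with no intention of prompting any AI.
    return "A very long comment about some proposed legislation..."

def bot_builds_prompt(comment: str) -> str:
    # Party 2: the bot author's code wraps the comment in a
    # summarization prompt. They never see this specific text.
    return f"tldr the following comment: {comment}"

def model_generates(prompt: str) -> str:
    # Party 3: the AI company's model produces text nobody reviews.
    # (Stubbed here for illustration instead of a real API call.)
    return "Summary: " + prompt

def bot_publishes(text: str) -> str:
    # Posted automatically -- published before any human reads it.
    return text

published = bot_publishes(model_generates(bot_builds_prompt(commenter_posts())))
print(published)
```

Trace the liability question through those four functions and the commenter’s problem is visible: the commenter wrote no prompt, the bot author wrote no response, and the model’s maker saw neither, yet the output is public the instant it exists.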
Re: Re: Re:
Why does someone have to be liable?
Re: Re: Re:2
Mostly because John Smith wants his mailing list money.