Regulations For Generative AI Should Be Based On Reality, Not Hallucinations.

from the it's-not-a-pack-of-gum dept

In their haste to do something about the growing threat of AI-fueled disinformation, harassment, and fraud, lawmakers risk introducing bills that ignore some fundamental facts about the technology. For California lawmakers in particular, the urgency is compounded: their state is home to the world’s most prominent AI companies, and they can pass laws more quickly than Congress can.

Take, for example, California SB 942, an attempt to regulate generative AI that appears to have hallucinated some of the assumptions on which it’s built.

In short, SB 942 would:

  1. Require a visible disclosure on images, video, and text that the content is AI-generated; a disclosure in their metadata (a rough sketch of what that might look like follows this list); and an imperceptible, machine-readable disclosure.
  2. Require AI providers to create a detection tool where anyone can upload content to check if it was made using that provider’s service.
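
To make the first requirement a bit more concrete, here is a minimal Python sketch of the “disclosure in their metadata” piece, using Pillow’s PNG text chunks. The field name “ai-disclosure” and its wording are my own hypothetical placeholders, not anything SB 942 specifies, and the sketch also hints at why metadata alone is weak: it disappears the moment the image is screenshotted or re-encoded.

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    # Hypothetical metadata disclosure -- the key name "ai-disclosure" is
    # made up for illustration, not defined by SB 942.
    img = Image.new("RGB", (256, 256), "white")
    info = PngInfo()
    info.add_text("ai-disclosure", "This image was generated by an AI system.")
    img.save("generated.png", pnginfo=info)

    # Reading the disclosure back works, but a screenshot, crop, or format
    # conversion silently drops the chunk -- which is the enforcement problem.
    reread = Image.open("generated.png")
    print(reread.text.get("ai-disclosure"))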

Sounds pretty good, right? Wouldn’t this maybe help fight AI-generated abuse?

It’s unlikely. 

In a hearing last week, tech and policy entrepreneur Tom Kemp testified in support of the bill. He opened by pointing out that Google CEO Sundar Pichai believes AI will be more profound than the invention of fire or the internet. Then, holding up a pack of gum, he said that if we can require a food label for a pack of gum, we should at least require labels on something as profound as AI.

He concluded:

“In summary, this bill puts AI content on the same level as a pack of gum in terms of disclosures, which is the transparency that Californians need.”

Huh? It’s a fun analogy, but not a useful one. The question we should ask is not whether generative AI is like food. After all, the regulation of food products has different legal considerations than the regulation of expressive generative AI content. 

What we should ask is: Will this policy solve the problems we want to solve?

Visible Disclosures:

SB 942 requires disclosures for AI-generated text, but there is no effective method for flagging text as AI-generated or detecting that it is. Unlike, say, a watermark on an image, a disclosure for text would need to fundamentally alter the message to communicate its synthetic nature. A written disclosure could precede the generated text, but users could simply cut that portion out.

This part of the bill made my Trust & Safety senses tingle. Unenforceable platform policies erode trust because of the mismatch between the rules and what consumers expect. Similarly, a law requiring disclosures on AI-generated text may give consumers a false sense of protection when there is no reliable way to deliver those notices.

The bill also treats generative AI as if it could only be used for malicious purposes. There are many cases where a disclosure simply doesn’t matter or is even undesirable. For example, if I want to generate an image of myself playing basketball on the moon, there won’t be any question about its inauthenticity. Or if I want to use Photoshop’s generative fill tool for a piece of marketing, I surely don’t want a watermark interrupting my design. To require by law that all of it be labeled is a heavy-handed approach that seems unlikely to withstand First Amendment scrutiny.

Detection Tools:

AI detection tools are actively being researched and developed, but at this point they can’t offer definitive answers to questions of authenticity. They give answers with widely varying degrees of uncertainty. That nuance sometimes gets ignored, with serious consequences, as in the cases where students were falsely accused of plagiarism.

In fact, the technology is so unreliable that last year OpenAI killed its own detection tool, citing its low rate of accuracy. If a safety-conscious AI company is pulling down its own detection tool because it does more harm than good, what incentive does a less conscientious business have to make its detection tool any less harmful?

There are already several generative AI detection services, many offered for free, competing for this niche market. If detection tools make big advances in reliability, it won’t be because we required generative AI companies to push one out just to comply with the law.

It’s worth mentioning that during last week’s hearing, the bill’s author, Senator Becker, acknowledged that it’s a work in progress and promised to continue collaborating with industry to “strike the right balance.” I appreciate his frankness, but I’m afraid that would essentially mean scrapping it. I expect he’ll remove the mention of AI-generated text, and I hope he gets rid of the detection tool requirement, but that would still leave us with a vague, hard-to-comply-with requirement to label all AI-generated images and video.

The law should try to account for new and developing technology, but it also needs to operate based on fundamental ground truths about it. Otherwise, it will be no more useful than an AI hallucination.

Alan Kyle is a tech governance enthusiast who is looking for his next work opportunity at the intersection of trust & safety and AI.


Comments on “Regulations For Generative AI Should Be Based On Reality, Not Hallucinations.”

Anonymous Coward says:

The article highlights California SB 942’s attempt to regulate generative AI, emphasizing the bill’s oversights and impracticalities. While proposing visible disclosures and detection tools for AI-generated content, it fails to address key challenges. The analogy likening AI content to a pack of gum in disclosure transparency simplifies the issue. Mandating disclosures on AI-generated text raises enforceability concerns and stifles creativity. Similarly, reliance on AI detection tools is problematic due to their unreliability. While the bill’s author acknowledges the need for balance, significant revisions are necessary to ensure effectiveness and practicality. As we navigate this evolving landscape, policymakers must understand the technology’s nuances for effective regulation.

Anonymous Coward says:

The people who should be in the room...

…when this bill is crafted — are not in the room. There are experts in AI, linguistics, security, privacy, etc. (to name a few: Emily Bender, Steve Bellovin, Matthew Green, Timnit Gebru, Jill Fain Lehman, Matt Blaze, Sherrod DeGrippo, Bruce Schneier) who are not working for AI/LLM companies AND who have vastly superior expertise. Those are the people who need to be heavily involved in this — the AI/LLM companies may safely be assumed to be acting completely in their own self-interest and thus their input can be discarded.

Arianity says:

Or if I want to use Photoshop’s generative fill tool for a piece of marketing, I surely don’t want a watermark interrupting my design.

Honestly I don’t see the issue with that.

what incentive does a less conscientious business have to make their detection tool any less harmful?

Besides complying with the law/future laws?

Platform policies that are unenforceable erode trust when there is a mismatch between the rules and what consumers expect.

It seems like the question here is whether it erodes trust more than doing nothing. And it’s not clear that it does.

It’s clearly no magic bullet, but these seem pretty reasonable steps that’ll help on the margins.

Anonymous Coward says:

For text you could embed some kind of tag with nonprinting Unicode code points, which would definitely hamper casual abuse. Yes, it would not stop anyone who does even the slightest diligence, but most people would not bother searching for and removing such tags, the same way most burglars carry their phones on the ‘job’ and don’t use counterfeit license plates.
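
As a rough illustration of that idea (and of how little effort removal takes), here is a minimal Python sketch that appends a marker encoded as zero-width characters and then strips it with a one-line regex. The encoding scheme is invented for this example; no provider is known to use it.

    import re

    # Zero-width characters sometimes floated as invisible text markers.
    ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

    def embed_tag(text, tag="AI"):
        # Encode the tag's bits as zero-width characters appended to the text.
        bits = "".join(f"{ord(c):08b}" for c in tag)
        return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

    def strip_tags(text):
        # Removal is one regex -- enough to clear the "casual abuse" bar.
        return re.sub(r"[\u200b\u200c\u200d\ufeff]", "", text)

    marked = embed_tag("This paragraph was machine generated.")
    print(len(marked) - len(strip_tags(marked)))  # 16 hidden characters added
    print(strip_tags(marked))                     # original text, marker gone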

David Sanger (profile) says:

What we can do

From Prof. Eric Goldman’s address Generative AI is Doomed

“It would help to rebrand Generative AI to distance it from “AI.” If we were to more expressly acknowledge the Content Generation Function and Research Functions of Generative AI, it might reduce public fear and make the Constitution’s applicability more obvious.

“On that front, I encourage you to critically scrutinize every effort to regulate Generative AI. Don’t assume those efforts are being advanced for your benefit, or for legitimate reasons, or in a Constitutional manner. Once you notice how often such efforts are illegitimate, you will be better positioned to hold the advocates more accountable.”
