Regulations For Generative AI Should Be Based On Reality, Not Hallucinations.
from the it's-not-a-pack-of-gum dept
In their haste to do something about the growing threat of AI-fueled disinformation, harassment, and fraud, lawmakers risk introducing bills that ignore some fundamental facts about the technology. For California lawmakers in particular, this urgency is compounded by the fact that they preside over the world’s most prominent AI companies and can pass laws more quickly than Congress can.
Take, for example, California SB 942, which is an attempt to regulate generative AI, but which appears to have hallucinated some of the assumptions on which it’s built.
In short, SB 942 would:
- Require a visible disclosure on images, video, and text that the content is AI-generated; a disclosure in their metadata; and an imperceptible disclosure that is machine readable.
- Require AI providers to create a detection tool where anyone can upload content to check if it was made using that provider’s service.
Sounds pretty good right? Wouldn’t this maybe help fight AI-generated abuse?
It’s unlikely.
In a hearing last week, tech and policy entrepreneur Tom Kemp testified that we need this bill. He opened by pointing out that Google CEO Sundar Pichai believes AI will be more profound than the invention of fire or the internet. Then, holding up a pack of gum, he said that if we can require a food label for a pack of gum, we should at least also require labels on something as profound as AI.
He concluded:
“In summary, this bill puts AI content on the same level as a pack of gum in terms of disclosures, which is the transparency that Californians need.”
Huh? It’s a fun analogy, but not a useful one. The question we should ask is not whether generative AI is like food. After all, the regulation of food products has different legal considerations than the regulation of expressive generative AI content.
What we should ask is: Will this policy solve the problems we want to solve?
Visible Disclosures:
SB 942 requires disclosures for AI-generated text, but there is no effective method for flagging text as AI-generated or for detecting it after the fact. Unlike, say, a watermark on an image, a disclosure for text would need to fundamentally alter the message to communicate its synthetic nature. A written disclosure could precede the generated text, but users could simply cut that portion out.
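To make that concrete, here is a minimal sketch (the disclosure wording and example text are hypothetical) of how trivially a prepended label disappears:

```python
# A hypothetical prepended disclosure of the kind the bill seems to envision.
DISCLOSURE = "[This content was generated by AI.]\n"

generated = DISCLOSURE + "Here is a friendly reminder about your appointment."

# Removing the label is a one-line operation for anyone intent on deception.
stripped = generated.removeprefix(DISCLOSURE)

print(stripped)  # the "disclosed" text, now indistinguishable from human writing
```

Nothing about the remaining text signals its origin, which is the core enforceability problem.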
This part of the bill made my Trust & Safety senses tingle. Unenforceable platform policies erode trust because of the mismatch between the rules and what consumers expect. Similarly, a law requiring disclosures on AI-generated text may give consumers a false sense of protection when there is no way to reliably deliver those notices.
The bill also assumes that generative AI can only be used for malicious purposes. There are many cases where a disclosure simply doesn’t matter or is even undesirable. For example, if I want to generate an image of myself playing basketball on the moon, there won’t be any question about its inauthenticity. Or if I want to use Photoshop’s generative fill tool for a piece of marketing, I surely don’t want a watermark interrupting my design. Requiring by law that it all be labeled is a heavy-handed approach that seems unlikely to withstand First Amendment scrutiny.
Detection Tools:
AI detection tools are actively being researched and developed, but at this point they can’t offer definitive answers to questions of authenticity. They give answers with widely varying degrees of uncertainty. This nuance sometimes gets ignored, with serious consequences, as in the cases where students were falsely accused of plagiarism.
In fact, the technology is so unreliable that last year OpenAI killed its own detection tool, citing its low accuracy rate. If a safety-conscious AI company is pulling down its own detection tool because it does more harm than good, what incentive does a less conscientious business have to make its detection tool any less harmful?
There are already several generative AI detection services, many offered for free, competing for this niche market. If detection tools make big advancements in reliability, it won’t be because we required generative AI companies to push one out just to comply with the law.
It’s worth mentioning that during last week’s hearing, the bill’s author, Senator Becker, acknowledged that it’s a work in progress and promised to continue collaborating with industry to “strike the right balance.” I appreciate his frankness, but I’m afraid that would essentially mean scrapping it. I expect he’ll remove the mention of AI-generated text, and I hope he gets rid of the detection tool requirement, but that would still leave us with a vague, hard-to-comply-with requirement to label all AI-generated images and video.
The law should try to account for new and developing technology, but it also needs to operate based on fundamental ground truths about it. Otherwise, it will be no more useful than an AI hallucination.
Alan Kyle is a tech governance enthusiast who is looking for his next work opportunity at the intersection of trust & safety and AI.
Filed Under: disclosures, generative ai, josh becker, sb 942, tom kemp, transparency, watermarks
Comments on “Regulations For Generative AI Should Be Based On Reality, Not Hallucinations.”
I’d say, “depends on whose hallucinations you’re talking about”.
I agree, though, that it should not be lawmaker hallucinations driving it.
Re:
Example number eleventy-billion of government regulation of software being a shitshow.
Ahhh, the ever popular…
See we did something!!!
So what if it can’t be done, we’ll just claim they are resisting our laws & not just pointing out the emperor has no clothes and no understanding of technology.
—- government rulemakers acting rationally is uncommon, especially in California
a rational citizen recognizes that political reality
What specific knowledge/experience/skills would qualify a person to “Regulate” AI ?
The article highlights California SB 942’s attempt to regulate generative AI, emphasizing the bill’s oversights and impracticalities. While proposing visible disclosures and detection tools for AI-generated content, it fails to address key challenges. The analogy likening AI content to a pack of gum in disclosure transparency simplifies the issue. Mandating disclosures on AI-generated text raises enforceability concerns and stifles creativity. Similarly, reliance on AI detection tools is problematic due to their unreliability. While the bill’s author acknowledges the need for balance, significant revisions are necessary to ensure effectiveness and practicality. As we navigate this evolving landscape, policymakers must understand the technology’s nuances for effective regulation.
Re:
Why even bother, thanks.
Re: Re:
Is that sarcasm, fellow human?
Re:
Gosh. If only we had a special AI detector to determine that the above comment was created by AI.
Re: Re:
What, like eyes?
Re: Re: Re:
Uh, yeah. Because blind people have no way of detecting AI-generated content. 🤦‍♂️
Re: Re: Re:2
What, like screen readers?
Re: Re: Re:3
*whooooosh!*
Re: Re: Re:2
I never knew Mike was blind.
Re: Re: Re:3
*whooooosh!*
The people who should be in the room...
…when this bill is crafted — are not in the room. There are experts in AI, linguistics, security, privacy, etc. (to name a few: Emily Bender, Steve Bellovin, Matthew Green, Timnit Gebru, Jill Fain Lehman, Matt Blaze, Sherrod DeGrippo, Bruce Schneier) who are not working for AI/LLM companies AND who have vastly superior expertise. Those are the people who need to be heavily involved in this — the AI/LLM companies may safely be assumed to be acting completely in their own self-interest and thus their input can be discarded.
Re:
You mean Timnit Gebru the conspiracy theorist?
Re: Re:
To be clear, you’re right that the companies themselves are 100% untrustworthy. But I don’t think people who just post random garbage on Twitter are any better.
Honestly I don’t see the issue with that.
Besides complying with the law/future laws?
It seems like the question here is whether it erodes trust more than doing nothing. And it’s not clear that it does.
It’s clearly no magic bullet, but these seem pretty reasonable steps that’ll help on the margins.
For text you could embed some kind of tags with nonprinting unicode points, which would definitely hamper casual abuse. Yes, it would not stop anyone who does even the slightest diligence, but most people would not bother searching for and removing such tags, the same way most burglars carry their phones on a ‘job’ and don’t use counterfeit license plates.
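A minimal sketch of that idea (the tag scheme and function names here are hypothetical, just to illustrate) shows both sides of the tradeoff: the tag survives casual copy-paste, but falls to a one-line regex:

```python
import re

# Encode tag bits as zero-width space / zero-width non-joiner characters.
ZERO_WIDTH = {"0": "\u200b", "1": "\u200c"}

def embed_tag(text: str, tag: str) -> str:
    """Append `tag` to the text as invisible zero-width characters."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    return text + "".join(ZERO_WIDTH[b] for b in bits)

def extract_tag(text: str) -> str:
    """Recover the hidden tag by reading the zero-width characters back as bits."""
    bits = "".join("0" if c == "\u200b" else "1"
                   for c in text if c in "\u200b\u200c")
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

def strip_tags(text: str) -> str:
    """What anyone doing even minimal diligence would run before reposting."""
    return re.sub(r"[\u200b\u200c]", "", text)

marked = embed_tag("Totally human prose.", "AI")
assert extract_tag(marked) == "AI"                    # a detector finds the tag...
assert strip_tags(marked) == "Totally human prose."   # ...until it's removed
```

So it raises the bar against casual misuse, exactly as the comment suggests, without pretending to stop a determined abuser.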
What we can do
From Prof. Eric Goldman’s address Generative AI is Doomed
“It would help to rebrand Generative AI to distance it from “AI.” If we were to more expressly acknowledge the Content Generation Function and Research Functions of Generative AI, it might reduce public fear and make the Constitution’s applicability more obvious.
“On that front, I encourage you to critically scrutinize every effort to regulate Generative AI. Don’t assume those efforts are being advanced for your benefit, or for legitimate reasons, or in a Constitutional manner. Once you notice how often such efforts are illegitimate, you will be better positioned to hold the advocates more accountable.”