Court Quickly Rejects California’s Deepfake Law As Blatantly Unconstitutional

from the well-that-was-fast dept

Well, things sure move fast in this world of AI regulations. Just last week, we noted that California Governor Gavin Newsom signed a pretty obviously unconstitutional set of laws regarding the use of deepfake imagery around “election communications.” Then, just hours later, he was sued for it.

This week, federal judge John Mendez has already put one law on hold, AB 2839*, noting that it seems quite clearly to be unconstitutional. Oftentimes, these kinds of rulings on preliminary injunctions will talk about “likelihood of success” and highlight how it will probably be found unconstitutional if it goes through the full legal process.

However, here, Judge Mendez just said point blank that the law is unconstitutional.

AB 2839 does not pass constitutional scrutiny because the law does not use the least restrictive means available for advancing the State’s interest here. As Plaintiffs persuasively argue, counter speech is a less restrictive alternative to prohibiting videos such as those posted by Plaintiff, no matter how offensive or inappropriate someone may find them. “Especially as to political speech, counter speech is the tried and true buffer and elixir,” not speech restriction….

While California has a valid interest in protecting the integrity and reliability of the electoral process, AB 2839 is unconstitutional because it lacks the narrow tailoring and least restrictive alternative that a content based law requires under strict scrutiny.

As you’ll recall, Newsom made all this particularly easy for the plaintiff, Christopher Kohls, by directly stating that he was signing the law in an attempt to force Elon Musk and ExTwitter to remove a parody video that Kohls had created.


The court cites the recent Supreme Court Moody rulings as enabling it to find that state laws targeting online speech violate the First Amendment. In response, California had argued that the law only covered unprotected “false” speech. Except, of course, it’s only a very narrow set of false speech that falls outside of First Amendment protections, and this law is hardly narrowly tailored to focus on just that speech… as Newsom himself confirmed in claiming that the parody video was “illegal” under the law.

The judge isn’t buying California’s argument at all:

While Defendants attempt to analogize AB 2839 to a restriction on defamatory statements, the statute itself does not use the word “defamation” and by its own definition, extends beyond the legal standard for defamation to include any false or materially deceptive content that is “reasonably likely” to harm the “reputation or electoral prospects of a candidate.” Cal. Elec. Code § 20012(b) (emphasis added). At face value, AB 2839 does much more than punish potential defamatory statements since the statute does not require actual harm and sanctions any digitally manipulated content that is “reasonably likely” to “harm” the amorphous “electoral prospects” of a candidate or elected official….

Moreover, all “deepfakes” or any content that “falsely appear[s] to a reasonable person to be an authentic record of the content depicted in the media” are automatically subject to civil liability because they are categorically encapsulated in the definition of “materially deceptive content” used throughout the statute. Id. § 20012(f)(8). Thus, even artificially manipulated content that does not implicate reputational harm but could arguably affect a candidate’s electoral prospects is swept under this statute and subject to civil liability.

The statute also punishes such altered content that depicts an “elections official” or “voting machine, ballot, voting site, or other property or equipment” that is “reasonably likely” to falsely “undermine confidence” in the outcome of an election contest. Id. § 20012(b)(1)(B), (D). On top of these provisions lacking any objective metric and being difficult to ascertain, there are many acts that can be “do[ne] or [words that can be] sa[id]” that could harm the “electoral prospects” of a public official or “undermine confidence” in an election…. Almost any digitally altered content, when left up to an arbitrary individual on the internet, could be considered harmful. For example, AI-generated approximate numbers on voter turnout could be considered false content that reasonably undermines confidence in the outcome of an election under this statute. On the other hand, many “harmful” depictions when shown to a variety of individuals may not ultimately influence electoral prospects or undermine confidence in an election at all. As Plaintiff persuasively points out, AB 2839 “relies on various subjective terms and awkwardly-phrased mens rea,” which has the effect of implicating vast amounts of political and constitutionally protected speech….

Thus, it’s no surprise that this law, which clearly impacts speech, must pass strict scrutiny to be found constitutional. This law and strict scrutiny are in different zip codes: the court finds that it is not narrowly tailored. Indeed, the Court rightly notes that any attempt to regulate political speech is particularly fraught and must be very clearly narrowly tailored, which this law was not:

The political context is one such setting that would be especially “perilous” for the government to be an arbiter of truth in. AB 2839 attempts to sterilize electoral content and would “open[] the door for the state to use its power for political ends.” Id. “Even a false statement may be deemed to make a valuable contribution to public debate, since it brings about ‘the clearer perception and livelier impression of truth, produced by its collision with error.’” Id. (quoting New York Times Co., supra, at 279, n. 19). When political speech and electoral politics are at issue, the First Amendment has almost unequivocally dictated that Courts allow speech to flourish rather than uphold the State’s attempt to suffocate it.

Upon weighing the broad categories of election related content both humorous and not that AB 2839 proscribes, the Court finds that AB 2839’s legitimate sweep pales in comparison to the substantial number of its applications, as in this case, which are plainly unconstitutional. Therefore, the Court finds that Plaintiff is likely to succeed on a First Amendment facial challenge to the statute.

Separately, the court finds that the “compelled disclosure” of manipulated images is also unconstitutional, as it is unduly burdensome compelled speech:

For parody or satire videos, AB 2839 requires a disclaimer to air for the entire duration of a video in text that is no smaller than the largest font size used in the video. Cal. Elec. Code § 20012(b). In Plaintiff Kohls’ case, this requirement renders his video almost unviewable, obstructing the entirety of the frame. Compl. ¶ 98. The obstructiveness of this requirement is concerning because parody and satire have relayed creative and important messages in American politics. As the Supreme Court has noted, “[d]espite their sometimes caustic nature, from the early cartoon portraying George Washington as an ass down to the present day, graphic depictions and satirical cartoons have played a prominent role in public and political debate.” Hustler Magazine v. Falwell, 485 U.S. 46, 54 (1988).

Defendants do not argue that Plaintiff Kohls’ video qualifies as commercial speech and the Court does not find Plaintiff’s parody to be an actual advertisement. While an argument could be made that some parodies or satire are in effect commercial speech, a vast majority of these creations are simply humorous artistic endeavors which are not subject to commercial speech regulations. In a non-commercial context like this one, AB 2839’s disclosure requirement forces parodists and satirists to “speak a particular message” that they would not otherwise speak, which constitutes compelled speech that dilutes their message.

The conclusion is clear: yes, the concern about deepfakes may be real, but you don’t get to just outlaw them under the First Amendment.

The Court acknowledges that the risks posed by artificial intelligence and deepfakes are significant, especially as civic engagement migrates online and disinformation proliferates on social media. Against this backdrop, the Court does not enjoin the state statute at issue in this motion lightly, even on a preliminary basis. However, most of AB 2839 acts as a hammer instead of a scalpel, serving as a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.

Just as the Court is mindful that legislative leaders enacted AB 2839 and that the State may have a legitimate interest in protecting election integrity, it is equally mindful that the First Amendment was designed to protect citizens against prior restraints and encroachments of speech by State governments themselves. “[W]hatever the challenges of applying the Constitution to ever-advancing technology, the basic principles” of the First Amendment “do not vary” and Courts must ensure that speech, especially political or electoral speech, is not censored for its ideas, subject matter, or content.

This is a good, clean, and clear ruling on a law that was shockingly unconstitutional. We can agree that deepfakes are something to be worried about. And we can even agree that Kohls’ brand of “humorous” deepfakes is not funny or compelling. But the First Amendment says that he should be free to make them, and California cannot outlaw such political speech.

I imagine that California Attorney General Rob Bonta will appeal this decision, just like he’s appealed multiple other decisions in recent years over unconstitutional First Amendment-violating laws from the California Assembly. But it sure would be nice if, rather than wasting our taxpayer money, he focused on educating both Newsom and the California legislature on how the First Amendment works.

If he’d like to set up a “First Amendment 101 CLE,” I know some folks who could help.

* You may recall that the lawsuit is also challenging AB 2655. However, that law doesn’t go into effect for a while, so the immediate focus was on 2839, which went into effect immediately.



Comments on “Court Quickly Rejects California’s Deepfake Law As Blatantly Unconstitutional”

31 Comments
Arianity says:

As Plaintiffs persuasively argue, counter speech is a less restrictive alternative to prohibiting videos such as those posted by Plaintiff

It’s hard to see why counter speech would accomplish the goal here. It seems less that it’s the least restrictive means, and more just the “perilous” argument (and quoting a dissent here is not great). That said:

The safe harbor carveouts of the statute attempt to implement labelling requirements, which if narrowly tailored enough, could pass constitutional muster.

Seems reasonable. Sounds like the text-size thing was a bit off.

Separately, the court finds that the “compelled disclosure” of manipulated images is also unconstitutional, as it is unduly burdensome compelled speech:

It didn’t find that compelled disclosure itself was unconstitutional, just that this particular version was too burdensome. It actually said that, subject to some changes, it could survive.

Also, you missed a part. The audio warning was severed and survived:

HEREBY ENJOINED from enforcing AB 2839 except for the audio only severed portion of the statute.

Also pretty reasonable.

Anonymous Coward says:

Re:

The dissent, if you read the words you pasted, highlights that the statute is written too broadly and captures too much, so it fails the least restrictive means test.

The dissent’s requirement that labeling be ‘narrowly tailored enough’ to ‘pass constitutional muster’ indicates the law as it existed was not narrowly tailored.

Arianity says:

Re: Re:

The dissent, if you read the words you pasted, highlights that the statute is written too broadly and captures too much, so it fails the least restrictive means test.

I know, which is why my issue is not the content of the dissent, but the fact that it’s a dissent to begin with. If you read what I wrote, I made that pretty clear.

The dissent’s requirement that labeling be ‘narrowly tailored enough’ to ‘pass constitutional muster’ indicate

Dissents aren’t binding/relevant requirements. That’s why they’re dissents and not the majority. Which is exactly why you shouldn’t be citing a dissent as if it’s a binding requirement to begin with. Especially when there are perfectly good majority opinions to cite that make the exact same point.

That One Guy (profile) says:

'My feet are fine, ignore the hole I just shot in them.'

Imagine that: when you walk into court claiming that your law is strictly and narrowly defined to go after only very specific speech, right after making a public post exposing that it very much is not, the courts aren’t willing to just take you at your word…

Sok Puppette says:

The First Amendment also doesn’t contain exceptions for actual defamation, obscenity, commercial speech, or basically anything else. All of the exceptional categories that you seem to accept are pure judicial asspulls.

You want to distinguish this from unprotected defamation as defined by case law. Well, fine, you can do that, but why should defamation be unprotected to begin with under any definition? Why don’t similar reasons apply to new classes of exceptions?

The hard, cold fact is that, although the US generally does unusually well on free speech, none of the tricky parts of US First Amendment jurisprudence have either any textual basis or any basis in any articulable, generally applicable principles.

Sure, there are long lists of detailed rules within each of the laundry list of random asspulled exceptional categories, but that doesn’t mean there’s anything to tie them all together, let alone anything legitimate to tie them all together.

From a practical point of view, the rule in effect is that the FA is limited whenever SCOTUS of the moment thinks, or pretends to think, that “they can’t have meant that“. Which is pretty concerning given who’s on the current SCOTUS.

It does kind of have to be that way because the FA, like most of the rest of the US Constitution, is so vaguely and sloppily drafted. The natural, obvious facial reading would permit tons of speech that almost nobody is actually prepared to tolerate even if stopping it means a revolution.

But you still look silly hyperventilating about people looking for exceptions while accepting a bunch of other random exceptions that have no better justification.

Stephen T. Stone (profile) says:

Re:

The First Amendment also doesn’t contain exceptions for actual defamation, obscenity, commercial speech, or basically anything else. All of the exceptional categories that you seem to accept are pure judicial asspulls.

So what?

The First Amendment was drafted more than 200 years ago by a bunch of men who could never have conceived of all the avenues of communication we have now. It was meant to be broad so that as much speech as possible was permissible under the law. Every “exceptional category” of speech not covered by 1A is defined by one specific notion: Speech from those categories infringes upon someone else’s rights or causes harm to someone else. CSAM isn’t covered by 1A for reasons that I hope are obvious to you. Defamation isn’t covered by 1A because it causes reputational harm that might hurt the defamed person’s ability to make a living (or even live in peace). Incitement is meant to inspire imminent lawless action that often results in harm to people, property, or both.

The Founding Fathers made possible the changing of the First Amendment through the constitutional amendment process. But they also left room open for future legislation and court precedent to define the borders of 1A⁠—but only in such a way that permits more speech than it disallows. Without those borders, we would see more rulings like Schenck remain precedential, which would give the government more power to clamp down on protests and dissenting speech without risking “hey, so, this violated my civil rights” lawsuits. I would rather have a broadly permissive First Amendment with narrowly drawn exceptions to its protections than a tightly restrictive First Amendment that protects only speech favored by whoever runs the government in a given moment.

Arianity says:

Re: Re:

So what?

If they’re all asspulls, you have to actually defend why one isn’t ok, and one is, without just invoking an appeal to authority/history/the text.

I would rather have a broadly permissive First Amendment with narrowly drawn exceptions to its protections

The issue is how you (you in the general sense, not you personally) justify those exceptions, and not others. You mention harm to others, but stuff like misinformation (or Schenck, honestly. Never mind stuff like hate speech) can obviously be harmful. So that alone isn’t the bar. We allow a lot of harmful speech.

Further, SCOTUS has said new categories can’t be created. And not because no harm can come from them. “Our decisions in Ferber and other cases cannot be taken as establishing a freewheeling authority to declare new categories of speech outside the scope of the First Amendment.”

Stephen T. Stone (profile) says:

Re: Re: Re:

If they’re all asspulls, you have to actually defend why one isn’t ok, and one is, without just invoking an appeal to authority/history/the text.

And that’s generally easy to do⁠—which I did, per the examples of unprotected speech I listed in my prior comment.

You mention harm to others, but stuff like misinformation … can obviously be harmful. So that alone isn’t the bar. We allow a lot of harmful speech.

Some harm can be considered an “acceptable loss” in the protection of free speech. In re: misinformation, the potential harm of spreading a lie on social media is far less than the potential harm from the government banning misinformation on social media. Who would have the right to determine what is or isn’t a fact, and how badly would you want that power in the hands of lawmakers and politicians with whom you disagree?

I’m not saying that misinformation can’t be harmful. Of course it can. What I’m saying is that trying to ban misinformation outright is the most restrictive attempt to advance the state’s interests, which means it runs counter to the spirit (if not the letter) of the First Amendment.

SCOTUS has said new categories can’t be created.

By lower courts, maybe. SCOTUS could create a new category if it wanted because that court holds the reins in terms of interpreting the Constitution. And the federal legislature could create new categories by passing a constitutional amendment.

Anonymous Coward says:

Re: Re: Re:2

Some harm can be considered an “acceptable loss” in the protection of free speech.

I would love to know what, and who, you would count as “acceptable losses” for the sake of the protection of free speech. COVID-19 has killed far too many people because of lies spread about the vaccines. Are those people an “acceptable loss” in your eyes?

Stephen T. Stone (profile) says:

Re: Re: Re:3

COVID-19 has killed far too many people because of lies spread about the vaccines. Are those people “acceptable loss” in your eyes?

Okay, yes, I will admit that my wording was clumsy and insensitive there. My bad.

But consider the following: Donald Trump was the president when the pandemic rocked the United States. Would you want him in charge of determining whether social media companies could label certain kinds of speech as “misinformation”⁠—or, more to the point, determining the exact kind of speech that deserves to be banished from social media (at a bare minimum)? Because I could imagine him demanding that any speech saying “Donald Trump is mishandling the pandemic” be considered “misinformation” and thus barred by law from being posted on Twitter or Facebook. And that action would have saved about as many lives as did his suggestion that bleach injections could wipe out COVID-19.

My point is that the government should be looking for the least restrictive way to handle speech it doesn’t like. A blanket ban or onerous restrictions on such speech isn’t it. If I had to put my spin on things, I’d go with a regulation that would require any video or audio that is generated or manipulated with AI/LLMs be labeled as such⁠—though in a less restrictive way than California’s law. Any violation of that law would result in a hefty fine. It’s not a perfect solution, and hell, it could even be declared unconstitutional like California’s law. But it’s one of the least restrictive ways to handle misinformation through government action that I can think of. If you have any better ideas, feel free to share them.

Arianity says:

Re: Re: Re:2

And that’s generally easy to do⁠—which I did, per the examples of unprotected speech I listed in my prior comment.

Your examples don’t really distinguish why one harm is acceptable while one isn’t, though.

Who would have the right to determine what is or isn’t a fact, and how badly would you want that power in the hands of lawmakers and politicians with whom you disagree?

Sure, but you could make that argument for defamation, which literally has to make that same judgment call. The first part of the test for defamation is a false statement purporting to be fact. Worse, not only is it predicated on statements of fact, there’s a subjective standard with the actual malice/negligence portion (fault amounting to at least negligence), as well.

Some harm can be considered an “acceptable loss” in the protection of free speech.

I think what they’re trying to pin down is what is an “acceptable” loss, and what isn’t. Because it’s pretty nebulous. There’s no clear standard for one type of disinformation vs. another. Stuff like defamation hits basically every concern a new restriction would; the main difference seems to be that it already exists. Similarly for disinformation around, say, election times and locations.

By lower courts, maybe. SCOTUS could create a new category if it wanted because that court holds the reins in terms of interpreting the Constitution. And the federal legislature could create new categories by passing a constitutional amendment.

SCOTUS could, but as Mike pointed out yesterday, it made it clear it isn’t interested (right now). That could change, of course (either the Court changing its mind when faced with something, or the composition of the Court changing). But outside of that or an amendment, it isn’t happening. And it was very clear it wasn’t resting that analysis on harms or balancing tests.

Anonymous Coward says:

Re:

The example given, defamation, already existed long before the Bill of Rights was enacted. No one imagined it would change the idea that true defamation is wrong and can be redressed in civil action. What we did get there was the right to “defame” with the truth or opinion, although that was still a bit of a journey.

That some of the Constitution is not clearly written: No argument there.

There may be other items which are more asspulls, but frequently they are illegal on other grounds, and 1A simply is the weaker governing principle, largely with good, well-articulated reason. Not always, certainly.

Anonymous Coward says:

a disclaimer to air for the entire duration of a video in text that is no smaller than the largest font size used in the video

It’s immaterial now, but that sounds exploitable: just say, “My largest font doesn’t fit a full five-letter word on the image. I put the notice in a font five times larger, so it’s mostly off-screen; you can’t see more than the left edge of the first letter in the background at all times.”

So not only an unconstitutional law, but one that probably couldn’t even effect its own desired outcome.

Anonymous Coward says:

You see that AI of donald wading through some flood waters?

lol, that would never happen.
Oh, and have a look at his hand, wth happened there? Looks like he has a finger where a thumb should be.

Then there is the one where donald is up a telephone pole fixing some sort of doohickey .. for the people – Aahhhhahahaha. He has difficulty climbing stairs.

buttwipinglord (profile) says:

I fail to see how stealing someone’s likeness to spread disinformation is somehow protected speech by just labeling it as “parody”. It’s also not even remotely like a political cartoon. Not a damn single person would mistake an actual parody played out by actors or drawings as the real person saying those things. The same cannot be said about convincing deep fakes tailor made to steal someone’s exact likeness in looks and voice.

Especially when the right-wing-o-sphere is constantly claiming anything and everything to their disliking is made by AI, when they are the ones actually doing it.

And the answer is more speech? Seriously? With so many easily convinced idiots who believe anything Mush and Orangeman spew their way, while any “speech” showing that it’s false with facts and chains of evidence just gets dismissed as “fake news” or a “conspiracy”?

Does no one remember all the Elon Musk Twitter parodies and how that went when “parody is now allowed” came back? Do you think someone who made a convincing deepfake of Musk without labeling it as parody and passed it around Twitter would go without being banned or treated the same way?

People are already using AI deep fakes the world over to scam and deceive people and we are just supposed to roll over and say “nah this is ok, it is protected expression to simply steal someone’s likeness and claim it’s parody or whatever justification”

I think there is a fine line we can’t cross and allowing this is it. Stealing someone’s likeness to the point of being indistinguishable should not be protected speech. It’s not a cartoon.

pgwerner says:

Re: Clear parody

And I can’t believe you don’t understand parody or what the constitutional protections of it actually are. I suggest, first off, reading up on Hustler Magazine v. Falwell, which establishes parody as clearly protected speech. Second, the fact that this ad uses someone’s image via the novel technology of AI generation does not change anything. It would not even need to be labeled parody, since the parodic nature of the ad is obvious from the context. If someone tried to pass this off as actual Kamala Harris statements, you’d be dealing with something different, although I will note there’s no First Amendment exception for “misinformation” either, even if a fraudulent video like that would be clearly unethical. But in this case, where it’s entirely obvious from both the context of the ad and the express statements of the creator that this is parody, it is well within long-existing First Amendment protections.

Anonymous Coward says:

It’s utterly insane to me that we’re pretending that using someone else’s voice and likeness to say things they never said is not fraud. If I made a video like this to show Tim Cook saying my iPhone case is the best product ever designed and he’s giving me all his Apple stock, I’d likely be charged and convicted on stock manipulation and fraud charges. But make it for an election and it’s ok?
