California legislators have begun debating a bill (A.B. 412) that would require AI developers to track and disclose every registered copyrighted work used in AI training. At first glance, this might sound like a reasonable step toward transparency. But it’s an impossible standard that could crush small AI startups and developers while giving big tech firms even more power.
A Burden That Small Developers Can’t Bear
The AI landscape is in danger of being dominated by large companies with deep pockets. These big names are in the news almost daily. But they’re far from the only ones – there are dozens of AI companies with fewer than 10 employees trying to build something new in a particular niche.
This bill demands that creators of any AI model—even a two-person company or a hobbyist tinkering with a small software build—identify copyrighted materials used in training. That requirement will be incredibly onerous, even if limited just to works registered with the U.S. Copyright Office. The registration system is a cumbersome beast at best—neither machine-readable nor accessible, it’s more like a card catalog than a database—that doesn’t offer information sufficient to identify all authors of a work, much less help developers to reliably match works in a training set to works in the system.
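To see why, consider what a compliance tool would even have to attempt. Below is a minimal, entirely hypothetical sketch (the registry records, titles, and matching threshold are all invented, and the real registration records are not available as a queryable, machine-readable dataset, which is precisely the problem). Even in this toy version, a common title produces multiple conflicting candidate registrations, none of which identifies all authors or confirms it's the same work:

```python
from difflib import SequenceMatcher

# Hypothetical registry entries. The real Copyright Office records are not
# available as a machine-readable dataset you could query like this, which
# is exactly the problem.
REGISTRY = [
    {"reg_no": "TX0000001", "title": "The Garden", "claimant": "J. Smith"},
    {"reg_no": "TX0000002", "title": "The Garden", "claimant": "Smith, John, et al."},
    {"reg_no": "TX0000003", "title": "The Garden (rev. ed.)", "claimant": "unknown"},
]

def candidate_matches(work_title: str, threshold: float = 0.6):
    """Fuzzy-match a training-set title against registry titles.

    Returns every record above the similarity threshold. For a common
    title, that's several conflicting records, none of which identifies
    all authors or confirms it's actually the same work.
    """
    hits = []
    for record in REGISTRY:
        score = SequenceMatcher(
            None, work_title.lower(), record["title"].lower()
        ).ratio()
        if score >= threshold:
            hits.append((score, record))
    return sorted(hits, key=lambda hit: hit[0], reverse=True)

for score, record in candidate_matches("the garden"):
    print(f"{record['reg_no']} ({score:.2f}): claimant = {record['claimant']}")
# Three plausible "matches" and no way to tell which, if any, is the work
# in the training set. Now repeat for billions of scraped documents.
```

Now imagine running that guesswork over billions of scraped documents, with legal liability attached to every wrong answer.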
Even for major tech companies, meeting these new obligations would be a daunting task. For a small startup, throwing on such an impossible requirement could be a death sentence. If A.B. 412 becomes law, these smaller players will be forced to devote scarce resources to an unworkable compliance regime instead of focusing on development and innovation. The risk of lawsuits—potentially from copyright trolls—would discourage new startups from even attempting to enter the field.
AI Training Is Like Reading, and It’s Very Likely Fair Use
A.B. 412 starts from a premise that’s both untrue and harmful to the public interest: that reading, scraping or searching of open web content shouldn’t be allowed without payment. In reality, courts should, and we believe will, find that the great majority of this activity is fair use.
The U.S. copyright system is meant to balance innovation with creator rights, and courts are still working through how copyright applies to AI training. In most of the AI cases, courts have yet to consider—let alone decide—how fair use applies. A.B. 412 jumps the gun, preempting this process and imposing a vague, overly broad standard that will do more harm than good.
Importantly, those key court cases are all federal. The U.S. Constitution makes it clear that copyright is governed by federal law, and A.B. 412 improperly attempts to impose state-level copyright regulations on an issue still in flux.
A.B. 412 Is A Gift to Big Tech
The irony of A.B. 412 is that it won’t stop AI development—it will simply consolidate it in the hands of the largest corporations. Big tech firms already have the resources to navigate complex legal and regulatory environments, and they can afford to comply (or at least appear to comply) with A.B. 412’s burdensome requirements. Small developers, on the other hand, will either be forced out of the market or driven into partnerships where they lose their independence. The result will be less competition, fewer innovations, and a tech landscape even more dominated by a handful of massive companies.
If lawmakers are able to iron out some of the practical problems with A.B. 412 and pass some version of it, they may be able to force programmers to research—and, effectively, pay off—copyright owners before they even write a line of code. If that’s the outcome in California, Big Tech will not despair. They’ll celebrate. Only a few companies own large content libraries or can afford to license enough material to build a deep learning model. The possibilities for startups and small programmers will be so meager, and competition will be so limited, that profits for big incumbent companies will be locked in for a generation.
If you are a California resident and want to speak out about A.B. 412, you can find and contact your legislators through this website.
During peak COVID lockdowns, New York State passed a law requiring that ISPs (with more than 20,000 subscribers) offer low-income state residents (and low-income residents only) a 25 Mbps broadband tier for $15. Big Telecom didn’t much like that, but their multi-year effort to kill the law, first passed in 2021, recently fell apart when the Trump Supreme Court refused to hear their challenge.
Now telecom giants, long fat and comfortable thanks to regional monopolies, are worried by the fact that other states are following suit. Vermont, California and Massachusetts recently proposed their own versions of New York’s law requiring ISPs make broadband affordable for poor people.
For a generation, the U.S. government largely looked the other way as big telecom companies like AT&T and Comcast crushed all competition underfoot, lobbying (and sometimes literally bribing) lawmakers to keep it that way. The result: Americans pay significantly more for patchy, slower broadband than people in most developed nations. With terrible customer service to match.
Telecom lobbyists have long insisted that having government do anything to address this competitive logjam is radical overreach, whether it’s net neutrality, privacy oversight, or, most notably, price caps. Yet at the same time, they’re on the cusp of a generational victory, with the Trump administration effectively destroying what’s left of federal consumer protection.
So, not too surprisingly, states are rushing to fill the void federal regulators are leaving. And telecom giants are whining about a problem they created, first by dismantling competition and jacking up consumer rates, then by dismantling federal oversight:
“Any attempt by individual states to regulate prices or other parts of the broadband market will undermine all of the connectivity progress we have made, discourage investment, and end up hurting consumers.”
When New York passed its law, AT&T lobbyists put on a little performance where they pretended they were leaving New York state due to a “hostile business environment.” In reality, the company barely did business in the state in the first place; the home 5G service in question had extremely limited availability there.
AT&T engaged in the ploy in the hopes that the Supreme Court would reconsider its refusal to hear the case. But, apparently busy doing other favors for AT&T (like eviscerating the entirety of U.S. federal consumer protection oversight), the Supreme Court again this week refused to hear the case. That opens the door to other states following suit, much to the chagrin of Comcast, AT&T, Verizon, and Charter.
As with everything (net neutrality, privacy, basic transparency requirements), telecom will insist that any government action to lower broadband prices is radical overreach. But requiring they provide a cheap, slow tier to poor people isn’t a huge ask. In the gigabit era, providing a 25 Mbps tier costs big providers a tiny pittance of their fat, captured revenues.
It’s also worth noting that companies like AT&T are massively politically powerful in state legislatures, and the Vermont, California, and Massachusetts bills haven’t passed yet. And despite kicking this all off with its own law mandating affordable broadband for the poor, New York has yet to actually enforce that law, so the full scope and impact of all this will be nowhere near as dramatic as telecom lobbyists will claim.
California taxpayers are now on the hook for $345,576 in legal fees to… Elon Musk. Why? Because Governor Gavin Newsom and Attorney General Rob Bonta ignored warnings about the obvious Constitutional problems with AB 587, their social media “transparency” law. The law, which Google and Meta actually supported (knowing full well that they could comply while competitors would struggle), has now been partially struck down — exactly as we predicted back in 2022.
While positioned as a transparency bill (who could be against that?), the reality is that it would create a huge hassle for smaller companies, give instructions to malicious actors, and make it harder for content moderation to work well. And, it would effectively enable the California Governor/AG to demand certain types of content moderation.
Look, here’s the thing about content moderation: Companies make editorial decisions all the time about what content to allow, what to remove, what to promote, what to bury. (This is basically their job!) The government generally stays out of these decisions because, well, the First Amendment.
And yet California decided it would be fine to demand that social media companies explain exactly how they make these decisions. Not just in general terms, mind you, but with detailed data about how often they take down posts about “extremism” or “disinformation” or “hate speech.” And also revealing how many people saw that (very loosely defined!) content.
Think about how absurd this would be in any other context. Imagine California passing a law requiring the LA Times to file quarterly reports detailing every story they killed in editorial meetings, with specific statistics about how many articles about “misinformation” they chose not to run. Or demanding the San Francisco Chronicle explain exactly how many letters to the editor about “foreign political interference” they rejected. The First Amendment violation would be so obvious that newspapers’ lawyers would probably hurt themselves rushing to file the lawsuit.
But somehow, when it comes to social media, California convinced itself this was fine. (Narrator: It wasn’t fine.)
Now California has agreed to settle most of the case, conceding two crucial points: the core reporting requirements were unconstitutional, and California taxpayers need to cover Musk’s legal bills. The stipulated agreement makes clear just how thoroughly the state’s position collapsed:
IT IS HEREBY DECLARED that subdivisions (a)(3), (a)(4)(A), and (a)(5) of California Business and Professions Code section 22677 violate the First Amendment of the United States Constitution facially and as applied to Plaintiff.
IT IS HEREBY ORDERED that Defendant, as defined, shall be permanently enjoined from enforcing subdivisions (a)(3), (a)(4)(A), and (a)(5) of California Business and Professions Code section 22677. Defendant shall also be permanently enjoined from enforcing Section 22678 insofar as that section applies to violations of subdivisions (a)(3), (a)(4)(A), and (a)(5) of California Business and Professions Code section 22677.
[…]
It is ORDERED that Plaintiff shall recover from Defendant the amount of $345,576 in full compensation for the attorneys’ fees and costs incurred by Plaintiff in connection with this action and the related preliminary injunction appeal.
The invalidated sections of the law would have required social media companies to define nebulous terms like “hate speech,” “extremism,” and “disinformation,” then provide detailed reports about how they enforced these categories. Companies would have had to reveal not just their moderation practices, but specific data about content flagging, enforcement actions, and user exposure to this content.
Let’s be clear: this outcome was entirely predictable. California’s leadership wasted time and resources pushing through a law that was constitutionally dubious from the start. Now they’re spending taxpayer money to pay legal fees to the world’s wealthiest man — all because they wouldn’t listen to basic First Amendment concerns.
So here’s a modest proposal for Governor Newsom and AG Bonta: next time we warn you about constitutional problems with your tech regulation plans, maybe take those warnings seriously? It’ll save everyone time and money — and bonus, you won’t have to cut checks to Elon Musk.
I know that Mark Zuckerberg no longer likes fact-checking, but that’s not going to stop me from continuing to fact-check him. I’m going to rate his claimed plan to move trust & safety and content moderation teams away from California to Texas as not just an obnoxiously stupid political suck-up, but also something that increasingly appears to be a flat-out lie.
As you may recall, as part of Mark Zuckerberg’s decision to do away with fact-checking, enable more hatred, and just generally suck up to the Trump administration, there was the weird promise that because California content moderation and trust & safety teams were too “biased,” they would be moved to Texas.
Texas is, apparently, famous for its unbiased, neutral residents, as compared to California, where it is constitutionally impossible to be unbiased. Or something.
Former Facebook employees say, however, that the move-to-Texas announcement rings hollow. That’s because Meta already has major content moderation and trust and safety operations in the state. They say the move is nothing more than a blatant appeal to Donald Trump. Facebook’s former head of content standards said he helped set up those teams in Texas more than a decade ago.
“They made a lot of hay of: ‘Oh, we’re worried about bias, we’re moving all these content moderation teams to other places,’” Dave Willner said during a Lawfare panel last week. “As far as I’ve been able to figure out, that is mostly fake.”
Three other former Facebook employees who worked on the trust and safety teams in Texas told the Guardian the same. One said many people across Meta’s various divisions did trust and safety work in the company’s Austin offices. Another said that many content moderators, including those allocated to the trust and safety teams, have been in Austin for a long time.
So many of the people were already in Texas. What about the folks in California who were told they’d have to move? According to Wired, most have been told the mandate doesn’t actually apply to them.
Last Thursday during a town hall call for Meta employees working under Guy Rosen, the company’s chief information security officer, executives said that no one in Rosen’s organization would have to move to Texas, according to two people in attendance. This exempts from relocation employees who work on Meta’s safety, operations, and integrity teams, which collectively help enforce the company’s content policies.
The changes also do not affect a portion of Meta’s US-based content policy team, which works under chief global affairs officer Joel Kaplan, because many members are already located outside of California, including in Washington, DC, New York City, and Austin, Texas, the employees say. That includes key decisionmakers such as Neil Potts, vice president of public policy. Many of the company’s content moderators are contractors based out of hubs beyond California such as San Antonio, Texas.
So it sure sounds like the big announcement of how content moderation and trust & safety were moving to Texas was a load of garbage. Many of those people are already there.
The whole thing, as expected, was about making a fake public concession to Donald Trump in an attempt to curry political favor.
While Zuckerberg’s motivations here seem transparently political, the broader implications remain concerning. It’s especially worrying given how a ton of people are going around falsely claiming Zuckerberg caved to pressure from Biden, while everyone seems to be ignoring the much more blatant act of him actually caving to Trump.
Moving critical trust & safety functions to appease partisan interests sets a troubling precedent. It’s a short-sighted move that prioritizes political expediency over principled policymaking. But that’s the world Mark Zuckerberg has chosen to embrace.
This one is from a couple months ago, but I finally had a chance to catch up on some older stories. In late 2023, we wrote about one of the most egregious SLAPP suits we’d ever seen. In a case that seems to defy both law and basic human decency, King Vanga, a Stanford student, got into a car accident that resulted in the deaths of Pamela and Jose Juarez. But that was just the start of a legal saga that would leave any reasonable person scratching their head in disbelief.
You see, Vanga later sued members of the Juarez family for… speaking out angrily about the accident that left their loved ones dead.
Talk about adding insult to injury.
It’s a move so brazen, so devoid of compassion, that it almost defies belief. But believe it, because it happened, and it’s a stark reminder of the ways in which our legal system can be weaponized against the very people it’s meant to protect.
And, thankfully (if too late in the process), it’s also a stark reminder of the importance of a strong anti-SLAPP law, like California’s, that has now righted this wrong. This case is not just an affront to decency, it’s a textbook example of why we need robust anti-SLAPP protections to prevent the legal system from being abused to silence and intimidate victims.
Here’s how the local news reported on the original accident:
The California Highway Patrol says Pam, 56, and Joe, 57, were driving west on Santa Fe Avenue approaching Spaceport Entry in Atwater.
They were just minutes away from their son’s house.
Officials say that’s when 20-year-old King Vanga collided into the back of their car at a high rate of speed.
The Juarez’s spun out and their vehicle caught fire.
Vanga overturned into a fence.
The Juarez’s died at the scene.
Vanga had minor injuries [and] was booked into the Merced County Jail for driving under the influence of drugs and/or alcohol and vehicular manslaughter.
The police report on the matter suggested that Vanga was driving under the influence.
Vanga later sued the police, claiming he never drinks. And a later blood test showed no traces of alcohol in his blood. While this casts some doubt on the initial police assessment, it doesn’t change the tragic outcome of the accident.
Based on the police report and local news reporting, some members of the Juarezes’ extended family, understandably upset, sent letters to Stanford repeating some of the claims in the news and police reports, alerting the school to what one of its students was accused of doing. There is no indication that Stanford did anything at all in response.
Yet, somewhere along the way, Vanga requested his student records, found the letters, and then (shockingly) sued some of the family members, claiming that their letters to Stanford were defamatory.
Yes, let’s repeat that for emphasis: this student got into a car accident that left a husband and wife dead… and then when he found out that some of their grieving family members had sent letters with publicly reported details about the accident, he sued them for defamation. It’s hard to imagine a more callous response in the wake of such a tragedy.
That seems like a quintessential SLAPP. And yet… the California court that heard the case did not grant the anti-SLAPP motion. Fortunately, on appeal, a California state appeals court has reversed that. The court rightly found that the letter sent by Priscilla Juarez (a daughter-in-law of the deceased couple) was clearly not defamatory. The court noted that the comments were clearly her opinion based on disclosed facts from sources like the media and the police report.
This is a crucial distinction. If simply repeating already public information in an angry letter or email opened people up to defamation suits, it would have a massive chilling effect on speech, especially speech by crime victims and their families. The appeals court recognized this and rightly concluded that Vanga’s suit was a SLAPP.
Juarez’s pro bono lawyer in all this was Ken White of Popehat fame, who has written up his own thoughts on this mess of a case. Among other things, he notes that Vanga’s lawyers had effectively demanded that the Juarez family remove any public discussion of Vanga at all:
Mr. Vanga will not pursue a lawsuit against you for defamation if you agree to the following terms:
1. You agree to identify all written statements that you have made that refer to Mr. Vanga (whether you published those statements under your name or anonymously);
2. You agree to remove any online statements that you have published that refer to Mr. Vanga;
3. You agree not to make or publish any disparaging statements about Mr. Vanga in the future, subject to certain required public policy exceptions;
4. You agree not to encourage, assist, or advise others to make or publish disparaging statements about Mr. Vanga in the future, subject to certain required public policy exceptions;
5. You agree not to encourage the criminal prosecution of Mr. Vanga, including by communicating with government officers or protesting at any conference, hearing, or trial involving Mr. Vanga, except as necessary for you to provide evidence, to provide testimony, to assist with a government investigation, or subject to other required public policy exceptions.
Can you imagine? This guy gets into a car accident that kills a beloved couple in your family, and then you get threatened by the guy (and eventually sued) for… talking about what happened.
It’s nuts.
As White notes, this is why anti-SLAPP laws are so important:
On November 19th, 2024, the California Court of Appeal reversed in one of the most strongly-worded anti-SLAPP appellate rulings I’ve seen, linked above. The Court noted that Priscilla Juarez’ letter expressly based her statements on the criminal complaint, statements from law enforcement officers, and press coverage that she had seen, and that she did not suggest she had some personal knowledge or undisclosed basis for the statements. The Court examined the context, concluding that Stanford was unlikely to interpret the letter as asserting facts rather than the victims’ relative’s angry reaction to events in the news. “Accordingly, considering both the language and the context of Defendant’s email, we find the assertions that Plaintiff murdered the decedents, drove while intoxicated, and violated Stanford’s Code of Conduct to be opinions based on disclosed facts. The opinions are therefore actionable only if those facts are false.” (Attached Order at 15.) Moreover, Plaintiff’s claim that the police and witnesses were wrong is irrelevant — the key is that it’s undisputed that the police and witnesses reported those things and Ms. Juarez based her opinions on those reports. The Court found that Vanga had not offered any evidence that he suffered any pain or suffering from another statement, and therefore didn’t carry his anti-SLAPP burden of showing he could prevail.
It’s easy to see why this is important. Under King Vanga’s theory — which the lower court accepted — it would be impossibly dangerous for crime victims to speak to the press — or to anybody. If a defendant in a criminal case can sue alleged victims for making statements based explicitly on police reports and on the charges against the defendant, then criminal defendants can silence their victims by threat of defamation lawsuits. In fact defendants will be able to use the threat of lawsuits to attack witnesses and disrupt their prosecution. The danger is not abstract or a slippery slope. It was directly presented here. King Vanga’s lawyers demanded that, as a price for not being sued, Priscilla Juarez not only stop talking in public about King Vanga, but not “encourage the criminal prosecution of Mr. Vanga, including by communicating with government officers or protesting at any conference, hearing, or trial involving Mr. Vanga.” I remain shocked that an attorney would do such a grotesque thing. I submit that these facts show that the lawsuit was not motivated by any actual harm suffered by Vanga, but was a naked attempt to bully a grieving family into silence through abuse of the legal system.
Allowing lawsuits like this would have a severe chilling effect on the speech of crime victims and their families. It would enable perpetrators to bully victims into silence through legal intimidation.
This case, while egregious, is not an isolated incident. It’s part of a disturbing trend of the legal system being weaponized to silence and harass. Without robust anti-SLAPP protections, those who cause harm can exploit the courts to compound the suffering of the people they’ve already victimized. That’s the perverse outcome laws like California’s anti-SLAPP statute exist to prevent.
When the NY Times declared in September that “Mark Zuckerberg is Done With Politics,” it was obvious this framing was utter nonsense. It was quite clear that Zuckerberg was in the process of sucking up to Republicans after Republican leaders spent the past decade using him as a punching bag on which they could blame all sorts of things (mostly unfairly).
Now, with Trump heading back to the White House and Republicans controlling Congress, Zuck’s desperate attempts to appease the GOP have reached new heights of absurdity. Trump’s threat to have Zuckerberg jailed over the made-up myth that Zuckerberg helped get Biden elected only seemed to confirm that the GOP’s non-stop scapegoating of Zuck had gotten to him.
Since the election, Zuckerberg has done everything he can possibly think of to kiss the Trump ring. He even flew all the way from his compound in Hawaii to have dinner at Mar-A-Lago with Trump, before turning around and flying right back to Hawaii. In the last few days, he also had GOP-whisperer Joel Kaplan replace Nick Clegg as the company’s head of global policy. On Monday it was announced that Zuckerberg had also appointed Dana White to Meta’s board. White is the CEO of UFC, but also (perhaps more importantly) a close friend of Trump’s.
Some of the negative reactions to Zuckerberg’s announcement video are a bit crazy, as I doubt the changes are going to have that big of an impact. Some of the changes may even be sensible. But let’s break them down into three categories: the good, the bad, and the stupid.
The Good
Zuckerberg is exactly right that Meta has been really bad at content moderation, despite having the largest content moderation team out there. In just the last few months, we’ve talked about multiple stories showcasing really, really terrible content moderation systems at work on various Meta properties. There was the story of Threads banning anyone who mentioned Hitler, even to criticize him. Or banning anyone for using the word “cracker” as a potential slur.
It was all a great demonstration for me of Masnick’s Impossibility Theorem of dealing with content moderation at scale, and how mistakes are inevitable. I know that people within Meta are aware of my impossibility theorem, and have talked about it a fair bit. So, some of this appears to be them recognizing that it’s a good time to recalibrate how they handle such things:
In recent years we’ve developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content. This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable. Too much harmless content gets censored, too many people find themselves wrongly locked up in “Facebook jail,” and we are often too slow to respond when they do.
Leaving aside (for now) the use of the word “censored,” much of this isn’t wrong. For years it felt that Meta was easily pushed around on these issues and did a shit job of explaining why it did things, instead responding reactively to the controversy of the day.
And, in doing so, it’s no surprise that as the complexity of its setup got worse and worse, its systems kept banning people for very stupid reasons.
It actually is a good idea to try to fix that. And if part of the plan is to be more cautious in issuing bans, that seems somewhat reasonable. As Zuckerberg announced in the video:
We used to have filters that scanned for any policy violation. Now, we’re going to focus those filters on tackling illegal and high-severity violations, and for lower-severity violations, we’re going to rely on someone reporting an issue before we take action. The problem is that the filters make mistakes, and they take down a lot of content that they shouldn’t. So, by dialing them back, we’re going to dramatically reduce the amount of censorship on our platforms. We’re also going to tune our content filters to require much higher confidence before taking down content. The reality is that this is a trade-off. It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.
Zuckerberg’s announcement is a tacit admission that Meta’s much-hyped AI is simply not up to the task of nuanced content moderation at scale. But somehow that angle is getting lost amidst the political posturing.
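What Zuckerberg is describing is the standard threshold dial on any automated classifier, and the trade-off is easy to see in miniature. Here's a minimal, hypothetical sketch (the scores and labels are invented; this says nothing about Meta's actual systems or thresholds): raising the confidence required before an automated takedown cuts wrongful removals while letting more genuinely bad posts through.

```python
# Hypothetical (score, actually_violating) pairs from a content classifier.
# These numbers are invented for illustration; Meta's real systems and
# thresholds are not public.
posts = [
    (0.95, True), (0.90, True), (0.85, False), (0.80, True),
    (0.70, False), (0.65, True), (0.55, False), (0.40, False),
]

def takedown_outcomes(threshold: float):
    """Count wrongful takedowns and missed violations at a given threshold."""
    wrongful = sum(1 for score, bad in posts if score >= threshold and not bad)
    missed = sum(1 for score, bad in posts if score < threshold and bad)
    return wrongful, missed

for threshold in (0.5, 0.75, 0.9):
    wrongful, missed = takedown_outcomes(threshold)
    print(f"threshold={threshold:.2f}: {wrongful} wrongful takedowns, "
          f"{missed} missed violations")
# Raising the threshold shrinks the first number and grows the second:
# the exact trade-off Zuckerberg describes.
```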
Some of the other policy changes also don’t seem all that bad. We’ve been mocking Meta for its “we’re downplaying political content” stance from the last few years as being just inherently stupid, so it’s nice in some ways to see them backing off of that (though we’ll discuss the timing and framing of this decision later in this post):
We’re continually testing how we deliver personalized experiences and have recently conducted testing around civic content. As a result, we’re going to start treating civic content from people and Pages you follow on Facebook more like any other content in your feed, and we will start ranking and showing you that content based on explicit signals (for example, liking a piece of content) and implicit signals (like viewing posts) that help us predict what’s meaningful to people. We are also going to recommend more political content based on these personalized signals and are expanding the options people have to control how much of this content they see.
Finally, most of the attention people have given to the announcement has focused on the plan to end the fact-checking program, with a lot of people freaking out about it. I even had someone tell me on Bluesky that Meta ending its fact-checking program was an “existential threat” to truth. And that’s nonsense. The reality is that fact-checking has always been a weak and ineffective band-aid to larger issues. We called this out in the wake of the 2016 election.
This isn’t to say that fact-checking is useless. It’s helpful in a limited set of circumstances, but too many people (often in the media) put way too much weight on it. Reality is often messy, and the very setup of “fact checking” seems to presume there are “yes/no” answers to questions that require a lot more nuance and detail. Just as an example of this, during the run-up to the election, multiple fact checkers dinged Democrats for calling Project 2025 “Trump’s plan”, because Trump denied it and said he had nothing to do with it.
But, of course, since the election, Trump has hired on a bunch of the Project 2025 team, and they seem poised to enact much of the plan. Many things are complex. Many misleading statements start with a grain of truth and then build a tower of bullshit around it. Reality is rarely a matter of “this is true” or “this is false,” but of understanding the degree to which “this is accurate, but doesn’t cover all of the issues.”
So, Zuck’s plan to kill the fact-checking effort isn’t really all that bad. I think too many people were too focused on it in the first place, despite how little impact it seemed to actually have. The people who wanted to believe false things weren’t being convinced by a fact check (and, indeed, started to falsely claim that fact checkers themselves were “biased.”)
Indeed, I’ve heard from folks at Meta that Zuck has wanted to kill the fact-checking program for a while. This just seemed like the opportune time to rip off the band-aid such that it also gains a little political capital with the incoming GOP team.
On top of that, adding in a feature like Community Notes (née Birdwatch from Twitter) is also not a bad idea. It’s a useful feature for what it does, but it’s never meant to be (nor could it ever be) a full replacement for other kinds of trust & safety efforts.
The Bad
So, if a lot of the functional policy changes here are actually more reasonable, what’s so bad about this? Well, first off, the framing of it all. Zuckerberg is trying to get away with the Elon Musk playbook of pretending this is all about free speech. Contrary to Zuckerberg’s claims, Facebook has never really been about free speech, and nothing announced on Tuesday really does much towards aiding in free speech.
I guess some people forget this, but in the earlier days, Facebook was way more aggressive than sites like Twitter in terms of what it would not allow. It very famously had a no nudity policy, which created a huge protest when breastfeeding images were removed. The idea that Facebook was ever designed to be a “free speech” platform is nonsense.
Indeed, if anything, it’s an admission of Meta’s own self-censorship. After all, the entire fact-checking program was an expression of Meta’s own position on things. It was “more speech.” Literally all fact-checking is doing is adding context and additional information, not removing content. By no stretch of the imagination is fact-checking “censorship.”
Of course, bad faith actors, particularly on the right, have long tried to paint fact-checking as “censorship.” But this talking point, which we’ve debunked before, is utter nonsense. Fact-checking is the epitome of “more speech”— exactly what the marketplace of ideas demands. By caving to those who want to silence fact-checkers, Meta is revealing how hollow its free speech rhetoric really is.
Also bad is Zuckerberg’s misleading use of the word “censorship” to describe content moderation policies. We’ve gone over this many, many times, but using censorship as a description for private property owners enforcing their own rules completely devalues the actual issue with censorship, in which it is the government suppressing speech. Every private property owner has rules for how you can and cannot interact in their space. We don’t call it “censorship” when you get tossed out of a bar for breaking their rules, nor should it be called censorship when a private company chooses to block or ban your content for violating its rules (even if you argue the rules are bad or were improperly enforced.)
The Stupid
The timing of all of this is obviously political. It is very clearly Zuckerberg caving to more threats from Republicans, something he’s been doing a lot of in the last few months, while insisting he was done caving to political pressure.
I mean, even Donald Trump is saying that Zuckerberg is doing this because of the threats that Trump and friends have leveled in his direction:
Q: Do you think Zuckerberg is responding to the threats you’ve made to him in the past?
TRUMP: Probably. Yeah. Probably.
I raise this mainly to point out the ongoing hypocrisy of all of this. For years we’ve been told that the Biden campaign (pre-inauguration in 2020 and 2021) engaged in unconstitutional coercion to force social media platforms to remove content. And here we have the exact same thing, except that it’s much more egregious and Trump is even taking credit for it… and you won’t hear a damn peep from anyone who has spent the last four years screaming about the “censorship industrial complex” pushing social media to make changes to moderation practices in their favor.
Turns out none of those people really meant it. I know, not a surprise to regular readers here, but it should be called out.
Also incredibly stupid is this, which we’ll quote straight from Zuck’s Threads thread about all this:
Move our trust and safety and content moderation teams out of California, and our US content review to Texas. This will help remove the concern that biased employees are overly censoring content.
There’s a pretty big assumption in there which is both false and stupid: that people who live in California are inherently biased, while people who live in Texas are not. People who live in both places may, in fact, be biased, though often not in the ways people believe. As a few people have pointed out, more people in Texas voted for Kamala Harris (4.84 million) than did so in New York (4.62 million). Similarly, almost as many people voted for Donald Trump in California (6.08 million) as did so in Texas (6.39 million).
There are people with all different political views all over the country. The idea that everyone in one area believes one thing politically, or that you’ll get “less bias” in Texas than in California, is beyond stupid. All it really does is reinforce misguided stereotypes.
The whole statement is clearly for political show.
It also sucks for Meta employees who work in trust & safety and who may want access to certain forms of healthcare, or net neutrality, or other policies that are super popular among voters across the political spectrum, but which Texas has decided are inherently not allowed.
Finally, there’s this stupid line in the announcement from Joel Kaplan:
We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms.
I’m sure that sounded good to whoever wrote it, but it makes no sense at all. First off, thanks to the Speech and Debate Clause, literally anything is legal to say on the floor of Congress. It’s like the one spot in the world where there are no rules at all over what can be said. Why include that? Things could literally be said on the floor of Congress that would violate the law on Meta platforms.
Also, TV stations literally have restrictions known as “standards and practices” that are way, way, way more restrictive than any set of social media content moderation rules. Neither of these are relevant metrics to compare to social media. What jackass thought that using examples of (1) the least restricted place for speech and (2) a way more restrictive place for speech made this a reasonable argument to make here?
In the end, the reality here is that nothing announced this week will really change all that much for most users. Most users don’t run into content moderation all that often. Fact-checking happens but isn’t all that prominent. But all of this is a big signal that Zuckerberg, for all his talk of being “done with politics” and no longer giving in to political pressure on moderation, is very engaged in politics and a complete spineless pushover for modern Trumpist politicians.
DOJ enters agreement with California police department whose officers allegedly exchanged racist messages
… I was sure I knew which California police department this referred to. After all, I’d written about it back in 2021. In that post, dozens of felony cases were being swiftly unraveled because the testifying officers had spent their time texting each other racist memes and other derogatory material. Here’s some of it:
The caption read “hanging with the homies.”
The picture above it showed several Black men who had been lynched.
Another photo asked what someone should do if their girlfriend was having an affair with a Black man. The answer, according to the caption, was to break “a tail light on his car so the police will stop him and shoot him.”
Someone else sent a picture of a candy cane, a Christmas tree ornament, a star for the top of the tree and an “enslaved person.”
“Which one doesn’t belong?” the caption asked.
“You don’t hang the star,” someone wrote back.
That was just the tip of the textberg. Records showed texts comparing President Obama to a monkey and references to African American infants as “pet niguanas.”
Aha! A follow up, I thought! I’m familiar with the subject matter! Everything’s coming up Cushing!
The US Justice Department has reached an agreement with the Antioch, California, police department, resolving an investigation into racist text messages allegedly sent and received by some of its officers.
Under the agreement, the department will hire a consultant to review and update its policies, procedures and training on non-discriminatory policing and use of force, among other topics, the Justice Department said in a news release Friday. It also outlines commitments by the police department to ensure fair, non-discriminatory policing, as well as systems to report and investigate misconduct.
The announcement follows a 2023 report by the Contra Costa District Attorney’s Office, which named a raft of Antioch officers who allegedly sent or received racist text messages, including the use of the n-word and sharing pictures of gorillas in reference to Black people.
That’s not all, though. The officers — some of whom faced criminal charges — also bragged about engaging in violence against people. Texts referred to sending police dogs after people as its own form of justice, since the bitten people were still “punished” even if the “soft DA” was just going to cut them loose. They also celebrated excessive force deployment, in a series of texts that almost reads like a confession:
The indictment says Rombough had a history of using a 40mm less lethal launcher, to the point Amiri said in a text to an unidentified officer that “rombough be doing some unnecessary a** 40s,” referencing the launcher. When an unnamed officer texted Rombough in November 2020 asking what the officers were up to, he responded “Violating civil rights,” the indictment says.
Different cops, same racism. And the same disregard for rights that cost Torrance cops lots of felony busts. It doesn’t seem like any criminal cases are on the line here, though. And the agreement, which can be read here, is better than nothing, though it really doesn’t mandate the Antioch PD do anything more than some periodic and perfunctory “training” — the sort of thing its worst cops will sleep through and mock later.
The better part of the agreement raises the standards for hiring new cops, introducing stuff that always should have been there, like comprehensive background checks that include securing complaint/use-of-force history from any law enforcement agencies new hires might have worked for previously.
A better approach would simply be to fire cops who further undermine the extremely tenuous relationships most agencies have with the communities they serve. Until PDs are willing to rid themselves of their worst, they’ll never be able to rise above being just another employer of bastards.
Now, it’s time to ask: What happened to the AI panic in 2024?
TL;DR – It was a rollercoaster ride: AI panic climbed to a peak and then came crashing down.
Two cautionary tales: The EU AI Act and California’s SB-1047.
Please note:
1. The focus here is on the AI panic angle of the news, not other events such as product launches. The aim is to shed light on the effects of this extreme AI discourse.
2. The 2023 recap provides context for what happened a year later. Seeing how AI doomers took it too far in 2023 gives a better understanding of why it backfired in 2024.
2023’s AI panic
At the end of 2022, ChatGPT took the world by storm. It sparked the “Generative AI” arms race. Shortly thereafter, we were bombarded with doomsday scenarios of an AI takeover, an AI apocalypse, and “The END of Humanity.” The “AI Existential Risk” (x-risk) movement has gradually, then suddenly, moved from the fringe to the mainstream. Apart from becoming media stars, its members also influenced Congress and the EU. They didn’t shift the Overton window; they shattered it.
“2023: The Year of AI Panic” summarized the key moments: The two “Existential Risk” open letters (first by the Future of Life Institute and second by the Center for AI Safety), the AI Dilemma and Tristan Harris’ x-risk advocacy (now known to be funded, in part, by the Future of Life Institute), the flood of doomsaying in traditional media, followed by numerous AI policy proposals that focused on existential threats and sought to surveil and criminalize AI development. Oh, and TIME magazine had a full-blown love affair with AI doomers (it still does).
– AI Panic Agents
Throughout the years, Eliezer Yudkowsky from Berkeley’s MIRI (Machine Intelligence Research Institute) and his “End of the World” beliefs heavily influenced a sub-culture of “rationalists” and AI doomers. In 2023, they embarked on a policy and media tour.
In a TED talk, “Will Superintelligent AI End the World?” Eliezer Yudkowsky said, “I expect an actually smarter and uncaring entity will figure out strategies and technologies that can kill us quickly and reliably and then kill us […] It could kill us because it doesn’t want us making other superintelligences to compete with it. It could kill us because it’s using up all the chemical energy on earth, and we contain some chemical potential energy.” In TIME magazine, he advocated to “Shut it All Down”: “Shut down all the large GPU clusters. Shut down all the large training runs. Be willing to destroy a rogue datacenter by airstrike.”
Max Tegmark from the Future of Life Institute said: “There won’t be any humans on the planet in the not-too-distant future. This is the kind of cancer that kills all of humanity.”
Next thing you know, he was addressing the U.S. Congress at the “AI Insight Forum.”
And successfully pushing the EU to include “General-Purpose AI systems” in the “AI Act” (discussed further in the 2024 recap).
Connor Leahy from Conjecture said: “I do not expect us to make it out of this century alive. I’m not even sure we’ll get out of this decade!”
Next thing you know, he appeared on CNN and later tweeted: “I had a great time addressing the House of Lords about extinction risk from AGI.” He suggested “a cap on computing power” at 10^24 FLOPs (Floating Point Operations) and a global AI “kill switch.”
Dan Hendrycks from the Center for AI Safety expressed an 80% probability of doom and claimed, “Evolutionary pressure will likely ingrain AIs with behaviors that promote self-preservation.”[1] He warned that we are on “a pathway toward being supplanted as the Earth’s dominant species.” Hendrycks also suggested “CERN for AI,” imagining “a big multinational lab that would soak up the bulk of the world’s graphics processing units [GPUs]. That would sideline the big for-profit labs by making it difficult for them to hoard computing resources.” He later speculated that AI regulation in the U.S. “might pave the way for some shared international standards that might make China willing to also abide by some of these standards” (because, of course, China will slow down as well… That’s how geopolitics work!).
Next thing you know, he collaborated with Senator Scott Wiener of California to pass an AI Safety bill, SB-1047 (more on this bill in the 2024 recap).
The 2023 recap ended with this paragraph: “In 2023, EA-backed ‘AI x-risk’ took over the AI industry, AI media coverage, and AI regulation. Nowadays, more and more information is coming out about the ‘influence operation’ and its impact on AI policy. See, for example, the reporting on Rishi Sunak’s AI agenda and Joe Biden’s AI order. In 2024, this tech billionaires-backed influence campaign may backfire. Hopefully, a more significant reckoning will follow.”
2024: Act 1. The AI panic further flooded the zone
The Center for AI Policy (CAIP) outlined the goal: to “establish a strict licensing regime, clamp down on open-source models, and impose civil and criminal liability on developers.”
The “Narrow Path” proposal started with “AI poses extinction risks to human existence” (according to an accompanying report, The Compendium, “By default, God-like AI leads to extinction”). Instead of asking for a six-month AI pause, this proposal asked for a 20-year pause. Why? Because “two decades provide the minimum time frame to construct our defenses.”
Note that these “AI x-risk” groups sought to ban currently existing AI models.
The Future of Life Institute proposed stringent regulation on models with a compute threshold of 10^25 FLOPs, explaining it “would apply to fewer than 10 current systems.”
The International Center for Future Generations (ICFG) proposed that “open-sourcing of advanced AI models trained on 10^25 FLOP or more should be prohibited.”
Gladstone AI‘s “Action Plan”[4] claimed that these models “are considered dangerous until proven safe” and that releasing them “could be grounds for criminal sanctions including jail time for the individuals responsible.”
Beforehand, the Center for AI Safety (CAIS) proposed to ban open-source models trained beyond 10^23 FLOPs.
Llama 2 was trained with > 10^23 FLOPs and thus would have been banned (a back-of-the-envelope estimate below shows how the math works out).
The AI Safety Treaty and the Campaign for AI Safety wrote similar proposals, the latter spelling it out as “Prohibiting the development of models above the level of OpenAI GPT-3.”
Jeffrey Ladish from Palisade Research (also from the Center for Humane Technology and CAIP) said, “We can prevent the release of a LLaMA 2! We need government action on this asap.” Siméon Campos from SaferAI set the threshold on Llama-1.
It was ridiculous back then; it looks more ridiculous now.
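For a sense of scale on those thresholds, the standard back-of-the-envelope estimate of training compute is roughly 6 × parameters × training tokens. Here's a quick sketch applying that approximation (a rough heuristic, not an exact accounting) to Llama 2, using the roughly 2 trillion training tokens Meta reported:

```python
import math

# Rough training-compute estimate: FLOPs ~= 6 * N (parameters) * D (tokens).
# Llama 2 models were trained on roughly 2 trillion tokens, per Meta's paper.
TOKENS = 2.0e12
# Thresholds floated in the proposals above: CAIS (10^23), Leahy's cap
# (10^24), FLI/ICFG (10^25).
THRESHOLDS = [1e23, 1e24, 1e25]

for name, params in [("Llama 2 7B", 7e9), ("Llama 2 70B", 70e9)]:
    flops = 6 * params * TOKENS
    exceeded = [f"10^{round(math.log10(t))}" for t in THRESHOLDS if flops > t]
    print(f"{name}: ~{flops:.1e} training FLOPs; "
          f"over: {', '.join(exceeded) if exceeded else 'none of them'}")

# Output:
# Llama 2 7B: ~8.4e+22 training FLOPs; over: none of them
# Llama 2 70B: ~8.4e+23 training FLOPs; over: 10^23
```

By this estimate, Llama 2 70B cleared the proposed 10^23 open-source ban while sitting just under 10^24: in other words, the lines were drawn right around models that were already shipping.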
“It’s always just a bit higher than where we are today,” venture capitalist Rohit Krishnan commented. “Imagine if we had done this!!”
In a report entitled “What mistakes has the AI safety movement made?”, it was argued that “AI safety is too structurally power-seeking: trying to raise lots of money, trying to gain influence in corporations and governments, trying to control the way AI values are shaped, favoring people who are concerned about AI risk for jobs and grants, maintaining the secrecy of information, and recruiting high school students to the cause.”
YouTube is flooded with prophecies of AI doom, some of which target children. Among the channels tailored for kids are Kurzgesagt and Rational Animations, both funded by Open Philanthropy.[5] These videos serve a specific purpose, Rational Animations admitted: “In my most recent communications with Open Phil, we discussed the fact that a YouTube video aimed at educating on a particular topic would be more effective if viewers had an easy way to fall into an ‘intellectual rabbit hole’ to learn more.”
“AI Doomerism is becoming a big problem, and it’s well funded,” observed Tobi Lutke, Shopify CEO. “Like all cults, it’s recruiting.”
Also, like in other doomsday cults, the stress of believing an apocalypse is imminent wears down the ability to cope with anything else. Some are getting radicalized to a dangerous level, playing with the idea of killing AI developers (if that’s what it takes to “save humanity” from extinction).
Both PauseAI and StopAI stated that they are non-violent movements that do not permit “even joking about violence.” That’s a necessary clarification for their various followers. There is, however, a need for stronger condemnation. The murder of the UHC CEO showed us that it only takes one brainwashed individual to cross the line.
2024: Act 2. The AI panic started to backfire
In 2024, AI panic reached the point of practicality and began to backfire.
– The EU AI Act as a cautionary tale
In December 2023, European Union (EU) negotiators struck a deal on the most comprehensive AI rules, the “AI Act.” “Deal!” tweeted European Commissioner Thierry Breton, celebrating how “The EU becomes the very first continent to set clear rules for the use of AI.”
Eight months later, a Bloomberg article discussed how the new AI rules “risk entrenching the transatlantic tech divide rather than narrowing it.”
Gabriele Mazzini, the EU AI Act’s architect and lead author, expressed regret and admitted that its reach ended up being too broad: “The regulatory bar maybe has been set too high. There may be companies in Europe that could just say there isn’t enough legal certainty in the AI Act to proceed.”
How it started – How it’s going
In September, the EU released “The Future of European Competitiveness” report. In it, Mario Draghi, former President of the European Central Bank and former Prime Minister of Italy, expressed a similar observation: “Regulatory barriers to scaling up are particularly onerous in the tech sector, especially for young companies.”
In December, there were additional indications of a growing problem.
1. When OpenAI released Sora, its video generator, Sam Altman pointed to regulation to explain why it wasn’t available in Europe: “We want to offer our products in Europe … We also have to comply with regulation.”[6]
2. “A Visualization of Europe’s Non-Bubbly Economy” by Andrew McAfee from MIT Sloan School of Management exploded online as hammering the EU became a daily habit.
These examples are relevant to the U.S., as California introduced its own attempt to mimic the EU when Sacramento emerged as America’s Brussels.
– California’s bill SB-1047 as another cautionary tale
Senator Scott Wiener’s SB-1047 was supported by EA-backed AI safety groups. The bill included strict developer liability provisions, and AI experts from academia and entrepreneurs from startups (“little tech”) were caught off guard. They built a coalition against the bill, and a wave of critical headlines argued it would strangle innovation, AI R&D (research and development), and the open-source community in California and around the world.
The bill was eventually killed by Gavin Newsom’s veto. The governor explained that what’s needed is evidence-based, workable regulation.
You’ve probably spotted the pattern by now:
1. Doomers scare the hell out of people.
2. The fear supports their call for a strict regulatory regime.
3. Those who listen to their fearmongering regret it.
Why? Because:
1. Doomsday ideology is extreme.
2. The bills are vaguely written.
3. They don’t consider tradeoffs.
2025
– The vibe shift in Washington
The new administration seems less inclined to listen to AI doomsaying.
Donald Trump’s top picks for relevant positions prioritize American dynamism.
The Bipartisan House Task Force on Artificial Intelligence has just released an AI policy report stating, “Small businesses face excessive challenges in meeting AI regulatory compliance,” “There is currently limited evidence that open models should be restricted,” and “Congress should not seek to impose undue burdens on developers in the absence of clear, demonstrable risk.”
There will probably be a fight at the state level, and if SB-1047 is any indication, it will be intense.
– Will the AI panic face further backlash?
This panic cycle is not yet at the point of reckoning. But eventually, society will need to confront how the extreme ideology of “AI will kill us all” became so influential in the first place.
——————————-
Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and author of the book “The TECHLASH and Tech Crisis Communication” and the “AI Panic” newsletter.
——————————-
Endnotes
Dan Hendrycks’ tweet and Arvind Narayanan and Sayash Kapoor’s article in “AI Snake Oil”: “AI existential risk probabilities are too unreliable to inform policy.” The similarities = a coincidence 🙂
This estimation includes the revelation that Tegmark’s Future of Life Institute was no longer a $2.4-million organization but a $674-million organization. It managed to convert a cryptocurrency donation (Shiba Inu tokens) to $665 million (using FTX/Alameda Research). Through its new initiative, the Future of Life Foundation (FLF), FLI aims “to help start 3 to 5 new organizations per year.” This new visualization of Open Philanthropy’s funding shows that the existential risk ecosystem (“Potential Risks from Advanced AI” + “Global Catastrophic Risks” + “Global Catastrophic Risks Capacity Building,” different names for funding Effective Altruism AI Safety organizations/groups) has received ~$780 million (instead of $735 million in the previous calculation).
Altman was probably referring to a mixed salad of the new AI Act with previous regulations like GDPR (General Data Protection Regulation) and DMA (Digital Markets Act).
Three years ago, the Fifth Circuit Appeals Court somehow arrived at the conclusion that tasing someone soaked in gasoline — an act of escalation that not only killed the suicidal person officers were supposed to be rescuing but also burned the entire residence to the ground — was not excessive force. It was supposedly justified by the gasoline-soaked man’s threats that he would burn himself and the house down if officers kept advancing on him.
Robbing him of his life and his remaining autonomy, Arlington, Texas police officer Guadrama discharged his Taser and made the man’s threats a reality. And the Fifth Circuit still considered that to be just the sort of thing cops should be doing.
It went the other way here. In California, a federal judge has arrived at the opposite conclusion in a nearly identical incident. (via Courthouse News Service)
In this case, Paul Hall was despondent because his family refused to interact with him, apparently “fed up with him” for reasons that go unexplained. Feeling abandoned, Hall soaked himself in gasoline, sat on the floor in the middle of the house, and threatened to light himself on fire.
Officer John Gale of the Weed, California police department responded to the call. His actions, as well as those of Paul Hall, were captured by the officer’s body camera. To his credit, Officer Gale at least made some effort to defuse the situation by talking to Hall, who repeatedly reminded him he was covered in gasoline and ready to take his own life by igniting the lighter he held in one of his hands.
When that didn’t work, Gale tried to wrestle the lighter out of Hall’s hands. When that didn’t work either, Gale went back to his first tactic: yelling repeatedly for Hall to drop the lighter. This tactic didn’t work the first few dozen times, but according to the footage, Gale did the same thing more than 50 times, perhaps expecting he was due for a win.
Right before he set Hall on fire with his Taser, Officer Gale again ordered Hall to “drop the lighter” and to “put it down.” And right before he fired at Hall, Hall dropped his hands to his sides, possibly on his way to complying. But he never got the chance. That’s when Gale fired, and that’s when Hall caught on fire.
Gale first insisted this wasn’t excessive force. The court says that in some cases, these actions might not have been. But in this case, at best, that’s still an open question. And the reason it remains a set of disputed facts is that the officer’s own body cam footage (arguably) contradicts his assertions. From the decision [PDF]:
Defendant Gale’s repeated assertion that Plaintiff Hall “appeared to be flicking the lighter to start” at the time Defendant Gale shot his taser is disputed by Plaintiff and arguably contradicted by the body camera footage […] Upon review of the body camera footage, it is not undisputedly apparent to the Court that Plaintiff Hall appeared to be flicking the lighter to start. Thus, a reasonable jury could conclude, during his interactions with Defendant Gale, Plaintiff Hall did not attempt to ignite the lighter such that he posed an immediate threat that warranted intermediate force.
Then there’s the fact it appears Hall was finally attempting to comply with Gale’s demands moments before Gale decided to deploy his Taser.
Second, Plaintiff Hall alleges he complied with Defendant Gale’s commands to put down the lighter by moving his hands down by his side, including the one holding the lighter. The body camera footage confirms, shortly before Defendant Gale tased Plaintiff Hall, Plaintiff Hall had dropped both hands, including the one holding the lighter. The body camera footage also shows Defendant Gale shot Plaintiff Hall with the taser after Plaintiff Hall had dropped both of his hands. A reasonable jury could conclude any threat related to the lighter dissipated the moment Plaintiff Hall put his hands down.
That’s strike two. Strike three is the undeniable fact that Hall wasn’t threatening anyone other than himself. And there’s plenty of evidence on the record that Officer Gale couldn’t have reasonably believed Hall was a threat to others: the officer made no attempt to remove other people from the house, didn’t even bother to bring in the fire extinguisher he had in his squad car, and didn’t hold off on taking any action until the fire department arrived. If he really thought he needed to save others from the immediate threat of a fire, he would have taken those steps. In the end, he was the one to ignite the fire that threatened others, all while claiming this was the only way to prevent the man he set on fire from harming other people.
And here’s where the decision referenced in the opening of this post comes into play. Completely ridiculously, Officer Gale cited that decision in support of his qualified immunity request despite the facts that (1) the case was handled by a different circuit, (2) the Fifth’s decision was non-precedential, and (most importantly) (3) it had been issued two years after he set Paul Hall on fire. As any plaintiff knows and every cop defendant should know, you can’t cite something as precedent when it happens after the incidents in dispute. The clue is in the goddamn word, which requires something to precede something else to be relevant, not arrive after the fact.
Immunity is denied because, even if the court were inclined to credit a non-binding decision issued two years after Officer Gale set Paul Hall on fire with his Taser, the facts of the cases are different enough that Officer Gale couldn’t reasonably believe non-binding non-precedent put him in the clear for deciding that setting someone on fire for the crime of threatening to set themselves on fire was justified.
It’s bad enough the body cam footage contradicted the officer’s claims. It’s even worse that his lawyer thought he could get some QI for his client by time-traveling to the future (so to speak) to find cases supporting his client’s actions.
Elon Musk’s ExTwitter has filed an important First Amendment lawsuit against California over its unconstitutional law regulating deepfakes. This follows Musk’s earlier successful challenge to the state’s social media “transparency” law. Yes, sometimes Elon Musk actually does file good First Amendment cases that help protect free speech. I’m just as amazed as anyone, but it’s worth calling it out when he does the right thing.
We similarly cheered on Elon Musk’s earlier lawsuit against California over its unconstitutional social media transparency law and were vindicated when the Ninth Circuit said the law violated the First Amendment.
Like in that first lawsuit, ExTwitter has hired Floyd Abrams, one of the most well-known First Amendment lawyers out there, to challenge one of California’s new anti-deepfake laws. The complaint filed by ExTwitter makes a compelling case that AB 2655 is unconstitutional on multiple fronts:
AB 2655 requires large online platforms like X, the platform owned by X Corp. (collectively, the “covered platforms”), to remove and alter (with a label) — and to create a reporting mechanism to facilitate the removal and alteration of — certain content about candidates for elective office, elections officials, and elected officials, of which the State of California disapproves and deems to be “materially deceptive.” It has the effect of impermissibly replacing the judgments of covered platforms about what content belongs on their platforms with the judgments of the State. And it imposes liability on the covered platforms to the extent that their judgments about content moderation are inconsistent with those imposed by the State. AB 2655 thus violates the First and Fourteenth Amendments of the United States Constitution; the free speech protections of Article I, Section 2, of the California Constitution; and the immunity provided to “interactive computer services” under Section 230 of the Communications Decency Act, 47 U.S.C. § 230(c).
Worse yet, AB 2655 creates an enforcement system that incentivizes covered platforms to err on the side of removing and/or labeling any content that presents even a close call as to whether it is “materially deceptive” and otherwise meets the statute’s requirements. This system will inevitably result in the censorship of wide swaths of valuable political speech and commentary and will limit the type of “uninhibited, robust, and wide-open” “debate on public issues” that core First Amendment protections are designed to ensure. New York Times v. Sullivan, 376 U.S. 254, 270 (1964). As the United States Supreme Court has recognized, our strong First Amendment protections for such speech are based on our nation’s “profound national commitment” to protecting such debate, even if it often “include[s] vehement, caustic, and sometimes unpleasantly sharp attacks on government and public officials.”
The complaint is strong and presents a clear explanation of the myriad problems with this law.
AB 2655 suffers from a compendium of serious First Amendment infirmities. Primary among them is that AB 2655 imposes a system of prior restraint on speech, which is the “most serious and the least tolerable infringement on First Amendment rights.” Nebraska Press Ass’n v. Stuart, 427 U.S. 539, 559 (1976). The statute mandates the creation of a system designed to allow for expedited “take downs” of speech that the State has targeted for removal from covered platforms in advance of publication. The government is involved in every step of that system: it dictates the rules for reporting, defining, and identifying the speech targeted for removal; it authorizes state officials (including Defendants here) to bring actions seeking removal; and, through the courts, it makes the ultimate determination of what speech is permissible. Rather than allow covered platforms to make their own decisions about moderation of the content at issue here, it authorizes the government to substitute its judgment for those of the platforms.
It is difficult to imagine a statute more in conflict with core First Amendment principles. As the United States Supreme Court has held, “it is a central tenet of the First Amendment that the government must remain neutral in the marketplace of ideas.” Hustler Magazine, Inc. v. Falwell, 485 U.S. 46, 56 (1988). Even worse, AB 2655’s system of prior restraint censors speech about “public issues and debate on the qualifications of candidates,” to which the “First Amendment affords the broadest protection” to ensure the “unfettered interchange of ideas for the bringing about of political and social changes desired by the people.” McIntyre v. Ohio Elections Comm’n, 514 U.S. 334, 346 (1995).
If challenging these deepfake laws sounds familiar, that’s because there was already one challenge to AB 2655, from a user whom California Governor Gavin Newsom directly called out as someone the law was designed to silence. In that case, two of the laws were challenged, and the court (very, very quickly) issued an injunction against the other one, AB 2839, which was set to go into effect immediately. The challenge to AB 2655 was put on the back burner, since that law wasn’t set to go into effect until January 1st of next year.
Now ExTwitter is jumping in to challenge it as well, and hopefully it succeeds. The complaint is well done and makes good points, and I’m happy that Elon is challenging the law in this way. One hopes that perhaps the legal team representing him could do more to explain to him how the First Amendment actually works so he stops misrepresenting it in other contexts.
It’s also good to see that the complaint makes a big deal of how Section 230 protects ExTwitter from such laws, especially given how Elon’s best buddy, Donald Trump, has made noises about stripping Section 230 protections from websites.
AB 2655 directly contravenes the immunity provided to the covered platforms by 47 U.S.C. § 230(c)(1), which prohibits treating interactive computer service providers as the “publisher or speaker of any information provided by another information content provider.”
AB 2655’s Enforcement Provisions violate Section 230(c)(1) because they provide causes of action for “injunctive or other equitable relief against” the covered platform to remove or (by adding a disclaimer) alter certain content posted on the platform by its users. See §§ 20515(b), 20516. AB 2655 thus treats covered platforms “as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230(c)(1).
Section 230(c)(1) bars such liability where the alleged duty violated derives from an entity’s conduct as a “publisher,” including “reviewing, editing, and deciding whether to publish or withdraw from publication third-party content.” See, e.g., Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1102 (9th Cir. 2009) (finding that Yahoo! was entitled to immunity under Section 230(c)(1) from claims concerning failure to remove offending profile), as amended (Sept. 28, 2009); Calise v. Meta Platforms, Inc., 103 F.4th 732, 744 (9th Cir. 2024) (finding that Meta was immune under Section 230(c)(1) from claims that would require Meta to “actively vet and evaluate third-party ads” in order to remove them).
The complaint also praises the Supreme Court’s good ruling in the Moody case about how social media sites have a First Amendment right to present content how they want:
Even if AB 2655 were not a prior restraint, it still violates the First Amendment because it runs counter to the United States Supreme Court’s recent decision in Moody v. NetChoice, LLC, in which the Court held, in no uncertain terms, that when a social media platform “present[s] a curated and ‘edited compilation of [third party] speech,’” that presentation “is itself protected speech.” 144 S. Ct. 2383, 2409 (2024) (quoting Hurley v. Irish-Am. Gay, Lesbian & Bisexual Grp. of Boston, 515 U.S. 557, 570 (1995)); see also id. at 2401 (“A private party’s collection of third-party content into a single speech product (the operators’ ‘repertoire’ of programming) is itself expressive, and intrusion into that activity must be specially justified under the First Amendment.”); id. at 2405 (quoting Miami Herald Pub. Co. v. Tornillo, 418 U.S. 241, 258 (1974)) (“‘The choice of material,’ the ‘decisions made [as to] content,’ the ‘treatment of public issues’ — ‘whether fair or unfair’ — all these ‘constitute the exercise of editorial control and judgment.’ . . . For a paper, and for a platform too.”). Because AB 2655 impermissibly replaces the judgments of the covered platforms about what speech may be permitted on their platforms with those of the government, it cannot be reconciled with the Supreme Court’s decision in Moody.
AB 2655 disregards numerous significant First Amendment holdings by the Supreme Court in Moody — specifically, that (i) it is not a “valid, let alone substantial” interest for a state to seek “to correct the mix of speech” that “social-media platforms present,” id. at 2407; (ii) a “State ‘cannot advance some points of view by burdening the expression of others,’” id. at 2409 (quoting Pac. Gas & Elec. Co. v. Pub. Utilities Comm’n of California, 475 U.S. 1, 20 (1986)); (iii) the “government may not, in supposed pursuit of better expressive balance, alter a private speaker’s own editorial choices about the mix of speech it wants to convey,” id. at 2403; (iv) “it is no job for government to decide what counts as the right balance of private expression — to ‘un-bias’ what it thinks biased, rather than to leave such judgments to speakers and their audiences. That principle works for social-media platforms as it does for others,” id. at 2394; and (v) “[h]owever imperfect the private marketplace of ideas,” a “worse proposal” is “the government itself deciding when speech [is] imbalanced, and then coercing speakers to provide more of some views or less of others,” id. at 2403.
Again, this seems important, given that the ruling in Moody shot down problematic GOP-pushed bills that sought to force social media companies to host speech they didn’t want to host.
All in all, this is a strong complaint that is completely consistent with strong First Amendment principles. I’m glad that Elon was willing to have ExTwitter step up and bring it, even if he’s doing so for purely selfish reasons.