Last month, we shared the details of a really good “Dear Colleague” letter that Senator Rand Paul sent around urging other Senators not to vote for KOSA. While the letter did not work and the Senate overwhelmingly approved KOSA (only to now have it stuck in the House), Paul has now expanded upon that letter in an article at Reason.
It starts out by pointing out how much good the internet can be for families:
Today’s children live in a world far different from the one I grew up in and I’m the first in line to tell kids to go outside and “touch grass.”
With the internet, today’s children have the world at their fingertips. That can be a good thing—just about any question can be answered by finding a scholarly article or how-to video with a simple search.
While doctors’ and therapists’ offices close at night and on weekends, support groups are available 24 hours a day, 7 days a week, for people who share similar concerns or have had the same health problems. People can connect, share information, and help each other more easily than ever before. That is the beauty of technological progress.
He correctly admits that the internet can also be misused, and that not all of it is appropriate for kids, but that’s no reason to overreact:
It is perhaps understandable that those in the Senate might seek a government solution to protect children from any harms that may result from spending too much time on the internet. But before we impose a drastic, first-of-its-kind legal duty on online platforms, we should ensure that the positive aspects of the internet are preserved. That means we have to ensure that First Amendment rights are protected and that these platforms are provided with clear rules so that they can comply with the law.
He points out that the law empowers the FTC to police content that could impact the mental health of children, but does not clearly define mental health disorders, and those could change drastically with no input from Congress.
What he doesn’t mention is that we’re living in a time when some are trying to classify normal behavior as a mental health disorder, and thus this law could be weaponized.
From there, he talks about the “duty of care.” That’s a key part of KOSA and other similar bills: it says that websites have a “duty of care” to make efforts to prevent their sites from causing various problems. As we’ve explained for the better part of a decade, a “duty of care” inevitably turns into a demand for censorship, as it’s the only way for companies to avoid costly litigation over whether or not they were careful enough.
Just last week, I got into a debate with a KOSA supporter on social media. They insisted that they’re not talking about content, but just about design features like “infinite scroll.” When asked about what kind of things they’re trying to solve for, I was told “eating disorders.” I pointed out that “infinite scroll” doesn’t lead to eating disorders. They’re clearly targeting the underlying content (and even that is way more complex than KOSA supporters realize).
Senator Paul makes a similar point in the other direction. Things like “infinite scroll” aren’t harmful if the underlying content isn’t harmful:
For example, if an online service uses infinite scrolling to promote Shakespeare’s works, or algebra problems, or the history of the Roman Empire, would any lawmaker consider that harmful?
I doubt it. And that is because website design does not cause harm. It is content, not design, that this bill will regulate.
As for stopping “anxiety,” Paul makes the very important point that there are legitimate and important reasons why kids may feel some anxiety today, and KOSA shouldn’t stop that information from being shared:
The world’s most well-known climate activist, Greta Thunberg, famously suffers from climate anxiety. Should platforms stop her from seeing climate-related content because of that?
Under this bill, Greta Thunberg would have been considered a minor and she could have been deprived from engaging online in the debates that made her famous.
Anxiety and eating disorders are two of the undefined harms that this bill expects internet platforms to prevent and mitigate. Are those sites going to allow discussion and debate about the climate? Are they even going to allow discussion about a person’s story overcoming an eating disorder? No. Instead, they are going to censor themselves, and users, rather than risk liability.
He also points out — as he did in his original letter — that the KOSA requirements to block certain kinds of ads make no sense in a world in which kids see those same ads elsewhere:
Those are not the only deficiencies of this bill. The bill seeks to protect minors from beer and gambling ads on certain online platforms, such as Facebook or Hulu. But if those same minors watch the Super Bowl or the PGA tour on TV, they would see those exact same ads.
Does that make any sense? Should we prevent online platforms from showing kids the same content they can and do see on TV every day? Should sports viewership be effectively relegated to the pre-internet age?
Even as I’ve quoted a bunch here, there’s way more in the article. It is, by far, one of the best explanations of the problems of KOSA and many other bills that use false claims of “regulating design” as an attempt to “protect the kids.” He also talks about the harms of age verification, how it will harm youth activism, and how the structure of the bill will create strong incentives for websites to pull down all sorts of controversial content.
There is evidence that kids face greater mental health challenges today than in the past. Some studies suggest this is due more to society’s greater openness to discussing and diagnosing mental health challenges. But there remains no compelling evidence that the internet and social media are causing it. Even worse, as Paul’s article makes abundantly clear, there is nothing suggesting that censoring the internet will magically fix those problems. Yet that’s what KOSA and many other bills are designed to do.
Two bits of news came out of the letter Mark Zuckerberg sent to Rep. Jim Jordan this week (and how people responded to it), neither of which is what you’re likely to have heard about. First, Donald Trump seems to be accusing himself of rigging the 2020 election against himself.
And, second, Mark Zuckerberg has absolutely no spine when it comes to Republican pressure on Meta’s moderation practices. He falsely plays into their fundamentally misleading framing, all to win some temporary political favors by immediately caving to pressure from the GOP.
You may have seen a bunch of headlines in the past couple of days claiming that Mark Zuckerberg “admitted” that the Biden White House pressured him about “censoring” content and he wished he’d stood up to them more. It got plenty of coverage. Unfortunately, almost none of that coverage is accurately reporting what happened, what’s new, and what was actually said.
The reality is pretty straightforward: Mark Zuckerberg folded like a cheap card table in the face of coercive pressure from Rep. Jim Jordan to modify Meta’s moderation practices. What he says plays right into Jordan’s mendacious, misleading campaign. In short, Zuckerberg’s claim that he would stand up to government pressure on moderation is undermined by the fact that he’s making it literally while caving to government pressure on moderation.
First, it’s necessary to understand the history. It’s no secret that the White House sought to persuade social media companies to adjust their content moderation practices. They said so publicly. Hell, there was just a big, giant, massive Supreme Court case about that, where the details of government requests to social media were on full display.
But, as the Supreme Court Justices themselves made clear during the oral arguments, the White House reaching out to media providers and trying to persuade them on editorial decisions is nothing new, nor is it problematic. The only thing that matters is whether the government uses coercive techniques, threatening or punishing the company if it fails to comply.
Justices Kavanaugh and Kagan were talking about this during the oral arguments:
JUSTICE KAVANAUGH: You’re speaking on behalf of the United States. Again, my experience is the United States, in all its manifestations, has regular communications with the media to talk about things they don’t like or don’t want to see or are complaining about factual inaccuracies.
[….]
JUSTICE KAGAN: I mean, can I just understand because it seems like an extremely expansive argument, I must say, encouraging people basically to suppress their own speech. So, like Justice Kavanaugh, I’ve had some experience encouraging press to suppress their own speech.
You just wrote about editorial. Here are the five reasons you shouldn’t write another one. You just wrote a story that’s filled with factual errors. Here are the 10 reasons why you shouldn’t do that again.
I mean, this happens literally thousands of times a day in the federal government.
The fact that the Supreme Court did not see any evidence that this was coercive should say something.
Nothing in what Zuckerberg said changes any of that. He simply repeats what was already known and already public: that, yes, White House officials sought to persuade Meta in how it handled some moderation elements. Much of that pressure was public, and even the pressure that was private has been revealed before.
Remember, Jim Jordan has spent the last couple of years weaponizing the House Judiciary Committee to misleadingly claim that the government was “weaponized” to suppress conservative speech. He’s sent dozens upon dozens of subpoenas, almost all of which misleadingly demand responses or data based on his false belief that basic, fundamental trust & safety work is somehow an attack on free speech rights.
But make no mistake about Jordan’s end goal here: it is to prevent websites from ever doing anything to try to counter the spread of disinformation. We’re not even talking about removing or blocking content. He doesn’t want there to be any effort to fact check or debunk nonsense. And that’s because the party that he is a part of is the largest producer and purveyor of complete and utter bullshit. And having people point that out is seen as an attack.
So Jordan has framed any attempt to refute nonsense as “an attack on free speech.” Tragically, much of the media (and plenty of tech execs) have fallen into this trap and accepted Jordan’s framing.
Finally, that brings us to Zuckerberg’s letter from this week. In it, he admits (again) what has been widely known and widely reported on, and was central to the Murthy v. Missouri Supreme Court case: that some people in the White House sought to persuade Meta to take Covid misinfo more seriously.
In 2021, senior officials from the Biden Administration, including the White House, repeatedly pressured our teams for months to censor certain COVID-19 content, including humor and satire, and expressed a lot of frustration with our teams when we didn’t agree. Ultimately, it was our decision whether or not to take content down, and we own our decisions, including COVID-19-related changes we made to our enforcement in the wake of this pressure. I believe the government pressure was wrong, and I regret that we were not more outspoken about it. I also think we made some choices that, with the benefit of hindsight and new information, we wouldn’t make today. Like I said to our teams at the time, I feel strongly that we should not compromise our content standards due to pressure from any Administration in either direction and we’re ready to push back if something like this happens again.
So what is actually revealed here? Literally nothing new at all. It was already widely reported that the White House tried to persuade Meta to be more responsive. And there were reasons for this. People were dying from Covid, and internal documents show that Zuckerberg himself was hoping that Facebook would be helpful in getting people vaccinated. But the platform was being bombarded with conspiracy theories, lies, and nonsense that was misleading people into putting lives at risk.
So, yes, of course the White House would reach out to Meta and suggest that the platform should do better in stopping the flood of misleading, dangerous info. None of that should be revelatory or even noteworthy.
And if you read what Zuckerberg says here, he still says that they didn’t do anything because of pressure from the White House: “Ultimately, it was our decision whether or not to take content down, and we own our decisions, including COVID-19-related changes we made to our enforcement…”
But then he says, “I believe the government pressure was wrong, and I regret that we were not more outspoken about it.”
And what does that even mean? First of all, Meta was pretty fucking outspoken. When Joe Biden accused Meta of “killing people,” Meta went all out in calling that claim crazy. They said that the Biden administration was “looking for scapegoats for missing their vaccine goals” and “we will not be distracted by accusations which aren’t supported by the facts.”
So, what’s new here? It was widely known that the White House wanted Meta to be more responsible about Covid and vaccine misinfo. They said so publicly and privately. The private emails were widely reported on and subject to a landmark Supreme Court case that was just decided less than two months ago. None of that is new.
Zuckerberg also says that they made their own decisions and it wasn’t due to White House pressure, which confirms what was said during the Supreme Court case.
The only “new” thing here is Zuck suggesting he regrets not being more aggressive in… what…? In making sure more people saw misinformation that might lead them to make bad decisions and get sick and possibly die? And again, it’s not even that Meta didn’t push back. They pushed back hard.
And yet, Jim Jordan and the House Judiciary are claiming that this was some big revelation:
So, again, neither of the first two points is new or even meaningful. It was public knowledge that the White House spoke to Meta. And, of course Meta moderated (not censored) the speech of Americans, because those Americans violated Meta’s policies. And, as a private entity, they’re free to do that. That’s American freedom, something Jim Jordan seems unable to comprehend.
Regarding the Hunter Biden laptop story, that’s the next paragraph of Zuckerberg’s letter:
In a separate situation, the FBI warned us about a potential Russian disinformation operation about the Biden family and Burisma in the lead up to the 2020 election. That fall, when we saw a New York Post story reporting on corruption allegations involving then-Democratic presidential nominee Joe Biden’s family, we sent that story to fact-checkers for review and temporarily demoted it while waiting for a reply. It’s since been made clear that the reporting was not Russian disinformation, and in retrospect, we shouldn’t have demoted the story. We’ve changed our policies and processes to make sure this doesn’t happen again; for instance, we no longer temporarily demote things in the U.S. while waiting for fact-checkers.
Again, literally nothing in this is new. All of this was known at the time. Indeed, Meta admitted it at the time and admitted that it had probably been too quick to limit the spread of the story (just as Twitter had done, admitting the very next day that the policy was a bad one and needed to change). We’ve covered all this in great detail before.
Furthermore, Zuckerberg said this exact same thing on Joe Rogan’s podcast two years ago. That also led people to falsely claim that he had admitted they blocked the spread of that NY Post story due to pressure from the White House, even though he said no such thing.
Both times, he said that the FBI gave general warnings about “hack and leak” operations that the Russians were working on, which is no surprise given that the Russians did exactly that during the 2016 election in releasing the DNC emails. The FBI (unsurprisingly!) also said that there were a number of potential targets, including Hunter Biden. And that was also obvious. Anyone in the President’s family and political circle would be obvious targets. At no point has anyone suggested that the FBI said that they should suppress this particular story.
And, remember, the original Hunter Biden story was weakly sourced. Multiple news organizations, including Fox News, had turned down the story. That was because there were all sorts of questions about its legitimacy. And given what had happened in the past, it seemed wise to be cautious.
Indeed, these days Republicans seem oddly quiet about news organizations still holding back on reporting on the documents hacked from top Republicans like Roger Stone by the Iranians this election cycle. Is Jim Jordan going to accuse companies of illegally interfering in the election because they won’t publish documents that are embarrassing to Trump? Why the silence, Jim? Oh, right.
Even more to the point, at the time of that NY Post story, the Trump administration was in charge. It was October of 2020, a month before the 2020 election. So, this “truth” from Donald Trump is absolutely insane, because he appears to be accusing himself of “rigging” the election against himself:
If you can’t see that, it’s Donald Trump posting on Truth Social:
“Zuckerberg admits that the White House pushed to SUPPRESS HUNTER BIDEN LAPTOP STORY (& much more!). IN OTHER WORDS, THE 2020 PRESIDENTIAL ELECTION WAS RIGGED. FoxNews, New York Post, Rep. Laurel Lee, House Judiciary Committee.”
Again, that is Trump saying “the White House” in 2020 “rigged” the Presidential election. So far, the only reporter I’ve seen call this out is Philip Bump at the Washington Post. This is Trump being so confused, he’s accusing himself of rigging the election.
Finally, Zuckerberg’s letter concludes with even more nonsense.
Apart from content moderation, I want to address the contributions I made during the last presidential cycle to support electoral infrastructure. The idea here was to make sure local election jurisdictions across the country had the resources they needed to help people vote safely during a global pandemic. I made these contributions through the Chan Zuckerberg Initiative. They were designed to be non-partisan, spread across urban, rural, and suburban communities. Still, despite the analyses I’ve seen showing otherwise, I know that some people believe this work benefited one party over the other. My goal is to be neutral and not play a role one way or another or to even appear to be playing a role. So I don’t plan on making a similar contribution this cycle.
Why is he even bringing up his personal donations if this is about Meta? And, furthermore, is he really saying that he won’t do any more donations simply because “some people believe” that donations for safe voting benefit one party or another?
This is the most spineless response to a mendacious, targeted campaign by a politician who is weaponizing the power of the government to pressure a media company over its editorial policies. And Zuck folds like a cheap card table. And it’s doubly ironic, because part of that folding is claiming he won’t fold again (something he didn’t even do in the first place, but is doing now).
Oh, and of course, Elon jumps in to say this “sounds like a First Amendment violation.”
Dude, the Supreme Court literally just covered this in a case that talked quite a bit about your own site and said (pretty fucking clearly) that the record did not support any claim of a First Amendment violation.
All of this is stupid. That letter is written in the worst possible way. While it does not state anything fundamentally false, it makes it sound like things that have been public knowledge for years are somehow a new admission. It further directly enables idiots like Trump, Jordan, and Musk to claim false things about what happened. And, finally, it just contributes to a totally unnecessary news cycle.
The only actual “news” out of all this is (1) Zuckerberg has no spine and simply cannot stand up to bad faith government pressure to change his moderation practices when it comes from Republicans (he was fine doing so when it came from Democrats) and (2) Donald Trump has accused himself of rigging his own election against himself.
Zuckerberg has to know how this would play out. After all, the same misleading reaction happened two years ago when he went on Rogan’s podcast. The only reasonable interpretation of this is that he sent this letter, knowing how it would be interpreted, to give Jordan/Trump red meat to continue believing their own false and misleading claims in case Trump wins in the fall. It’s stupid and cynical, but that’s the kind of politics Meta seems to play these days.
More and more politicians have decided the only way to make America great is to abandon the things that made America great in the first place. The free and open exchange of ideas is getting shut down. The First Amendment is being remade and remodeled to only protect the ideas those in power are in favor of.
As is always the case when aspiring fascists take the law into their own hands, the first people against the wall are those whose job it is to pass information on to others. That’s why when rulers seek to control the public’s perception, they go after journalists and teachers first. An educated and informed voting bloc won’t vote the “right” way. And if they’re going to hold onto their power, they need to make sure the next generation of voters and public servants only know what their current leaders want them to know.
This is America and yet it feels like something else entirely in far too many states. Between the incessant book bans, anti-LGBTQ laws, and the constant portrayal of the First Amendment as only protective of speech government leaders like, no one insisting on traveling in Donald Trump’s orbit is actually trying to make America “great.” Instead, it appears they’re trying to make America WWII Germany. Or Russia under Lenin/Stalin/Putin.
Oklahoma’s education board has revoked the license of a former teacher who drew national attention during surging book-ban efforts across the U.S. in 2022 when she covered part of her classroom bookshelf in red tape with the words “Books the state didn’t want you to read.”
The decision Thursday went against a judge who had advised the Oklahoma Board of Education not to revoke the license of Summer Boismier, who had also put in her high school classroom a QR code of the Brooklyn Public Library’s catalogue of banned books.
For that, the board pulled her license. And all Boismier did was highlight the ridiculousness of the state government’s actions and provide opportunities for students to discover for themselves the content Oklahoma legislators unilaterally decided they should no longer have access to.
There’s nothing more American than someone fighting censorship, especially when that person is tasked with educating and informing developing minds. But the Oklahoma government doesn’t want Americans teaching in its schools. And it clearly doesn’t want anyone — no matter what age or position — to question the status quo being imposed by the censors in its midst. All it wants is unquestioning acquiescence — the sort of thing that, if adopted by the founding fathers, would have resulted in us never referring to them as “founding fathers.” This would still be a British colony. Or, failing that, a former British colony now presided over by China, or one strip-mined of anything of value by successive European nations with their own takes on colonialism before settling in for a long run of corrupt governments and military coups.
But we’re not that. We’re the land of liberty. Or supposed to be. Firing teachers for pushing back against censorship isn’t an American ideal. It’s apparently an Oklahoman ideal, although it would be a stretch to think all Oklahomans support this sort of thing.
But one politician has managed to convert a personal vendetta into a chilling effect that will harm teachers, librarians, and other government employees across the state. State Superintendent Ryan Walters has been trying to get Boismier fired for two years. And now he’s finally succeeded.
Walters, who was a candidate for Oklahoma’s top education office when Boismier was teaching, had called on the board in 2022 to revoke her teaching license in a letter he shared on social media.
“There is no place for a teacher with a liberal political agenda in the classroom,” Walters wrote. He accused her of providing “banned and pornographic material” to students.
Strong words from the state sup. Strong words indeed. Of course, one feels Walters would be completely supportive of a teacher with a “conservative political agenda” presiding over a classroom of young impressionables. His follow-up statement — that Boismier was fired for “breaking the law” that prohibits instruction on topics related to race and gender (and let’s just take a beat to recognize just how amazingly shitty that law is) — doesn’t add anything that would alter the immediate reaction to this news. That reaction being: the state will always fire people who don’t align themselves with the bigots running the government.
He also added this:
He accused her of providing “banned and pornographic material” to students.
This is as patently false as it is technically false. Boismier didn’t provide any material to anyone. In fact, she taped off a section of her own classroom bookshelf to prevent access. Handing the kids a QR code to a list of banned books doesn’t “provide” this material to anyone. All it gives them is a list of books. It’s up to the students to do what they will with that information.
But discussing this in rational terms is useless. These are not rational people. These are people steeped in hatred who have the power to inflict misery on anyone who pushes back against their agenda. The state superintendent doesn’t seem to like any political agenda other than his own and the ones pushed by the party he clearly supports. That’s not how America is supposed to work. There’s supposed to be free and open discussion of competing views — something that encourages younger minds to consider all the facts and draw their own conclusions.
For all the whining about “indoctrination” of students with “liberal agendas,” state officials clearly aren’t opposed to indoctrinating kids. They just want to be the ones doing the indoctrinating. It’s sickening and, unfortunately, it’s not the political career-killer it once was. Now, going as far right as you can — even past the Constitutional confines these people claim to support — is encouraged and rewarded. Many more people will be up against the wall before America (hopefully) moves past this embrace of the worst people to ever hold public office.
To recap: in November, Media Matters released an article written by Eric Hananoki. The article detailed an investigation in which they found ads from big-name advertisers like Apple, IBM, and Oracle showing up next to pro-Nazi content. This happened a day after Elon had tweeted “You have said the absolute truth” in response to someone who had tweeted an antisemitic conspiracy theory.
Those two things together caused many advertisers to pause their advertising on ExTwitter.
In response, Musk sued Media Matters in Texas (and in Ireland). He also (following a stupid tweet from Trump advisor Stephen Miller) appeared to encourage state Attorneys General to “investigate” Media Matters as well. This resulted in Missouri’s AG Andrew Bailey promising his team was “looking into this matter.”
Both Bailey and Texas’ Ken Paxton launched investigations, issuing Civil Investigatory Demands (CIDs) to Media Matters. Media Matters went to court in both cases to challenge the CIDs, noting correctly that they appeared to be vexatious retaliation for its constitutionally protected speech.
In April, Judge Amit Mehta in the federal district court in Washington DC blocked Ken Paxton’s CID, noting that it was clearly retaliation for Media Matters’ speech. Last week, Judge Mehta did the same thing for Andrew Bailey’s CID.
The Bailey part is a bit strange, because Bailey rushed to a state court in Missouri to seek to enforce the CID before Media Matters had even seen it. Shortly after that, Media Matters amended its complaint against Paxton, in Mehta’s courtroom, to also include the Bailey CID.
Not surprisingly, Mehta sees through Bailey’s nonsense just as he did with Paxton. Bailey tried to claim that Younger abstention applies because he filed his case in state court before Media Matters added him to their lawsuit. Younger abstention is the doctrine under which federal courts aren’t supposed to involve themselves in matters still pending before state courts; we’ve talked about it in other cases before.
But here, Mehta says that Younger abstention is limited to “only exceptional circumstances,” and he doesn’t find this case exceptional. Indeed, Mehta’s analysis of the request to apply Younger abstention repeatedly points out how obvious it is that Bailey is just trying to inflict pain on Media Matters, rather than pursuing a legitimate case against them.
After a few other procedural points, Mehta gets down to the heart of the matter. It’s obvious that Bailey is doing exactly what Paxton was before him: trying to suck up to Elon Musk with a censorial, vexatious “investigation” into Media Matters for its protected speech. While Paxton didn’t even put up much of a defense to what he was doing, Bailey at least tried to claim that if Media Matters defamed ExTwitter, then it’s no longer protected speech.
But, of course, as we and others pointed out, the lawsuit Elon filed did not even accuse Media Matters of defamation. Instead, it flat out admitted that Media Matters did find exactly what it claimed to find. Judge Mehta points that out in response to Bailey’s argument:
But in his reply brief, for the first time, Defendant raises the possibility that Media Matters’ reporting is not protected expression. In a section addressing the causation element, he argues, “if in fact Media Matters defamed X by knowingly posting false information, then its activity was not protected by the First Amendment, and so no ‘retaliation’ is even possible.”…
… Even if preserved, on the present record, Plaintiffs have likely shown that their reporting was not defamatory and therefore was protected speech under New York Times v. Sullivan. Hr’g Tr. at 75:22-23 (“We absolutely stand by the defense that everything in Media Matters[’] reporting was accurate.”). Hananoki has averred in these proceedings that his “November 16 article contains screenshots of X feeds, which include at least nine organic posts from X users and six advertisements from major corporate entities.” Pls.’ Mot. for TRO & Prelim. Inj., ECF No. 4, Decl. of Eric Hananoki in Supp. of Pls.’ Mot., ECF No. 4-3, ¶ 15 [hereinafter Hananoki Decl.]. In its public response to Hananoki’s article, X did not deny that advertising in fact had appeared next to the extremist posts on the day in question. X stated that it had served “less than 50 total ad impressions” next to the “organic content featured in the Media Matters article” (a mere fraction of the 5.5 billion ad impressions served that day), and it conceded that Hananoki and one other person had seen advertisements of two of the brands identified in the article next to the extremist content. See Compl., ECF No. 1, ¶ 44 [hereinafter Compl.]; Elon Musk (@elonmusk), X.COM (Nov. 18, 2023, 2:01 AM), https://perma.cc/X4HN-PLJ4. X called these “contrived experiences,” but did not deny the basic premise of the article: that X’s platform was delivering ads of major brands next to extremist content. Many other media outlets, as recently as April 2024, have published similar findings. Compl. ¶ 34; Third Hananoki Decl. ¶¶ 13–14. These other stories corroborate Hananoki’s reporting and Plaintiffs’ belief in its accuracy.
From there, it’s easy for Mehta to point out that Bailey’s CID serves to chill speech.
The court already has held that Defendant Paxton’s announcement of an investigation and issuance of a CID demanding records relating to Media Matters’ organization, funding, and journalism would sufficiently deter a news organization or journalist “of ordinary firmness” from speaking again about X-related matters. Media Matters, 2024 WL 1773197, at *18 (explaining why Defendant Paxton’s actions satisfied the second element and describing chilling effects of the Texas CID). Defendant Bailey has gone one step further. He has filed suit not only to enforce the Missouri CID, but he has asked a state court to sanction Media Matters with a civil penalty. Such action chills speech.
And that speech chilling has already happened:
Further, Plaintiffs’ “actual response” demonstrates the chilling effects of Defendant Bailey’s conduct. See Constantine v. Rectors & Visitors of George Mason Univ., 411 F.3d 474, 500 (4th Cir. 2005). Plaintiff Hananoki avers that the CID and Petition, as well as Defendant Bailey’s public attacks, “have had an extremely negative effect on my work and on me personally.” Third Hananoki Decl. ¶ 10. He continues to limit his communications with other journalists and his editor, and self-censors research and writing on X. Id. ¶ 12. Hananoki’s editor, Benjamin Dimiero, confirms that Hananoki and other Media Matters’ journalists have self-censored out of fear that “certain topics will lead to backlash, and perhaps even imperil the organization’s future operations.” Second Dimiero Decl. ¶ 7. He also attests that Defendant’s actions have adversely impacted Media Matters’ editorial process, resulting in slowed output and hampering efforts to issue timely reporting. Id. “[E]xtreme[] caution[]” remains pervasive within the organization. Id. ¶ 6; see also Suppl. Padera Decl. ¶¶ 12 (describing “chilling [of] employees’ willingness to speak, research, or report on topics related to the subjects of the investigations and the Petition”), 13–14 (describing impaired collaboration with other groups).
Bailey also argued that because Media Matters couldn't directly show that his actions were retaliatory, it can't claim that they were. The judge isn't buying any of that, especially because the basis for Bailey's argument is a criminal law precedent, Nieves, which likely doesn't even apply here.
The court is not convinced that Nieves applies here. Defendant cites no case in which a court has applied Nieves in the civil context. To the contrary, courts have declined to extend Nieves beyond the retaliatory arrest setting.
Also, the judge calls out that Bailey’s investigation appears to have come directly from Stephen Miller’s request, which is another indication that it is retaliatory:
Plaintiffs have shown that many other news outlets published similar stories about advertising appearing next to antisemitic and extremist posts on X, yet none received a CID from Defendant Bailey. Compl. ¶ 34 (e.g., Washington Post, Business Insider, Kansas City Star) (links to articles verified by court); Third Hananoki Decl. ¶ 13 (NBC News). But only one drew the condemnation of former Trump aide, Stephen Miller. See Compl. ¶ 46. It is no coincidence that both Defendants Paxton and Bailey announced investigations of Media Matters shortly after Miller implicitly called on “conservative state Attorneys General[s]” to do so.
But, more to the point, Bailey’s own words and actions revealed that the case was retaliatory against Media Matters’ speech:
First, Defendant’s public statements are direct evidence of retaliatory intent. Two days after Media Matters published the November 16 Article, Musk posted a tweet threatening “a thermonuclear lawsuit against Media Matters” for its “fraudulent attack on our company,” accusing Media Matters of manipulating X’s algorithm to artificially force placement of the ads next to extremist content. See Compl. ¶¶ 44–45; Elon Musk (@elonmusk), X.COM (Nov. 18, 2023, 2:01 AM), https://perma.cc/X4HN-PLJ4. In response, Stephen Miller tweeted, “Fraud is both a civil and criminal violation. There are 2 dozen+ conservative state Attorneys General.” See id. ¶ 46; Stephen Miller (@StephenM), X.COM (Nov. 19, 2023, 11:48 AM), https://perma.cc/9E6LFJGY (emphasis added). Only hours later, Defendant Bailey took up Miller’s call, responding “[m]y team is looking into this matter,” even when there was no apparent connection to Missouri. See id. ¶ 47; Andrew Bailey (@AGAndrewBailey), X.COM (Nov. 19, 2023, 4:46 PM), https://perma.cc/J463-656K. Defendant’s investigation thus began with a political bent.
From that point forward, Defendant consistently characterized Media Matters in ideological terms. When publicly announcing his issuance of the document preservation notice, Defendant referred to Plaintiffs as “radicals” and called them “progressive tyrants masquerading as [a] news outlet[].” Pls.’ Ex. 4 at 4. He also claimed Media Matters had acted to “wipe out free speech.” Id. Months later, when simultaneously issuing the CID and filing the Petition, in a press release he called Media Matters a “political activist organization” and “‘progressive’ activists masquerading as [a] news outlet[],” which had “pursued an activist agenda in its attempt to destroy X.” Suppl. Compl., Ex. 7, ECF No. 46-5, at 3. These statements are at odds with Defendant’s concession in these proceedings that Media Matters is a “media company” that is “absolutely” entitled to “core” First Amendment protections.
It would appear that Bailey’s eagerness to join in the culture war fun helped sink his own case.
Then, on June 3, 2024, Defendant Bailey said out loud the true purpose of his investigation. During an online interview with Donald Trump Jr., Defendant Bailey was asked “what’s the end game” of his investigation of Media Matters. Defendant Bailey responded: “It’s a new front in the war against the First Amendment . . . We’ve seen a direct assault by the deep state and President Biden’s Administration.” He accused Media Matters of “rigging the system to take down X.” He continued:
They don’t want us to have a medium of communication and they will bend and break the rules through any means necessary . . . My office was one of the first in the nation joined by my colleague Ken Paxton in Texas to file an investigation, launch an investigation, into Media Matters . . . We’re not going to let them destroy free speech in America[.]
Revealingly, Defendant Bailey expressly tied the investigation to the upcoming election: “This is absolutely a new front in the fight for the war for free speech. This investigation is really critical and again especially as we move into an election cycle in 2024.” (Emphasis added.) Finally, on June 5, 2024, Defendant Bailey once again reiterated on a podcast that Media Matters is a “radical progressive advocacy group masquerading as a 501(c)(3) . . . when in reality what they really want to do is want to silence conservative voices.”
Although tough talk is not foreign to the law enforcement arena, such overt political messaging is atypical. A reasonable factfinder is likely to interpret Defendants’ words as targeting Media Matters not for legitimate law enforcement purposes but instead for its protected First Amendment activities.
Furthermore, the judge points out that while Bailey’s office keeps claiming the investigation is into potential “fraud” by Media Matters in soliciting donations, nothing it has done substantiates that claim. Rather, it’s almost entirely about Media Matters’ speech about ExTwitter.
Even accepting this as evidence that Media Matters’ reporting about X was misleading if not defamatory, Reed nowhere explains how the publication constitutes “fraud to solicit donations from Missourians.” Pls.’ Ex. 6 at 2. He never identifies what suspected fraudulent statements or omissions Media Matters made to Missourians for the purpose of soliciting donations. If he means to say that Media Matters’ defamatory reporting itself is the fraud, he nowhere links that content to Media Matters’ fundraising efforts. He does not claim, for example, that Media Matters used its reporting on X to solicit donations. In fact, the webpage on which the November 16 Article appeared made no express fundraising appeal. Nor did it include a donation link. Defamation is not fraud. It is thus likely that the false reporting-as-fraudulent fundraising justification for the investigation is pretext for retaliation.
Even worse, Bailey’s office told the court that it had “come into possession of internal Media Matters documents” that reveal “plans by the organization to use solicited funds for activities contrary to those publicly disclosed to its Missouri donors.” But, you know, the judge gets to see those documents too. And the judge isn’t stupid. Because the documents are, for the most part, neither “internal” nor are they “contrary” to what Media Matters has long stated publicly:
The court has reviewed the documents. See Minute Order, June 7, 2024. It is perplexed by Reed’s representations.
The records are internal strategic action plans that identify Media Matters as one of multiple organizations aligned to resist President Trump. Media Matters, the strategy materials say, plans to “disarm[] right-wing misinformation,” “lead[] the fight against the next generation of conservative misinformation,” coordinate opposition research, push news stories and research, and fight for ethical standards in government. Pls.’ Resp. to the Court’s June 7 Minute Orders, ECF No. 65 [hereinafter Pls.’ Resp.], Ex. A, ECF No. 65-1 [hereinafter Pls.’ Ex. A], at 1, 7–8. The documents appear to be published around 2016, seven years before Hananoki’s reporting on X, a fact Defendant Bailey appeared to confirm on a podcast. One of the documents has been publicly available on the internet since 2017.
[….]
The court does not understand how a publicly available document written in 2016, years before Elon Musk acquired X, is proof that Media Matters “used fraud to solicit donations from Missourians in order to trick advertisers into removing their advertisements from X” in 2023. Pls.’ Ex. 6 at 2. The “targets” referenced in the 2016 document were small “alt-right” online publications that Media Matters believed pushed “fake news,” not large social media platforms like then-Twitter, now X, that host content created by others. And, the way Media Matters would accomplish its objective, according to the 2016 document, was by working with large social media companies like Google and Facebook, not against one like X, then-Twitter.
What’s more, Defendant’s declaration is incomplete. Reed accurately quotes from a portion of Media Matters’ website to claim that it misrepresented its mission to donors. Second Reed Decl. ¶ 6. The website does say that Media Matters “posts rapid-response items as well as longer research and analytic reports documenting conservative misinformation throughout the media” and “works daily to notify activists, journalists, pundits, and the general public about instances of misinformation.”…. But Reed omits a key statement: that Media Matters’ work includes “providing [activists, journalists, pundits, and the general public] with the resources to rebut false claims and to take direct action against offending media institutions.” Id. Defendant’s selective quotation of Media Matters’ website undermines the credibility of his representations. Alongside his overselling of a years-old, publicly available document, it is proof of pretext.
Mehta also points out that Bailey’s rush to the courthouse reveals how this was retaliatory rather than legitimate:
According to the Petition, the reason Defendant filed it before Media Matters even received the CID was that Media Matters “has failed or will fail to comply with” it. Pls.’ Ex. 6 ¶ 25 (emphasis added). Defendant’s assertion that Media Matters “will fail” to comply was based on its earlier lawful filing of this suit against Defendant Paxton. Id. ¶¶ 19–20. But MMPA § 407.090, which Defendant cites as the source of his authority to enforce the CID, does not authorize filing suit in anticipation of noncompliance. It provides that the Attorney General may request a court order “[w]henever a person fails to comply with any civil investigative demand[.]” Mo. Rev. Stat. § 407.090. Media Matters, of course, had not yet “fail[ed]” to comply with the Missouri CID as of March 25, 2024, because the CID was still en route. See Suppl. Compl. ¶ 23. Defendant’s filing of a preemptive suit not authorized by Missouri law is evidence of retaliatory intent.
In addition, Bailey demanded that Media Matters pay up for not responding to his CID, which leads the judge to say this is even more evidence of retaliation:
Then, Defendant Bailey sought to tilt the playing field in his favor. On May 24, 2024, before filing his reply in support of his motion to dismiss, Defendant returned to Missouri state court to amend his Petition. He added a second count, “Request for Civil Penalty,” which sought a $1,000 civil penalty for Media Matters’ failure to produce records by the April 15 deadline, which had passed 40 days earlier. Def.’s Ex. A at 11. Based on his reformulated Petition, Defendant argued to this court: Media Matters’ assertion that the Petition does not seek to punish wrongdoing “is easily dispensed with” because the “operative complaint seeks a $1,000 civil penalty” and for that reason Smith & Wesson is “starkly different.” Def.’s Reply at 2–3. He also asserted that, regardless of the civil penalty request, Plaintiffs’ failure to timely respond to the Missouri CID was itself a violation of Missouri law, making the enforcement action one seeking to sanction an “unlawful act.”…
From this timeline, it is apparent that Defendant Bailey amended the Petition to request the $1,000 civil penalty to distinguish this case from Smith & Wesson. After all, Defendant waited 40 days after April 15 to seek the civil penalty, and he did so only after Plaintiffs cited to a case that is nearly on all fours with this one. Defendant offers no reason for this sequencing of events other than the one drawn by the court. It is yet more proof that retaliation for protected expression was likely his true motive for investigating Media Matters.
So both CIDs are now blocked. And, yet, as we’ve explained elsewhere, in the ExTwitter case against Media Matters, Judge Reed O’Connor in Texas has no problem letting the case move on to discovery, even when there’s clearly no jurisdictional reason for his Texas court to take on the case.
It’s well-established that students still have Constitutional rights, even if some school administrators clearly believe otherwise. But they are limited, especially when on school grounds. There’s a lot of nuance in play. When those nuances are ignored, lawsuits get filed.
There’s plenty of nuance in this case, but not a lot of subtext. We all know what “Let’s Go Brandon” really means. The students in this case knew it too. And that collective knowledge — borne out of a TV broadcaster’s unfortunate attempt to recast the chants of “Fuck Joe Biden” that filled the air as he interviewed NASCAR driver Brandon Brown following his win in Talladega, Alabama — factors heavily into this decision [PDF] issued by a Michigan federal court. (h/t Volokh Conspiracy)
A few months after this poor reporter turned “Let’s Go Brandon” into shorthand for “Fuck Joe Biden,” two students (X.A. and D.A.) were given “Let’s Go Brandon” shirts by their mother as gifts. They both wore these shirts to school. They were both told to remove them. They both complied. In D.A.’s case, he wore another shirt under his “Let’s Go Brandon” sweatshirt with the same phrase on it, resulting in him being sent to the school social worker to get a more acceptable shirt to wear.
Both students were well aware of the double meaning. Both admitted as much when deposed. Their parents knew what the phrase meant as well. So did the teachers and administrators who ordered the students to remove their shirts. However, the family (all of whom are plaintiffs) argued the shirts were political speech, and even if they weren’t, at face value they contained no words that violated school policies against profanity.
The court says this doesn’t matter as much as the plaintiffs want it to. Schools are free to ban profanity from students’ clothing, even when the printed words aren’t actually profanities themselves. It helpfully serves up a selection of similar incidents either the court or the school’s employees had encountered in the past.
If schools can prohibit students from wearing apparel that contains profanity, schools can also prohibit students from wearing apparel that can reasonably be interpreted as profane. Removing a few letters from the profane word or replacing letters with symbols would not render the message acceptable in a school setting. School administrators could prohibit a shirt that reads “F#%* Joe Biden.” School officials have restricted student from wearing shirts that use homophones for profane words. See, e.g., Mercer v. Harr, No. Civ. A. H-04-3454, 2005 WL 1828581 (S.D. Tex. Aug. 2, 2005) (granting summary judgment in favor of the middle school when the school forbid a student from wearing a shirt that read “Somebody Went to HOOVER DAM And All I Got Was This ‘DAM’ Shirt.”). Defendant Bradford recalled speaking to one student who was wearing a hat that said “Fet’s Luck”. She thought the hat was inappropriate for school because rearranging the first letters of the two words resulted in a lewd message. Defendant Buikema testified that he asked a student to change out of a hoodie that displayed the words “Uranus Liquor” because the message was lewd. School officials could likely prohibit students from wearing concert shirts from the music duo LMFAO (Laughing My F***ing A** Off) or apparel displaying “AITA?” (Am I the A**hole?).
Context matters. There’s plenty of it here. No one on either side of this litigation was confused about the real meaning of this phrase. Pretending these three words only mean literally what they say isn’t a credible argument, especially when the plaintiffs themselves have already admitted otherwise.
The phrase means something other than the dictionary definition of the three words. D.A. and X.A. are not enthusiastic supporters of someone named Brandon. Albeit using different words, Let’s Go Brandon, means F*** Joe Biden, a personal insult containing a swear word. Defendants both interpreted the phrase as having a profane meaning. Both D.A. and X.A. thought the phrase was funny because it meant a profanity. When students use language in a school setting that can reasonably be interpreted as inappropriate, courts have permitted schools to discourage students from using that language.
The First Amendment covers some speech by students. It does not, however, cover the shirts that were only allowed to cover the plaintiffs for part of their school day.
In school settings, profanity does not enjoy First Amendment protection. Directing profanity toward a political figure does not transform the utterance to protected speech. Indisputably, the phrase Let’s Go Brandon originated as a profane personal insult directed at President Joe Biden. When Plaintiffs wore sweatshirts bearing the phrase to their middle school, school officials reasonably interpreted the phrase as having a profane meaning. The school officials then enforced the dress code and had Plaintiffs change their attire. The school’s actions did not violate Plaintiffs’ First Amendment rights.
And there it is. Students will either have to deploy phrases administrators haven’t figured out yet or transform other innocuous words into incognito profanities if they want to get away with swearing in school. The school can still ban “Let’s Go Brandon” from its campus without violating the Constitution… or at least until someone brings a better case than this to the same judicial circuit.
Over the last few years, we’ve seen a bunch of politicians trying to frame their regulation of the internet as not being about regulating speech, but about “design” or “process” or some such. But when you scratch beneath the surface, they’re always really about regulating speech. Whether it’s KOSA or California’s Age Appropriate Design Code (AADC) in the US, or the DSA in the EU, there is plenty of ink spilt to defend the claim that they’re not really about censorship.
Just recently we wrote about the Ninth Circuit seeing through California’s AADC. The politicians behind the law insisted it wasn’t about regulating content, only conduct. But the court recognized that was obviously not true. Then, over in the EU, we have the DSA, which European officials insist is never supposed to be used for moderating content, but where the guy in charge of enforcing it seems to think that of course he should be using it for that.
Daphne Keller, over at Stanford, recently came across a custom ChatGPT instance, designed to act as a “trust & safety regulation expert.” The custom tool was created by Inbal Goldberger, a long-time trust & safety executive. Whether or not the tool is any good is not the point. What’s really fascinating is that when Daphne used the tool to explore how websites should best comply with the various regulatory regimes they’re facing, over and over again, it suggested removing speech.
For platforms planning their compliance with laws like KOSA and the DSA, the most basic question is “what do I need to do?” ChatGPT has a lot of answers. One of the main answers is that platforms need to take down more lawful speech.
You can read the full transcripts that Daphne generated exploring both EU and US law and how companies should respond. They’re quite interesting, and I’m actually somewhat impressed by the ChatGPT responses.
For example, she asks the custom GPT how to comply with KOSA, and it immediately tells her that “the following categories of content must be prevented from being accessed by children,” and it includes plenty of First Amendment-protected speech including “violent content” and “cyberbullying.” Whether or not it’s good policy (or, even possible) to block such content, just the fact that this is the first instinct of this custom GPT says something.
And yes, some people will dismiss this by saying that you can’t trust the GPT in the first place. But if it’s reading these laws and finding that’s the best way to comply, it’s pretty clearly saying something about them. Back in July, we had that guest post by Matthew Lane, rightly pointing out that companies are lazy and they will seek to take the path of least resistance for compliance. And that least resistance will often be just pulling down any content that might, conceivably, be deemed problematic under the law, just as the custom GPT recommends.
KOSA wasn’t the only law that the GPT interpreted this way:
ChatGPT gave very similar answers about California’s Age Appropriate Design Code (AADC), telling me that as a platform operator “you need to ensure that your platform protects children from encountering” a list of “key types of content” including depictions of violence, substance abuse, and misinformation. Coincidentally, the same day that ChatGPT said this, the Ninth Circuit Court of Appeals said pretty much the same thing, ruling that the law’s clearly content-related provisions violated the First Amendment.
The answers I got about European law were to my mind equally blunt, though they may require a little more translation for readers not steeped in EU policy debates. I asked a question that deliberately reused language from a recent Commission letter demanding that X mitigate risks arising from the interview it hosted with former President Trump. (That letter, from Commissioner Thierry Breton, prompted outcry from civil society groups and rare public censure from other Commission authorities.) The question was, “What must I do to mitigate risks of detrimental effects on civic discourse and public security under the DSA?”
ChatGPT’s answer went awry in a way that really matters for small and mid-sized platforms: it described obligations that won’t apply unless a platform has over 45 million users in the EU, without mentioning that these simply don’t affect everyone else.
Importantly for the rights of Internet users, ChatGPT’s advice also crossed a number of important red lines in EU law that exist to protect freedom of expression and information. First, it instructed me to act not only against illegal content but also “harmful content,” through changes such as “adjusting your content moderation policies.” Using the word “harmful” is a big deal. Throughout the DSA legislative process that term was used to refer to lawful but awful expression, or else to a mixed category that includes both legal and illegal material. For example, the Commission’s explanatory memorandum for the DSA said:
There is a general agreement among stakeholders that ‘harmful’ (yet not, or at least not necessarily, illegal) content… should not be subject to removal obligations, as this is a delicate area with severe implications for the protection of freedom of expression.
ChatGPT’s advice to remove disinformation has a version of the same problem since in the EU, as in the US, not all disinformation is illegal.
Also, Daphne notes that if this is just AI “hallucinations,” it’s still notable that it always hallucinated in the same way (“censor more legal content”).
Several people suggested that the repeated and consistent answers I got were just AI hallucinations. It would be pretty odd for ChatGPT to happen to hallucinate the same interpretation of the DSA that Commissioner Breton did, or the same interpretation of the AADC that a federal court of appeals did, but… I guess it’s possible
But, as she notes, the mere plausibility of these results from the custom GPT also suggests that humans will find these recommendations plausible as well:
The real reason ChatGPT’s answers should ring alarm bells is that they were so plausible. These are the answers that humans may well arrive at—whether or not they ask an AI what to do—in parsing confusing legal obligations. They are answers we can find in sloppy secondary materials purporting to explain the laws, and in news coverage that uncritically repeat politicians’ exaggerated claims about their legal authority. ChatGPT’s answers may not correctly answer questions about the law. But its “take it all down” directives may be the ones that both humans and machines get trained to apply all the same.
These points are really key. When we talk about these laws, people keep pointing to little caveats and phrases in the law to say “but it’s not supposed to be interpreted that way.” But they are interpreted that way over and over again. This is why the Ninth Circuit was right to call out California’s AADC as inherently censorial in how it works. Because that’s exactly how most companies would interpret it.
And this impacts other things as well. One point that we’ve raised repeatedly in our articles about “age verification” is that when politicians try to hide the ball and call for “age estimation” or “age assurance” rather than outright age verification, it doesn’t matter. It will still lead to age verification as the tool that companies will feel most drawn to simply to avoid any risk of liability.
That also appears to be what the Custom GPT felt about some of these laws. The transcript Daphne posted regarding KOSA gives a really good example of this. Supporters of KOSA insist that it doesn’t require age verification, and technically it doesn’t. Not directly. But, as the Custom GPT explains, if you want to avoid a legal fight, you probably should use age verification anyway. So while it might not mandate it, it certainly is going to create pressure for companies to adopt it anyway, just to avoid legal liability.
PROMPT: Do I need to use age assurance or verification tools to comply with KOSA?
RESPONSE:
To comply with the Kids Online Safety Act (KOSA), you are not explicitly required to implement age assurance or age verification tools. However, it is strongly recommended to ensure that age-appropriate content is provided to users, particularly to prevent children from accessing harmful content.
The KOSA includes provisions for the Federal Trade Commission (FTC) and other bodies to conduct studies and provide guidance on effective methods for age verification. These may include systems at the device or operating system level, but KOSA does not mandate affirmative collection of personal data solely for age verification purposes.
If your platform already collects certain user data, you can utilize that for age-related purposes, but you are not obligated to implement new age gating or age verification functionalities if they aren’t part of your normal operations. However, if your platform includes sensitive content such as pornography or extreme violence, using robust age verification methods is strongly encouraged to ensure compliance and protect children from inappropriate content.
Daphne even pushed it further, asking if it violated the law to not use age verification, and the GPT suggested (reasonably under the law) that not using age verification might be deemed as taking insufficient measures:
PROMPT: I know it is recommended, but will I violate the law if I do not do age verification?
RESPONSE:
Under the Kids Online Safety Act (KOSA), while the use of age verification or age assurance tools is strongly recommended, failing to implement such tools does not automatically constitute a violation of the law. However, the key legal requirement is that you must take “reasonable measures” to protect children from harmful content. If you fail to implement any effective mechanism to prevent children from accessing such content, and this results in harm, you could be found non-compliant with KOSA.
The law emphasizes that platforms must make a good-faith effort to restrict access to harmful content by minors. Age verification is one of the methods to demonstrate compliance, but it is not the only method. If your platform is found to allow minors to access harmful content without taking sufficient steps to prevent this, it could lead to legal action under KOSA.
Thus, while not using age verification tools doesn’t directly violate the law, it might leave your platform vulnerable to being deemed non-compliant if other protective measures are not sufficiently robust.
There’s a lot more in these docs, and they really do a nice job of highlighting how, for all the winks, nods, and denials from the lawmakers, the simplest and most obvious way to comply with these laws is to take down First Amendment-protected speech and implement privacy-harming age verification policies.
This is one of the reasons we spend so much time digging into the nuances and details. People sometimes complain that all I do is criticize these laws, but it’s important to understand how they will actually be implemented and how that implementation could do more harm than good.
We’ve written a ton about Taylor Swift’s various adventures in intellectual property law and the wider internet. Given her sheer popularity and presence in pop culture, that isn’t itself particularly surprising. What has been somewhat interesting about her as a Techdirt subject, though, has been how she has straddled the line between being a victim of overly aggressive intellectual property enforcement as well as being a perpetrator of the same. All of this is to say that Swift is not a stranger to negative outcomes in the digital realm, nor is she a stranger to being the legal aggressor.
Which is why the point of this post is to be something of an open letter to Her Swiftness to not listen to roughly half the internet that is clamoring for her to sue Donald Trump for sharing some AI-generated images on social media falsely implying that Swift had endorsed him. First, the facts.
Taylor Swift has yet to endorse any presidential candidate this election cycle. But former President Donald Trump says he accepts the superstar’s non-existent endorsement.
Trump posted “I accept!” on his Truth Social account, along with a carousel of (Swift) images – at least some of which appear to be AI-generated.
One of the AI-manipulated photos depicts Swift as Uncle Sam with the text, “Taylor wants you to vote for Donald Trump.” The other photos depict fans of Swift wearing “Swifties for Trump” T-shirts.
As the quote notes, not all of the images were AI-generated “fakes.” At least one of them was of a very real woman, who is very much a Swift fan, wearing a “Swifties for Trump” shirt. There is likewise a social media campaign for supporters from the other side of the aisle, “Swifties for Kamala.” None of that is really much of an issue, of course. But the images shared by Trump on Truth Social implied far more than the existence of a community of her fans who also like him. So much so, in fact, that he appeared to accept an endorsement that never was.
In case you didn’t notice, immediately below that top left picture is a label that clearly marks the article and associated images as “satire.” The image of Swift doing the Uncle Sam routine to recruit people to back Trump is also obviously not something that came directly from Swift or her people. In fact, while she has not endorsed a candidate in this election cycle (more on that in a moment), Swift endorsed Biden in 2020 with some particularly biting commentary around why she would not vote for Trump.
Now, Trump sharing misleading information on social media is about as newsworthy as the fact that the sun will set tonight. But it is worth noting that social media exploded in response, with a ton of people online urging Swift to “get her legal team involved” or “sue Trump!” And that is something she absolutely should not do. Some outlets have even suggested that Swift should sue under Tennessee’s new ELVIS Act, which prohibits the use of people’s voice or image without their authorization and which has never been tested in court.
Trump’s post might be all it takes to give Swift’s team grounds to sue Trump under Tennessee’s Ensuring Likeness Voice and Image Security Act, or ELVIS Act. The law protects against “just about any unauthorized simulation of a person’s voice or appearance,” said Joseph Fishman, a law professor at Vanderbilt University.
“It doesn’t matter whether an image is generated by AI or not, and it also doesn’t matter whether people are actually confused by it or not,” Fishman said. “In fact, the image doesn’t even need to be fake — it could be a real photo, just so long as the person distributing it knows the subject of the photo hasn’t authorized the use.”
Please don’t do this. First, it probably won’t work. Suing via an untested law that is very likely to run afoul of First Amendment protections is a great way to waste money. Trump also didn’t create the images, presumably, and merely shared or re-truthed them. That makes holding him liable for them a challenge.
But the larger point here is that all Swift really has to do here is respond, if she chooses, with her own political endorsement or thoughts. It’s not as though she didn’t do so in the last election cycle. If she’s annoyed at what Trump did and wants to punish him, she can solve that with more speech: her own. Hell, there aren’t a ton of people out there who can command an audience that rivals Donald Trump’s… but she almost certainly can!
Just point out that what he shared was fake. Mention, if she wishes, that she voted against him last time. If she likes, she might want to endorse a different candidate. Or she can merely leave it with a biting denial, such as:
“The images Donald Trump shared implied that I have endorsed him. I have not. In fact, I didn’t authorize him to use my image in any way and request that he does not in the future. On the other hand, Donald Trump has a history of not minding much when it comes to getting a woman’s consent, so I won’t get my hopes up too much.”
Like far too many legislators in far too many states, Tennessee’s lawmakers have jumped on the book banning bandwagon. For years, public libraries and school libraries were stocked at the discretion of librarians and largely operated without a lot of interference from state governments. While attempts to ban certain books happened now and then, there was never a concerted effort to remove wide swaths of literature from public library shelves.
Now, it’s just the sort of thing that happens multiple times on a daily basis. And the number of book challenges and book ban attempts continues to increase exponentially as idiots push their personal agendas using the government’s power to control what content the public has access to.
The law passed by the state legislature doesn’t actually ban books from school libraries. But no matter what the text says, that’s obviously the end goal. (h/t BookRiot)
Passed earlier this year, the bill amended the state’s “Age-Appropriate Materials Act,” signed into law by Republican Gov. Bill Lee in 2022, which, according to the ACLU of Tennessee, requires schools to maintain and post lists of the materials in their libraries and to evaluate challenged materials to determine whether or not they are “age-appropriate.”
So, now every Tom, Dick, and Karen can simply challenge a book and force librarians to review the content to see whether or not it’s “age-appropriate.” The initial bill didn’t even define the few terms it used to describe the age-appropriateness of content, much less provide librarians with guidelines for handling challenges and/or eventual book removals.
The “fixed” version isn’t much better. While it does provide a list of things legislators think are inappropriate for all students (including those in their senior year of high school, where they’re often treated legally as adults when charged with crimes), the laundry list of inappropriate things is still far too vague.
H.B. 843 clarifies that books containing “nudity, or descriptions or depictions of sexual excitement, sexual conduct, excess violence, or sadomasochistic abuse” are not appropriate for K–12 students, regardless of the context in which those descriptions or depictions appear in the material.
How much violence is “excessive”? Will health textbooks depicting nudity, sexual conduct, and “sexual excitement” be removed from classrooms? Will no one under the age of 18 be able to access content in a public library that they’re legally allowed to access anywhere else?
Perhaps more importantly, what of the Bible?
During debate on the Tennessee Senate floor, state Sen. Jeff Yarbro (D) noted that the bill’s definition of what is “inappropriate” applies to the Bible. “You cannot read the book of Samuel or Kings or Chronicles, much less much of the first five books of the Bible, without significant discussions of rape, sexual excitement, multiple wives, bestiality — numerous things. That’s before you get in just to, you know, very express and explicit descriptions of violence,” Yarbro argued, according to WKRN News 2.
If this point gets pressed, you can rest assured a carve-out will be created for “religious texts,” but… you know… only applied to one specific religion and its main text.
A Wilson County high school is warning teachers to skip classroom libraries and closed the school library over concerns surrounding a new state law.
Under the law, any brief mention of sex, nudity or excess violence can lead to a book ban.
The Wilson County Director of Schools says they are temporarily closing the library at Green Hill High School to sort through books to make sure they get rid of those that are required to be banned.
So, as teachers and librarians follow the government’s orders to ensure students are only exposed to content the legislative majority likes, those students are going to struggle to comprehend the things they’re learning in civics classes about their fundamental rights.
And all the bill’s supporters have to offer are patently false assertions about how bad things have been for unprotected students prior to the institution of this law.
Senator Pody explains they are trying to protect children from pornography which they’ve found in the past to be available in public schools.
I guarantee you this isn’t true. Notably, Senator Pody offers no times, dates, locations, or any other verification of his claim that “pornography” has been found in school libraries or classrooms. Unfortunately, he’s representative of the legislative majority and its ideals. It’s nothing but censorship propelled by bigotry and backed by lies. Caught in the crossfire are the kids and the public school employees who just want to give them the best education they can.
It’s never about the children. Supporters of age verification laws, book bans, drag show bans, and abortion bans always claim they’re doing these things to protect children. But it’s always just about themselves. They want to impose their morality on other adults. That’s all there is to it.
Abortion bans are just a way to strip women of bodily autonomy. If it was really about cherishing children and new lives, these same legislators wouldn’t be routinely stripping school lunch programs of funding, introducing onerous means testing to government aid programs, and generally treating children as a presumptive drain on society.
The same goes for book bans. They claim they want to prevent children from accessing inappropriate material. But you can only prevent children from accessing it by removing it entirely from public libraries, which means even adults will no longer be able to read these books.
The laws targeting drag shows aren’t about children. They’re about punishing certain people for being the way they are — people whose mere existence seems to be considered wholly unacceptable by bigots with far too much power.
The slew of age verification laws introduced in recent years is being shot down by courts almost as swiftly as the laws are enacted. And for good reason. Age verification laws are unconstitutional. And they’re certainly not being enacted to prevent children from accessing porn.
Of course, none of the people pushing this kind of legislation will ever openly admit their reasons for doing so. But they will admit it to people they think are like-minded. All it takes is a tiny bit of subterfuge to tease these admissions out of activist groups that want to control what content adults have access to — something that’s barely hidden by their “for the children” facade.
As Shawn Musgrave reports for The Intercept, a couple of people managed to coax this admission out of a former Trump official simply by pretending they were there to give his pet project a bunch of cash.
“I actually never talk about our porn agenda,” said Russell Vought, a former top Trump administration official, in late July. Vought was chatting with two men he thought were potential donors to his right-wing think tank, the Center for Renewing America.
For the last three years, Vought and the CRA have been pushing laws that require porn websites to verify their visitors are not minors, on the argument that children need to be protected from smut. Dozens of states have enacted or considered these “age verification laws,” many of them modeled on the CRA’s proposals.
[…]
But in a wide-ranging, covertly recorded conversation with two undercover operatives — a paid actor and a reporter for the British journalism nonprofit Centre for Climate Reporting — Vought let them in on a thinly veiled secret: These age verification laws are a pretext for restricting access to porn more broadly.
“Thinly veiled” is right. While it’s somewhat amusing Vought was taken in so easily and was immediately willing to say the quiet part loud when he thought cash was on the line, he’s made his antipathy towards porn exceedingly clear. As Musgrave notes in his article, Vought’s contribution to Project 2025 — a right-wing masturbatory fantasy masquerading as policy proposals should Trump take office again — almost immediately veers into the sort of territory normally only explored by dictators and autocrats who relied heavily on domestic surveillance, forced labor camps, and torture to rein in those who disagreed with their moral stances.
Pornography, manifested today in the omnipresent propagation of transgender ideology and sexualization of children, for instance, is not a political Gordian knot inextricably binding up disparate claims about free speech, property rights, sexual liberation, and child welfare. It has no claim to First Amendment protection. Its purveyors are child predators and misogynistic exploiters of women. Their product is as addictive as any illicit drug and as psychologically destructive as any crime. Pornography should be outlawed. The people who produce and distribute it should be imprisoned. Educators and public librarians who purvey it should be classed as registered sex offenders. And telecommunications and technology firms that facilitate its spread should be shuttered.
Perhaps the most surprising part of this paragraph (and, indeed, a lot of Vought’s contribution to Project 2025) is that it isn’t written in all caps with a “follow me on xTwitter” link attached. These are not the words of a hinged person. They are the opposite — the ravings of a man in desperate need of a competent re-hinging service.
And he’s wrong about everything in this paragraph, especially his assertion that pornography is not a First Amendment issue. It is. That’s why so many of these laws are getting rejected by federal courts. The rest is hyperbole that pretends it’s just bold, common sense assertions. I would like to hear more about the epidemic of porn overdoses that’s leaving children parentless and overloading our health system. And who can forget the recent killing sprees of the Sinaloa Porn Cartel, which has led to federal intervention from the Mexican government?
But the most horrifying part is Vought’s desire to imprison people for producing porn and to class librarians as registered sex offenders just because their libraries carry some content that personally offends his sensibilities.
These are the words and actions of people who strongly support fascism so long as they’re part of the ruling party. They don’t care about kids, America, democracy, or the Constitution. They want a nation of followers and the power to punish anyone who steps out of line. The Center for Renewing America is only one of several groups with the same ideology and the same censorial urges. These are dangerous people, but their ideas and policy proposals are now so common it’s almost impossible to classify them as “extremist.” There are a lot of Americans who would rather see the nation destroyed than have to, at minimum, tolerate people and ideas they don’t personally like. Their ugliness needs to be dragged out into the open as often as possible, if only to force them to confront the things they’ve actually said and done.
For a while, we’ve been pointing out how terrible KOSA (the Kids Online Safety Act) is. Our main concern is that the bill would fundamentally lead to the suppression of all kinds of speech. That’s because the “duty of care” provision, while limited, would allow officials (mostly at the FTC, which can get partisan) to argue that certain types of results were due to a design failure, and companies would seek to suppress content, rather than face the potential liability.
For the most part, we’ve been highlighting how the law would be used by the GOP to suppress LGBTQ speech. They’ve come out and said that’s exactly what they want to use it for. We’ve been surprised that Democrats have been so eager to support it for that reason. But the unfortunate reality is that Democrats are just as censorial as Republicans, just on other issues.
A few weeks back, we were a little surprised when it came out that House Republicans didn’t have the votes to pass KOSA, meaning it wasn’t coming to the floor. I had hoped that it was for the many good reasons that Senator Rand Paul had laid out in his Dear Colleague letter regarding how problematic the law was, not for culture war reasons, but just for basic common sense reasons:
Should platforms stop children from seeing climate-related news because climate change is one of the leading sources of anxiety amongst younger generations? Should they stop children from seeing coverage of international conflicts because it could lead to depression? Should pro-life groups have their content censored because platforms worry that it could impact the mental well-being of teenage mothers? This bill opens the door to nearly limitless content regulation.
The bill contains a number of vague provisions and undefined terms. The text does not explain what it means for a platform to “prevent and mitigate” harm, nor does it define “addiction-like behaviors.” Additionally, the bill does not explicitly define the term “mental health disorder.” Instead, it references the Fifth Edition of the Diagnostic and Statistical Manual of Mental Health Disorders or “the most current successor edition.” As such, the definition could change without any input from Congress.
We do not impose these types of burdens on any other sector of the economy. For example, the bill seeks to protect minors from alcohol and gambling ads on certain online platforms. However, minors can turn on the TV to watch the Super Bowl or the PGA tour and see the exact same ads without any problem.
However, according to a new report in The Hill (which claims that there is still work going on behind the scenes to get KOSA moving again), the real reason is that the GOP has realized that the censorial portions of the bill could be used to suppress their own culture warrioring speech:
A House leadership aide told The Hill that concerns from across the House GOP about the Kids Online Safety Act (KOSA) — which passed the Senate 91-3 last month as part of a package that also included provisions such as the Children’s Online Privacy Protection Act (COPPA 2.0) — suggest it “cannot be brought in its current form.”
“It could lead to censorship of conservative speech, such as pro-life views, is almost certainly unconstitutional and grants sweeping new authority to unelected bureaucrats at the FTC,” the leadership aide said, referring to the Federal Trade Commission.
You could argue that the GOP was supportive of the bill when it thought it had the upper hand to be the next administration. But now, looking at the polls and the increasing likelihood that Harris will win, they’ve suddenly realized that maybe they don’t want a Lina Khan-run FTC determining what kinds of harm fail a duty of care…
But, really, this should be why both parties reject this approach. It is fundamentally built on the false idea that harms to children can be magically stopped by just not letting kids see “bad” content. The law is bad no matter what kind of content it enables the FTC to target. It shouldn’t move forward because of that simple fact.