In what may be the most legally absurd aftermath of a rap battle in hip-hop history, Drake’s preposterously silly lawsuit against Universal Music has met its predictable end. The artist sued his own record label—not Kendrick Lamar himself—for the crime of also distributing Lamar’s devastating diss track Not Like Us. The judge overseeing the case has now dismissed it entirely, delivering what amounts to a final judicial verse in this musical feud.
Judge Jeannette Vargas recognizes a killer song when she hears one:
This case arises from perhaps the most infamous rap battle in the genre’s history, the vitriolic war of words that erupted between superstar recording artists Aubrey Drake Graham (“Drake”) and Kendrick Lamar Duckworth (“Lamar” or “Kendrick Lamar”) in the spring of 2024. Over the course of 16 days, the two artists released eight so-called “diss tracks,” with increasingly heated rhetoric, loaded accusations, and violent imagery. The penultimate song of this feud, “Not Like Us” by Kendrick Lamar, dealt the metaphorical killing blow. The song contains lyrics explicitly accusing Drake of being a pedophile, set to a catchy beat and propulsive bassline. “Not Like Us” went on to become a cultural sensation, achieving immense commercial success and critical acclaim.
When you sue over a song, and the judge notes that the song has a catchy beat and a propulsive bassline (not to mention that it “dealt the metaphorical killing blow”), I don’t think your lawsuit is going to survive. The court dumps the case while noting that just because randos commenting on social media now call Drake a pedophile based on the song, that doesn’t make the song defamatory:
The Court holds, based upon a full consideration of the context in which “Not Like Us” was published, that a reasonable listener could not have concluded that “Not Like Us” was conveying objective facts about Drake. The views expressed by users @kaioken8026, @mrright8439, and @ZxZNebula, and the other YouTube and Instagram commentators quoted in the Complaint, Am. Compl., ¶¶ 73-74, do not alter the Court’s analysis. In a world in which billions of people are active online, support for almost any proposition, no matter how farfetched, fantastical or unreasonable, can be found with little effort in any number of comment sections, chat rooms, and servers. “[T]hat some readers may infer a defamatory meaning from a statement does not necessarily render the inference reasonable under the circumstances.” Jacobus, 51 N.Y.S.3d at 336.
The artists’ seven-track rap battle was a “war of words” that was the subject of substantial media scrutiny and online discourse. Although the accusation that Plaintiff is a pedophile is certainly a serious one, the broader context of a heated rap battle, with incendiary language and offensive accusations hurled by both participants, would not incline the reasonable listener to believe that “Not Like Us” imparts verifiable facts about Plaintiff
The judge actually does a fairly complete and detailed history of the war of words between Drake and Kendrick, even explaining the nature of the insults that passed back and forth between the two. Here’s just one paragraph of that section, but if you weren’t full up on the beef, now you can catch up:
Lamar fired back at Drake in “Euphoria,” which was released on April 30, 2024. Req. J. Not. at 3. In the track, Lamar claims, “I make music that electrify ‘em, you make music that pacify ‘em” and that he would “spare [Drake] this time, that’s random acts of kindness.” Req. J. Not., Ex. K. He accuses Drake of fabricating his claims: “Know you a master manipulator and habitual liar too/But don’t tell no lie about me and I won’t tell truths ‘bout you.” Id; see also Am. Compl., ¶¶ 14, 77. He insults Drake’s fashion sense, Req. J. Not., Ex. K (“I hate the way that you walk, the way that you talk, I hate the way that you dress”), further raps “I believe you don’t like women, it’s real competition, you might pop a** with ‘em,” and taunts Drake for being a coward with his responses, id. (“I hate the way that you sneak diss, if I catch flight, it’s gon’ be direct.”)
Of course, in any defamation case, there can be fights over whether or not statements are facts (which can be defamatory) or opinion (which can’t be defamatory). Drake’s legal team had tried to argue that the question of whether the statements in Not Like Us were fact or opinion was a question of fact for a jury. But that’s not how that works. It’s a question of law that judges decide in most cases:
Whether a challenged statement is fact or opinion is a legal question. Celle, 209 F.3d at 178. Plaintiff argues that it is inappropriate for the Court to determine, at the pleading stage, whether a reasonable listener would perceive the Recording as fact or opinion. Opp’n Br. at 13-14; Hr’g Tr. at 24:11-26:8. Yet, because this is a question of law, New York courts routinely resolve this question at the motion to dismiss stage. See, e.g., Brian v. Richardson, 87 N.Y.2d 46, 52 (1995) (holding, on a motion to dismiss, that challenged statement constitutes opinion); Dfinity Found. v. New York Times Co., 702 F. Supp. 3d 167, 174 (S.D.N.Y. 2023), aff’d, No. 23-7838- cv, 2024 WL 3565762 (2d Cir. July 29, 2024) (“Whether a statement is a “fact [or] opinion is ‘a question of law for the courts, to be decided based on what the average person hearing or reading the communication would take it to mean’ and is appropriately raised at the motion to dismiss stage.”); Greenberg v. Spitzer, 62 N.Y.S.3d 372, 385-86 (2d Dep’t 2017) (holding that, because whether a statement is defamatory “presents a legal issue to be resolved by the court,” defamation actions are particularly suitable for resolution on a motion to dismiss). “There is particular value in resolving defamation claims at the pleading stage, so as not to protract litigation through discovery and trial and thereby chill the exercise of constitutionally protected freedoms.” Dfinity Found., 702 F. Supp. 3d at 173 (cleaned up); accord Biro, 963 F. Supp. 2d at 279.
Also, in defamation cases, the context of the speech always matters quite a bit. And here, the context is a rap battle. The judge points out how silly it is to go to court just because you got dissed too hard:
This is precisely the type of context in which an audience may anticipate the use of “epithets, fiery rhetoric or hyperbole” rather than factual assertions. A rap diss track would not create more of an expectation in the average listener that the lyrics state sober facts instead of opinion than the statements at issue in those cases.
For example, in “Euphoria” Lamar calls Drake a “master manipulator and habitual liar” and “a scam artist.” Req. J. Not., Ex. K. Drake responds in “Family Matters” by heavily implying that Lamar is a domestic abuser. See id., Ex. M. He also raps that he “heard” that one of Lamar’s sons may not be biologically his. Id. (“Why you never hold your son and tell him, ‘Say cheese’?/We could’ve left the kids out of this, don’t blame me/. . . I heard that one of ‘em little kids might be Dave Free”).
In “Meet the Grahams,” Lamar takes issue with Drake involving his family members in their feud. Req. J. Not., Ex. N (“Dear Aubrey/I know you probably thinkin’ I wanted to crash your party/But truthfully, I don’t have a hatin’ bone in my body/This supposed to be a good exhibition within the game/But you f***ed up the moment you called out my family’s name/Why you had to stoop so low to discredit some decent people?”). In that same track, Lamar alleges that Drake uses the weight loss drug Ozempic. Id. (“Don’t cut them corners like your daddy did, f*** what Ozempic did/Don’t pay to play with them Brazilians, get a gym membership.”). Lamar also insinuates that Drake knowingly hires sexual offenders. See id. (“Grew facial hair because he understood bein’ a beard just fit him better/He got sex offenders on ho-VO that he keep on a monthly allowance.”).
While Drake argued that the judge should ignore the other songs in the battle, the judge knows that’s not how any of this works:
Plaintiff argues that the Court should ignore the songs that came before and assess “Not Like Us” as a “singular entity.” Hr’g Tr. at 39:14-15; see also Opp’n Br. at 15-17. Plaintiff argues that the average listener is not someone who is familiar with every track released as part of the rap battle before listening to the Recording. Hr’g Tr. at 32:17-33:2; 35:9-19. Because the Recording has achieved a level of “cultural ubiquity” far beyond the other seven songs, Plaintiff contends that Court should not consider those other tracks in assessing how the average listener of the Recording would perceive the allegations regarding Drake. Hr’g Tr. at 36:10- 19; id. at 39:11-17; see also Opp’n Br. at 15.
There are a number of flaws with this argument. “Not Like Us” cannot be viewed in isolation but must be placed in its appropriate factual context. Immuno AG. v. Moor-Jankowski, 77 N.Y.2d 235, 254 (1991) (“[S]tatements must first be viewed in their context in order for courts to determine whether a reasonable person would view them as expressing or implying any facts.”). Here, that factual context is the insults and trash talking that took place via these diss tracks in the days and weeks leading up to the publication of “Not Like Us.” The songs released during this rap battle are in dialogue with one another. They reference prior songs and then respond to insults and accusations made by the rival artist. See, e.g., Am. Compl., ¶ 63. The songs thus must be read together to fully assess how the general audience would perceive the statements in the Recording. See, e.g., Celle, 209 F.3d at 187 (holding that two newspaper articles had to be read together to understand full context).
Also, the judge points out that part of the reason the song was so famous in the first place was because of the wider rap battle:
Additionally, it was not just the Recording which gained a cultural ubiquity, but the rap battle itself. In deciding this motion to dismiss, the Court need not blind itself to the public attention garnered by this particular rap battle. The Court takes judicial notice of the extensive mainstream media reporting that surrounded the release of “Not Like Us” and the associated feud between Drake and Lamar.
Then there’s the incoherence of Drake claiming that UMG was liable (remember, Kendrick was not a defendant here) because it kept “republishing” the song as it got more and more popular. But, as the judge notes, the later popularity of the song should have no impact on whether or not the song is defamatory (and it’s not):
Plaintiff counters that, even if the Recording was protected opinion at the time of its initial publication, UMG’s republication of “Not Like Us” in the months following, after it achieved unprecedented levels of commercial success, exposes it to liability. Hr’g Tr. at 37:20-38:17. This argument is logically incoherent. If the Recording was nonactionable opinion at the time it was initially produced, then its republication would not expose UMG to liability. Republication cannot transform Lamar’s statement of opinion into UMG’s statement of fact
There were other arguments Drake made in there as well, but they all fared about as well as Drake did in his rap battle with Kendrick.
The end result is that the case is dismissed. And, I gotta say, when you lose a rap battle so hard that your lawsuit over it is dismissed with a judge praising the catchiness of the song that went viral… that seems like you’ve lost that rap battle harder than anyone has ever lost a rap battle.
Last November, the Anti-Defamation League (ADL) released Steam-Powered Hate, accusing Valve’s game launcher, Steam, of fostering extremism. The report dropped just before Senator Mark Warner, a SAFE TECH Act proponent, threatened Steam’s owner, raising concerns about the political motivations behind the ADL’s claims.
The ADL analyzed over one billion data points, flagging just 0.5% as “hateful.” Yet, they misrepresent Steam—primarily a game marketplace—as a social media hub overrun with extremism, despite offering no real expertise in online content moderation or gaming culture. Meanwhile, they give powerful figures like Elon Musk a pass while pushing for government intervention in digital spaces they don’t understand.
This isn’t new—the ADL has a history of advocating speech restrictions, from social media to video games. As an American Jew, I find their big-government approach to content moderation alarming. Regulators must reject pressure from advocacy groups that misrepresent online communities and threaten free expression in the name of fighting extremism.
The ADL Misunderstands Gaming’s Complex and Notoriously Edgy Environment
Gaming communities operate on a different wavelength than typical online spaces. Gamers are notorious for their dark humor, edgier memes, and a communication style that can seem alien to outsiders. The ADL, in its attempt to analyze a platform central to gaming culture, failed to grasp this, making sweeping generalizations about a community it clearly doesn’t understand.
Take their report’s biggest claim: the vast majority of so-called “hateful content” was Pepe the Frog—a meme that, while hijacked by extremists in recent years, remains widely used in mainstream gaming culture. Even the meme’s creator was outraged by its association with hate groups. Yet the ADL doesn’t distinguish between an actual extremist Pepe and a harmless, widely used gaming meme. Instead, they lump them together, inflating their numbers.
Their AI system, “HateVision,” identified nearly one million extremist symbols—over half of which were Pepe. The AI was trained on a limited dataset of images and keywords the ADL pre-selected as hateful, but it failed to differentiate between legitimate extremism and gaming’s irreverent meme culture. Worse, it didn’t distinguish between U.S.-based and international users, ignoring the fact that gaming communities operate under different cultural norms worldwide.
The AI’s failures didn’t stop at images. It also couldn’t tell the difference between actual hate speech and the tongue-in-cheek, often provocative style of gaming communities. While gaming culture can be abrasive, the vast majority of players understand the difference between in-game trash talk and real-world hostility. The ADL? Not so much.
The ADL also went after copypastas—blocks of text copied and pasted to provoke reactions—identifying 1.83 million “potentially harmful” ones without bothering to check context. Their keyword-based approach flagged terms like “boogaloo” and “amerikaner” without acknowledging their multiple meanings. In gaming, “boogaloo” is mostly a Gen-Z meme; the term does carry alt-right connotations in some circles, but it is hardly a secret extremist code word. “Amerikaner” can refer to a cookie, the German word for “American,” or even a famous YouTuber’s username. They also flagged “Goyim” as a slur, despite it being a common and sometimes affectionate term that Jewish people use among themselves. The word can certainly be wielded offensively by antisemites, but the ADL made no such distinction.
Curious, I did a Steam keyword search for “Amerikaner.” The first result was a left-winger calling out racism. The second was someone mocking Americans in Counter-Strike. The third was a non-English post. None of the results, in my opinion, rose to the level of extremism. I also searched “Boogaloo” and found references to the classic “electric boogaloo” meme, a non-English speaker using the term, and a gaming forum name. The ADL didn’t bother with this level of nuance—they just scraped forums, pulled words out of context, and called it a day.
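To illustrate why this kind of context-blind keyword scraping inflates the numbers, here is a minimal sketch of a naive flagger. The term list and sample posts are hypothetical examples for illustration only, not the ADL’s actual pipeline or data:

```python
import re

# Hypothetical, context-blind flagger resembling a keyword-scraping approach.
# The term list and sample posts below are illustrative assumptions, not the
# ADL's actual data or methodology.
FLAGGED_TERMS = ["boogaloo", "amerikaner", "goyim"]

def naive_flag(post):
    """Return every flagged term that appears in the post, ignoring context."""
    text = post.lower()
    return [term for term in FLAGGED_TERMS if re.search(rf"\b{term}\b", text)]

# None of these posts are extremist, yet all of them get flagged and counted.
sample_posts = [
    "Breakin' 2: Electric Boogaloo is peak 80s cinema",
    "My grandma's Amerikaner cookies beat any bakery's",
    "As a Jewish player, I joke with my goyim teammates all the time",
]

for post in sample_posts:
    print(naive_flag(post), "<-", post)
```

Every hit gets tallied as “potentially harmful,” even though a human reader would recognize all three posts as benign. That is the basic failure mode of counting keywords without context.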
The ADL also attacked Garry’s Mod (G-Mod), a sandbox game known for its anything-goes creativity. They focused on one mod featuring maps of real-life mass shootings, citing comments with words like “based,” “Sigma,” and even “Subscribe to PewDiePie” as signs of extremism. But these are common ‘chronically online’ phrases with broad uses. “Based” is Gen-Z slang used by individuals on both the left and right. “Sigma” is a meme mocking “alpha male” tropes. And while the Christchurch shooter did mention PewDiePie, it isn’t exactly a stretch to say the ADL is unfairly targeting him here. Yes, PewDiePie has had controversies, but painting him as a hate symbol is a major leap.
The report wraps up with the tragic white supremacist attack in Turkey, where the ADL notes that while there were red flags on the shooter’s Steam profile, there’s “no evidence” he was directly inspired by extremist content on the platform. Still, they use this tragedy to argue Steam isn’t doing enough to moderate content. But even their own research found that Steam actively filters swastikas, converting them into hearts, and identified only 11 profiles where users managed to get around that filter. Eleven profiles. Out of millions. That’s an edge case, not a crisis.
To be fair, the study did identify a small number of fringe groups glorifying hate and violence. But the bigger question is whether the ADL’s findings actually reflect a serious problem—or if they’re simply misunderstanding an edgy, chaotic, but largely non-extremist gaming culture. And given how little extremist content the ADL actually found worldwide, it looks like Steam is doing its job.
The ADL’s Steam Comparison is Hypocritical and Misguided
Still, the ADL’s report takes issue with Steam’s so-called “ad hoc” approach to content moderation, claiming that despite Valve’s removal efforts, the platform still “fails to systematically address the issue of extremism and hate.” But this critique ignores the reality of gaming culture and Steam’s own policies.
Steam’s moderation reflects the nature of its community. Its content rules fall into two categories: one for games—allowing all titles except those that are illegal or blatant trolling—and another for user-generated content, which bans unlawful activity, harassment, IP violations, and commercial exploitation. The ADL criticizes Steam for not taking a stricter stance like Microsoft and Roblox, but that comparison is misleading at best.
Microsoft’s gaming history isn’t exactly a beacon of virtue. Xbox 360 live chats were infamous for racist slurs, and Call of Duty’s lobbies remain a toxic free-for-all. Meanwhile, Minecraft—the game the ADL seems to hold in high regard—was created by someone with a history of antisemitic remarks, and Microsoft itself has faced accusations of workplace discrimination. Yet, the ADL doesn’t seem nearly as concerned about these issues.
As for Roblox, while it does enforce stricter content moderation, it’s far from an extremist-free utopia. The Australian Federal Police have warned about the platform’s potential for radicalization, and NBC has reported extremist content explicitly targeting children. If anything, this suggests that heavy-handed moderation doesn’t necessarily eliminate bad actors—it just pushes them to adapt.
Steam’s approach may not align with the ADL’s ideal vision of content moderation, but pretending that Microsoft and Roblox represent the gold standard ignores their own deep-seated issues. It does not make sense for a platform like Steam to have policies identical or even similar to those of Xbox and Roblox. Both of those are fully live-service platforms, whereas Steam is primarily a consumption platform for games, not a place where users are constantly interacting with one another in-game through the platform. This creates market differentiation: a platform’s policies are a reflection of the services it offers, and if users feel the policies are problematic, they can jump ship to another provider.
Regulators Must Beware of Overreach from Non-Trust & Safety Experts Like the ADL
In its report, the ADL calls for a national gaming safety task force, urging policymakers to create a federally backed group to “combat this pervasive issue” through a multi-stakeholder approach. On paper, this sounds like a noble goal. In practice, it’s a recipe for government overreach that could stifle the gaming industry’s creative and independent spirit.
Gaming has thrived because of its grassroots nature—built by passionate developers and players, not by bureaucrats or advocacy groups with no real understanding of gaming culture, online community norms, or trust and safety. A federal task force risks imposing rigid, top-down regulations that don’t fit the dynamic and ever-evolving gaming world. Worse, it could open the door to politically motivated interventions that prioritize appearances over real solutions.
The ADL also suggests Steam engage in multi-stakeholder moderation efforts. But who controls the conversation? When powerful corporations and activist organizations dominate these discussions, smaller developers and gaming communities get sidelined. That’s how you end up with policies shaped by corporate interests and advocacy agendas rather than solutions that actually work for gamers. And let’s be blunt—the ADL has no business dictating content moderation policies for gaming platforms.
The ADL is not an expert on content moderation, online community dynamics, or trust and safety. It has no meaningful experience navigating the complexities of digital spaces, algorithmic content regulation, or the unique cultural norms that define gaming communities. Instead, their report relies on anecdotal evidence, an oversimplified AI model, and out-of-context symbols, all of which lead to flawed conclusions and misleading claims.
Steam isn’t Microsoft or Disney. It’s a privately owned company run by Valve and Gabe Newell, without the vast political and financial clout of industry giants. Forcing broad content moderation mandates onto platforms like Steam sets a dangerous precedent, burdening smaller businesses that lack the infrastructure of the major tech companies. And let’s be clear: Steam’s primary function is to sell video games, not to serve as a social media watchdog.
The ADL’s concerns about extremism may be well-intended, but their lack of expertise, misinterpretation of gaming culture, and one-size-fits-all approach make them uniquely unqualified to weigh in on this issue. Their push for federal intervention aligns with the broader SAFE TECH Act’s concerning political and financial motivations, which could disproportionately harm platforms that aren’t backed by corporate lobbying power.
Yes, online extremism is a problem—but handing control to out-of-touch regulators and advocacy groups that don’t understand the space isn’t the answer. The gaming industry must stay free, innovative, and independent—not bogged down by heavy-handed government oversight that threatens to erase the very culture that makes online gaming communities thrive.
Elizabeth Grossman is a first-year law student in the Intellectual Property program at the University of Akron School of Law, with a goal of working in tech policy.
In the past couple of weeks, you might have heard about the association of Kamala Harris and coconuts. In short, in 2023, during some remarks at the White House, she said the following:
My mother used to — she would give us a hard time sometimes, and she would say to us, “I don’t know what’s wrong with you young people. You think you just fell out of a coconut tree?” (Laughs.)
You exist in the context of all in which you live and what came before you.
Some MAGA folks originally tried to take this clip and use it to make fun of Harris (one account dunked on her, claiming she was “obviously drunk”). Indeed, that video clip above is from the “GOP War Room” and was posted to mock Harris.
But something about the audio version of it resonated with people on TikTok: the mocking tone of Harris imitating her mother, followed by the laugh, and then the “deep” sentence delivered in a very serious tone. TikTok users made many videos using Harris’ audio, often juxtaposing the pre-laugh mockery with the post-laugh seriousness, generally in support of the larger point that Harris was making. Ironically, that point, about the importance of context, was exactly what the MAGA folks ignored.
Know Your Meme (as per usual) has the best summary of how it evolved from an attempted dunk on Harris to support for Harris. And, since Harris became the presumptive Democratic Presidential nominee, supporters have embraced the coconut (or the coconut tree) as a symbol of the campaign.
The media has tried repeatedly (and often poorly) to explain all this.
However, Harris supporters are all in on “coconuts” as a symbol of the campaign.
But, as that skeet from Pete notes above, historically, “coconut” has been used in a much more derogatory fashion towards people of Indian descent who some feel are too assimilated into western culture. And, indeed, just a couple of months ago, a woman in the UK was charged with a “hate crime” for referring to two UK politicians, then Prime Minister Rishi Sunak and then Home Secretary Suella Braverman, as “coconuts” on a sign at a protest.
And all of this gets to some of the difficulties not just in content moderation, but in trying to regulate “hate speech” as well.
Because, as always, context matters with a lot of this stuff. Or, I guess, as Harris says, “you exist in the context of all in which you live and what came before you.” And sometimes, a simple short phrase or a protest sign fails to take into account all of that context.
But content moderation policies and “hate speech” laws may also fail to take that kind of context into account, or have any clear way of dealing with it. Indeed, it’s somewhat fascinating how the Harris quote was originally used (not necessarily in a bigoted fashion) to mock and attack Harris, yet was then embraced and adopted by her fans instead.
Context matters a lot. And context may change based on perspectives or who is speaking or why they’re speaking. Should anyone judging either of these situations have to know all of this history and context? For example, should content moderators be responsible for spending the necessary time to get up to speed with the history and context here? How would that possibly be scalable?
And all of that makes this a great example of how what many people assume is straightforward (“just ban the hate speech”) proves way more difficult in actual practice. Generally speaking, this is why enabling more free speech (including speech that condemns others for hateful speech) is so important, rather than demanding that it all be “policed” fully.
It’s totally reasonable to be concerned with how the word “coconut” is used in some contexts as a derogatory slur, but we should be careful about how we decide to respond to and handle that. There is, after all… context in all of this.
We’re going to go slow on this one, because there’s a lot of background, detail, and nuance to get into in Friday’s 5th Circuit appeals court ruling in the Missouri v. Biden case, the case that initially resulted in a batshit crazy 4th of July district court ruling regarding the US government “jawboning” social media companies. The reporting on the 5th Circuit ruling has been kinda atrocious, perhaps because the end result of the ruling is this:
The district court’s judgment is AFFIRMED with respect to the White House, the Surgeon General, the CDC, and the FBI, and REVERSED as to all other officials. The preliminary injunction is VACATED except for prohibition number six, which is MODIFIED as set forth herein. The Appellants’ motion for a stay pending appeal is DENIED as moot. The Appellants’ request to extend the administrative stay for ten days following the date hereof pending an application to the Supreme Court of the United States is GRANTED, and the matter is STAYED.
Affirmed, reversed, vacated, modified, denied, granted, and stayed. All in one. There’s… a lot going on in there, and a lot of reporters aren’t familiar enough with the details, the history, or the law to figure out what’s going on. Thus, they report just on the bottom line, which is that the court is still limiting the White House. But it’s at a much, much, much lower level than the district court did, and this time it’s way more consistent with the 1st Amendment.
The real summary is this: the appeals court ditched nine out of the ten “prohibitions” that the district court put on the government, and massively narrowed the only remaining one, bringing it down to a reasonable level (telling the U.S. government that it cannot coerce social media companies, which, uh, yes, that’s exactly correct).
But then in applying its own (perhaps surprisingly, very good) analysis, the 5th Circuit did so in a slightly weird way. And then also seems to contradict the [checks notes] 5th Circuit in a different case. But we’ll get to that in another post.
Much of the reporting on this suggests it was a big loss for the Biden administration. The reality is that it’s a mostly appropriate slap on the wrist that hopefully will keep the administration from straying too close to the 1st Amendment line again. It basically threw out 9.5 out of 10 “prohibitions” placed by the lower court, and even on the half a prohibition it left, it said it didn’t apply to the parts of the government that the GOP keeps insisting were the centerpieces of the giant conspiracy they made up in their minds. The court finds that CISA, Anthony Fauci’s NIAID, and the State Department did not do anything wrong and are no longer subject to any prohibitions.
The details: the state Attorneys General of Missouri and Louisiana sued the Biden administration with some bizarrely stupid theories about the government forcing websites to take down content they disagreed with. The case was brought in a federal court district with a single Trump-appointed judge. The case was allowed to move forward by that judge, turning it into a giant fishing expedition into all sorts of government communications to the social media companies, which were then presented to the judge out of context and in a misleading manner. The original nonsense theories were mostly discarded (because they were nonsense), but by quoting some emails out of context, the states (and a few nonsense peddlers they added as plaintiffs to have standing) were able to convince the judge that something bad was going on.
As we noted in our analysis of the original ruling, they did turn up a few questionable emails from White House officials who were stupidly trying to act tough about disinformation on social media. But even then, things were taken out of context. For example, I highlighted this quote from the original ruling and called it out as obviously inappropriate by the White House:
Things apparently became tense between the White House and Facebook after that, culminating in Flaherty’s July 15, 2021 email to Facebook, in which Flaherty stated: “Are you guys fucking serious? I want an answer on what happened here and I want it today.”
Except… if you look at it in context, the email has nothing to do with content moderation. The White House had noticed that the @potus Instagram account was having some issues, and Meta told the White House that “the technical issues that had been affecting follower growth on @potus have been resolved.” A WH person received this and asked for more details. Meta responded with “it was an internal technical issue that we can’t get into, but it’s now resolved and should not happen again.” Someone then cc’d Rob Flaherty, and the quote above was in response to that. That is, it was about a technical issue that had prevented the @potus account from getting more followers, and he wanted details about how that happened.
So… look, I’d still argue that Flaherty was totally out of line here, and his response was entirely inappropriate from a professional standpoint. But it had literally nothing to do with content moderation issues or pressuring the company to remove disinformation. So it’s hard to see how it was a 1st Amendment violation. Yet, Judge Terry Doughty presented it in his ruling as if that line was about the removal of COVID disinfo. It is true that Flaherty had, months earlier, asked Facebook for more details about how the company was handling COVID disinfo, but those messages do not come across as threatening in any way, just asking for info.
The only way to make them seem threatening was to then include Flaherty’s angry message from months later, eliding entirely what it was about, and pretending that it was actually a continuation of the earlier conversation about COVID disinfo. Except that it wasn’t. Did Doughty not know this? Or did he pretend? I have no idea.
Doughty somehow framed this and a few other questionably out of context things as “a far-reaching and widespread censorship campaign.” As we noted in our original post, he literally inserted words that did not exist in a quote by Renee DiResta to make this argument. He claimed the following:
According to DiResta, the EIP was designed to “get around unclear legal authorities, including very real First Amendment questions” that would arise if CISA or other government agencies were to monitor and flag information for censorship on social media.
Except, if you read DiResta’s quote, “get around” does not actually show up anywhere. Doughty just added that out of thin air, which makes me think that perhaps he also knew he was misrepresenting the context of Flaherty’s comment.
Either way, Doughty’s quote from DiResta is a judicial fiction. He inserted words she never used to change the meaning of what was said. What DiResta is actually saying is that they set up EIP as a way to help facilitate information sharing, not to “get around” the “very real First Amendment questions,” and also not to encourage removal of information, but to help social media companies and governments counter and respond to disinformation around elections (which they did for things like misleading election procedures). That is, the quote here is about respecting the 1st Amendment, not “getting around” it. Yet, Doughty added “get around” to pretend otherwise.
He then issued a wide-ranging list of 10 prohibitions that were so broad I heard from multiple people within tech companies that the federal government canceled meetings with them on important cybersecurity issues, because they were afraid that any such meeting might violate the injunction.
So the DOJ appealed, and the case went to the 5th Circuit, which has a history of going… nutty. However, this ruling is mostly not nutty. It’s actually a very thorough and careful analysis of the standards for when the government steps over the line and violates the 1st Amendment by pressuring speech suppression. As we’ve detailed for years, the line is whether or not the government was being coercive. The government is very much allowed to use its own voice to persuade. But when it is coercive, it steps over the line.
The appeals court analysis on this is very thorough and right on, as it borrows the important and useful precedents from other circuits that we’ve talked about for years, agreeing with all of them. Where is the line between persuasion and coercion?
Next, we take coercion—a separate and distinct means of satisfying the close nexus test. Generally speaking, if the government compels the private party’s decision, the result will be considered a state action. Blum, 457 U.S. at 1004. So, what is coercion? We know that simply “being regulated by the State does not make one a state actor.” Halleck, 139 S. Ct. at 1932. Coercion, too, must be something more. But, distinguishing coercion from persuasion is a more nuanced task than doing the same for encouragement. Encouragement is evidenced by an exercise of active, meaningful control, whether by entanglement in the party’s decision-making process or direct involvement in carrying out the decision itself. Therefore, it may be more noticeable and, consequently, more distinguishable from persuasion. Coercion, on the other hand, may be more subtle. After all, the state may advocate—even forcefully—on behalf of its positions
It points to the key case that all of these cases always lead back to, the important Bantam Books v. Sullivan case that is generally seen as the original case on “jawboning” (government coercion to suppress speech):
That is not to say that coercion is always difficult to identify. Sometimes, coercion is obvious. Take Bantam Books, Inc. v. Sullivan, 372 U.S. 58 (1963). There, the Rhode Island Commission to Encourage Morality—a state-created entity—sought to stop the distribution of obscene books to kids. Id. at 59. So, it sent a letter to a book distributor with a list of verboten books and requested that they be taken off the shelves. Id. at 61–64. That request conveniently noted that compliance would “eliminate the necessity of our recommending prosecution to the Attorney General’s department.” Id. at 62 n.5. Per the Commission’s request, police officers followed up to make sure the books were removed. Id. at 68. The Court concluded that this “system of informal censorship,” which was “clearly [meant] to intimidate” the recipients through “threat of [] legal sanctions and other means of coercion” rendered the distributors’ decision to remove the books a state action. Id. at 64, 67, 71–72. Given Bantam Books, not-so subtle asks accompanied by a “system” of pressure (e.g., threats and followups) are clearly coercive.
But, as the panel notes, that level of coercion is not always present, and its absence doesn’t mean that other actions aren’t more subtly coercive. Since the 5th Circuit doesn’t currently have a test for figuring out if government speech is coercive, it adopts the same test that the 2nd Circuit recently used in the NRA v. Vullo case, where the NRA went after a NY state official who encouraged insurance companies to reconsider issuing NRA-endorsed insurance policies. The 2nd Circuit ran through its test and found that this urging was an attempt at persuasion and not coercive. The 5th Circuit also cites the 9th Circuit, which even more recently tossed out a case claiming that Elizabeth Warren’s comments to Amazon regarding an anti-vaxxer’s book were coercive, ruling they were merely an attempt to persuade. Both cases take a pretty thoughtful approach to determining where the line is, so it’s good to see the 5th Circuit adopt a similar test.
For coercion, we ask if the government compelled the decision by, through threats or otherwise, intimating that some form of punishment will follow a failure to comply. Vullo, 49 F.4th at 715. Sometimes, that is obvious from the facts. See, e.g., Bantam Books, 372 U.S. at 62–63 (a mafiosi-style threat of referral to the Attorney General accompanied with persistent pressure and follow-ups). But, more often, it is not. So, to help distinguish permissible persuasion from impermissible coercion, we turn to the Second (and Ninth) Circuit’s four-factor test. Again, honing in on whether the government “intimat[ed] that some form of punishment” will follow a “failure to accede,” we parse the speaker’s messages to assess the (1) word choice and tone, including the overall “tenor” of the parties’ relationship; (2) the recipient’s perception; (3) the presence of authority, which includes whether it is reasonable to fear retaliation; and (4) whether the speaker refers to adverse consequences. Vullo, 49 F.4th at 715; see also Warren, 66 F.4th at 1207.
So, the 5th Circuit adopts a strong test to say when a government employee oversteps the line, and then looks to apply it. I’m a little surprised that the court then finds that some defendants probably did cross that line, mainly the White House and the Surgeon General’s office. I’m not completely surprised by this, as it did appear that both had certainly walked way too close to the line, and we had called out the White House for stupidly doing so. But… if that’s the case, the 5th Circuit should really show how they did so, and it does not do a very good job. It admits that the White House and the Surgeon General are free to talk to platforms about misinformation and even to advocate for positions:
Generally speaking, officials from the White House and the Surgeon General’s office had extensive, organized communications with platforms. They met regularly, traded information and reports, and worked together on a wide range of efforts. That working relationship was, at times, sweeping. Still, those facts alone likely are not problematic from a First-Amendment perspective.
So where does it go over the line? When the White House threatened to hit the companies with Section 230 reform if they didn’t clean up their sites! The ruling notes that even pressuring companies to remove content in strong language might not cross the line. But threatening regulatory reforms could:
That alone may be enough for us to find coercion. Like in Bantam Books, the officials here set about to force the platforms to remove metaphorical books from their shelves. It is uncontested that, between the White House and the Surgeon General’s office, government officials asked the platforms to remove undesirable posts and users from their platforms, sent follow-up messages of condemnation when they did not, and publicly called on the platforms to act. When the officials’ demands were not met, the platforms received promises of legal regime changes, enforcement actions, and other unspoken threats. That was likely coercive
Still… here the ruling is kinda weak. The panel notes that even with what’s said above the “officials’ demeanor” matters, and that includes their “tone.” To show that the tone was “threatening,” the panel… again quotes Flaherty’s demand for answers “immediately,” repeating Doughty’s false idea that that comment was about content moderation. It was not. The court does cite to some other “tone” issues, but again provides no context for them, and I’m not going to track down every single one.
Next, the court says we can tell that the White House’s statements were coercive because: “When officials asked for content to be removed, the platforms took it down.” Except, as we’ve reported before, that’s just not true. The transparency reports from the companies show how they regularly ignored requests from the government. And the EIP reporting system that was at the center of the lawsuit, and which many have insisted was the smoking gun, showed that the tech companies “took action” on only 35% of items. And even that number is too high, because TikTok was the most aggressive company covered, and they took action on 64% of reported URLs, meaning Facebook, Twitter, etc., took action on way less than 35%. And even that exaggerates the amount of influence because “take action” did not just mean “take down.” Indeed, the report said that only 13% of reported content was “removed.”
So, um, how does the 5th Circuit claim that “when officials asked for content to be removed, the platforms took it down”? The data simply doesn’t support that claim, unless they’re talking about some other set of requests.
One area where the court does make some good points is calling out — as we ourselves did — just how stupid it was for Joe Biden to claim that the websites were “killing people.” Of course, the court leaves out that three days later, Biden himself admitted that his original words were too strong, and that “Facebook isn’t killing people.” Somehow, only the first quote (which was admittedly stupid and wrong) makes it into the 5th Circuit opinion:
Here, the officials made express threats and, at the very least, leaned into the inherent authority of the President’s office. The officials made inflammatory accusations, such as saying that the platforms were “poison[ing]” the public, and “killing people.”
So… I’m a bit torn here. I wasn’t happy with the White House making these statements and said so at the time. But they didn’t strike me as anywhere near going over the coercive line. This court sees it differently, but seems to take a lot of commentary out of context to do so.
The concern about the FBI is similar. The court seems to read things totally out of context:
Fourth, the platforms clearly perceived the FBI’s messages as threats. For example, right before the 2022 congressional election, the FBI warned the platforms of “hack and dump” operations from “state-sponsored actors” that would spread misinformation through their sites. In doing so, the FBI officials leaned into their inherent authority. So, the platforms reacted as expected—by taking down content, including posts and accounts that originated from the United States, in direct compliance with the request.
But… that is not how anyone has described those discussions. I’ve seen multiple transcripts and interviews of people at the platforms who were in the meetings where “hack and dump” were discussed, and the tenor was more “be aware of this, as it may come from a foreign effort to spread disinfo about the election,” coming with no threat or coercion — just simply “be on the lookout” for this. It’s classic information sharing.
And the platforms had reason to be on the lookout for such things anyway. If the FBI came to Twitter and said “we’ve learned of a zero day hack that can allow hackers into your back end,” and Twitter responded by properly locking down their systems… would that be Twitter “perceiving the messages as threats,” or Twitter taking useful information from the FBI and acting accordingly? Everything I’ve seen suggests the latter.
Even stranger is the claim that the CDC was coercive. The CDC has literally zero power over the platforms. It has no regulatory power over them and no law enforcement power. So I can’t see how it was coercive at all. Here, the 5th Circuit just kinda wings it. After admitting that the CDC lacked any sort of power over the sites, it basically says “but the sites relied on info from the CDC, so it must have been coercive.”
Specifically, CDC officials directly impacted the platforms’ moderation policies. For example, in meetings with the CDC, the platforms actively sought to “get into [] policy stuff” and run their moderation policies by the CDC to determine whether the platforms’ standards were “in the right place.” Ultimately, the platforms came to heavily rely on the CDC. They adopted rule changes meant to implement the CDC’s guidance. As one platform said, they “were able to make [changes to the ‘misinfo policies’] based on the conversation [they] had last week with the CDC,” and they “immediately updated [their] policies globally” following another meeting. And, those adoptions led the platforms to make moderation decisions based entirely on the CDC’s say-so—“[t]here are several claims that we will be able to remove as soon as the CDC debunks them; until then, we are unable to remove them.” That dependence, at times, was total. For example, one platform asked the CDC how it should approach certain content and even asked the CDC to double check and proofread its proposed labels.
So… one interpretation of that is that the CDC was controlling site moderation practices. But another, more charitable (and frankly, from conversations I’ve had, way more accurate) interpretation was that we were in the middle of a fucking pandemic where there was no good info, and many websites decided (correctly) that they didn’t have epidemiologists on staff, and therefore it made sense to ask the experts what information was legit and what was not, based on what they knew at the time.
Note that in the paragraph above, the one that the 5th Circuit uses to claim that the platform policies were controlled by the CDC, it admits that the sites were reaching out to the CDC themselves, asking them for info. That… doesn’t sound coercive. That sounds like trust & safety teams recognizing that they’re not the experts in a very serious and rapidly changing crisis… and asking the experts.
Now, there were perhaps reasons that websites should have been less willing to just go with the CDC’s recommendations, but would you rather ask expert epidemiologists, or the team that was most recently trying to stop spam on your platform? It seems kinda logical to ask the CDC, and wait until they confirmed that something was false before taking action. But alas.
Still, even with those three parts of the administration being deemed as crossing the line, most of the rest of the opinion is good. Despite all of the nonsense conspiracy theories about CISA, which were at the center of the case according to many, the 5th Circuit finds no evidence of any coercion there, and releases them from any of the restrictions.
Finally, although CISA flagged content for social-media platforms as part of its switchboarding operations, based on this record, its conduct falls on the “attempts to convince,” not “attempts to coerce,” side of the line. See Okwedy, 333 F.3d at 344; O’Handley, 62 F.4th at 1158. There is not sufficient evidence that CISA made threats of adverse consequences— explicit or implicit—to the platforms for refusing to act on the content it flagged. See Warren, 66 F.4th at 1208–11 (finding that senator’s communication was a “request rather than a command” where it did not “suggest[] that compliance was the only realistic option” or reference potential “adverse consequences”). Nor is there any indication CISA had power over the platforms in any capacity, or that their requests were threatening in tone or manner. Similarly, on this record, their requests— although certainly amounting to a non-trivial level of involvement—do not equate to meaningful control. There is no plain evidence that content was actually moderated per CISA’s requests or that any such moderation was done subject to non-independent standards.
Ditto for Fauci’s NIAID and the State Department (both of which were part of nonsense conspiracy theories). The Court says they didn’t cross the line either.
So I think the test the 5th Circuit used is correct (and matches other circuits). I find its application of the test to the White House kinda questionable, but it actually doesn’t bother me that much. With the FBI, the justification seems really weak, but frankly, the FBI should not be involved in any content moderation issues anyway, so… not a huge deal. The CDC part is the only part that seems super ridiculous as opposed to just borderline.
But saying CISA, NIAID and the State Department didn’t cross the line is good to see.
And then, even for the parts the court said did cross the line, the 5th Circuit so incredibly waters down the injunction from the massive, overbroad list of 10 “prohibited activities,” that… I don’t mind it. The court immediately kicks out 9 out of the 10 prohibited activities:
The preliminary injunction here is both vague and broader than necessary to remedy the Plaintiffs’ injuries, as shown at this preliminary juncture. As an initial matter, it is axiomatic that an injunction is overbroad if it enjoins a defendant from engaging in legal conduct. Nine of the preliminary injunction’s ten prohibitions risk doing just that. Moreover, many of the provisions are duplicative of each other and thus unnecessary.
Prohibitions one, two, three, four, five, and seven prohibit the officials from engaging in, essentially, any action “for the purpose of urging, encouraging, pressuring, or inducing” content moderation. But “urging, encouraging, pressuring” or even “inducing” action does not violate the Constitution unless and until such conduct crosses the line into coercion or significant encouragement. Compare Walker, 576 U.S. at 208 (“[A]s a general matter, when the government speaks it is entitled to promote a program, to espouse a policy, or to take a position.”), Finley, 524 U.S. at 598 (Scalia, J., concurring in judgment) (“It is the very business of government to favor and disfavor points of view . . . .”), and Vullo, 49 F.4th at 717 (holding statements “encouraging” companies to evaluate risk of doing business with the plaintiff did not violate the Constitution where the statements did not “intimate that some form of punishment or adverse regulatory action would follow the failure to accede to the request”), with Blum, 457 U.S. at 1004, and O’Handley, 62 F.4th at 1158 (“In deciding whether the government may urge a private party to remove (or refrain from engaging in) protected speech, we have drawn a sharp distinction between attempts to convince and attempts to coerce.”). These provisions also tend to overlap with each other, barring various actions that may cross the line into coercion. There is no need to try to spell out every activity that the government could possibly engage in that may run afoul of the Plaintiffs’ First Amendment rights as long the unlawful conduct is prohibited.
The eighth, ninth, and tenth provisions likewise may be unnecessary to ensure Plaintiffs’ relief. A government actor generally does not violate the First Amendment by simply “following up with social-media companies” about content-moderation, “requesting content reports from social-media companies” concerning their content-moderation, or asking social media companies to “Be on The Lookout” for certain posts.23 Plaintiffs have not carried their burden to show that these activities must be enjoined to afford Plaintiffs full relief.
The 5th Circuit, thankfully, delivers an extra special smackdown to Judge Doughty’s ridiculous prohibition on any officials collaborating with the researchers at Stanford and the University of Washington who study disinformation, noting that this prohibition itself likely violates the 1st Amendment:
Finally, the fifth prohibition—which bars the officials from “collaborating, coordinating, partnering, switchboarding, and/or jointly working with the Election Integrity Partnership, the Virality Project, the Stanford Internet Observatory, or any like project or group” to engage in the same activities the officials are proscribed from doing on their own— may implicate private, third-party actors that are not parties in this case and that may be entitled to their own First Amendment protections. Because the provision fails to identify the specific parties that are subject to the prohibitions, see Scott, 826 F.3d at 209, 213, and “exceeds the scope of the parties’ presentation,” OCA-Greater Houston v. Texas, 867 F.3d 604, 616 (5th Cir. 2017), Plaintiffs have not shown that the inclusion of these third parties is necessary to remedy their injury. So, this provision cannot stand at this juncture
That leaves just a single prohibition: prohibition six, which barred “threatening, pressuring, or coercing social-media companies in any manner to remove, delete, suppress, or reduce posted content of postings containing protected free speech.” But the court rightly notes that even that one remaining prohibition clearly goes too far and would suppress protected speech, and thus cuts it back even further:
That leaves provision six, which bars the officials from “threatening, pressuring, or coercing social-media companies in any manner to remove, delete, suppress, or reduce posted content of postings containing protected free speech.” But, those terms could also capture otherwise legal speech. So, the injunction’s language must be further tailored to exclusively target illegal conduct and provide the officials with additional guidance or instruction on what behavior is prohibited.
So, the 5th Circuit changes that one prohibition to be significantly limited. The new version reads:
Defendants, and their employees and agents, shall take no actions, formal or informal, directly or indirectly, to coerce or significantly encourage social-media companies to remove, delete, suppress, or reduce, including through altering their algorithms, posted social-media content containing protected free speech. That includes, but is not limited to, compelling the platforms to act, such as by intimating that some form of punishment will follow a failure to comply with any request, or supervising, directing, or otherwise meaningfully controlling the social-media companies’ decision-making processes.
And that’s… good? I mean, it’s really good. It’s basically restating exactly what all the courts have been saying all along: the government can’t coerce companies regarding their content moderation practices.
The court also makes it clear that CISA, NIAID, and the State Department are excluded from this injunction, though I’d argue that the 1st Amendment already precludes the behavior in that injunction anyway, so they already can’t do those things (and there remains no evidence that they did).
So to summarize all of this, I’d argue that the 5th Circuit got this mostly right, and corrected most of the long list of terrible things that Judge Doughty put in his original opinion and injunction. The only aspect that’s a little wonky is that it feels like the 5th Circuit applied the test for coercion in a weird way with regards to the White House, the FBI, and the CDC, often by taking things dramatically out of context.
But the “harm” of that somewhat wonky application of the test is basically non-existent, because the court also wiped out all of the problematic prohibitions in the original injunction, leaving only one, which it then modified to basically restate the crux of the 1st Amendment: the government should not coerce companies in their moderation practices. Which is something that I agree with, and which hopefully will teach the Biden administration to stop inching up towards the line of threats and coercion.
That said, this also seems to wholly contradict the very same 5th Circuit’s decision in the NetChoice v. Paxton case, but that’s the subject of my next post. As for this case, I guess it’s possible that either side could seek Supreme Court review. It would be stupid for the DOJ to do so, as this ruling gives them almost everything they really wanted, and the probability that the current Supreme Court could fuck this all up seems… decently high. That said, the plaintiffs might want to ask the Supreme Court to review for just this reason (though, of course, that only reinforces the idea that the headlines that claimed this ruling was a “loss” for the Biden admin are incredibly misleading).
There are multiple efforts under way in the US to pass laws that require social media sites to take down “medical misinformation.” As we’ve described repeatedly, these are really dangerous ideas. Bills like those from Senators Amy Klobuchar and Ben Ray Lujan seek to force social media to remove medical misinformation as declared by the Ministry of Truth… er… Secretary of Health & Human Services. Of course, it was not all that long ago that we had an administration that was actively anti-science, and wanted to declare anything that made the president look bad as “fake news.”
Also, in the midst of a pandemic, when the data and the science are rapidly evolving, what might seem reasonable at one point, may later turn out to be misinformation — and vice versa. Forcing the removal of misinformation leads to all sorts of dangerous consequences. Hell, we saw this in China, where such a law was used to silence a doctor who tried to raise the alarm about COVID-19, and was forced to apologize for spreading “untruthful information online.”
But there’s another aspect of this which people rarely try to deal with: content moderation involves a lot of very gray areas and an awful lot of context, much of which may not be immediately obvious. An ongoing war of words between the former British Medical Journal (now just “The BMJ”) and Meta/Facebook demonstrates nicely just how impossible it is to claim that “medical misinformation” must be taken offline. There’s a bit of background here, and it’s a, well, touchy subject, so try to go through the whole thing before you react.
First off, the BMJ is not, in any way, anti-vaccine. Somewhat famously, the BMJ was a key player in exposing the fraudulent behavior of Dr. Andrew Wakefield, whose fraudulent study created the modern anti-vax movement. That said, in November, The BMJ published an investigative journalism piece, based on a supposed “whistleblower” suggesting that there were some data integrity issues with the way Pfizer’s vaccine was tested, specifically involving a research partner of Pfizer, Ventavia Research Group.
Ventavia responded to the allegations by noting that the supposed whistleblower in question had raised the issues a year earlier, and they were investigated and found to be unsubstantiated. That said, many reasonable people noted that this should be further investigated and worried that it might further damage the public’s trust in science.
But, of course, you can fully predict what happened next. It didn’t just “damage the public’s trust in science,” the BMJ article instead was instantly championed by all of the big anti-vax voices all over social media as “proof” that the COVID vaccine was dangerous and rushed into approval — key talking points among that crowd, repeated despite tons of evidence that the vaccine is both incredibly effective and incredibly safe.
This led Lead Stories, a fact-checking organization, to fact check the article, slap it with a “missing context” label, and call into question the way that people were interpreting it:
Did the British Medical Association’s news blog reveal flaws that disqualify the results of a contractor’s field testing of Pfizer’s COVID-19 vaccine, and were the problems ignored by the Food & Drug Administration and by Pfizer? No, that’s not true: Pfizer and the FDA were made aware of the allegations about the contractor in 2020. Medical experts say the claims aren’t serious enough to discredit data from the clinical trials, which is also what Pfizer and the FDA say they concluded. The FDA says its position is unchanged: The benefits of the Pfizer vaccine far outweigh rare side effects and the clinical trial data are solid.
Because of this fact check and because of the way the article was being used in a misleading way by thousands of anti-vaxxers, users who tried to share The BMJ article were flagged with fact check warnings saying: “Missing context … Independent fact-checkers say this information could mislead people,” which is accurate, but incomplete, and very dependent on the context of who was sharing it and for what purpose.
The BMJ kinda flipped out about this and published an angry open letter to Mark Zuckerberg (who, I assure you, had nothing to do with the decision on the fact check and flagging). To be honest, I find the BMJ’s anger here completely disingenuous. They act like they don’t understand at all why Lead Stories highlighted the “missing context” point on their story, when — of anyone — the BMJ should be willing to acknowledge how their own article was being weaponized by ignorant anti-vaxxers.
But from November 10, readers began reporting a variety of problems when trying to share our article. Some reported being unable to share it. Many others reported having their posts flagged with a warning about “Missing context … Independent fact-checkers say this information could mislead people.” Those trying to post the article were informed by Facebook that people who repeatedly share “false information” might have their posts moved lower in Facebook’s News Feed. Group administrators where the article was shared received messages from Facebook informing them that such posts were “partly false.”
Readers were directed to a “fact check” performed by a Facebook contractor named Lead Stories.[2]
We find the “fact check” performed by Lead Stories to be inaccurate, incompetent and irresponsible.
— It fails to provide any assertions of fact that The BMJ article got wrong
— It has a nonsensical title: “Fact Check: The British Medical Journal Did NOT Reveal Disqualifying And Ignored Reports Of Flaws In Pfizer COVID-19 Vaccine Trials”
— The first paragraph inaccurately labels The BMJ a “news blog”
— It contains a screenshot of our article with a stamp over it stating “Flaws Reviewed,” despite the Lead Stories article not identifying anything false or untrue in The BMJ article
— It published the story on its website under a URL that contains the phrase “hoax-alert”
We have contacted Lead Stories, but they refuse to change anything about their article or actions that have led to Facebook flagging our article.
The BMJ open letter also gets unnecessarily snarky (which also seems out of character for a prestigious medical journal):
Rather than investing a proportion of Meta’s substantial profits to help ensure the accuracy of medical information shared through social media, you have apparently delegated responsibility to people incompetent in carrying out this crucial task.
That’s ridiculous. Clearly this is a difficult situation. Even if the reporting was accurate — there is crucial context here. Did the revelations support the claims of anti-vaxxers who were using it as evidence that the Pfizer vaccine was not safe? The answer is no, it did not. And there’s a strong argument that The BMJ could have and should have made that point a lot clearer in their own reporting, recognizing how the article would be weaponized by grifters and fed to the ignorant.
Lead Stories then responded to the BMJ, in fairly great detail, more or less saying “you can’t honestly be that naïve.”
It is ironic to read that BMJ.com objects to the headline on Lead Stories’ fact check of a BMJ.com article when the original BMJ piece carries a scare headline that oversells the whistleblower and overstates the jeopardy. Their November 2, 2021, headline “Covid-19: Researcher blows the whistle on data integrity issues in Pfizer’s vaccine trial” is the reason BMJ.com’s article has appeared in hundreds of Facebook posts and tweets, many by anti-vaccine activists using it as “proof” the entire clinical trial was fraudulent and the vaccine unsafe.
Lead Stories also points out that the headline on The BMJ’s article is extremely misleading, as it can be read to say that there were data integrity issues with the entirety of the Pfizer vaccine trial, rather than with 3 sites out of 153, and further highlights that the whistleblower in question is not a scientist with expertise in this area. It also notes that the whistleblower appears to have some… questionable beliefs and associations regarding vaccines:
The BMJ.com article eventually gets around to saying she worked at the lab for just two weeks. But BMJ’s open letter fails to mention important context: The Brook Jackson Twitter account agreed with leading COVID misinformation-spreader Robert F. Kennedy Jr.’s criticism of the “Sesame Street” episode in which Big Bird encourages kids to get a COVID-19 vaccine. “Shocking, actually.” she wrote in a November 9, 2021, response to a Kennedy tweet blasting Sesame Street (archived here). Elsewhere on Twitter, the Brook Jackson account wrote to a vaccine-hesitant person that vaccination makes sense if a person is in a high-risk category. When the U.S. 5th Circuit Court of Appeals ruled against a federal employee vaccine mandate, she tweeted “HUGE!” and not with a frowny emoji.
Lead Stories talked to Jackson, looked at available documents (after BMJ refused to permit us to see their basis for the story and did not make the documents available on a transparency site). Unlike BMJ.com, Lead Stories then tested Jackson’s assertions with Pfizer, with the lab contractor in question and with the FDA and then published their responses. It’s not at all clear yet whether there are data integrity issues if you ask the other stakeholders, and that’s the crucial missing context. We also talked to experienced medical researchers for perspective, one of whose credentials BMJ editorial staff demeaned for reasons we can only imagine.
The BMJ has thus far failed to document what is “inaccurate” in the Lead Stories fact check, but again oversells by using that and other name-calling to vent frustration at our documentation of obvious missing context
All of this involves an awful lot of judgment calls, understanding of context, and a lot more. But under a law that requires the pulling down of medical misinformation, how the hell would anyone handle this kind of scenario? The BMJ story isn’t wrong per se, but there is a lot of important context that seems like it’s missing (which Lead Stories highlighted above). On top of that, there’s all the important context around how people are using the article, stretching an already weaker-than-it-seems story to make it appear far more damning about the vaccine overall.
In other words, how the article is being represented and used is an important piece of context as well. And this is frequently the case with medical misinformation. People will take something that is factual or accurate, and present it out of context or in a misleading light, in order to make an argument that the underlying facts don’t actually support. So which part is the “misinformation” and how do you police that?
In an ideal world, we’d be able to see all the details and the back and forth, and figure it all out. Frankly, when I first heard about this — via The BMJ’s open letter — I initially thought that the details would support The BMJ, and that Facebook mislabeled something (which, of course, happens all the time because of the old Masnick Impossibility Theorem). It was only after reading multiple articles on both sides of this, and going through the details of Lead Stories’ process, that I realized they had (to me) a much stronger argument: there’s an awful lot of important context missing from The BMJ piece that you would hope a journal like that would have considered before publishing the article the way it did.
But to expect every social media platform to be able to determine this on every piece of medical sharing out there is next to impossible — and putting legal liability on top of it, as Senators Klobuchar and Lujan want to do, would be dangerously impossible.
Summary: In almost every country in which it offers its service, Facebook has been asked — sometimes via direct regulation — to limit the spread of “terrorist” content.
But moderating this content has proven difficult. It appears the more aggressively Facebook approaches the problem, the more collateral damage it causes to journalists, activists, and others studying and reporting on terrorist activity.
Because documenting and reporting on terrorist activity necessitates posting of content considered to be “extremist,” journalists and activists are being swept up in Facebook’s attempts to purge its website of content considered to be a violation of terms of service, if not actually illegal.
In the space of one day, more than 50 Palestinian journalists and activists had their profile pages deleted by Facebook, alongside a notification saying their pages had been deactivated for “not following our Community Standards.”
“We have already reviewed this decision and it can’t be reversed,” the message continued, prompting users to read more about Facebook’s Community Standards.
There appears to be no easy solution to Facebook’s over-moderation of terrorist content. With algorithms doing most of the work, it’s left up to human moderators to judge the context of the posts to see if they’re glorifying terrorists or simply providing information about terrorist activities.
Decisions to be made by Facebook:
How do you define “terrorist” or “extremist” content?
Does allowing terrorist content to stay up in the context of journalism or activism increase the risk it will be shared by those sympathetic/supportive of terrorists?
Should moderated accounts be allowed to challenge takedowns of terrorist content or the deactivation of their accounts?
Would providing more avenues for removal challenges and/or additional transparency about moderation decisions result in increased government scrutiny of moderation decisions?
Can this collateral damage be leveraged to push back against government demands for harsher moderation policies by demonstrating the real world harms of over-moderation?
Does this aggressive moderation allow the terrorists to “win” by silencing the journalists and activists who are exposing their atrocities?
Could Facebook face sanctions/fines for harming journalists and activists and their efforts to report on acts of terror?
Resolution: Facebook continues to struggle to eliminate terrorist-linked content from its platform. It appears to have no plan in place to reduce the collateral damage caused by its less-than-nuanced approach to a problem that appears — at least at this point — unsolvable. In fact, its own algorithms have generated extremist content by auto-generating “year in review” videos utilizing “terrorist” content uploaded by users, but apparently never removed by Facebook.
Facebook’s ongoing efforts with the Global Internet Forum to Counter Terrorism (GIFCT) probably aren’t going to limit the collateral damage to activists and journalists. Hashes of content designated “extremist” are uploaded to GIFCT’s database, making it easier for algorithmic moderation to detect and remove unwanted content. But utilizing hashes and automatic moderation won’t solve the problem facing Facebook and others: distinguishing extremist content uploaded by extremists from similar content uploaded by users who are reporting on extremist activity. The company continues to address the issue, but it seems likely this collateral damage will continue until more nuanced moderation options are created and put in place.
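To make that limitation concrete, here’s a minimal sketch (in Python, with purely hypothetical names; GIFCT’s actual systems and APIs are not public) of how shared-hash matching behaves: once a clip’s hash is in the database, every byte-identical re-upload matches, and nothing about the uploader’s intent ever reaches the check.

```python
# Hypothetical sketch of shared hash-database matching, not GIFCT's real API.
import hashlib

shared_hash_db: set[str] = set()  # hashes contributed by member platforms


def designate_as_extremist(content: bytes) -> None:
    """A member platform flags a clip; only its hash is shared with the database."""
    shared_hash_db.add(hashlib.sha256(content).hexdigest())


def on_upload(content: bytes, uploader_context: str) -> str:
    """Automated check at upload time. Note that 'uploader_context'
    (glorification vs. journalism vs. activism) never reaches the lookup."""
    if hashlib.sha256(content).hexdigest() in shared_hash_db:
        return "auto-removed"
    return "allowed"


propaganda_clip = b"<raw video bytes>"
designate_as_extremist(propaganda_clip)

print(on_upload(propaganda_clip, "recruitment post"))          # auto-removed
print(on_upload(propaganda_clip, "news report on atrocities"))  # auto-removed too
```

The match that catches the recruiter also catches the journalist documenting the same atrocity, which is exactly the collateral damage described above.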
Famed law professor Alan Dershowitz is at it again. He’s now suing CNN for defamation in a SLAPP suit, because he’s upset that CNN did not air the entirety of a quote he made during the impeachment trial before the US Senate, claiming that because he was quoted out of context, people were led to believe something different from what he actually meant. Reading the lawsuit, the argument is not all that different from the defamation claim made by another Harvard Law professor, Larry Lessig, earlier this year, in which he accused the NY Times and a reporter there of defamation for taking his comments out of context. Lessig later dropped that lawsuit.
In both cases, these law professors are effectively arguing that when they make convoluted arguments, you must include all of the nuances and context, or you might face defamation claims. That’s incredibly chilling to free speech, and not how defamation law works. Dershowitz’s complaint is that during the trial, he made the following claim:
“The only thing that would make a quid pro quo unlawful is if the quo were somehow illegal. Now we talk about motive. There are three possible motives that a political figure could have. One, a motive in the public interest and the Israel argument would be in the public interest. The second is in his own political interest and the third, which hasn’t been mentioned, would be his own financial interest, his own pure financial interest, just putting money in the bank. I want to focus on the second one for just one moment. Every public official that I know believes that his election is in the public interest and, mostly you are right, your election is in the public interest, and if a president does something which he believes will help him get elected in the public interest, that cannot be the kind of quid pro quo that results in impeachment.”
Dershowitz is upset that CNN aired a segment that showed just that final sentence:
Every public official that I know believes that his election is in the public interest and, mostly you are right, your election is in the public interest, and if a president does something which he believes will help him get elected in the public interest, that cannot be the kind of quid pro quo that results in impeachment.
But here’s the thing: CNN also did air the full segment. And Dershowitz admits this. He’s just upset that at other times they only aired part of it, and that some commentators don’t paraphrase it the way he wanted them to. Here’s where he admits that CNN did, in fact, air the entire clip:
Immediately after Professor Dershowitz presented his argument, CNN employees, Wolf Blitzer and Jake Tapper, played the entire clip properly, so CNN knew for certain that Professor Dershowitz had prefaced his remarks with the qualifier that a quid pro quo could not include an illegal act. That portion then disappeared in subsequent programming.
It disappeared because the longer quote is long, and people were focused on the key part — that final sentence. Many people — including some on CNN — mocked Dershowitz for those remarks. Because they’re ludicrous. Even with the full paragraph. But the mockable part is the final sentence, and that’s why it’s news. And the CNN commentators who mocked it were commentators — people paid to give their opinion on what Dershowitz said.
But, as with Lessig’s lawsuit, the complaint from Dershowitz is that commentators’ opinions about what was said differ from what was meant. But opinions cannot be defamatory. And if people misinterpreted what Dershowitz said, that’s on Dershowitz for not explaining it clearly enough. We’re in a world of trouble if people get to sue for defamation every time someone misunderstands their poorly made argument.
I can understand why it’s frustrating for people to completely misunderstand your argument. It happens all the time to lots of people — including myself. It happens quite often when people try to make carefully nuanced arguments. But misunderstanding, or even misrepresenting, a more nuanced argument is not defamation. And nothing in Dershowitz’s lawsuit changes that.
Dershowitz’s lawsuit hangs its hat on the Masson v. New Yorker Supreme Court ruling from 1991. Dershowitz’s complaint describes that ruling as follows:
… the Court held that a media organization can be held liable for damages when it engages in conduct that changes the meaning of what a public figure has actually said. While Masson involved the use of quotation marks to falsely attribute words to Jeffrey Masson, the law that the case created is broad, and unequivocally denies first amendment protections to a media organization that takes deliberate and malicious steps to change the meaning of what a public figure has said. That is exactly what CNN did when it knowingly omitted the portion of Professor Dershowitz’s words that preceded the clip it played time and time again.
This is… not an accurate portrayal of the Masson case or ruling. And, yes, I recognize that there’s some irony in Dershowitz claiming it’s defamation to misrepresent him while his lawsuit then misrepresents a key Supreme Court case that it relies on. The Masson case is a fun one to read. It involves an article (and then a book made out of the article) about an academic where it appears that the author didn’t just selectively quote the academic, but made up quotes. The ruling compares the quotes in the article to the tape recordings of interviews to note just how different the quotes in the story are from what was actually said. That’s… not what is happening here. It is true that one of the quotes in the Masson case involved selectively excising some of a quote, but that was done in a truly egregious way. It wasn’t that they left out context; it was that they excised a middle portion, making a later portion appear to refer to something much earlier, rather than to what was excised.
That is… not what happened to Dershowitz. Indeed, the Masson ruling works against Dershowitz in many ways. It actually says that you have to expect the press to take your long rambling comments and tighten them up, because that’s part of journalism:
Even if a journalist has tape-recorded the spoken statement of a public figure, the full and exact statement will be reported in only rare circumstances. The existence of both a speaker and a reporter; the translation between two media, speech and the printed word; the addition of punctuation; and the practical necessity to edit and make intelligible a speaker’s perhaps rambling comments, all make it misleading to suggest that a quotation will be reconstructed with complete accuracy. The use or absence of punctuation may distort a speaker’s meaning, for example, where that meaning turns upon a speaker’s emphasis of a particular word. In other cases, if a speaker makes an obvious misstatement, for example by unconscious substitution of one name for another, a journalist might alter the speaker’s words but preserve his intended meaning. And conversely, an exact quotation out of context can distort meaning, although the speaker did use each reported word.
In all events, technical distinctions between correcting grammar and syntax and some greater level of alteration do not appear workable, for we can think of no method by which courts or juries would draw the line between cleaning up and other changes, except by reference to the meaning a statement conveys to a reasonable reader. To attempt narrow distinctions of this type would be an unnecessary departure from First Amendment principles of general applicability, and, just as important, a departure from the underlying purposes of the tort of libel as understood since the latter half of the 16th century. From then until now, the tort action for defamation has existed to redress injury to the plaintiff’s reputation by a statement that is defamatory and false.
In the Masson case, the Court did find that many of the changes to the text, including that one section, involved a “material” difference in meaning, and therefore could be found defamatory by a jury. But this case is very, very different from what Dershowitz is claiming about CNN. CNN didn’t always quote his whole statement, but there is no requirement that it quote his entire argument.
Then there’s the whole damages bit. According to Dershowitz, his reputation was damaged to the tune of $300 million because some people made fun of him on CNN, and it’s all their fault that they didn’t understand his poorly made argument. The fucking entitlement of this guy.
The damage to Professor Dershowitz’s reputation does not have to be imagined. He was openly mocked by most of the top national talk show hosts and the comments below CNN’s videos show a general public that has concluded that Professor Dershowitz had lost his mind.
Being mocked on TV is proof of damages? Really, now? How fragile is Dersh’s ego here? Multiple times in the lawsuit, Dershowitz’s lawyer (yes, he found an actual Florida man lawyer to file this lawsuit) talks about how only playing part of his long silly answer would lead people to believe that Dersh had “lost his mind”:
The very notion of that was preposterous and foolish on its face, and that was the point: to falsely paint Professor Dershowitz as a constitutional scholar and intellectual who had lost his mind. With that branding, Professor Dershowitz’s sound and meritorious arguments would then be drowned under a sea of repeated lies.
If only airing one sentence of your preposterous argument makes you look like you’ve lost your mind, perhaps the problem is in how you frame your arguments.
This is yet another SLAPP suit. Florida has an anti-SLAPP law, but it’s a mixed bag in terms of how strong it is. Of course, as with many SLAPP suits, the real goal is likely to just be intimidation, rather than to actually win a vexatious nonsense lawsuit.
If you’ve been on the internet for basically any length of time, you probably know about the Downfall parody videos, sometimes referred to as the “Hitler Finds Out” videos. These are videos that take a clip from a 2004 German movie about the final days of Hitler, and post over them English subtitles of Hitler getting angry over… just about anything. We wrote about it a decade ago, and while the Downfall parodies have become somewhat less common these days, it’s still a bit surprising that anyone might be offended by them.
But, alas, in yet another (more real world) example of how content moderation is impossible to do well, a popular senior lecturer in accounting at UMass Amherst, Catherine West Lowry, was removed from her teaching role after a student complained that she had shown the class a Downfall parody about accounting made by a former student (found via Reason.com).
To make the class more fun, Lowry had long offered students extra credit for producing entertaining or “fun” videos about concepts in the accounting class, and someone back in 2009 (at the height of the Downfall parody popularity) made this one about accounting concepts and the class:
On November 12th, Lowry showed that video to the class after some students asked her to share a video:
“The point was to engage students in an otherwise dry and difficult subject material,” Lowry said. “Accounting is really a foreign language for so many of these students.” The videos, she added, have proved “very successful with bonding with students,” and instructors at other colleges across the country have used them in their own classes.
Lowry occasionally shows past videos in class as a way of introducing a concept to students, but she hadn’t planned to do so on November 12. Still, a few students asked her to show a video at the start of class, she said, and the Downfall clip was relevant to the day’s lesson. “So I did it, and they clapped and loved it. And that was that,” Lowry said.
However, at least one student was offended. While none of the articles specifically describe what was seen as offensive about the video, it is implied heavily that someone took offense to the idea of showing Nazis/Hitler in class (not that the video or movie in any way glorifies Nazism or Hitler). And rather than recognize that perhaps someone was overreacting, the Dean decided to yank Lowry out of class, which appears to have upset many of her students:
On November 14, Lowry sent an email to her students apologizing for the incident. “I want to apologize to any student who was offended by the Hitler extra credit video on Tuesday. My intent was never to offend or upset anyone. I was unaware of what was going on on campus,” Lowry wrote, according to a copy of the email provided by a student. “While I’ve received hundreds of wonderful, thoughtful, creative videos over the past 11 years, this issue, along with an earlier issue this semester, has caused the end of these extra-credit videos.
“I truly am sorry,” she continued, “and I have never wanted to offend or hurt any of my students. Your success and happiness is most important to me.”
Massey, the dean, briefly spoke to the class the next time it met. She announced that another Isenberg professor would take over teaching for the rest of the semester, according to three students The Chronicle spoke with. Some students shouted, “Bring back Cat,” a reference to Lowry’s first name. Eventually, several dozen students walked out in protest.
While some are arguing that this is another example of over-sensitive students, it’s not clear that’s the case at all (given that it appears many of the students were perfectly fine with this, and it was potentially the administration that overreacted). But, more to the point, it once again highlights the “impossibility” of content moderation, even in real life, rather than just on the internet. A key point that we’ve made about content moderation is that context matters, and everyone has different context, or may not be fully aware of the cultural context around any particular content.
That’s likely the case here. The offended student(s) perhaps were completely unaware of the Downfall parody meme, and simply reacted to a professor showing a film depiction of Hitler. Without the wider context — and adding in the other context of a rise in Neo Nazism — I can see how someone may have overreacted. The real issue, then, is that the administration failed to be the cooler heads that should have prevailed, and defaulted to removing the professor from teaching. Also, as the Reason piece notes, since UMass is a public school, there are 1st Amendment implications in punishing her over speech.
In the end, it really does seem that the University and, in particular, Dean Anne Massey, should have been able to come up with a much more reasonable approach here. Merely notifying Lowry that at least one student was offended seems like it would have been more than enough to keep things in perspective. Indeed, Lowry has said as much:
“I was shocked when this came out.” Had a student expressed concern, she said, “I would have been mortified. I would have addressed it. I’m not trying to make some statement here.”
But, rather than understand that and understand the context, the University and the Dean went to an extreme position instead.
We’ve been saying for ages now that content moderation at scale is literally impossible to do well. It’s not “difficult.” It’s impossible. That does not mean that companies shouldn’t try to get better at it. They should and they are. But every choice involves real tradeoffs, and those tradeoffs can be significant and will upset some contingent who will have legitimate complaints. Too many people think that content moderation is so easy that just having a single person dedicated to reviewing content can solve the problem. That’s not at all how it works.
Professor Kate Klonick, who has done much of the seminal research into content moderation on large tech platforms, was given the opportunity to go behind the scenes and look at how Facebook dealt with the Christchurch shooting — an event the company was widely criticized over, with many arguing that they took too long to react, and let too many copies of the video slip through. As we wrote in our own analysis, it actually looked like Facebook did a pretty impressive job given the challenges involved.
Klonick, however, got to find out much more from the people actually involved, and has written up an incredible behind the scenes look at how Facebook dealt with the video for the New Yorker. The entire thing is worth reading, but I did want to highlight a few key points. The article details how Facebook has teams of people around the globe who are ready to respond and deal with any such “crisis,” but that doesn’t make the decisions they have to make any easier. One thing that’s interesting, is that Facebook does have a policy that they should gather as much information as possible before making a call — because sometimes what you see at first may not tell the whole story:
The moderators have a three-step crisis-management protocol; in the first phase, “understand,” they spend as much as an hour gathering information before making any decisions. Jay learned that the shooter seemed to be trying to make the massacre go viral: he had posted links to a seventy-three-page manifesto, in which he espoused white-supremacist beliefs, and live-streamed one of the shootings on Facebook, in a video that lasted seventeen minutes and then remained on his profile. Jay forced himself to watch the video, and then to watch it again. “It’s not something I would ask others to do without having to watch it myself,” he said.
If you think it’s crazy that it might take up to an hour (I should note, this doesn’t mean they always wait an hour — just that it may take that long to gather the necessary info), consider how Klonick demonstrates that the same basic fact pattern may present very different situations when understood in context. For example, you might think that a Facebook live video of one man shooting and killing another probably shouldn’t be shown. But, context matters. A lot.
Understanding context is one of the most difficult aspects of content moderation. Sometimes, a post seems clearly destructive. In April, 2017, Steve William Stephens, a vocational specialist, shot and killed Robert Godwin, Sr., an elderly black man who was walking on the sidewalk near his home in Cleveland. Stephens said, bafflingly, that he had decided to kill someone because he was mad at his ex-girlfriend, and posted a video of the killing on Facebook, where it remained for two hours before the company removed it. People were horrified by how long it stayed up….
The fact pattern there is straightforward. A black man was shot on Facebook live. Facebook should take it down, right? But…
But disturbing videos may not always be damaging. In July, 2016, Philando Castile, a black school-nutrition supervisor, was shot seven times by a police officer during a traffic stop in Minnesota. Castile’s girlfriend, Diamond Reynolds, live-streamed the aftermath, as Castile bled from his wounds and died after twenty minutes. The footage arrived amid a series of videos depicting police violence against black men but was striking because it was streamed live, which exempted it from claims that it had been edited by activists or the police department before it was released.
If the “rules” say no live video of a shooting, you block the first one… but also the latter. Indeed, for a time, Facebook did block the latter, but that resulted in a lot of (reasonable) complaints, and Facebook changed its mind. Even though the basic fact patterns are the same.
Facebook initially removed the video, but then reinstated it with a content warning. To moderators looking at both, the videos might look similar—a grisly shooting of a black man in America—but the company eventually determined that the intentions behind the videos gave them distinct meaning: keeping up Reynolds’s video brought awareness to the systemic racism of the criminal-justice system, while taking down Stephens’s video silenced a murderer’s deranged homage to his ex-girlfriend.
In short: context matters a ton, and you don’t always get the context right away. Indeed, sometimes it’s very difficult to get the context. And, the same video in different contexts can be quite different. Indeed, this turned out to be some of the problem with the Christchurch video. Klonick details how just removing all copies of the video raised some questions about why some people were posting it:
This created an ethical tangle. While obvious bad actors were pushing the video on the site to spread extremist content or to thumb their noses at authority, many more posted it to condemn the attacks, to express sympathy for the victims, or because of the video’s newsworthiness. For consistency, and in deference to a request from the New Zealand government, the team deleted even these posts. The situation was a no-win for Facebook. Politicians were quick to condemn the company for the spread of extremism, and users who had posted the video in good faith felt unreasonably censored.
In other words, there are tradeoffs, and it’s a no win situation. No matter which choice you make, some people are going to be (perhaps totally reasonably) upset about that decision.
And, of course, there were technical difficulties involved as well, though Facebook did move to try to minimize those:
By the time the handling of the Christchurch video switched to teams in the United States, some twelve hours after the shooting, moderators discovered a problem that they hadn’t encountered before at such a scale. When they tried to create a hash databank for the shooter’s video, users began purposefully or accidentally manipulating the video, creating slightly blurred or cropped versions that obscured the hash and could make it past Facebook’s firewall. Ahmed decided to try a new kind of hash technology that took a fingerprint from a vector of the video—its audio—which was likely to remain the same across different versions. This technique, combined with others, worked: in the first twenty-four hours, one and a half million copies of the video were removed from the site, with 1.2 million of those removed at the point of upload.
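As a rough illustration of the problem described above (a toy sketch, not Facebook’s actual matching technology), an exact hash changes completely when even one byte of a file changes, which is why blurred or cropped re-uploads slipped past the databank, while a coarser fingerprint taken from a track that re-encoding barely alters, such as the audio, can still match:

```python
# Toy illustration only; real perceptual/audio fingerprinting is far more involved.
import hashlib


def exact_hash(data: bytes) -> str:
    """Cryptographic hash: changing even one byte yields a completely different digest."""
    return hashlib.sha256(data).hexdigest()


def coarse_audio_fingerprint(samples: list[float], bucket: float = 0.25) -> tuple:
    """Toy 'fingerprint': quantize audio samples into coarse buckets so that
    small perturbations from re-encoding still map to the same signature."""
    return tuple(round(s / bucket) for s in samples)


original_video = b"frame bytes ..."
cropped_video = b"frame bytes ...x"  # slightly altered copy (crop, blur, re-encode)

# The exact hash no longer matches, so a hash databank misses the re-upload.
print(exact_hash(original_video) == exact_hash(cropped_video))  # False

original_audio = [0.10, 0.52, 0.81]
reencoded_audio = [0.11, 0.50, 0.79]  # audio barely changed by the re-encode

# The coarse fingerprint still matches, so the copy can be caught.
print(coarse_audio_fingerprint(original_audio) == coarse_audio_fingerprint(reencoded_audio))  # True
```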
In short, there are lots of good reasons to complain about Facebook and to hate on the company. And it often does a bad job with its moderation efforts (though, they have gotten much better). But part of the problem is that when you’re doing moderation at that scale, mistakes are going to be made — and some of those mistakes are going to be a big deal — and some may be because of a lack of context.
The assumption that there’s some magic wand that can be waved (as Australia, the UK, and the EU have suggested in recent days — not to mention some US politicians) describes a world that does not exist. It is not helpful to demand that companies magically do something that is impossible, when the underlying problem is driven by the fact that human beings aren’t always good people. A more serious look at the issues of people doing bad stuff online should start with the bad people and what they’re doing, not with blaming social media for being used as a tool to broadcast the bad things.
The new director of the FBI, Christopher Wray, has apparently decided to take up James Comey’s anti-encryption fight. He’s been mostly quiet on the issue since assuming the position, but the DOJ’s recent calls for “responsible encryption” has emboldened the new FBI boss to speak up on the subject.
And speak up he has. Although the FBI still hasn’t released the text of his remarks to the International Association of Chiefs of Police, more than a few sites are reporting it was the usual “go team law enforcement” boosterism, but with the added zest of phone encryption complaints.
He also spoke about roadblocks in dealing with cellphone encryption technology, saying that in the first 11 months of the fiscal year, the FBI has been unable to access content from 6,900 mobile devices despite having the proper legal authority to do so.
“It’s going to be a lot worse than that in just a couple of years if we don’t come up with some responsible solution,” he lamented. “I’m open to all ideas.”
All ideas, maybe. But certainly not all viewpoints. The Deputy Attorney General has made it clear in multiple speeches that he views phone encryption as the end result of tech companies’ low-minded pursuit of revenue. DAG Rosenstein has repeatedly emphasized that US law enforcement measures success by a different standard — a standard mercenary phone manufacturers couldn’t even begin to approach.
“I get it, there’s a balance that needs to be struck between encryption and the importance of giving us the tools we need to keep the public safe.”
But does he actually “get it?” What if the status quo is where that “balance” ends up? Would that satisfy Wray? Doubtful. He wants law enforcement-friendly security holes and he wants tech companies to provide them voluntarily.
The number of locked devices means nothing. The “6,900 mobile devices” will be 8,000 or 10,000 by early next year — sound-and-fury totals signifying nothing. It was 6,000 phones when Comey trotted out numbers earlier this year. It will always increase and it will always grab eyeballs but it won’t ever mean anything unless the FBI is willing to provide a lot more context.
Is the FBI just spectacularly bad at cracking cell phones? We’re not hearing these complaints from local law enforcement agencies with less expertise and lower budgets. Is the FBI just not even trying? Is it not using everything it has available — including a number of judicial forgiveness plans for rights violations — to get into these phones? It’s inconceivable the nation’s top law enforcement agency is experiencing nearly a 50% failure rate when it comes to locked phones.
All Wray says is there are 6,900 phones the FBI hasn’t gotten into. Yet. What’s never discussed is how many investigations proceeded unimpeded by cellphone encryption. Phones are not the sole repository of criminal evidence in any investigation. The FBI has options even if the seized phone seems impenetrable. The FBI insinuates it’s being stopped, but never specifies how many of these phones have resulted in terminated investigations.
It’s just a number, divorced from context, but one the FBI can ensure will always be larger than last time it was mentioned.