corbin.barthold's Techdirt Profile


Posted on Techdirt - 20 January 2026 @ 01:38pm

Can You Trust Mark Meador?

The FTC remains politicized. One commissioner is leading the way—when it suits him.

The Federal Trade Commission under Lina Khan was not a well-run institution. I wrote about this at the time, often and at length, and I regret nothing. But wow—wow—would you be forgiven for thinking that the goal of new management is to make Khan’s tenure look good by comparison. There is plenty to say about this sorry state of affairs, but for now let’s focus on a single commissioner.

Why just one? Isn’t the FTC a multi-member body? Well, these days the agency is something of a husk. President Trump has purported to fire two commissioners—the Democrats, naturally. The FTC Act says he cannot do that, but the Supreme Court appears poised to bless the move on constitutional grounds (a serious mistake). A third commissioner, Melissa Holyoak, recently departed after a brief stint. And rumors swirl that the chair, Andrew Ferguson, will soon take on a second job overseeing a nationwide fraud unit at the Justice Department.

That leaves Mark Meador. He may soon be the lone commissioner who has not been defenestrated, jumped ship, or been pulled into a dual role.

Last week I saw Meador speak at an antitrust conference in the Bay Area. As a matter of policy, his remarks were not to my taste. He aired a familiar set of complaints about modern tech products. Apple’s “liquid glass” is confusing; Google’s AI overviews—the AI-generated summaries that now appear above the search results—are annoying; AI-generated cat videos, and short-form video more generally, are bad for the soul. It is certainly true that tech companies have many bad ideas. It does not follow that Mark Meador knows better. Yet he spoke with complete confidence in his own superior vision for the tech industry. He knows what the social media market should look like. He knows how to “win the AI race the right way.” The man is, apparently, a prophet.

Some of Meador’s gripes were not really about products at all, but about people. People shouldn’t like short-form video. The government, Meador seemed to suggest, must protect them from themselves. You might say that Meador wants to replace the consumer-welfare standard, under which the FTC protects markets that work to give people what they want, with a moral-welfare standard, under which the FTC pushes markets to give people what they are supposed to want—as determined by Mark Meador.

Maybe people should be more virtuous. But what business is that of the FTC? The FTC Act makes commissioners competition regulators, not philosopher-kings or morality police.

One European lawyer I spoke with at the conference seemed rather taken with Meador’s speech. He wants to crack down on Big Tech, after all; what’s not to like? I tried to explain how Meador plainly judges companies by a moral code, and why that code should give any upstanding European pause. Meador is committed to “the just ordering of society that best facilitates human flourishing.” He speaks unabashedly of the need for “beauty and virtue,” “moral values,” and “tradition and custom.” He peppers his writing (yes, his antitrust writing) with theological language, referring to human beings as “embodied souls seeking communion with their fellow man and their Creator.” The undertone—the dog whistle, if you will—is not Brussels-style social democracy. It is national conservatism, if not flat out Christian nationalism.

Which brings me to my real objection to Meador’s appearance. In Palo Alto, he was mild, reasonable, even conciliatory. The speech itself was a little misguided but pleasant enough. The problem was what it concealed: the other Mark Meador, and the other FTC.

In his speech, Meador called for apolitical enforcement. Antitrust, he said, should not serve an “unrelated political agenda.” It should not target disfavored industries. He and the agency should not “make decisions according to how political winds are blowing.”

How rich. Maximally politicized enforcement has characterized the Trump administration at large, and the Trump FTC in particular. Consider the Omnicom–IPG settlement. The FTC allowed two major advertising firms to merge, but only after restricting the new entity’s ability to withhold advertising dollars based on a publisher’s viewpoints. The settlement is a transparent assault on advertising firms’ First Amendment right to boycott publishers on grounds of social or ideological principle. It is also a nakedly political effort to redirect advertising dollars toward right-wing outlets.

Or consider the FTC’s hapless social-media “censorship” inquiry. This move, too, is an attack on First Amendment rights—this time, platforms’ right to moderate content as they see fit. And this move, too, is aimed at helping the right, specifically those right-wing speakers who insist—baselessly, by and large—that platforms have “silenced” them. Take also the FTC’s foray into debates over gender medicine. The FTC is not a medical regulator; it has no expertise in this area. But transgender issues are at the center of the culture war, so the agency could not resist weighing in, thumb firmly on the scale for the political right.

For Meador to sit in Palo Alto and sermonize about ignoring political winds was an insult to anyone paying attention to his agency or the administration it serves.

Equally striking was the contrast between Meador’s tone inside the conference room and the tone he and the FTC adopt elsewhere. In his remarks, Meador urged listeners not to “draw up battle lines.” Washington and Silicon Valley, he said, should root for each other’s success. During the Q&A, he endorsed a “just the facts, ma’am” approach. He expressed distaste for heated rhetoric from private parties—inflated claims about the stakes of litigation or boasts about whipping the FTC in court. Such talk amounts, he complained, to “melodramatic atmospherics.”

But Mark Meador and the Trump FTC do melodramatic atmospherics with the best of them. Last year, for instance, the FTC convened a conference titled “The Attention Economy: How Big Tech Firms Exploit Children and Hurt Families.” The title was all too fitting: the whole event was slanted, overheated, and self-righteous. Meador led the charge. He likened “the battle over the ‘attention economy’” to “the fight against Big Tobacco.” He argued that social media companies sell an addictive and harmful product; that they must keep children hooked, “craving the next fix, the next puff, the next notification”; and that they peddle lies in their defense.

No doubt this jeremiad resonates with some. I think it’s nonsense. But the point here is not whether Meador is right or wrong. It’s that he is two-faced. In Silicon Valley, he presents himself as mildly uneasy about short-form video. Elsewhere, he portrays social media companies as irredeemable reprobates, scarcely distinguishable from cigarette manufacturers. The Meador we saw projected reasonableness. In reality, he is a fanatic.

What Meador concealed about himself pales, though, beside what he concealed about the FTC. Excuse me, commissioner, did you just say you oppose overheated rhetoric? Where were you after the FTC lost its antitrust case against Meta?

The defeat was not surprising. The case was weak from the outset, failing to grapple with competitors such as YouTube and TikTok. It was dismissed in a careful opinion written by an able judge. That judge, James Boasberg, also ruled against the Trump administration’s reprehensible efforts to hustle men, without due process, to a prison in El Salvador. In response to that ruling, some GOP lawmakers launched a campaign to impeach him. The case for impeachment is risible. But that did not stop the FTC from exploiting it. After the Meta loss, an FTC spokesperson, Joe Simonson, sneered: “The deck was always stacked against us with Judge Boasberg, who is currently facing articles of impeachment.”

This statement is an embarrassment. Everyone at the FTC should be mortified by it. But there it is. Mark Meador has no standing to lecture others about decorum.

Nor should we expect this to be an isolated lapse. The second Trump FTC has been staffed with people who are terminally online. In a sense, they are the dog that caught the car: they have memed their way into an amount of power they are neither competent nor responsible enough to wield.

This became obvious when the FTC set out to punish Media Matters. The organization had published a study finding that ads appeared next to hate speech on the alt-right-friendly platform X. The agency then launched a sweeping investigation (another example, contra Meador, of the FTC’s overtly political posture). The courts blocked the probe, finding it to be retaliation for constitutionally protected speech. Evidence of a retaliatory motive included, almost comically, some FTC staffers’ big fat mouths. Before joining the agency, a cadre of young edgelords had been spending their time spouting off on social media. Joe Simonson (he of the appalling comment after the Meta loss) had mocked Media Matters for employing “a number of stupid and resentful Democrats.” Another staffer had called the group “scum of the earth.”

This is the backdrop to Meador’s calls, in Palo Alto, to lower the temperature. Spare us, commissioner.

The word at the conference was that the FTC is in disarray. Many experienced attorneys and economists accepted one of the Trump administration’s buyout offers. Others concluded, after a return-to-office mandate, that if working for the FTC was going to be a hassle—don’t forget those “five things you did this week” emails!—they might as well leave for higher pay. I heard this from a former government official who had himself recently decamped to private practice. When I asked this refugee about the FTC’s ambitions to police social media or wade into gender medicine, he said he would not be surprised if the agency ultimately accomplishes very little. Who knows. But the intuition is sound: you cannot decimate and demoralize an agency and then expect it to move regulatory mountains.

When Meador was appointed, Tyler Cowen summed things up nicely, concluding that he “is just flat out terrible,” including for his inability to maintain “a basic level of professionalism.” Is he lonely at the top? With the agency hollowed out, Meador may be a king without a throne. One can only hope that his capacity for mischief will be constrained by the wreckage below.

Corbin K. Barthold is Internet Policy Counsel at TechFreedom. Republished with permission from Policy & Palimpsests.

Posted on Techdirt - 6 February 2025 @ 10:51am

Spam Emails, Spam Lawsuit: The GOP Tries To Break Gmail By Court Order

In 1872 California enacted a law declaring that “every one who offers to the public to carry persons, property, or messages, excepting only telegraphic messages, is a common carrier of whatever he thus offers to carry.” In 2022 the Republican National Committee sued Google, alleging that, by shunting GOP fundraising emails into Gmail spam folders, it had violated this 150-year-old common-carrier law. A federal district court dismissed the complaint. The RNC took the case to the U.S. Court of Appeals for the Ninth Circuit, where last month it submitted its opening brief.

I’m a firm believer in the value of “show, don’t tell” as a principle of writing, and I will address the RNC’s legal arguments in due course. But this is a rare instance where it’s probably best simply to announce up front that what’s happening here is stupid and insane. I’m never happy when lawyers try to redesign via lawsuit complex systems they don’t understand. But this one is jaw-dropping. Why not let some law that governed nineteenth-century blacksmiths dictate how we build rockets? Can someone dig up a decree setting standards for sixteenth-century door locks? Might be useful in a suit against a cybersecurity firm.

“It is revolting,” Oliver Wendell Holmes wrote, “to have no better reason for a rule of law than that so it was laid down in the time of Henry IV.” And “it is still more revolting,” he went on, “if the grounds upon which it was laid down have vanished long since.” You could object that Holmes over-rotated on his disdain for tradition, and I would agree with you. But the GOP’s attempt to make email conform to a statute crafted for coaches, trains, and ferry boats could indeed be called “revolting”—I might go with “demented”—and for essentially the reason Holmes cites: the grounds upon which California’s ancient common-carrier law was laid down have vanished.

There is a reason why digital technologies are thought to have brought on an “Information Age.” When a pair of researchers at UC Berkeley tried twenty-five years ago to measure all the information in the world, they estimated that “printed documents of all kinds comprised only .003% of the total.” That trend has only accelerated. Something like ninety percent of the data that exists today was created in the last three or four years.

To send a letter in California in 1872, you had to buy paper and ink, to print words on the paper by hand or with a machine, and to pay for a massive postal apparatus—clerks, conductors, drivers, engines, cars, coaches, horses, mules, and more—to carry the paper from one place to another. Today, by contrast, the marginal cost of distributing information is nearly zero. Anyone with a computer and an internet connection can create and send virtually unlimited copies of an email free of charge. (Because partisan fundraising emails are full of formulaic slop, not even content creation ought to cost the GOP much.)

“Mail” and “email” sound like they must be very similar, but they aren’t. Mail has a cost; email essentially does not; and that makes all the difference. This has been clear from the start. In 1984 a computer scientist named Jacob Palme noticed that “an electronic mail system can, if used by many people, cause severe information overload.” The “cause of this problem,” he explained, is that “it is so easy to send a message to a large number of people”—the sender has “too much control of the communication process.” As a result, “people get too many messages” and “the really important messages are difficult to find in a large flow of less important messages.” Palme proceeded to sketch a system of recipient-side message controls that looks remarkably like contemporary spam-filtering.

Another distinction is that an email service, unlike a mail service, does not “carry” messages for you. Your missives travel through an internet service provider, a domain-name-system server, and internet backbone providers, then into a recipient email service. Your email service is just an internet edge provider. It’s not like the stage company in the 1870s, carrying your letter from station to station; it’s like a secretary in the 1940s, making sure your letter goes into the right outbox. The RNC responds that California’s 1872 law reaches anyone who “offers” to carry things. The GOP sees Gmail as a carrier, the argument effectively runs, so a court should overlook the GOP’s ignorance of (and their lawyers’ refusal to accept) how the internet actually works. But this just brings us back to Jacob Palme’s prescient concern. Google “offers” not to carry stuff for you, but to sort it for you. It offers to separate the really important messages from the less important ones. Filtering spam is the heart of its service, as shown by its boast that Gmail “blocks 99.9%” of it.

The 1872 law does not cover telegraphy, the most advanced communication method of the time. Nor did California simply invoke the 1872 law when it recently decided to impose common-carrier mandates on ISPs; it passed a distinct law instead. Contrary to the RNC’s claims, therefore, the 1872 law is not some deeply evolving statute, always rushing out in front of great leaps in technology. That Gmail does not “carry” messages is no trifling detail. That spam-filtering is integral to what Google “offers” is not a mere technicality. These are material facts that take Gmail outside the scope of California’s nineteenth-century common-carrier law. (When they saw how much that law cares about things like “schedule[d] time[s] for the starting of trains or vessel[s] from their respective stations or wharves,” the RNC’s lawyers should have admitted defeat and shelved their complaint. But here we are.)

Not surprisingly, the RNC wants to duck responsibility for trying to break your spam filter. Its solution is to contend that common carriers are allowed to filter spam, but that the RNC’s emails are not spam, and that Google has treated them as spam in bad faith.

The RNC’s emails are spam. Their tone would make a used-car salesman blush—“URGENT . . . Patriot, 20X matching EXPIRING SOON”—and they’ve been known to swarm inboxes by the dozen each day. I’ve written elsewhere about what rotten spammy spam they are; I won’t rehash here how they look like legalized elder abuse. The RNC’s lawsuit was dismissed on the pleadings, so we’re stuck, for now, having to take the accusation of bad faith more or less at face value.

But that still leaves the RNC’s assumption that a common carrier is allowed to “filter some . . . spam-related expression.” The RNC plucks those words from Judge Andy Oldham’s opinion in NetChoice v. Paxton (5th Cir. 2022), without acknowledging that, in the part of the opinion they’re quoting, Judge Oldham was writing for himself alone. (Never mind that the whole opinion was also blown to pieces and vacated by the Supreme Court.) Go to that part of the opinion, moreover, and you will find no citation, drawn from the hoary common-carrier cases, for this supposed rule about common carriers and spam—an unavoidable omission, since spam came into its own only with recent technological developments. The 1872 law has nothing to say about spam: it demands that a common carrier “accept and carry whatever is offered to him . . . of a kind that he is accustomed to carry.” Maybe a court could cram a spam exception into that “accustomed to carry” bit, but that new rule would bear no connection to what the common carriers of old did (they’d never heard of “spam”). Rather, it would be cut by judges from whole cloth, and it would place on them the task of drawing from scratch a comprehensive set of lines separating “spam” and “non-spam.” Judges would be anointing themselves the arbiters of Jacob Palme’s distinction between important and unimportant messages.

Google has no magic spam sorting wand. For that matter, email does not arrive in neat “spam” and “non-spam” categories. Email comes in a terrific array of gradations between those two poles, and Google uses a variety of signals—e.g., the sender’s message cadence, the recipient’s reading habits, the presence of certain trigger words—to determine which emails cross the line and fall into the spam folder. It’s a game of cat and mouse, with spammers constantly deploying new strategies to evade Google’s filters, and Google constantly adjusting and filling gaps in its process. The RNC’s new strategy is to file a lawsuit in hopes of evading Google’s filters with the help of a court. It’s a strategy with immense upside potential: if the RNC succeeds, Google will be unable to adjust; the RNC will possess a ticket to pass through Google’s spam defenses indefinitely. This is a great prize the RNC covets, and it is important for the Ninth Circuit to understand that many other entities, too, would go to great lengths to win it. If the RNC succeeds, things will not end there. A mob of other spammers will pile into the litigation strategy of spam-filter evasion.
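The weighing of gradations described above can be sketched as a toy scoring model. This is purely illustrative—the signal names, weights, and threshold here are invented for the example, and a real filter like Gmail’s is vastly more sophisticated and constantly retrained—but it shows why “spam” is a score on a continuum, not a category a court could define from scratch:

```python
# Toy sketch (NOT Google's actual system): spam filtering as a weighted
# combination of signals, with a tunable threshold, rather than a binary rule.
def spam_score(msg):
    """Combine several hypothetical signals into a single score."""
    score = 0.0
    # Signal: sender blasts messages at high cadence
    if msg.get("sender_daily_volume", 0) > 10:
        score += 0.4
    # Signal: the recipient rarely opens this sender's mail
    if msg.get("open_rate", 1.0) < 0.05:
        score += 0.3
    # Signal: trigger words common in fundraising pitches
    triggers = {"urgent", "expiring", "matching", "patriot"}
    words = set(msg.get("body", "").lower().split())
    score += 0.1 * len(triggers & words)
    return score

def is_spam(msg, threshold=0.5):
    """An email is 'spam' only relative to where the threshold is drawn."""
    return spam_score(msg) >= threshold
```

Every element of this sketch—which signals count, how much each weighs, where the threshold sits—is a judgment call that filters revise continuously as spammers adapt. Freezing those judgments by court order is precisely what would hand spammers a permanent pass.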

The issue here is not only that judges would be no good at second-guessing email services’ spam-filtering decisions—though that is of course true. It is also that, faced with the burden and expense of litigating their spam-filtering decisions, email services would likely opt simply to block much less spam.

Don’t take my word for it: the FCC has said as much with regard to text-messaging. Various groups for years urged the FCC to subject text-messaging services to common-carrier rules under the Communications Act of 1934. In 2018 the agency, at the time—and I cannot stress this enough—under Republican control, issued an order declining to do so. Although it had to start by explaining why text-messaging services aren’t common carriers under the somewhat arcane standards set forth in the Communications Act, the FCC devoted most of its energy to protesting that imposing common-carrier rules on text-messaging is just a dumb idea. Why? Because it’d stop text-messaging services from blocking spam.

The FCC “disagree[d] with commenters that [common-carrier rules] would not limit providers’ ability to prevent spam . . . from reaching customers.” Tellingly, some of those commenters were purveyors of “mass-text[s],” who were seeking “to leverage the common carriage [rules] to stop wireless providers from . . . incorporating robotext-blocking, anti-spoofing measures, and other anti-spam features into their offerings.” With common-carrier rules in place, those “spammers” would be free, the agency concluded (quoting a trade group), to “bring endless challenges to filtering practices” and destroy services’ ability to “address evolving threats.” Ultimately, common-carrier rules would “open the floodgates to unwanted messages—drowning consumers in spam at precisely the moment when their tolerance for such messages is at an all-time low.”

The FCC’s 2018 order knocks down two of the main points raised by the RNC today. First, the RNC claims, as we’ve seen, that common-carrier requirements and spam-filtering policies are compatible. Looking, however, at telephone services—quintessential common carriers—the FCC concluded otherwise. The agency had “generally found call blocking by providers to be unlawful, and typically permit[ted] it only in specific, well-defined circumstances.” Hence the FCC’s belief that common-carriage status for text messages would lead to a flood of spam.

Second, the RNC treats Gmail as a “market-dominant” service capable of “systematically chok[ing] off one major political party’s” fundraising emails. But as the FCC observed, communications “providers have every incentive to ensure the delivery of messages that customers want to receive in order to . . . retain consumer loyalty.” Services that over-filter messages “risk losing th[eir] customers” to competitors. This market mechanism is, if anything, stronger in the context of email than in the context of text messages, as it is far easier to set up an email service than to enter the wireless industry. As Justice Clarence Thomas notes, “No small group of people controls e-mail”—its “protocol” is “decentralized.” (That’s right: Thomas is an outspoken proponent of common-carrier rules for social media, and even he seems to understand that such rules make no sense for email.)

By far the most plausible explanation for why the GOP’s emails landed in Gmail spam folders is that the GOP dishes out tons of spam. It would be nice if the Ninth Circuit could cut to the chase and say so. (This would have the added benefit of cleanly slicing through other legal arguments the RNC raises, in addition to its common-carrier argument.) Given the case’s posture (again, the lawsuit was dismissed on the pleadings), the court probably won’t do that. Google will have to retreat to the more subtle, but no less critical, matter of who is to judge what qualifies as spam. Should we leave it to competing email services to make these calls? Or are we better off if any disgruntled third party can throw such decisions into the courts? This is not a hard one. The Ninth Circuit should make clear that it wants nothing to do with email product design and managing your inbox. Along the way, maybe it can pause to mock the RNC’s revolting use, in a case about the internet, of a law fit for horses and steam engines.

Corbin K. Barthold is Internet Policy Counsel at TechFreedom.

Posted on Techdirt - 4 December 2024 @ 09:27am

Elon Musk Should Be Shouting About The Florida And Texas Social Media Laws (But Are You Surprised That He’s Not?)

In May 2022 Thierry Breton, at the time a prominent European Commissioner, went to Texas and dropped in on Elon Musk, who was then on his way to buying Twitter. During his visit, Breton did something remarkable: he showed that Elon, whatever his other accomplishments, is a sucker’s sucker. In a video posted on social media, Breton got Musk to say that the European Union’s forthcoming Digital Services Act was “exactly aligned” with his “thinking.”

Once he took the helm at Twitter, Musk’s chaotic style of management was obviously going to clash with the EU’s meddlesome style of governance. And so it proved. After the DSA took effect, the social media platform now known as X was the first company charged with violating it. The European Commission is currently weighing whether to impose what amounts to personal liability on Musk. The fines could total six percent of the annual revenue of Musk’s closely held firms (SpaceX, Neuralink, xAI, and the Boring Company). In a separate spat over the law’s enforcement, Musk told Breton to “fuck your own face.”

All of this is to say that, when it comes to understanding and navigating social media regulations, Elon Musk needs all the help he can get.

If Elon cared to listen, I’d tell him this: He should start talking, loudly and often, about the threat that Florida’s and Texas’s social media laws, SB 7072 and HB 20, pose to X.

Florida’s SB 7072 and Texas’s HB 20 were enacted in 2021, and they’ve already been the subject of extensive litigation. They’ve already been to the Supreme Court, in fact, where, last summer, the justices addressed lawsuits challenging the two laws in Moody v. NetChoice. That decision does some very good things. It confirms that the First Amendment protects curated collections of third-party speech. It finds that social media newsfeeds are exactly that sort of protected expressive compilation. And it concludes that “a state may not interfere” with such feeds “to advance its own vision of ideological balance.”

But Moody is not the final word. The justices were reviewing a pair of interlocutory appeals; they were explaining only what was “likely” to happen, in the two cases, on the merits. What’s more, the decision addresses only what social media platforms do “on their main feeds.” Texas and Florida are “not likely to succeed in enforcing” their laws, the Court declared, “against the platforms’ application of their content-moderation policies to the feeds that were the focus of the proceedings below” (emphasis mine). The Court offered no opinion on whether SB 7072 and HB 20 are constitutional as applied to user profiles, direct messaging, group chats, or event functions. Instead, it sent the cases back to their respective trial courts for further fact-finding through discovery.

In a nutshell, SB 7072 and HB 20 require large social media platforms (1) to carry and promote content against their will and (2) to fulfill onerous transparency requirements. Even if the conclusion that they do not govern content moderation on newsfeeds holds (no sure bet—a point to which I shall return), these two laws could cause huge headaches for Musk and X.

Musk styles himself a “free-speech absolutist,” and this might make it seem as though he has little to fear from SB 7072 and HB 20, which seek to expand the amount of content platforms must carry. But Musk treats X less as a free-speech platform than as a personal plaything. When journalists annoy him—as by interviewing the owner of an account that tracked his private jet—Musk has them banned. When material surfaces that embarrasses his friends—as when a reporter posted pictures of Sen. Ted Cruz’s notes for meetings with donors—Musk has it suppressed. Recently, Musk warned that the “Hammer of Justice is coming” for “those who pushed foreign interference hoaxes.” Maybe he did not mean that such “Justice” will be served on X, but that was the fair implication (X is where he posted the comment, after all). It is easy to picture him embarking on a witch hunt, banning the accounts of users he believes, rightly or wrongly, to have “pushed” such “hoaxes.”

HB 20 bars a platform from “censoring” a user based on “viewpoint.” It defines “censor” as “to block, ban, remove, deplatform, demonetize, de-boost, restrict, deny equal access or visibility to, or otherwise discriminate against.” It does not elaborate on what constitutes a “viewpoint.” SB 7072, meanwhile, requires platforms to “apply censorship, deplatforming, and shadow banning standards in a consistent manner.” It does not elaborate on what “consistent” content moderation looks like. SB 7072 also bars a platform from “tak[ing] any action to censor, deplatform, or shadow ban a journalistic enterprise based on the content of its publication or broadcast.” “Journalistic enterprise” is defined broadly to encompass any popular website. A similar but weaker provision protects content “by or about” political candidates.

Neither law can stop a platform from removing unlawful content, and each acknowledges that it cannot impose liability for acts of content moderation protected by Section 230. Because Section 230 protects platforms from liability for most content moderation of lawful content, Section 230 should essentially nullify SB 7072’s and HB 20’s (anti-)content moderation rules. But for rightwing critics of so-called Big Tech censorship, gutting Section 230 is part of the plan. Without strong Section 230 protection, SB 7072 and HB 20 would stretch to cover almost every otherwise lawful content moderation decision platforms make.

Even with Moody in place, in other words, a platform would expose itself to potential liability nearly every time it blocked a post or banned a user (acts Moody, with its focus on newsfeeds, did not address) due to hate speech, disinformation, or other lawful but awful content. Any user whose post or profile was taken down could make a colorable claim to have promoted some “viewpoint” that stands opposed to some other “viewpoint” a platform leaves up. (A user punished for publishing Ted Cruz’s donor notes could claim that the “viewpoint” being discriminated against is a commitment to putting the powerful under a microscope, a belief that money should be removed from politics, a conviction that the GOP is a rotten political party, or a recognition that Ted Cruz is a ridiculous person.) Similarly, any user could concoct a story about why taking her posts or profile down is “inconsistent” with leaving some other user’s posts or profile up. (Platforms make billions of content moderation decisions, many of which are subjective and value-laden. Of course these decisions are not fully “consistent,” even apart from the fact that theoretical “consistency” is impossible to define.) Meanwhile, many accounts would qualify as “journalistic enterprises,” and much content as “by or about” a political candidate, making a host of profiles and posts privileged or virtually untouchable.

Although state actors can enforce SB 7072 and HB 20, let’s indulge for the moment the modest assumption that Florida and Texas will enforce their laws in bad faith, persecuting their perceived Big Tech enemies while leaving X untouched. This wouldn’t get Musk out of the woods. Both laws provide aggrieved users avenues by which to sue. SB 7072 creates a private right of action for violations of the consistency provision. HB 20 creates a private right of action for violations of its content moderation rules “with respect to the user” bringing the suit. This broad framing appears to enable a user in Texas to sue a platform for removing any post the user wants to see.

Even without touching newsfeeds, in short, SB 7072’s and HB 20’s content moderation rules could subject X to swarms of nuisance suits, not to mention drastically curtail Musk’s cherished ability to operate X however he wants.

The transparency provisions of the two statutes differ in their particulars, but their general thrust is the same. Both laws require platforms (1) to set forth in detail the rules and methods by which they moderate content, (2) to adhere to those rules and methods (i.e., what they disclose must be accurate), and (3) to explain in detail decisions to remove—and, for SB 7072, downrank or label—a piece of content. The Supreme Court did not opine in Moody on whether these requirements can constitutionally be applied to newsfeeds.

Making a platform explain in detail its millions of daily content moderation decisions would, the U.S. Court of Appeals for the Eleventh Circuit dryly noted in its opinion on SB 7072, entail “potentially significant implementation costs.” It would also expose platforms “to massive liability.” SB 7072 “provides for up to $100,000 in statutory damages per claim and pegs liability to vague terms like ‘thorough’ and ‘precise.’” A “platform could,” the Eleventh Circuit understood, “be slapped with millions, or even billions, of dollars in statutory damages” for failing, in the eyes of a Florida court, to “provide sufficiently ‘thorough’ explanations” when removing, downranking, or labeling posts.

While Musk might not flinch at the expense of supplying countless detailed explanations for content moderation decisions, he would surely hate having to adhere to X’s terms of service. As the tech writer Alex Hern points out, Musk has tended to treat his platform’s “written rules” as “a polite fiction”—a “fig leaf” over his “capricious whims.” Consider the private jet episode—a good illustration of what, with SB 7072 or HB 20 in place, Musk would not be allowed to do. Musk pledged not to ban the “ElonJet” account that tracked his private flights. Later, though, he did just that, even though the account was not in violation of then-still-Twitter’s written rules. Twitter then changed the rules to ban posts that disclose a person’s “live location” or that contain images or videos of a person without her consent. This new rule was so broad that Musk promptly broke it himself, by posting a picture of a person (a stalker, according to Musk) sitting on his car.

Now imagine that SB 7072 or HB 20 had been in place. The banning of the ElonJet account revealed that the Twitter rules, as they existed before the ban, were incomplete. And there was no way the platform was ever going to engage in more than haphazard enforcement of the location-sharing and depiction-without-consent bans. SB 7072 and HB 20 don’t appear to allow private enforcement of their “follow your own rules” provisions; but if you’re Musk, do you really want the governments of Florida and Texas to have, sitting in their back pockets, a handy tool for making you operate your platform how they want? You never know when you might have a falling out with a cynical and capricious character like Texas attorney general Ken Paxton. To avoid the possibility of large fines, injunctions, and contempt proceedings, Musk would have to obey X’s written rules, and X’s written rules would have to become far more detailed. X would have to undertake what Musk would undoubtedly view as a heinous routinization and bureaucratization of its content moderation process.

The Supreme Court’s protection of content moderation on newsfeeds is likely to hold, but it is by no means guaranteed to hold. Recall that the Court was applying only the “likelihood of success” standard that governs motions for preliminary injunction. The Court further opened the door to a change of result, following discovery, by acknowledging that “the record is incomplete” even as to “the major social-media platforms’ main feeds.” Adding to the uncertainty, Justice Alito, writing for himself and Justices Thomas and Gorsuch, issued a concurrence laden with suggestions for how the lower courts might evade the majority’s ruling.

And if SB 7072 and HB 20 were to sink their teeth into newsfeeds after all, that would be a disaster for Musk. Forced under SB 7072 to act in a “consistent” manner, and forced under HB 20 never to deny “equal access or visibility to,” or “otherwise discriminate against,” content, Musk would have to fundamentally change how he runs X. He could no longer weight X’s algorithm in favor of his political interests (thereby discriminating against Democrats) or himself (thereby discriminating against literally everyone else). His beloved community notes would draw lawsuits, with plaintiffs claiming the notes aren’t consistent or viewpoint neutral. Under SB 7072, X would additionally have to let users opt out of the platform’s recommendation algorithm altogether, leaving Musk unable to continue force-feeding users the posts he wants them to see.

Even if, despite everything, Musk isn’t worried about SB 7072 or HB 20, he still has good reason to oppose them. For if SB 7072 and HB 20 are valid under the First Amendment, other intrusive regulations of social media will be too. New statutes will be sure to pop up. Blue states will enact laws that force X to engage in more content moderation. Some commentators cite Musk as the reason such laws are needed. “There is a liberal/progressive case to be made” for “regulating content moderation,” argues professor Michael Dorf, that stands in part on the fact that “Musk ha[s] bought Twitter, rebranded it X, and turned it into [a] cesspool of misinformation, hate, and stupidity.” If Musk wants to avoid leftwing regulation of his platform, he’d be wise to oppose rightwing regulation of his platform as well.

Suffice it to say that this is not what Musk to this point has done. What he has done instead is resist or accede to state action based on whether he likes the politics of the government in question. X complied with the Indian government’s demand that it take down a documentary critical of rightwing prime minister Narendra Modi; but it defied (at least for a time) a Brazilian Supreme Court justice’s demand that it ban accounts accused of spreading misinformation in support of rightwing former president Jair Bolsonaro. X has sued to block social media regulations enacted by blue California, yet Musk is not voicing opposition to the social media regulations enacted by red Florida and Texas. (X is technically involved in opposing SB 7072 and HB 20 through its membership in NetChoice, one of the trade groups challenging the laws. But my point all along has been that Musk should be tapping his personal clout here.)

As I said at the outset, Musk initially supported the DSA out of ignorance. Maybe ignorance is all that’s keeping him from speaking out against SB 7072 and HB 20. If so, here’s hoping he becomes better informed. But it’s at least as likely that he knows about the Florida and Texas laws, and that he knows they’re dangerous, but that, because his first priority is to play political favorites, he doesn’t care—certainly not enough to do anything. Elon Musk is a lot of things; principled is not one of them.

Corbin K. Barthold is Internet Policy Counsel at TechFreedom.

Posted on Techdirt - 3 September 2024 @ 12:08pm

The Third Circuit’s Section 230 Decision In Anderson v. TikTok Is Pure Poppycock.

Last week, the U.S. Court of Appeals for the Third Circuit concluded, in Anderson v. TikTok, that algorithmic recommendations aren’t protected by Section 230. Because they’re the platforms’ First Amendment-protected expression, the court reasoned, algorithms are the platforms’ “own first-party speech,” and thus fall outside Section 230’s liability shield for the publication of third-party speech.

Of course, a platform’s decision to host a third party’s speech at all is also First Amendment-protected expression. By the Third Circuit’s logic, then, such hosting decisions, too, are a platform’s “own first-party speech” unprotected by Section 230.

We’ve already hit (and not for the last time) the key problem with the Third Circuit’s analysis. “Given … that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms,” the court declared, “it follows that doing so amounts to first-party speech under [Section] 230, too.” No, it does not. Assuming a lack of overlap between First Amendment protection and Section 230 protection is a basic mistake.

Section 230(c)(1) says that a website shall not be “treated as the publisher” of most third-party content it hosts and spreads. Under the ordinary meaning of the word, a “publisher” prepares information for distribution and disseminates it to the public. Under Section 230, therefore, a website is protected from liability for posting, removing, arranging, and otherwise organizing third-party content. In other words, Section 230 protects a website as it fulfills a publisher’s traditional role. And one of Section 230’s stated purposes is to “promote the continued development of the Internet”—so the statute plainly envisions the protection of new, technology-driven publishing tools as well.

The plaintiffs in Anderson are not the first to contend that websites lose Section 230 protection when they use fancy algorithms to make publishing decisions. Several notable court rulings (all of them unceremoniously brushed aside by the Third Circuit, as we shall see) reject the notion that algorithms are special.

The Second Circuit’s 2019 decision in Force v. Facebook is especially instructive. The plaintiffs there argued that “Facebook’s algorithms make … content more ‘visible,’ ‘available,’ and ‘usable.’” They asserted that “Facebook’s algorithms suggest third-party content to users ‘based on what Facebook believes will cause the user to use Facebook as much as possible,’” and that “Facebook intends to ‘influence’ consumers’ responses to that content.” As in Anderson, the plaintiffs insisted that algorithms are a distinct form of speech, belonging to the platform and unprotected by Section 230.

The Second Circuit was unpersuaded. Nothing in the text of Section 230, it observed, suggests that a website “is not the ‘publisher’ of third-party information when it uses tools such as algorithms that are designed to match that information with a consumer’s interests.” In fact, it noted, the use of such tools promotes Congress’s express policy “to promote the continued development of the Internet.”

By “making information more available,” the Second Circuit wrote, Facebook was engaging in “an essential part of traditional publishing.” It was doing what websites have done “on the Internet since its beginning”—“arranging and distributing third-party information” in a manner that “forms ‘connections’ and ‘matches’ among speakers, content, and viewers of content.” It “would turn Section 230(c)(1) upside down,” the court concluded, to hold that Congress intended to revoke Section 230 protection from websites that, whether through algorithms or otherwise, “become especially adept at performing the functions of publishers.” The Second Circuit had no authority, in short, to curtail Section 230 on the ground that by deploying algorithms, Facebook had “fulfill[ed] its role as a publisher” too “vigorously.”

As the Second Circuit recognized, it would be exceedingly difficult, if not impossible, to draw logical lines, rooted in law, around how a website arranges third-party content. What in Section 230 would enable a court to distinguish between content placed in a “for you” box, content that pops up in a newsfeed, content that appears at the top of a homepage, and content that’s permitted to exist in the bowels of a site? Nothing. It’s the wrong question. The question is not how the website serves up the content; it’s what makes the content problematic. When, under Section 230, is third-party content also a website’s first-party content? Only, the Second Circuit explained, when the website “directly and materially contributed to what made the content itself unlawful.” This is the “crucial distinction”—presenting unlawful content (protected) versus creating unlawful content (unprotected).

Perhaps you think the problem of drawing non-arbitrary lines around different forms of presentation could be solved, if only we could get the best and brightest judges working on it? Well, the Supreme Court recently tried its luck, and it failed miserably. To understand the difficulties with excluding algorithmic recommendations from Section 230, all the Third Circuit had to do was meditate on the oral argument in Gonzalez v. Google. It was widely assumed that the justices took that case because at least some of them wanted to carve algorithms out of Section 230. How hard could it be? But once the rubber hit the road, once they had to look at the matter closely, the justices had not the faintest idea how to do that. They threw up their hands, remanding the case without reaching the merits.

The lesson here is that creating an “algorithm” rule would be rash and wrong—not least because it would involve butchering Section 230 itself—and that opinions such as Force v. Facebook are correct. But instead of taking its cues from the Gonzalez non-decision, the Third Circuit looked to the Supreme Court’s newly released decision in Moody v. NetChoice.

Moody confirms (albeit, alas, in dicta) that social media platforms have a First Amendment right to editorial control over their newsfeeds. The right to editorial control is the right to decide what material to host or block or suppress or promote, including by algorithm. These are all expressive choices. But the Third Circuit homed in on the algorithm piece alone. Because Moody declares algorithms a platform’s protected expression, the Third Circuit claims, a platform does not enjoy Section 230 protection when using an algorithm to recommend third-party content.

The Supreme Court couldn’t coherently separate algorithms from other forms of presentation, and the distinguishing feature of the Third Circuit’s decision is that it never even tries to do so. Moody confirms that choosing to host or block third-party content, too, is a platform’s protected expression. Are those choices “first-party speech” unprotected by Section 230? If so—and the Third Circuit’s logic requires that result—Section 230(c)(1) is a nullity. 

This is nonsense. And it’s lazy nonsense to boot. Having treated Moody’s stray lines about algorithms like live hand grenades, the Third Circuit packs up and goes home. Moody doesn’t break new ground; it merely reiterates existing First Amendment principles. Yet the Third Circuit uses Moody as one neat trick to ignore the universe of Section 230 precedent. In a footnote (for some reason, almost all the decision’s analysis appears in footnotes) the court dismisses eight appellate rulings, including Force v. Facebook, that conflict with its ruling. It doesn’t contest the reasoning of these opinions; it just announces that they all “pre-dated [Moody v.] NetChoice.”

Moody roundly rejects the Fifth Circuit’s (bananas) First Amendment analysis in Paxton v. NetChoice. In that faulty decision, the Fifth Circuit wrote that Section 230 “reflects Congress’s factual determination that Platforms are not ‘publishers,’” and that they “are not ‘speaking’ when they host other people’s speech.” Here again is the basic mistake of seeing the First Amendment and Section 230 as mutually exclusive, rather than mutually reinforcing, mechanisms. The Fifth Circuit conflated not treating a platform as a publisher, for purposes of liability, with a platform’s not being a publisher, for purposes of the First Amendment. In reality, websites that disseminate third-party content both exercise First Amendment-protected editorial control and enjoy Section 230 protection from publisher liability.

The Third Circuit fell into this same mode of woolly thinking. The Fifth Circuit concluded that because the platforms enjoy Section 230 protection, they lack First Amendment rights. Wrong. The Supreme Court having now confirmed that the platforms have First Amendment rights, the Third Circuit concluded that they lack Section 230 protection. Wrong again. Congress could not revoke First Amendment rights wherever Section 230 protection exists, and Section 230 would serve no purpose if it did not apply wherever First Amendment rights exist.

Many on the right think, quite irrationally, that narrowing Section 230 would strike a blow against the bogeyman of online “censorship.” Anderson, meanwhile, involved the shocking death of a ten-year-old girl. (A sign, in the view of one conservative judge on the Anderson panel, that social media platforms are dens of iniquity. For a wild ride, check out his concurring opinion.) So there are distorting factors at play. There are forces—a desire to stick it to Big Tech; the urge to find a remedy in a tragic case—pressing judges to misapply the law. Judges engaging in motivated reasoning is bad in itself. But it is especially alarming here, where judges are waging a frontal assault on the great bulwark of the modern internet. These judges seem oblivious to how much damage their attacks, if successful, are likely to cause. They don’t know what they’re doing.

Corbin Barthold is internet policy counsel at TechFreedom.

Posted on Techdirt - 21 February 2024 @ 11:57am

In SCOTUS NetChoice Cases, Texas’s And Florida’s Worst Enemy Is (Checks Notes) Elon Musk.

Next week, the Supreme Court will hear oral argument in NetChoice v. Paxton and Moody v. NetChoice. The cases are about a pair of laws, enacted by Texas and Florida, that attempt to force large social media platforms such as YouTube, Instagram, and X to host large amounts of speech against their will. (Think neo-Nazi rants, anti-vax conspiracies, and depictions of self-harm.) The states’ effort to co-opt social media companies’ editorial policies blatantly violates the First Amendment.

Since the laws are constitutional trainwrecks, it’s no surprise that Texas’s and Florida’s legal theories are weak. They rely heavily on the notion that what social media companies do is not really editing — and thus is not expressive. Editors, Texas says in a brief, are “reputationally responsible” for the content they reproduce. And yet, the state continues, “no reasonable observer associates” social media companies with the speech they disseminate.

This claim is absurd on its face. Everyone holds social media companies “reputationally responsible” for their content moderation. Users do, because most of them don’t like using a product full of hate speech and harassment. Advertisers do, out of a concern for their “brand safety.” Journalists do. Civil rights groups do. Even the Republican politicians who enacted this pair of bad laws do — that’s why they yell about how “Big Tech oligarchs” engage in so-called censorship.

That the Texas and Florida GOP are openly contemptuous of the First Amendment, and incompetent to boot, isn’t exactly news. So let’s turn instead to some delicious ironies. 

Consider that the right’s favorite social media addict, robber baron, and troll Elon Musk has single-handedly destroyed Texas’s and Florida’s case.

After the two states’ laws were enacted, Elon Musk conducted something of a natural experiment in content moderation—one that has wrecked those laws’ underlying premise. Musk purchased Twitter, transformed it into X, and greatly reduced content moderation on the service. As tech reporter Alex Kantrowitz remarks, the new approach “privileges” extreme content from “edgelords.”

This, in turn, forces users to work harder to find quality content, and to tolerate being exposed to noxious content. But users don’t have to put up with this — and they haven’t. “Since Musk bought Twitter in October 2022,” Kantrowitz finds, “it’s lost approximately 13 percent of its app’s daily active users.” Clearly, users “associate” social-media companies with the speech they host!

It gets better. Last November, Media Matters announced that, searching X, it had found several iconic brands’ advertisements displayed next to neo-Nazi posts. Did Musk say, “Whatever, dudes, racist content being placed next to advertisements on our site doesn’t affect X’s reputation”? No. He had X sue Media Matters.

In its complaint, X asserts that it “invests heavily” in efforts to keep “fringe content” away from advertisers’ posts. The company also alleges that Media Matters gave the world a “false impression” about what content tends to get “pair[ed]” on the platform. These statements make sense only if people care — and X cares that people care — about how X arranges content on X.

X even states that Media Matters has tried to “tarnish X’s reputation by associating [X] with racist content.” It would be hard to admit more explicitly that social-media companies are “reputationally responsible” for, because they are “associated” with, the content they disseminate.

Consider also that Texas ran to Musk’s defense. Oblivious to how Musk’s vendetta hurts Texas’s case at the Supreme Court, Ken Paxton, the state’s attorney general, opened a fraud investigation against Media Matters (the basic truth of whose report Musk’s lawsuit does not dispute).

Consider finally how Texas’s last-ditch defense gets mowed down by the right’s favorite Supreme Court justice. According to Texas, social-media companies can scrub the reputational harm from spreading abhorrent content simply by “disavowing” that content. But none other than Justice Clarence Thomas has blown this argument apart. If, Thomas writes, a state could force speech on an entity merely by letting that entity “disassociate” from the speech with a “disclaimer,” that “would justify any law compelling speech.”

Only the government can “censor” speech. Texas and Florida are the true censors here, as they seek to restrict the expressive editorial judgment of social-media companies. That conduct is expressive. Just ask Elon Musk. And that expressiveness is fatal to Texas’s and Florida’s laws. Just ask Clarence Thomas. Texas’s and Florida’s social-media speech codes aren’t just unconstitutional, they can’t even be defended coherently.

Corbin Barthold is internet policy counsel at TechFreedom.

Posted on Techdirt - 6 October 2023 @ 01:30pm

A Reagan Judge, The First Amendment, And The Eternal War Against Pornography

Using “Protect the children!” as their rallying cry, red states are enacting digital pornography restrictions. Texas’s effort, H.B. 1181, requires commercial pornographic websites—and others, as we’ll see shortly—to verify that their users are adults, and to display state-drafted warnings about pornography’s alleged health dangers. In late August, a federal district judge blocked the law from taking effect. The U.S. Court of Appeals for the Fifth Circuit expedited Texas’s appeal, and it just held oral argument. This law, or one of the others like it, seems destined for the Supreme Court. 

So continues what the Washington Post, in the headline of a 1989 op-ed by the columnist Nat Hentoff, once called “the eternal war against pornography.”

It’s true that the First Amendment does not protect obscenity—which the Supreme Court defines as “prurient” and “patently offensive” material devoid of “serious literary, artistic, political, or scientific value.” Like many past anti-porn crusaders, however, Texas’s legislators blew past those confines. H.B. 1181 targets material that is obscene to minors. Because “virtually all salacious material” is “prurient, offensive, and without value” to young children, the district judge observed, H.B. 1181 covers “sex education [content] for high school seniors,” “prurient R-rated movies,” and much else besides. Texas’s attorneys claim that the state is going after “teen bondage gangbang” films, but the law they’re defending sweeps in paintings like Manet’s Olympia (1863):

Incidentally, this portrait appears—along with other nudes—in a recent Supreme Court opinion. And now, of course, it appears on this website. Time to verify users’ ages (with government IDs or face scans) and post the state’s ridiculous “warnings”? Not quite: the site does not satisfy H.B. 1181’s “one-third . . . sexual material” content threshold. Still, that standard is vague. (What about a website that displays a collection of such paintings?) And in any event, that this webpage is not now governed by H.B. 1181 only confirms the law’s arbitrary scope.

H.B. 1181 flouts Supreme Court decisions on obscenity, internet freedom, and online age verification. This fact was not lost on the district judge, who noted that Texas had raised several of its arguments “largely for the purposes” of setting up “Supreme Court review.” If this case reaches it, the Supreme Court can strike down H.B. 1181 simply by faithfully applying any or all of several precedents.

But the Court should go further, by elaborating on the threat these badly crafted laws pose to free expression.

When it next considers an anti-porn law, the Court will hear a lot about its own rulings. But other opinions grapple with such laws—and one of them, in particular, is worth remembering. Authored by Frank Easterbrook, perhaps the greatest jurist appointed by Ronald Reagan, American Booksellers Association v. Hudnut (7th Cir. 1985) addresses pornography and the First Amendment head-on.

At issue was an Indianapolis ordinance that banned the “graphic sexually explicit subordination of women.” Interestingly, this law was inspired by two intellectuals of the left, Catharine MacKinnon and Andrea Dworkin. They maintained (as Easterbrook put it) that “pornography influences attitudes”—that “depictions of subordination tend to perpetuate subordination,” including “affront and lower pay at work, insult and injury at home, battery and rape on the streets.” (You can hear, in today’s debates about kids and social media, echoes of this dire rhetoric.)

Although he quibbled with the empirical studies behind this claim, Easterbrook accepted the premise for the sake of argument. Indeed, he leaned into it. For him, the harms the city alleged “simply demonstrate[d] the power of pornography as speech.” That pornography affects attitudes, which in turn affect conduct, does not distinguish it from other forms of expression. Hitler’s speeches polluted minds and inspired horrific actions. Religions deeply shape people’s lifestyles and worldviews. Television leads (many worry) “to intellectual laziness, to a penchant for violence, to many other ills.” The strong effects of speech are an inherent part of speech—not a ground for regulation. “Any other answer leaves the government in control of all of the institutions of culture, the great censor and director of which thoughts are good for us.”

Like Texas today, Indianapolis targeted not obscenity alone, but adult content more broadly. And like Texas, the city sought to excuse this move by blending the two concepts together. Pornography is “low value” speech, it argued, akin to obscenity and therefore open to special restriction. There were several problems with this claim. But as Easterbrook explained, it also failed on its own terms. Indianapolis asserted that pornography shapes attitudes in the home and at the workplace. It believed, in other words, that the speech at issue influenced politics and society “on a grand scale.” True, Easterbrook acknowledged, “pornography and obscenity have sex in common.” Like Texas today, though, Indianapolis failed to carve out of its ordinance material with literary, artistic, political, or scientific value to adults.

“Exposure to sex is not,” Easterbrook declared, “something the government may prevent.” This is not an exceptional conclusion. “Much speech is dangerous.” Under the First Amendment, however, “the government must leave to the people the evaluation of ideas.” Otherwise free speech dies. Almost everyone would, if operating in a vacuum, happily outlaw certain kinds of noxious speech. Some would bar racial slurs (or disrespect), others religious fundamentalism (or atheism). Some would banish political radicalism (of some stripe or other), others misinformation (defined one way or another). Many of the lawmakers who claim merely to hate porn would, if given the chance, eagerly police all erotic film, literature, and art. (Another pathbreaking Manet painting, Luncheon on the Grass, would plainly have fallen afoul of the Indianapolis ordinance.) The First Amendment stops this downward spiral before it begins. It “removes the government from the role of censor.”

Indianapolis “paint[ed] pornography as part of the culture of power.” Maybe so. But in the end, Easterbrook responded, the First Amendment is a tool of the powerless:

Free speech has been on balance an ally of those seeking change. Governments that want stasis start by restricting speech. . . . Change in any complex system ultimately depends on the ability of outsiders to challenge accepted views and the reigning institutions. Without a strong guarantee of freedom of speech, there is no effective right to challenge what is.

Earlier this year, the Supreme Court’s conservative justices sang a similar tune. It is “not the role of the State or its officials,” they declared in 303 Creative v. Elenis, “to prescribe what shall be offensive.” On the contrary, the Constitution “protect[s] the speech rights of all comers, no matter how controversial—or even repugnant—many may find the message at hand.” Here’s hoping that, when they’re dragged back into the eternal war against pornography, those justices give these words their proper sweep.

Corbin K. Barthold is internet policy counsel at TechFreedom.

Posted on Techdirt - 28 March 2023 @ 10:46am

In Internet Speech Cases, SCOTUS Should Stick Up For Reno v. ACLU

It was by no means certain that the internet would enjoy full First Amendment protection. The radio is not shielded from the government in that way. Nor is broadcast television. Both Congress and the President supported placing online speech under some degree of state control. In Reno v. ACLU (1997), however, the Supreme Court could find “no basis for qualifying the level of First Amendment scrutiny that should be applied to this [new] medium.” Liberty won out.

A quarter-century later, the free internet faces an array of new threats. Sometimes the danger is announced openly and without regret. Discussing his intention to sign a law restricting minors’ access to social media, the governor of Utah recently declared Reno “wrongly decided.” There are “new facts,” he tells us. He earns points for candor. Most opponents of internet freedom attempt to hide what they’re doing. Some of these aspiring regulators even try to snatch the banner of free speech for themselves. But they all want, by hook or by crook, to curtail or evade Reno.

Many states chafe at the restraints Reno places on the government. A few have already arrived at the Supreme Court. These states endorse legal theories that would drastically shrink Reno’s scope. But they do not want Reno narrowed in a neutral, even-handed fashion. For the states in question stand on opposite sides of our nation’s culture war. Each side’s message is this: Limit Reno for thee, but not for me. Each side wants the Justices to revoke Reno’s protection for the other side.

Yet both sides appeal to the same legal principles. Each side makes arguments in its own litigation that, if accepted in the other side’s litigation, would blow up in its face. Each side makes arguments that, if given full play, could lead to Reno’s being destroyed for everyone. The two sides risk pulling the temple down on our heads.

The cases in question are 303 Creative v. Elenis, Moody v. NetChoice, and NetChoice v. Paxton. In 303 Creative, Colorado seeks to compel a Christian website designer to express a message, in the form of a website for a gay wedding, to which she objects. The U.S. Court of Appeals for the Tenth Circuit ruled for the state. The Supreme Court granted review and heard oral argument last December. In Moody and Paxton, states seek to force large social media platforms to spread messages that those platforms believe are dangerous, harmful, or abhorrent. In Moody, the Eleventh Circuit ruled for the platforms, blocking a Florida law called SB7072. In Paxton, the Fifth Circuit ruled against them, upholding a Texas law, HB20, that requires “viewpoint neutral” content moderation (i.e., if you carry Holocaust documentaries, you must carry Holocaust deniers). Petitions for certiorari have been filed in both cases, and the Court is almost certain to grant at least one of them.

The driving forces here are Colorado (supported by other blue states and the federal government) and Florida and Texas (supported by other red states). Still, each side has found able champions on the bench. Judges figure prominently in these legal debates, as we will see. Yet the Supreme Court now has the full picture. With both 303 Creative and Moody/Paxton before them, a majority of the Justices might take a different view. They might see that the best course is to defend the rule and spirit of Reno against all comers.

How is Reno being challenged? How do the attacks on it match up in 303 Creative, Moody, and Paxton? Let’s dig in.

Common Carrier / Place of Public Accommodation

Two years back, Justice Thomas, writing for himself, suggested that “some digital platforms” are “akin to common carriers or places of public accommodation.” If that’s right, he surmised, then “laws that restrict” those platforms’ “right to exclude” might satisfy the First Amendment. The state might lawfully force such entities to disseminate speech against their will. 

Upholding HB20 in Paxton, Judge Oldham took the next step. Texas claimed that large social media platforms can be treated like common carriers. Oldham agreed. He concluded—in dicta; no other judge joined this part of his opinion—that HB20’s viewpoint neutrality rule “falls comfortably within the historical ambit of permissible common carrier regulation.”

The idea of common carriage has, Oldham wrote, “been part of Anglo-American law for more than half a millennium.” He explored the concept’s history at length, following it on a “long technological march” from “ferries and bakeries,” to “steamboats and stagecoaches,” to “telegraph and telephone lines,” and finally—in his mind—to “social media platforms.” He stressed “the centrality of the Platforms to public discourse.” He grappled with “modern precedents.” He engaged with the “counterarguments” of “the Platforms and their amici.” No one can dispute his rigor.

The Eleventh Circuit, speaking through Judge Newsom, ruled in Moody that the platforms are not like common carriers. Newsom, too, was careful and thorough. But in any event, how much of this debate is genuinely relevant? Judge Southwick’s answer, in his dissent in Paxton, was short and to the point. “Few of the cases cited” by Judge Oldham, Southwick wrote, “concern the intersection of common carrier obligations and First Amendment rights,” and the ones that do “reinforce the idea [that] common carriers retain their First Amendment protections of their own speech.” To show that a legal principle can trump a constitutional right, in other words, it does not suffice to show that the principle has an impressive pedigree. One must establish that the principle has in fact been used to trump the constitutional right.

Here is where things get interesting. This is precisely the approach that Lorie Smith, the Christian website designer, urges the Supreme Court to deploy in 303 Creative. Colorado says that Smith must make websites for gay weddings because her business is a place of public accommodation. What must Colorado do to connect its premise and its conclusion? It must prove, Smith contends, that “public-accommodation laws historically compelled speech, not that they merely existed.” At oral argument, Justice Thomas picked up this line of thought. Is there a “long tradition,” he asked (appearing to depart from the stance he teased with two years ago), “of public accommodations laws applying to speech . . . or expressive conduct?”

Where are the cases showing that, by declaring an entity a common carrier, the state can strip that entity of its right to decide what speech it will (or will not) disseminate to the public at large? Judge Oldham cited none. Where are the cases showing that, by declaring an entity a place of public accommodation, the state can force that entity to create expressive products against its will? In response to Justice Thomas’s question, Colorado’s counsel conceded that “the historical record is sparse.”

Would conservatives be glad to see Smith forced to design websites that go against her religious convictions? Would liberals rejoice at seeing social media platforms forced to host and amplify hate speech? If the answer to these questions is no, perhaps neither side should start down this path. Perhaps neither should be trying to use common carrier or public accommodation rules to evade Reno and control the internet.

Market Power

As support for the common carrier argument, Judge Oldham asserted the major social media platforms’ market power. “Each Platform has an effective monopoly,” he insisted, “over its particular niche of online discourse.” In his view, “sports ‘influencers’ need access to Instagram,” “political pundits need access to Twitter,” and so on.

There are a number of problems with this claim. To begin with, an entity that wins itself market power does not lose its right to free speech. In Miami Herald v. Tornillo (1974), it was argued that “debate on public issues” was at that time “open only to a monopoly in control of the press.” The Court did not disagree. Nonetheless, it unanimously struck down a state law requiring newspapers to let political candidates reply to negative coverage. “Press responsibility is not mandated by the Constitution,” the Justices explained, “and like many other virtues it cannot be legislated.”

Even if market power mattered, it is far from obvious that platforms have “effective monopolies,” whether over “niches” or otherwise. A month after the Fifth Circuit issued Paxton, Elon Musk purchased Twitter, causing more than a few commentators to ditch the service for Mastodon. Influencers—and, for that matter, political pundits—can gain a large following on Snapchat, TikTok (for now), YouTube, or Rumble. More broadly, the overlap among social media products is greater than might appear at first blush. Suing to break up Facebook and Instagram, for instance, the Federal Trade Commission has asserted that the products’ common parent, Meta, dominates a market for “personal social networking services.” The only large competitor in this market, the agency alleges, is Snapchat. Yet the agency has struggled to explain what makes this market distinct. These days, in fact, Meta is scrambling to make its products more like TikTok.

So the worst thing about the “effective monopol[ies]” claim is that it never gets beneath the surface. The typical antitrust case is a complex dispute about costs and outputs, profit margins and elasticities, and much else besides. Judge Oldham offered a bare assertion. A just-so story. A useful belief, if one’s goal is to let states commandeer the biggest social media platforms.

No one would cry for those platforms if the judiciary were to overestimate the size and stability of their market “niches.” Indeed, many will smile at the prospect. But be careful what you wish for.

Recall that the Tenth Circuit ruled against Lorie Smith in 303 Creative. Smith’s “custom and unique services,” the court wrote, “are inherently not fungible.” They are, “by definition, unavailable elsewhere.” Smith is therefore a market of one, the court thought, and that is grounds for forcing her to speak. Outlandish? Probably so. Then again, Colorado warns that if Smith wins, belief-based restrictions on service might proliferate, leading to market foreclosure in the aggregate. And that argument is not ridiculous; it is merely speculative and weak—not unlike the “effective monopol[ies]” argument in Paxton.

Anyone tempted to use loose pronouncements of market power as a weapon of (culture) war should first picture how the tactic might be misused in a variety of other cases. One careless claim of market power begets another.

Speech vs. Conduct

On the way to upholding HB20, the Fifth Circuit relied heavily on Rumsfeld v. FAIR (2006). A federal statute required law schools to host military recruiters on pain of losing government funding. FAIR upheld this mandate. “A law school’s decision to allow recruiters on campus,” the Court reasoned, “is not inherently expressive.” The statute regulated “conduct, not speech.” It affected “what law schools must do—afford equal access to military recruiters—not what they may or may not say.”

The Fifth Circuit used FAIR as a guide. The “targeted denial of access to only military recruiters,” the court said, could not be distinguished from the “viewpoint-based” content moderation “regulated by HB20.” In both cases, the court concluded, the regulated activity is “conduct” that lacks “inherent expressiveness.” Therefore social media platforms have no First Amendment right to control what speech they host.

This, it turns out, is a popular way to justify letting the state regulate speech. In 303 Creative, the Biden administration filed a brief in support of Colorado. Colorado’s public accommodations law “target[s] conduct,” the brief says, invoking FAIR, and it “impose[s]” only “‘incidental’ burdens on expression.” The brief cites FAIR more than two dozen times. 

FAIR was authored by Chief Justice Roberts. At the oral argument in 303 Creative, he did not seem thrilled about how the decision was thrown back at him. That case involved “providing rooms,” he protested, and the Court held merely that “empty rooms don’t speak.”

The Chief Justice is on to something. Here again, the best move is not to play. Conservatives and liberals can come up with creative ways selectively to apply FAIR to this or that (but no other!) form of online speech. They can try to exploit the decision with callous craft, expecting, for some reason, that the gambit will work always in favor of their interests, and never against them. Or they can put FAIR down and affirm Reno for all.

Editorial Discretion

Which brings us to the most aggressive, and the most dangerous, of the attacks on Reno. Included within the First Amendment is a right to editorial discretion. This is why the government generally cannot tell a newspaper which articles or letters to publish, or a parade which marchers to allow, or a television channel which movies to carry. As the Eleventh Circuit said in Moody, it is why social media services are “constitutionally protected” when “they moderate and curate the content that they disseminate on their platforms.”

In Paxton, the Fifth Circuit swept this right aside. “Editorial discretion,” the court proclaimed, is not “a freestanding category of constitutionally protected speech.”

In their petition for certiorari, the platforms’ representatives cast serious doubt on this claim. They quote the Supreme Court’s discussion, across various decisions, of the “exercise [of] editorial discretion over . . . speech and speakers,” of the “editorial function” as being “itself” an “aspect of ‘speech,’” and of the right of “editorial discretion in the selection and presentation” of content. As they observe, the Fifth Circuit “essentially limited th[e] Court’s editorial discretion cases to their facts.”

That’s true—but hold on. Let us return, one last time, to 303 Creative. At argument, Justice Sotomayor sounded remarkably like Judge Oldham. “Show me where,” on the website, “it’s your message,” she asked Smith’s counsel. “How is this your story? It’s [the couple’s] story.” Counsel responded with—the right to editorial discretion. “Every page” on the website is Smith’s “message,” counsel said, “just as in a newspaper that posts an op-ed written by someone else.” Sotomayor did not seem impressed.

We must again ask whether the states would welcome consistent application of their legal principles. If Colorado successfully compels Smith to speak in 303 Creative, will it accept that it has strengthened Florida’s and Texas’s hand in Moody and Paxton? Would Florida and Texas be willing to remove the platforms’ right to editorial discretion at the price of nixing many Christian artists’ right to such discretion as well? A state could duck the question by dreaming up new and clever ways to distinguish the cases. Yes, of course. Other, very different states could do the same. That is the problem.

The Court has called for the views of the Solicitor General in Moody and Paxton. The Biden administration will be tempted to try to thread the needle. To get cute. To argue that the red-state social media laws before the Court are toxic and scary and unconstitutional, but that the blue-state social media laws in the works are beneficial and enlightened and in perfect harmony with the First Amendment. 

The Solicitor General should resist the urge to make everything come out right (from a liberal perspective). Here is what she should do instead. Agree that review is warranted. Denounce SB7072 and HB20. Celebrate the right to editorial discretion. Heap praise on Reno v. ACLU. Stop.

Posted on Techdirt - 18 January 2023 @ 12:08pm

If You Believe In Free Speech, The GOP’s “Weaponization” Subcommittee Is Not Your Friend

“Politics,” the writer Auberon Waugh liked to say, “is for social and emotional misfits.” Its purpose is “to help them overcome these feelings of inferiority and compensate for their personal inadequacies in the pursuit of power.” You could accuse old Bron of painting with a rather broad brush, and you would be right. But he plainly understood the likes of Kevin McCarthy. As the Washington Post’s Ruth Marcus observed last week, two aspects of McCarthy’s bid to become Speaker of the House stand out. First, that he “seems to crave power for power’s sake, not for any higher purposes.” And second, that he “is willing to debase himself so completely to obtain it.”

Of the many concessions McCarthy made to his far-right flank to obtain the Speaker’s gavel, one of the most straightforward was to create a new Select Subcommittee on the Weaponization of the Federal Government. The desire for such an entity “percolat[ed] on the edges of the [party] conference and conservative media,” Politico reported last month, and the calls for it then quickly spread, “getting harder for the speaker hopeful to ignore.” But the hardliners were pushing at an open door: McCarthy had already been promising sweeping investigations of the Department of Justice and the FBI.

It’s amusing that the subcommittee is simply “on” weaponization, leaving onlookers the latitude to decide for themselves whether the body’s position is “pro” or “con.” The subcommittee will likely seek to disrupt the executive branch’s probes of Donald Trump’s interference in the 2020 election, role in the Capitol attack, and defiant mishandling of classified documents. It might also seek to hinder the government’s efforts to prosecute Jan. 6 rioters. In attempting to obstruct federal law enforcement, the House GOP would be engaging in its own forms of “weaponization.” It would be trying to “weaponize” its own authority—which, under our Constitution’s separation of powers, does not extend to meddling in ongoing criminal investigations. And it would be trying to “weaponize” the federal government by compelling it not to enforce the law. A better label might have been the “Select Subcommittee on Weaponizing the Federal Government Our Way.” Or, for brevity’s sake, perhaps “Partisan Hacks Against the Rule of Law.”

It is in this light that we must view another of the subcommittee’s main goals—getting “to the very bottom” (McCarthy’s words) of the federal government’s relationship with Big Tech. Last month Rep. Jim Jordan, the incoming chair of the House Judiciary Committee—and, now, of its “weaponization” subcommittee as well—accused the major tech firms of being “out to get conservatives.” He demanded that those firms preserve records of their “‘collusion’ with the Biden administration to censor conservatives on their platforms.” According to Axios, the subcommittee “will demand copies of White House emails, memos and other communications with Big Tech companies.”

There is nothing inherently wrong with setting up a congressional committee to investigate whether and how the government is influencing online speech and content moderation. After all, Congress has good reason to care about what the government itself is saying, especially if the government is using its own speech to violate the Free Speech Clause. Congress has a constitutional duty to oversee (though not intrude on) the executive branch’s faithful execution of the laws Congress has passed.

Lately, moreover, the executive branch has indeed displayed an unhealthy desire to control constitutionally protected expression. Government officials now routinely jawbone social media platforms over content moderation. There were Surgeon General Vivek Murthy’s guidelines on “health misinformation,” issued—the platforms may have noticed—amid a push by the Biden administration to expose platforms to litigation over “misinformation” by paring back their Section 230 protection. Biden’s then-Press Secretary Jen Psaki announced that the administration was flagging posts for platforms to remove. What’s worse, she declared that a ban from one social media platform should trigger a ban from all platforms. And then there was the notorious “Disinformation Governance Board”—a body whose name was dystopian, whose powers were ill-defined, whose rollout was ham-fisted, and whose brief existence unsettled all but the most sanguine proponents of government power. It can hardly be said that there’s nothing worth investigating.

The First Amendment bars the government from censoring speech it doesn’t like—even speech that might be called “misinformation.” The state may try to influence speech indirectly—it is allowed, within limits, to express its opinion about others’ speech—but that doesn’t mean doing so is a good idea. The government shouldn’t be telling social media platforms what content to allow, much as it shouldn’t be telling newspapers what stories to print.

Misguided though they may be, however, none of the government’s efforts—to this point—have violated the First Amendment. The government has not ordered platforms to remove or ban specific content. It has not issued threats that rise to the level of government coercion. And it has not co-opted the platforms in a manner that would turn them into state actors. If anything, the right’s ongoing lawsuits alleging otherwise have helped reveal a quite different problem: that the platforms are all too receptive to government input. But agreeing with the government does not make one’s actions attributable to the government.

The “Twitter Files”—which helped inspire, and will drive much of, the subcommittee’s investigation—change precisely none of this. Much misunderstood and even more misrepresented, the information released via Elon Musk’s surrogates actually undercuts the narrative that the federal government is dictating the platforms’ editorial decisions. 

We were promised evidence that the FBI and the federal government conspired with platforms to squash the Hunter Biden laptop story. Instead, we learned—as “Twitter Files” player Matt Taibbi himself put it—that “there’s no evidence … of any government involvement.” Messages to Twitter sent by the Biden campaign, we were told, amounted to a bona fide First Amendment violation. But a non-state actor lobbying a non-state actor does not a state action make. Such lobbying by political campaigns is common—and, in many instances, even proper. (Many of the tweets the Biden campaign flagged contained links to leaked nude photos of Hunter Biden. Even political candidates may try to defend their families’ privacy.)

Yet another “Twitter Files” document dump showed Twitter receiving payments from the FBI. This, we heard, definitively revealed the Grand Conspiracy to Censor Conservatives. Except that the payments were simply statutorily mandated reimbursements for expenses Twitter incurred replying to court-ordered requests for investigatory information.

So although there might well be issues regarding government jawboning worth investigating, you can be forgiven for doubting that the House GOP, proceeding through its “weaponization” subcommittee, is up to the task of seriously investigating them. Judging from past performance, the Republicans who control the body will use its hearings to emit great waves of impotent, performative, largely unintelligible sound. “The yells and animal noises” of parliamentary debates, Auberon Waugh wrote, have nothing to do with principles or policy. “They are cries of pain and anger, mingled with hatred and envy, at the spectacle of another group exercising the ‘power’ which the first group covets.” That will describe Republican-run Big Tech hearings to a tee.

The GOP is not fighting to stop so-called “censorship”; it’s fighting to stop so-called “censorship” performed by those they dislike. When Musk suspended some journalists from Twitter—on trumped-up charges, no less—many on the right responded with whoops of glee. That Musk had just engaged in precisely the sort of conduct those pundits had long denounced was of no consequence. Indeed, when some on the left pointed out that the suspensions were arbitrary, impulsive, and imposed under false pretenses, their remarks launched a thousand conservative op-eds crowing about progressive hypocrisy. (There should be a long German word for shouting “Hypocrite!” at someone as you pass by him on the flip-flop road.)

Choking on outrage, the contemporary political right has descended into practicing “Who, whom?” politics of the crassest sort. House Republicans have no problem with “weaponizing” the government, so long as they’re the ones doing the “weaponizing.” This explains how they can rail against a government campaign to reduce COVID misinformation on social media while also arguing that Section 230, the law that gives social media platforms the legal breathing room to host sketchy content to begin with, should be scrapped.

If you believe for one moment that Kevin McCarthy, Jim Jordan, and their myrmidons truly support free speech on the Internet, we’ve got beachfront property in Kansas to sell you. There was no limit to Waugh’s disdain for such men. Until the public “accepts that the urge to power is a personality disorder in its own right,” he said, “like the urge to sexual congress with children or the taste for rubber underwear, there will always be a danger of circumstances arising which persuade ordinary people to start listening to politicians … and taking them seriously.” A bit over the top, to be sure—though not in this case.

Posted on Techdirt - 21 July 2022 @ 01:40pm

Two Dogmas Of The Free Speech Panic

Antonio García Martínez recently invited me on his podcast, The Pull Request. I was thrilled. Antonio is witty, charming, and intimidatingly brilliant (he was a PhD student in physics at Berkeley, and it shows). We did the episode, and we had a great time. But we never got to an important topic—Antonio’s take on free speech and the Internet.

In April, Antonio released a piece on his Substack, “Freeze peach and the Internet,” in which he asserts the existence of a “‘content moderation’ regime that is utterly re-defining speech in liberal societies.” That “regime” wants, Antonio contends, to “arbitrate truth and regulate online behavior for the sake of some supposed greater good.” It is opposed by those who still support freedom of speech. Antonio believes that the “regime” and its opponents are locked in an epic battle, and that we all must pick a side.

I’m not sure what to make of some of Antonio’s claims. We’re told, for instance, that “freedom of reach is freedom of speech”—which sounds like a nod to the New Left’s call, in the 1960s and 70s, to seize “the means of communication.” But then we’re told that “Twitter isn’t obligated to give you reach if user interest in your speech is low.” So Antonio is not demanding reach equality. “It’s simply not the case,” he says, “that freedom of speech is some legal binary switched between an abstract allow/not-allow state.” Maybe, then, the point is that we must think about the effects of algorithmic amplification. Who is ignoring or attacking that point, I do not know.

At any rate, a general critique of Antonio’s article this post is not.

In 1951 Willard Van Orman Quine, one of the great analytic philosophers of the twentieth century, wrote a short paper called “Two Dogmas of Empiricism.” Quine put to the torch two key assumptions made by the logical positivists, a philosophical school popular in the first half of the century. Antonio, in his piece, promotes two key assumptions commonly made by those who fear “Big Tech censorship.” If Mike Masnick can riff on Arrow’s impossibility theorem to explain why content moderation is so difficult, I figure I can riff on Quine’s “dogmas” paper to explore two ways in which the fears of online “censorship” by private platforms are overblown. As we’re about to see, in fact, Quine’s work can teach us something valuable about content moderation.

Antonio’s first dogma is the belief that either you’re for free speech, or you’re not—you’re for the censors and the would-be arbiters of truth. His second is the belief that Twitter is the “public square,” and that the state of the restrictions there is the proper gauge of the state of free speech in our nation as a whole. With apologies to H.L. Mencken, these dogmas are clear, simple, and wrong.

Dogma #1: Free Speech: With Us or Against Us

AGM insists that the debate about content moderation boils down to a single overriding divide. “The real issue,” he says—the issue “the consensus pro-censorship crowd will never directly address”—is this:

Do you think freedom of speech includes the right to say and believe obnoxious stupid shit that’s almost certainly false, or do you feel platforms have the responsibility to arbitrate truth and regulate online behavior for the sake of some supposed greater good?

That’s it. “If you think” that “dumb and even offensive speech” is “protected speech,” you’re “on the Elon [Musk] side of this debate.” Otherwise, you think that “platforms should be putting their fingers on the scales,” and you’re therefore on “the anti-Elon” side. As if to add an exclamation point, Antonio declares: “Some countries have real free speech, and some countries have monarchs on their coins.” (I’ve seen it said, in a similar vein, that all anyone “really” cares about is “political censorship,” and that that’s the key issue the “consensus pro-censorship crowd” won’t grapple with.)

Antonio presents a nice, neat dividing line. There’s the stuff no one likes—Antonio points to dick pics, beheading videos, child sexual abuse material, and hate speech that incites violence—and then there’s people’s opinions. All the talk of content moderation is just obfuscation—an elaborate effort to hide this clear line. “Quibbling over the precise content policy in the pro-content moderation view,” Antonio warns, “is just haggling over implementation details, and essentially ceding the field to that side of the debate.”

The logical positivists, too, wanted some nice, neat lines. Bear with me.

Like most philosophers, the LPs wanted to know what we can know. One reason arguments often go in circles, or bog down in confusion, is that humans make a lot of statements that aren’t so much wrong as simply meaningless. Many sentences don’t connect to anything in the real world over which a productive argument can be had. (Extreme example: “the Absolute enters into, but is itself incapable of, evolution and progress.”) The LPs wanted to separate the wheat (statements of knowledge) from the chaff (metaphysical gobbledygook, empty emotive utterances, tribal call signs, etc.). To that end, they came up with something called the verification principle.

In 1936 a brash young thinker named A.J. Ayer—the AGM of early twentieth century philosophy—published a crisp and majestic but (as Ayer himself later admitted) often mistaken book, Language, Truth & Logic, in which he set forth the verification principle in its most succinct form. Can observation of the world convince us of the likely truth or falsity of a statement? If so, the statement can be verified. And “a sentence,” Ayer argued, “says nothing unless it is empirically verifiable.” That’s it.

Problem: mathematics and formal logic seem to reveal useful—indeed, surprising—things about the world, but without adhering to the verification principle. In the LPs’ view, though, this was just a wrinkle. They postulated a distinction between good, juicy “synthetic” statements that can be verified, and drab old “analytic” statements that, according to (young) Ayer, are just games we play with definitions. (“A being whose intellect was infinitely powerful would take no interest in logic and mathematics. For he would be able to see at a glance everything that his definitions implied[.]”)

So the LPs had two dogmas: that a sentence either does or does not refer to immediate experience, and that a sentence can be analytic or synthetic. But as Quine explained in his paper, these pat categories are rubbish. He addressed the latter dogma first, raising a number of problems with it that aren’t worth getting into here. (For one thing, definitions are set by human convention; their “correct” use is open to empirical debate.) He then took aim at the verification principle—or, as he put it, the “dogma of reductionism”—itself.

The logical positivists went wrong, Quine observed, in supposing “that each statement, taken in isolation from its fellows, can admit of confirmation or infirmation.” It’s “misleading to speak of the empirical content of an individual statement,” he explained, because statements “face the tribunal of sense experience not individually but only as a corporate body.” There aren’t two piles of statements—those that can be verified and those that can’t. Rather, “the totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even pure mathematics and logic,” is a continuous “man-made fabric.” As we learn new things, “truth values have to be redistributed over some of our statements. Re-evaluation of some statements entails re-evaluation of others.” Our knowledge is not a barrel of apples that we go through, apple-by-apple, keeping the ripe ones and tossing the rotten. It is, in the words of philosopher Simon Blackburn, a “jelly of belief,” the whole of which “quiver[s] in reaction to ‘recalcitrant’ or surprising experience.”

See how this ties into content moderation? Steve Bannon was booted from Twitter because he said: “I’d put [Anthony Fauci’s and Christopher Wray’s] heads on pikes. Right. I’d put them at the two corners of the White House. As a warning to federal bureaucrats: Either get with the program or you’re gone.” Is this just an outlandish opinion—some “obnoxious stupid shit that’s almost certainly false”—or is it an incitement to violence? Why is this statement different from, say, “I’d put Gentle’s and Funshine’s heads on pikes . . . as a warning to the other Care Bears”?

When Donald Trump told the January 6 rioters, “We love you. You’re very special,” was that political speech? Or was it sedition? As with “heads on pikes,” the statement itself won’t answer that question for you. The same problem arises when Senate candidate Eric Greitens invites you to go “RINO hunting,” or when a rightwing pundit announces that the Constitution is “null and void.” And who says we must look at each piece of content in isolation? Say the Oath Keepers are prevalent on your platform. They’re not planning an insurrection right now; they’re just riling each other up and getting their message out and recruiting. Is this just (dumb) political speech? Or is it more like a slowly developing beheading video? (If a platform says, “Don’t care where you go, guys, but you can’t stay here,” is it time to put monarchs on our coins?)

Similar issues arise with harassment. Doxxing, deadnaming, coordinated pile-ons, racist code words, Pepe memes—all present line-drawing issues that can’t be resolved with appeals to a simple divide between bad opinions and bad behavior. In each instance, we have no choice but to “quibbl[e] over the precise content policy.” Disagreement will reign, moreover, because each of us will enter the debate with a distinct set of political, cultural, contextual, and experiential priors. To some people, Jordan Peterson deadnaming Elliot Page is obviously harassment. To others (including, I confess, myself), his doing so pretty clearly falls within the rough-and-tumble of public debate. But that disagreement is not, at bottom, about that individual piece of content; it’s about the entire panoply of clashing priors.

It’s great that we have acerbic polemicists like Antonio. I’m glad that he’s out there pushing his conception of freedom and decrying safety-ism. (He’s on his strongest footing, I suppose, when he complains about the labeling, “fact-checking,” and blocking of Covid claims.) I hope that he and his swashbuckling ilk never stop defending “our American birthright of constant and cantankerous rebellion against the status quo.” But it’s just not true that there’s a free speech crowd and a pro-censorship crowd and nothing in between. Content moderation is complicated and difficult, and people’s views about it sit on a continuum.

Dogma #2: The Public Square, Website-by-Website

Antonio’s other dogma is the view—held by many—that Twitter is in some meaningful sense the “public square.” Antonio has some pointed criticisms for those who believe that “Twitter isn’t the public forum, and as such shouldn’t be treated with the sacrosanct respect we typically imbue anything First Amendment-related.”

As the second part of that sentence suggests, AGM gets to his destination by an idiosyncratic route. He seems to think that, in other people’s minds, the public square is where solemn and civilized discussion of public issues occurs. But as Antonio points out, there’s never been such a place. We’re Americans; we’ve always hashed things out by shouting at each other. Today, one of the places where we shout at each other is on Twitter. Ergo, in Antonio’s mind, Twitter is the public square.

I don’t get it. “Everyone invoking some fusty idea of ‘debate’ or even a healthy ‘marketplace of ideas,’” Antonio writes, “is citing bygone utopias that never were, and never will be.” Who is this “everyone”? Anyway, just because there’s a place where debate occurs does not mean that that place is the “public square.” In 2019 Antonio was saying that we should break up Facebook because it has a “stranglehold” on “attention.” So why isn’t it the public square? Perhaps it’s both Twitter and Facebook? But then what about Substack—where AGM published his piece? What about the many podcast platforms that carry his conversations? What about Rumble and TikTok? Heck, what about Techdirt? The “public square”—if we really must go about trying to precisely define such a thing—is not Twitter but the Internet.

Antonio appeals to the “conditions our democracy was born in.” The “vicious, ribald, scabrous, offensive, and often violent tumult of the Founders’ era,” he notes, “makes modern Twitter look like a Mormon picnic by comparison.” This begs the question. Look at what Americans are saying on the Internet as a whole; it’s as vicious, ribald, scabrous, offensive, and violent as you please. If what matters is that our discourse resemble that of the founding era, we can rest easy. Ben Franklin’s brother used his publication, The New-England Courant, to rail against smallpox inoculation; modern anti-vaxxers use Gab to similar effect. James Callender used newspapers and pamphlets to viciously (but often accurately) attack Adams, Hamilton, and Jefferson; Matt Taibbi and Glenn Greenwald use newsletters and podcasts to viciously (but at times accurately) attack Joe Biden and Hillary Clinton. In his Porcupine’s Gazette, William Cobbett cried, “Professions of impartiality I shall make none”; the website American Greatness boasts about being called “a hotbed of far-right Trumpist nationalism.” Plus ça change . . .

Antonio says that we need “unfettered debate” in a “public square” that we “shar[e]” with “our despised political enemies.” Surveying the Internet, I’d say we have exactly that.

Now, I don’t deny that there’s a swarm of activists, researchers, academics, columnists, politicians, and government officials—not to mention the tech companies themselves—that makes up what journalist Joe Bernstein calls “Big Disinfo.” Not surprisingly, the old gatekeepers of information, along with those who once benefited from greater information gatekeeping, are upset that social media allows information to bypass gates. “That the most prestigious liberal institutions of the pre-digital age are the most invested in fighting disinformation,” Bernstein submits, “reveals a lot about what they stand to lose, or hope to regain.” Indeed.

But so what? There’s a certain irony here. The people most convinced that our elite institutions are inept and crumbling are also the ones most concerned that those institutions will take over the Internet, throttle speech, and (toughest of all) reshape opinion—all, presumably, without violating the First Amendment. Are the forces of Big Disinfo really that competent? Please.

Antonio and I are both fans of Martin Gurri, whose 2014 book The Revolt of the Public is basically a long meditation on why Antonio’s “content-moderation regime” can’t succeed. “A curious thing happens to sources of information under conditions of scarcity,” Gurri proposes. “They become authoritative.” Thanks to the Internet, however, we are living through an unprecedented information explosion. When there’s information abundance, no claim is authoritative. Many claims must compete with each other. All claims (but especially elite claims) are questioned, challenged, and ridiculed. (In this telling, our current tumult is more vicious, ribald, etc., than that of the founding era.) Unable to shut down competing claims, elites can’t speak with authority. Unable to speak with authority, they can’t shut down competing claims.

Short of an asteroid strike, World War III, the rise of a thoroughgoing despotism, or some kind of Butlerian jihad, the flow of information can’t be stopped.

Posted on Techdirt - 17 August 2021 @ 12:09pm

Why Is The Republican Party Obsessed With Social Media?

“In 1970,” observes Edmund Fawcett in his recent survey of political conservatism, “the best predictor of high conservative alignment in voting was a college education.” “Now,” he notes, “it is the reverse.” Many other statistics sing this tune of political realignment. Whereas the counties Al Gore won in the 2000 election accounted for about half the nation’s economic output, for instance, the counties Joe Biden won in 2020 account for more than 70 percent of it. Many observers have tried to capture this shift’s cultural significance. You could say that the Republicans have rejected Apollo for Dionysus. You could conclude that they have embraced Foucault and postmodern philosophy. Or you could cut to the quick, as David Brooks does, and acknowledge that “much of the Republican Party has become detached from reality.”

This political rearrangement has been helped along by much larger historical forces, among them the decline of social trust, the collapse of Christianity, the erosion of faith in experts and institutions, the flattening of authority structures and information flows, and the accelerating pace of technological change. Put to one side the knotty question whether the benefits of modernity outweigh the costs. No one can deny the size and sweep of liberal capitalist disruption.

Are Republicans grappling with the megatrends reshaping their party, society, and the world? Are the big disruptions sparking big thoughts that lead to big policy proposals? In a word, no. In fact, the party’s leaders have rallied around something remarkably small. Not for them the pursuit of the grand contemporary challenges. Their first thought, it often seems, is for how social media companies treat the extremists, conspiracy theorists, and other fringe characters on their websites. Republican legislators emit plumes of bills on the subject. Rightwing scholars and pundits take a bottomless interest in it (and in how to circumvent the companies’ First Amendment right to moderate content as they see fit). Over and over, Republican politicians say that Big Tech has become “Big Brother,” that Twitter and Facebook pose an “existential threat” to free speech, and that Jack Dorsey and Mark Zuckerberg are “out to get” conservatives. They say these things so often—they spend so much time saying them, to the exclusion of saying other things about other issues—that their voters can almost be forgiven for thinking them true.

By now people simply assume that disdain for social media firms is a key plank of the GOP platform. Should it be? Actually, that Republicans devote so much energy to denouncing content moderation is exceedingly odd. Not only is the supposed problem trivial; there is arguably, even from the perspective of a conservative, no problem at all. It is doubtful that content moderation harms the Republican Party. Some rightwing commentators all but admit as much. As David Harsanyi, an outspoken critic of Twitter’s and Facebook’s content-moderation practices, sees it, “There is no evidence that regulations, whether enforced by corporate stooges or government itself, make us safer or alter human nature or stop people from believing stupid things.” Which is to say that major social media sites have not stopped, and perhaps cannot stop, abhorrent views, crackpot views, or rightwing populist views from spreading, even thriving, online.

So why the clamor? Because the claim that average people are being silenced by “Silicon Valley oligarchs” is simple. It’s easy to grasp. It lends itself to the perpetual partisan fund drive. Above all, it’s emotional.

The right’s fixation on online speech is, at bottom, about dignity. Your rustic aunt—the one who sneers, “The election was stolen, and there’s nothing you can say to convince me otherwise!”—might be unrefined. She might be stubborn. She might even be a bit batty. But she also feels frustrated, as she struggles in earnest to make sense of a fast-evolving world. And she feels ignored, if not maligned, by journalists and intellectuals who dismiss her as a rube and a bigot. She feels treated unfairly. Whether the treatment is truly unfair is beside the point. “When you tell a large chunk of the country that their voices are not worth hearing,” writes Brooks, “they are going to react badly—and they have.”

Here as elsewhere, though, the GOP cannot square what its voters purport to want with how they so obviously feel. On the one hand, many on the right seek precisely what conservatives, in the traditional sense of the word, have sought since the early nineteenth century: security and stability in the face of innovation and churn. “To ordinary people shaken by a hurricane of social change that nobody yet understands,” says Fawcett, “the hard right promises a longed-for security of life, imagined as a common shelter.” On the other hand, the populist right brims with contempt for a system that rejects it. Its members therefore value their ability to use social media to mock academics, journalists, government officials, and other figures of authority. Theirs is (to return to Fawcett) a “gospel [that] sets itself as at war with a conservatism of prudence and moderation.”

Think of it this way. A party that celebrates the 1950s as a simpler, happier time of community feeling and patriotic elan, but that believes trolls getting exiled to Parler, Gettr, and Gab is among the most pressing problems of our moment, is by definition a neurotic mess. “The unreconciled right,” in Fawcett’s words, “cannot be said to have a coherent, thought-through critique of present-day liberal orthodoxy, let alone a positive conservative orthodoxy.” What it has instead is merely “a powerful set of rhetorical themes,” one of the most prominent of which is the accusation that liberals “stop conservatives from telling the truth about a desolate state of affairs.” Hence Republicans’ hollow obsession with what can and cannot be said on Twitter or Facebook.

Corbin Barthold is internet policy counsel at TechFreedom.
