ari.cohn's Techdirt Profile

Posted on Techdirt - 4 May 2022 @ 12:00pm

Musk, Twitter, Bluesky & The Future Of Content Moderation (Part II)

In Part I, we explained why the First Amendment doesn’t get Musk to where he seemingly wants to be: If Twitter were truly, legally the “town square” (i.e., public forum) he wants it to be, it couldn’t do certain things Musk wants (cracking down on spam, authenticating users, banning things equivalent to “shouting fire in a crowded theatre,” etc.). Twitter also couldn’t do the things it clearly needs to do to continue to attract the critical mass of users that make the site worth buying, let alone attract those—eight times as many Americans—who don’t use Twitter every day. 

So what, exactly, should Twitter do to become a more meaningful “de facto town square,” as Musk puts it?

What Objectives Should Guide Content Moderation?

Even existing alternative social media networks claim to offer the kind of neutrality that Musk contemplates—but have failed to deliver. In June 2020, John Matze, Parler’s founder and then its CEO, proudly declared the site to be “a community town square, an open town square, with no censorship,” adding, “if you can say it on the street of New York, you can say it on Parler.” Yet that same day, Matze also bragged of “banning trolls” from the left.

Likewise, GETTR’s CEO has bragged about tracking, catching, and deleting “left-of-center” content, with little clarity about what that might mean. Musk promises to avoid such hypocrisy.

Let’s take Musk at his word. The more interesting thing about GETTR, Parler and other alternative apps that claim to be “town squares” is just how much discretion they allow themselves to moderate content—and how much content moderation they do. 

Even in mid-2020, Parler reserved the right to “remove any content and terminate your access to the Services at any time and for any reason or no reason,” adding only a vague aspiration: “although Parler endeavors to allow all free speech that is lawful and does not infringe the legal rights of others.” Today, Parler forbids any user to “harass, abuse, insult, harm, defame, slander, disparage, intimidate, or discriminate based on gender, sexual orientation, religion, ethnicity, race, age, national origin, or disability.” Despite claiming that it “defends free speech,” GETTR bans racial slurs such as those posted by former Blaze TV host Jon Miller, as well as white nationalist codewords.

Why do these supposed free-speech-absolutist sites remove perfectly lawful content? Would you spend more or less time on a site that turned a blind eye to racial slurs? By the same token, would you spend more or less time on Twitter if the site stopped removing content denying the Holocaust, advocating new genocides, promoting violence, showing animals being tortured, encouraging teenagers to cut or even kill themselves, and so on? Would you want to be part of such a community? Would any reputable advertiser want to be associated with it? That platforms ostensibly starting with the same goal as Musk have reserved broad discretion to make these content moderation decisions underscores the difficulty in drawing these lines and balancing competing interests.

Musk may not care about alienating advertisers, but all social media platforms moderate some lawful content because it alienates potential users. Musk implicitly acknowledges this user-engagement imperative, at least when it comes to the other half of content moderation: deciding which content to recommend to users algorithmically—an essential feature of any social media site. (Few Twitter users activate the option to view their feeds in reverse-chronological order.) When TED’s Chris Anderson asked him about a tweet many people have flagged as “obnoxious,” Musk hedged: “obviously in a case where there’s perhaps a lot of controversy, that you would not want to necessarily promote that tweet.” Why? Because, presumably, it could alienate users. What is “obvious” is that the First Amendment would not allow the government to disfavor content merely because it is “controversial” or “obnoxious.”

Today, Twitter lets you block and mute other users. Some claim user empowerment should be enough to address users’ concerns—or that user empowerment just needs to work better. A former Twitter employee tells the Washington Post that Twitter has considered an “algorithm marketplace” in which users can choose different ways to view their feeds. Such algorithms could indeed make user-controlled filtering easier and more scalable. 
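
To make the idea concrete, here is a minimal sketch of what a pluggable feed-ranking "marketplace" might look like, assuming nothing about Twitter's actual code; the function names, fields, and posts are invented for illustration.

```python
from datetime import datetime, timezone

# Hypothetical post records; the fields are invented for illustration.
posts = [
    {"text": "Breaking news thread", "likes": 5400,
     "created": datetime(2022, 5, 4, 12, 0, tzinfo=timezone.utc)},
    {"text": "Photo of my cat", "likes": 120,
     "created": datetime(2022, 5, 4, 15, 30, tzinfo=timezone.utc)},
]

# Two interchangeable ranking functions a user could pick from a "marketplace".
def reverse_chronological(feed):
    return sorted(feed, key=lambda p: p["created"], reverse=True)

def engagement_weighted(feed):
    return sorted(feed, key=lambda p: p["likes"], reverse=True)

# The user's chosen algorithm decides the order of the same underlying posts.
chosen_algorithm = reverse_chronological
for post in chosen_algorithm(posts):
    print(post["text"])
```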

But such controls offer only “out of sight, out of mind” comfort. That won’t be enough if a harasser hounds your employer, colleagues, family, or friends—or organizes others, or creates new accounts, to harass you. Even sophisticated filtering won’t change the reality of what content is available on Twitter.

And herein lies the critical point: advertisers don’t want their brands associated with repugnant content even if their ads don’t appear next to that content. Likewise, most users care what kind of content a site allows even if they don’t see it. Remember, by default, everything said on Twitter is public—unlike the phone network. Few, if any, would associate the phone company with what’s said in private telephone communications. But every Tweet that isn’t posted to the rare private account can be seen by anyone. Reporters embed tweets in news stories. Broadcasters include screenshots in the evening news. If Twitter allows odious content, most Twitter users will see some of that one way or another—and they’ll hold Twitter responsible for deciding to allow it.

If you want to find such lawful but awful content, you can find it online somewhere. But is that enough? Should you be able to find it on Twitter, too? These are undoubtedly difficult questions on which many disagree; but they are unavoidable.

What, Exactly, Is the Virtual Town Square?

The idea of a virtual town square isn’t new, but what, precisely, that means has always been fuzzy, and lofty talk in a recent Supreme Court ruling greatly exacerbated that confusion. 

“Through the use of chat rooms,” proclaimed the Supreme Court in Reno v. ACLU (1997), “any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of Web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer.” The Court wasn’t declaring digital media to be public fora, stripped of First Amendment rights of their own. Rather, it said the opposite: digital publishers have the same First Amendment rights as traditional publishers. Thus, the Court struck down Congress’s first attempt to regulate online “indecency” to protect children, rejecting analogies to broadcasting, which rested on government licensing of a “‘scarce’ expressive commodity.” Unlike broadcasting, the Internet empowers anyone to speak; it just doesn’t guarantee them an audience.

In Packingham v. North Carolina (2017), citing Reno’s “town crier” language, the Court waxed even more lyrical: “By prohibiting sex offenders from using [social media], North Carolina with one broad stroke bars access to what for many are the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge.” This rhetorical flourish launched a thousand conservative op-eds—all claiming that social media were legally public fora like town squares. 

Of course, Packingham didn’t address that question; it merely held that governments can’t deny Internet access to those who have completed their sentences. Manhattan Community Access Corp. v. Halleck (2019) essentially answered the question, albeit in the slightly different context of public access cable channels: “merely hosting speech by others” doesn’t “transform private entities into” public fora.

The question facing Musk now is harder: what part, exactly, of the Internet should be treated as if it were a public forum—where anyone can say anything “within the bounds of the law”? The easiest way to understand the debate is the Open Systems Interconnection (OSI) model, which has guided thinking about the Internet since the late 1970s: a stack of seven layers running from the physical network at the bottom (layer 1) up to applications like Twitter at the top (layer 7).

Long before “net neutrality” was a policy buzzword, it described the longstanding operational state of the Internet: Internet service (broadband) providers don’t block, throttle, or discriminate against lawful Internet content. The sky didn’t fall when the Republican FCC repealed net neutrality rules in 2018. Indeed, nothing really changed: You can still send or receive lawful content exactly as before. ISPs promise to deliver connectivity to all lawful content. The Federal Trade Commission enforces those promises, as do state attorneys general. And, in upholding the FCC’s 2015 net neutrality rules over then-Judge Brett Kavanaugh’s arguments that they violated the First Amendment, the D.C. Circuit noted that the rules applied only to providers that “sell retail customers the ability to go anywhere (lawful) on the Internet.” The rules simply didn’t apply to “an ISP making sufficiently clear to potential customers that it provides a filtered service involving the ISP’s exercise of ‘editorial intervention.’”

In essence, Musk is talking about applying something like net neutrality principles, developed to govern the uncurated service ISPs offer at layers 1-3, to Twitter, which operates at layer 7—but with a major difference: Twitter can monitor all content, which ISPs can’t do. This means embroiling Twitter in trying to decide what content is lawful in a far, far deeper way than any ISP has ever attempted.
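
To illustrate that difference in what each layer can see, here is a toy sketch with all names, addresses, and rules invented: a layer-3 router forwards opaque packets by destination address, while a layer-7 application handles the message text itself and can therefore moderate it.

```python
# Toy contrast between layer 3 and layer 7 (all names and rules invented).
packet = {
    "dst_ip": "192.0.2.10",             # all a layer-3 router needs to read
    "payload": b"\x17\x03\x03\x00\x2a", # encrypted application data, opaque to the ISP
}

def isp_forward(pkt):
    # Layer 3: route on the destination address; the payload stays opaque.
    return f"forward packet to {pkt['dst_ip']}"

def application_moderate(tweet_text):
    # Layer 7: the service sees the actual content and can act on it.
    banned_phrases = {"buy followers now"}   # illustrative rule, not Twitter's
    return "remove" if any(p in tweet_text for p in banned_phrases) else "allow"

print(isp_forward(packet))                           # forward packet to 192.0.2.10
print(application_moderate("buy followers now!!!"))  # remove
```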

Implementing Twitter’s existing plans to offer users an “algorithm marketplace” would essentially mean creating a new layer of user control on top of Twitter. But Twitter has also been working on a different idea: creating a layer below Twitter, interconnecting all the Internet’s “soapboxes” into one, giant virtual town square while still preserving Twitter as a community within that square that most people feel comfortable participating in.

“Bluesky”: Decentralization While Preserving Twitter’s Brand

Jack Dorsey, former Twitter CEO, has been talking about “decentralizing” social media for over three years—leading some reporters to conclude that Dorsey and Musk “share similar views … promoting more free speech online.” In fact, their visions for Twitter seem to be very different: unlike Musk, Dorsey saw Twitter as a community that, like any community, requires curation.

In late 2019, Dorsey announced that Twitter would fund Bluesky, an independent project intended “to develop an open and decentralized standard for social media.” Bluesky “isn’t going to happen overnight,” Dorsey warned in 2019. “It will take many years to develop a sound, scalable, and usable decentralized standard for social media.” The project’s latest update detailed the many significant challenges facing the effort, but also noted real progress.

Twitter has a strong financial incentive to shake up social media: Bluesky would “allow us to access and contribute to a much larger corpus of public conversation.” That’s lofty talk for an obvious business imperative. Recall Metcalfe’s Law: a network’s impact grows with the square of the number of nodes in the network. Twitter (330 million active users worldwide) is a fraction the size of its “Big Tech” rivals: Facebook (2.4 billion), Instagram (1 billion), YouTube (1.9 billion) and TikTok. So it’s not surprising that Twitter’s market cap is a much smaller fraction of theirs—just 1/16 that of Facebook. Adopting Bluesky should dramatically increase the value of Twitter and smaller companies like Reddit (330 million users) and LinkedIn (560 million users) because Bluesky would allow users of each participating site to interact easily with content posted on other participating sites. Each site would be more an application or a “client” than a “platform”—just as Gmail and Outlook both use the same email protocols.

Dorsey also framed Bluesky as a way to address concerns about content moderation. Days after the January 6 insurrection, Dorsey defended Trump’s suspension from Twitter yet noted concerns about content moderation.

Dorsey acknowledged the need for more “transparency in our moderation operations,” but pointed to Bluesky as a more fundamental, structural solution.

Adopting Bluesky won’t change how each company does its own content moderation, but it would make those decisions much less consequential. Twitter could moderate content on Twitter, but not on the “public conversation layer.” No central authority could control that layer, just as with email protocols and Bitcoin. Twitter and other participating social networks would no longer be “platforms” for speech so much as applications (or “clients”) for viewing the public conversation layer, the universal “corpus” of social content.

Four years ago, Twitter banned Alex Jones for repeatedly violating rules against harassment. The conspiracy theorist par excellence moved to Gab, an alternative social network launched in 2017 that claims 15 million monthly visitors (an unverified number). On Gab, Jones now has only a quarter as many followers as he once had on Twitter. And because the site is much smaller overall, he gets much less engagement and attention than he once did. Metcalfe’s Law means fewer people talk about him.

Bluesky won’t get Alex Jones or his posts back on Twitter or other mainstream social media sites, but it might ensure that his content is available on the public conversation layer, where users of any app that doesn’t block him can see it. Thus, Jones could use his Gab account to seamlessly reach audiences on Parler, GETTR, Truth Social, or any other site using Bluesky that doesn’t ban him. Each of these sites, in turn, would have a strong incentive to adopt Bluesky because the protocol would make them more viable competitors to mainstream social media. Bluesky would turn Metcalfe’s Law to their advantage: no longer separate, tiny town squares, these sites would be ways of experiencing the same town square—only with a different set of filters.
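
A toy model of that arrangement, purely illustrative and not the actual Bluesky (AT Protocol) data model: every post lives in a shared layer, and each client applies its own moderation filter to decide what its users see.

```python
# Toy model of a shared "public conversation layer" with per-client filtering.
# Invented for illustration; this is not the actual Bluesky / AT Protocol model.
public_layer = [
    {"author": "alexjones", "text": "latest conspiracy theory"},
    {"author": "localreporter", "text": "city council meeting recap"},
]

# Each client applies its own moderation choices to the same shared corpus.
CLIENT_BLOCKLISTS = {
    "twitter": {"alexjones"},   # Twitter keeps curating its own community
    "gab": set(),               # another client chooses not to filter
}

def feed_for(client):
    blocked = CLIENT_BLOCKLISTS[client]
    return [post["text"] for post in public_layer if post["author"] not in blocked]

print(feed_for("twitter"))  # ['city council meeting recap']
print(feed_for("gab"))      # ['latest conspiracy theory', 'city council meeting recap']
```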

But Metcalfe’s Law cuts both ways: even if Twitter and other social media sites implemented Bluesky, so long as Twitter continues to moderate the likes of Alex Jones, the portion of the Bluesky-enabled “town square” that Jones can reach will be limited. Twitter would remain a curated community, a filter (or set of filters) for experiencing the “public conversation layer.” When first announcing Bluesky, Dorsey said the effort would be good for Twitter not only for allowing the company “to access and contribute to a much larger corpus of public conversation” but also because Twitter could “focus our efforts on building open recommendation algorithms which promote healthy conversation.” With user-generated content becoming more interchangeable across services—essentially a commodity—Twitter and other social media sites would compete on user experience.

Given this divergence in visions, it shouldn’t be surprising that Musk has never mentioned Bluesky. If he merely wanted to make Bluesky happen faster, he could pour money into the effort—an independent, open source project—without buying Twitter. He could help implement proposals to run the effort as a decentralized autonomous organization (DAO) to ensure its long-term independence from any effort to moderate content. Instead, Musk is focused on cutting back Twitter’s moderation of content—except where he wants more moderation. 

What Does Political Neutrality Really Mean?

Much of the popular debate over content moderation revolves around the perception that moderation practices are biased against certain political identities, beliefs, or viewpoints. Jack Dorsey responded to such concerns in a 2018 congressional hearing, telling lawmakers: “We don’t consider political viewpoints—period. Impartiality is our guiding principle.” Dorsey was invoking the First Amendment, which bars the government from discriminating based on content, speaker, or viewpoint. Musk has said something that sounds similar, but isn’t quite the same.

The First Amendment doesn’t require neutrality as to outcomes. If user behavior varies across the political spectrum, neutral enforcement of any neutral rule will produce what might look like politically “biased” results.
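
A worked example with invented numbers shows how this happens: apply one rule with perfect neutrality to two equally sized groups whose violation rates differ, and the suspension counts come out lopsided anyway.

```python
# Invented numbers: a neutrally enforced rule still yields lopsided outcomes
# when violation rates differ between two equally sized groups of users.
group_a_users, group_a_violation_rate = 1_000_000, 0.002   # 0.2% break the rule
group_b_users, group_b_violation_rate = 1_000_000, 0.010   # 1.0% break the rule

suspensions_a = group_a_users * group_a_violation_rate     # 2,000 suspensions
suspensions_b = group_b_users * group_b_violation_rate     # 10,000 suspensions

# A 5:1 suspension ratio, even though the rule was applied with zero bias.
print(suspensions_b / suspensions_a)   # 5.0
```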

Take, for example, a study routinely invoked by conservatives that purportedly shows Twitter’s political bias in the 2016 election. Richard Hanania, a political scientist at Columbia University, concluded that Twitter suspended Trump supporters more often than Clinton supporters at a ratio of 22:1. Hanania postulated that this meant Trump supporters would have to be at least four times as likely to violate neutrally applied rules to rule out Twitter’s political bias—and dismissed such a possibility as implausible. But Hanania’s study was based on a tiny sample of only reported (i.e., newsworthy) suspensions—just a small percentage of overall content moderation. And when one bothers to actually look at Hanania’s data—something none of the many conservatives who have since invoked his study seem to have done—one finds exactly those you’d expect to be several times more likely to violate neutrally-applied rules: the American Nazi Party, leading white supremacists including David Duke, Richard Spencer, Jared Taylor, Alex Jones, Charlottesville “Unite the Right” organizer James Allsup, and various Proud Boys. 

Was Twitter non-neutral because it didn’t ban an equal number of “far left” and “far right” users? Or because the “right” was incensed by endless reporting in leading outlets like The Wall Street Journal of a study purporting to show that “conservatives” were being disproportionately “censored”?

There’s no way to assess Musk’s outcome-based conception of neutrality without knowing a lot more about objectionable content on the site. We don’t know how many accounts were reported, for what reasons, and what happened to those complaints. There is no clear denominator that allows for meaningful measurements—leaving only self-serving speculation about how content moderation is or is not biased. This is one problem Musk can do something about.

Greater Transparency Would Help, But…

After telling Anderson “I’m not saying that I have all the answers here,” Musk fell back on something simpler than line-drawing in content moderation: increased transparency. If Twitter should “make any changes to people’s tweets, if they’re emphasized or de-emphasized, that action should be made apparent so anyone can see that action’s been taken, so there’s no behind the scenes manipulation, either algorithmically or manually.” Such tweet-by-tweet reporting sounds appealing in principle, but it’s hard to know what it will mean in practice. What kind of transparency will users actually find useful? After all, all tweets are “emphasized or de-emphasized” to some degree; that is simply what Twitter’s recommendation algorithm does.

Greater transparency, implemented well, could indeed increase trust in Twitter’s impartiality. But ultimately, only large-scale statistical analysis can resolve claims of systemic bias. Twitter could certainly help to facilitate such research by providing data—and perhaps funding—to bona fide researchers.

More problematic is Musk’s suggestion that Twitter’s content moderation algorithm should be “open source” so anyone could see it. There is an obvious reason why such algorithms aren’t open source: revealing precisely how a site decides what content to recommend would make it easy to manipulate the algorithm. This is especially true for those most determined to abuse the site: the spambots on whom Musk has declared war. Making Twitter’s content moderation less opaque will have to be done carefully, lest it foster the abuses that Musk recognizes as making Twitter a less valuable place for conversation.
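
A toy ranking function, with weights invented for illustration, shows the problem: once the exact scoring formula is public, a spam operation can pick the cheapest signal to fake and optimize against it.

```python
# Toy ranking formula with invented weights, to show the gaming problem:
# once the exact weights are public, spammers can optimize against them.
WEIGHTS = {"likes": 1.0, "replies": 2.0, "age_hours": -0.5}

def score(tweet):
    return sum(WEIGHTS[signal] * tweet[signal] for signal in WEIGHTS)

organic = {"likes": 250, "replies": 40, "age_hours": 6}   # ordinary popular tweet
spam = {"likes": 0, "replies": 400, "age_hours": 1}       # bot ring fakes the
                                                          # highest-weighted signal

print(score(organic))  # 327.0
print(score(spam))     # 799.5 -- the spam outranks the organic post
```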

Public Officials Shouldn’t Be Able to Block Users

Making Twitter more like a public forum is, in short, vastly more complicated than Musk suggests. But there is one easy thing Twitter could do to, quite literally, enforce the First Amendment. Courts have repeatedly found that government officials can violate the First Amendment by blocking commenters on their official accounts. After then-President Trump blocked several users from replying to his tweets, the users sued. The Second Circuit held that Trump violated the First Amendment by blocking users because Trump’s Twitter account was, with respect to what he could do, a public forum. The Supreme Court vacated the Second Circuit’s decision—Trump left office, so the case was moot—but Justice Thomas indicated that some aspects of government officials’ accounts seem like constitutionally protected spaces. Unless a user’s conduct constitutes harassment, government accounts likely can’t block them without violating the First Amendment. Whatever courts ultimately decide, Twitter could easily implement this principle.

Conclusion

Like Musk, we definitely “don’t have all the answers here.” In introducing what we know as the “marketplace of ideas” to First Amendment doctrine, Justice Holmes’s famous dissent in Abrams v. United States (1919) said this of the First Amendment: “It is an experiment, as all life is an experiment.” The same could be said of the Internet, Twitter, and content moderation. 

The First Amendment may help guide Musk’s experimentation with content moderation, but it simply isn’t the precise roadmap he imagines—at least, not for making Twitter the “town square” everyone wants to go participate in actively. Bluesky offers the best of both worlds: a much more meaningful town square where anyone can say anything, but also a community that continues to thrive. 

Berin Szóka (@BerinSzoka) is President of TechFreedom. Ari Cohn (@AriCohn) is Free Speech Counsel at TechFreedom. Both are lawyers focused on the First Amendment’s application to the Internet.

Posted on Techdirt - 4 May 2022 @ 09:30am

Musk, Twitter & Why The First Amendment Can’t Resolve Content Moderation (Part I)

“Twitter has become the de facto town square,” proclaims Elon Musk. “So, it’s really important that people have both the reality and the perception that they’re able to speak freely within the bounds of the law.” When pressed by TED’s Chris Anderson, he hedged: “I’m not saying that I have all the answers here.” Now, after agreeing to buy Twitter, his position is less clear: “I am against censorship that goes far beyond the law.” Does he mean either position literally?

Musk wants Twitter to stop making contentious decisions about speech. “[G]oing beyond the law is contrary to the will of the people,” he declares. Just following the First Amendment, he imagines, is what the people want. Is it, though? The First Amendment is far, far more absolutist than Musk realizes. 

Remember the neo-Nazis with burning torches screaming “the Jews will not replace us!”? The First Amendment required Charlottesville to allow that demonstration. Some of the marchers were arrested and prosecuted for committing acts of violence; one even killed a counter-protester with his car. The First Amendment permits the government to punish violent conduct but—contrary to what Musk believes—almost none of the speech associated with it.

The Constitution protects “freedom for the thought that we hate,” as Justice Oliver Wendell Holmes declared in a 1929 dissent that has become the bedrock of modern First Amendment jurisprudence. In most of the places where we speak, the First Amendment does not set limits on what speech the host, platform, proprietor, station, or publication may block or reject. The exceptions are few: actual town squares, company-owned towns, and the like—but not social media, as every court to decide the issue has held.

Musk wants to treat Twitter as if it were legally a public forum. A laudable impulse—and of course Musk has every legal right to do that. But does he really want to? His own statements indicate not. And on a practical level, it would not make much sense. Allowing anyone to say anything lawful, or even almost anything lawful, would make Twitter a less useful, less vibrant virtual town square than it is today. It might even set the site on a downward spiral from which it never recovers.

Can Musk have it both ways? Can Twitter help ensure that everyone has a soapbox, however appalling their speech, without alienating both users and the advertisers who sustain the site?  Twitter is already working on a way to do just that—by funding Bluesky—but Musk doesn’t seem interested. Nor does he seem interested in other technical and institutional improvements Twitter could make to address concerns about arbitrary content moderation. None of these reforms would achieve what seems to be Musk’s real goal: politically neutral outcomes. We’ll discuss all this in Part II.

How Much Might Twitter’s Business Model Change?

A decade ago, a Twitter executive famously described the company as “the free speech wing of the free speech party.” Musk may imagine returning to some purer, freer version of Twitter when he says “I don’t care about the economics at all.” But in fact, increasing Twitter’s value as a “town square” will require Twitter to continue striking a careful balance between what individual users may say and the kind of environment that many people want to use regularly.

User Growth. A traditional public forum (like Lee Park in Charlottesville) is indifferent to whether people choose to use it. Its function is simply to provide a space for people to speak. But if Musk didn’t care how many people used Twitter, he’d buy an existing site like Parler or build a new one. He values Twitter for the same reason any network is valuable: network effects. Digital markets have always been ruled by Metcalfe’s Law: the impact of any network is proportional to the square of the number of nodes in the network.

No, not all “nodes” are equal. Twitter is especially popular among journalists, politicians and certain influencers. Yet the site has only 39.6 million active daily U.S. users. That may make Twitter something like ten times larger than Parler, but it’s only one-seventh the size of Facebook—and only the world’s fifteenth-largest social network. To some in the “very online” set, Twitter may seem like everything, but 240 million Americans age 13+ don’t use Twitter every day. Quadrupling Twitter’s user base would make the site still only a little more than half as large as Facebook, but Metcalfe’s Law suggests that would make Twitter roughly sixteen times more impactful than it is today.
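
The arithmetic behind that last claim is simple; here is a short sketch using the daily U.S. user figure above.

```python
# The arithmetic behind the claim, using the daily U.S. user figure above.
def metcalfe_impact(users):
    # Metcalfe's Law: a network's impact scales with the square of its size.
    return users ** 2

twitter_daily_us_users = 39.6e6
quadrupled = 4 * twitter_daily_us_users

print(metcalfe_impact(quadrupled) / metcalfe_impact(twitter_daily_us_users))  # 16.0
```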

Of course, trying to maximize user growth is exactly what Twitter has been doing since 2006. It’s a much harder challenge than for Facebook or other sites premised on existing connections. Getting more people engaged on Twitter requires making them comfortable with content from people they don’t know offline. Twitter moderates harmful content primarily to cultivate a community where the timid can express themselves, where moms and grandpas feel comfortable, too. Very few Americans want to be anywhere near anything like the Charlottesville rally—whether offline or online.

User Engagement. Twitter’s critics allege the site highlights the most polarizing, sensationalist content because it drives engagement on the site. It’s certainly possible that a company less focused on its bottom line might change its algorithms to focus on more boring content. Whether that would make the site more or less useful as a town square is the kind of subjective value judgment that would be difficult to justify under the First Amendment if the government attempted to legislate it.

But maximizing Twitter’s “town squareness” means more than maximizing “time on site”—the gold standard for most sites. Musk will need to account for users’ willingness to actually engage in dialogue on the site. 

Short of leaving Twitter altogether, overwhelmed and disgusted users may turn off notifications for “mentions” of them, or limit who can reply to their tweets. As Aaron Ross Powell notes, such a response “effectively turns Twitter from an open conversation to a set of private group chats the public can eavesdrop on.” It might be enough, if Musk truly doesn’t care about the economics, for Twitter to be a place where anything lawful goes and users who don’t like it can go elsewhere. But the realities of running a business are obviously different from those of traditional, government-owned public fora. If Musk wants to keep or grow Twitter’s user base, and maintain high engagement levels, there are a plethora of considerations he’ll need to account for.

Revenue. Twitter makes money by making users comfortable with using the site—and advertisers comfortable being associated with what users say. This is much like the traditional model of any newspaper. No reputable company would buy ads in a newspaper willing to publish everything lawful. These risks are much, much greater online. Newspapers carefully screen both writers before they’re hired and content before it’s published. Digital publishers generally can’t do likewise without ruining the user experience. Instead, users help a mixture of algorithms and human content moderators flag content potentially toxic to users and advertisers. 
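
A highly simplified sketch of that pipeline, with thresholds and labels invented for illustration, shows how automated scoring and human review typically divide the work.

```python
# Toy moderation pipeline: an automated classifier scores reported posts and
# routes borderline cases to human review. Thresholds and labels are invented.
def toxicity_score(text):
    # Stand-in for a trained model; returns a score between 0 and 1.
    return 0.9 if "racial slur" in text else 0.1

def triage(reported_text):
    score = toxicity_score(reported_text)
    if score >= 0.8:
        return "remove automatically"
    if score >= 0.4:
        return "queue for human moderator"
    return "leave up"

print(triage("post containing a racial slur"))  # remove automatically
print(triage("heated but lawful argument"))     # leave up
```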

Even without going as far as Musk says he wants to, alternative “free speech” platforms like Gab and Parler have failed to attract any mainstream advertisers. By taking Twitter private, Musk could relieve pressure to maximize quarterly earnings. He might be willing to lose money but the lenders financing roughly half the deal definitely aren’t. The interest payments on their loans could exceed Twitter’s 2021 earnings before interest, taxes, depreciation, and amortization. How will Twitter support itself? 

Protected Speech That Musk Already Wants To Moderate

As Musk’s analysts examine whether the purchase is really worth doing, the key question they’ll face is just what it would mean to cut back on content moderation. Ultimately, Musk will find that the First Amendment just doesn’t offer the roadmap he thinks it does. Indeed, he’s already implicitly conceded that by saying he wants to moderate certain kinds of content in ways the First Amendment wouldn’t allow. 

Spam. “If our twitter bid succeeds,” declared Musk in announcing his takeover plans, “we will defeat the spam bots or die trying!” The First Amendment, if he were using it as a guide for moderation, would largely thwart him.

Far from banning spam, as Musk proposes, the 2003 CAN-SPAM Act merely requires email senders to, most notably, include unsubscribe options, honor unsubscribe requests, and accurately label both subject and sender. Moreover, the law defines spam narrowly: “the commercial advertisement or promotion of a commercial product or service.” Why such a narrow approach? 

Even unsolicited commercial messages are protected by the First Amendment so long as they’re truthful. Because truthful commercial speech receives only “intermediate scrutiny,” it’s easier for the government to justify regulating it. Thus, courts have also protected the constitutional right of public universities to block commercial solicitations. 

But, as courts have noted, “the more general meaning” of “spam” “does not (1) imply anything about the veracity of the information contained in the email, (2) require that the entity sending it be properly identified or authenticated, or (3) require that the email, even if true, be commercial in character.” Check any spam folder and you’ll find plenty of messages that don’t obviously qualify as commercial speech, which the Supreme Court has defined as speech which does “no more than propose a commercial transaction.” 

Some emails in your spam folder come from non-profits, political organizations, or other groups. Such non-commercial speech is fully protected by the First Amendment. Some messages you signed up for may inadvertently wind up in your spam filter; plaintiffs regularly sue when their emails get flagged as spam. When it’s private companies like ISPs and email providers making such judgments, the case is easy: the First Amendment broadly protects their exercise of editorial judgment. Challenges to public universities’ email filters have been brought by commercial spammers, so the courts have dodged deciding whether email servers constituted public fora. These courts have implied, however, that if such taxpayer-funded email servers were public fora, email filtering of non-commercial speech would have to be content- and viewpoint-neutral, which may be impossible.

Anonymity. After declaring his intention to “defeat the spam bots,” Musk added a second objective of his plan for Twitter: “And authenticate all real humans.” After an outpouring of concern, Musk qualified his position.

Whatever “balance” Musk has in mind, the First Amendment doesn’t tell him how to strike it. Authentication might seem like a content- and viewpoint-neutral way to fight tweet-spam, but it implicates a well-established First Amendment right to anonymous and pseudonymous speech.

Fake accounts plague most social media sites, but they’re a bigger problem for Twitter since, unlike Facebook, it’s not built around existing offline connections and Twitter doesn’t even try to require users to use their real names. A 2021 study estimated that “between 9% and 15% of active Twitter accounts are bots” controlled by software rather than individual humans. Bots can have a hugely disproportionate impact online. They’re more active than humans and can coordinate their behavior, as that study noted, to “manufacture fake grassroots political support, promote terrorist propaganda and recruitment, manipulate the stock market, and disseminate rumors and conspiracy theories.” Given Musk’s concerns about “cancel culture,” he should recognize online harassment, especially harassment targeting employers and intimate personal connections, as a way that lawful speech can be wielded against lawful speech.

When Musk talks about “authenticating” humans, it’s not clear what he means. Clearly, “authentication” means more than simply requiring captchas to make it harder for machines to create Twitter accounts; those have been shown to be defeatable by spambots. Surely, he doesn’t mean making real names publicly visible, as on Facebook. After all, pseudonymous publications have always been a part of American political discourse. Presumably, Musk means Twitter would, instead of merely requiring an email address, somehow verify and log the real identity behind each account. This isn’t really a “middle ground”: pseudonyms alone won’t protect vulnerable users from governments, Twitter employees, or anyone else who might be able to access Twitter’s logs. However well such logs are protected, the mere fact of collecting such information would necessarily chill speech by those who fear being persecuted for what they say. Such authentication would clearly be unconstitutional if a government were to do it.

“Anonymity is a shield from the tyranny of the majority,” ruled the Supreme Court in McIntyre v. Ohio Elections Comm’n (1995). “It thus exemplifies the purpose behind the Bill of Rights and of the First Amendment in particular: to protect unpopular individuals from retaliation . . . at the hand of an intolerant society.” As one lower court put it, “the free exchange of ideas on the Internet is driven in large part by the ability of Internet users to communicate anonymously.” 

We know how these principles apply to the Internet because Congress has already tried to require websites to “authenticate” users. The Child Online Protection Act (COPA) of 1998 required websites to age-verify users before they could access material that could be “harmful to minors.” In practice, this meant providing a credit card, which supposedly proved the user was likely an adult. Courts blocked the law and, after a decade of litigation, the U.S. Court of Appeals for the Third Circuit finally struck it down in 2008. The court held that “many users who are not willing to access information non-anonymously will be deterred from accessing the desired information.” The Supreme Court let that decision stand. The United Kingdom now plans to implement its own version of COPA, but First Amendment scholars broadly agree: age verification and user authentication are constitutional non-starters in the United States.

What kind of “balance” might the First Amendment allow Twitter to strike? Clearly, requiring all users to identify themselves wouldn’t pass muster. But suppose Twitter required authentication only for those users who exhibit spambot-like behavior—say, coordinating tweets with other accounts that behave like spambots. This would be different from COPA, but would it be constitutional? Probably not. Courts have explicitly recognized a right to send non-commercial spam (unsolicited messages). For example: “were the Federalist Papers just being published today via e-mail,” warned the Virginia Supreme Court in striking down a Virginia anti-spam law, “that transmission by Publius would violate the statute.”

Incitement. In his TED interview, Musk readily agreed with Anderson that “crying fire in a movie theater” “would be a crime.” No metaphor has done more to sow confusion about the First Amendment. It comes from the Supreme Court’s 1919 Schenck decision, which upheld the conviction of the head of the U.S. Socialist Party for distributing pamphlets criticizing the military draft. Advocating obstructing military recruiting, held the Court, constituted a “clear and present danger.” Justice Oliver Wendell Holmes mentioned “falsely shouting fire in a theatre” as a rhetorical flourish to drive the point home.

But Holmes revised his position just months later when he dissented in a similar case, Abrams v. United States. “[T]he best test of truth,” he wrote, “is the power of the thought to get itself accepted in the competition of the market.” That concept guides First Amendment decisions to this day—not Schenck’s vivid metaphor. Musk wants the open marketplace of ideas Holmes lauded in Abrams—yet also, somehow, Schenck’s much lower standard.

In Brandenburg v. Ohio (1969), the Court finally overturned Schenck: the First Amendment does not “permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” Thus, a Klansman’s openly racist speech and calls for a march on Washington were protected by the First Amendment. The Brandenburg standard has proven almost impossible to satisfy when speakers are separated from their listeners in both space and time. Even the Unabomber Manifesto wouldn’t qualify—which is why The New York Times and The Washington Post faced no legal liability when they agreed to publish the essay back in 1995 (to help law enforcement stop the serial mail-bomber). 

Demands that Twitter and other social media remove “harmful” speech—such as COVID misinformation—frequently invoke Schenck. Indeed, while many expect Musk will reinstate Trump on Twitter, his embrace of Schenck suggests the opposite: Trump could easily have been convicted of incitement under Schenck’s “clear and present danger” standard.

Self-Harm. Musk’s confusion over incitement may also extend to its close cousin: speech encouraging, or about, self-harm. Like incitement, “speech integral to criminal conduct” isn’t constitutionally protected, but, also like incitement, courts have defined that term so narrowly that the vast majority of content that Twitter currently moderates under its suicide and self-harm policy is protected by the First Amendment.

William Francis Melchert-Dinkel, a veteran nurse with a suicide fetish, claimed to have encouraged dozens of strangers to kill themselves and to have succeeded at least five times. Using fake profiles, Melchert-Dinkel entered into fake suicide pacts (“i wish [we both] could die now while we are quietly in our homes tonite:)”), invoked his medical experience to advise hanging over other methods (“in 7 years ive never seen a failed hanging that is why i chose that”), and asked to watch his victims hang themselves. He was convicted of violating Minnesota’s assisted suicide law in two cases, but the Minnesota Supreme Court voided the statute’s prohibitions on “advis[ing]” and “encourag[ing]” suicide. Only for providing “step-by-step instructions” on hanging could Melchert-Dinkel ultimately be convicted.

In another case, the Massachusetts Supreme Judicial Court upheld the manslaughter conviction of Michelle Carter; “she did not merely encourage the victim,” her boyfriend, also age 17, “but coerced him to get back into the truck, causing his death” from carbon monoxide poisoning. Like Melchert-Dinkel, Carter provided specific instructions on completing suicide; and, “knowing the victim was inside the truck and that the water pump was operating — … she could hear the sound of the pump and the victim’s coughing — [she] took no steps to save him.”

Such cases are the tiniest tip of a very large iceberg of self-harm content. With nearly one in six teens intentionally hurting themselves annually, researchers found 1.2 million Instagram posts in 2018 containing “one of five popular hashtags related to self-injury: #cutting, #selfharm, #selfharmmm, #hatemyself and #selfharmawareness.” More troubling, the rate of such posts nearly doubled across that year. Unlike suicide or assisted suicide, self-harm, even by teenagers, isn’t illegal, so even supplying direct instructions about how to do it would be constitutionally protected speech. With the possible exception of direct user-to-user instructions about suicide, the First Amendment would require a traditional public forum to allow all this speech. It wouldn’t even allow Twitter to restrict access to self-harm content to adults—for the same reasons COPA’s age-gating requirement for “harmful-to-minors” content was unconstitutional.

Trade-Offs in Moderating Other Forms of Constitutionally Protected Content

So it’s clear that Musk doesn’t literally mean Twitter users should be able to “speak freely within the bounds of the law.” He clearly wants to restrict some speech in ways that the government could not in a traditional public forum. His invocation of the First Amendment likely refers primarily to moderation of speech considered by some to be harmful—which the government has very limited authority to regulate. Such speech presents one of the most challenging content moderation issues: how a business should balance a desire for free discourse with the need to foster the environment that the most people will want to use for discourse. That has to matter to Musk, however much money he’s willing to lose on supporting a Twitter that alienates advertisers.

Hateful & Offensive Speech. Two leading “free speech” networks moderate, or even ban, hateful or otherwise offensive speech. “GETTR defends free speech,” the company said in January after banning former Blaze TV host Jon Miller, “but there is no room for racial slurs on our platform.” Likewise, Gab bans “doxing,” the exposure of someone’s private information with the intent to encourage others to harass them. These policies clearly aren’t consistent with the First Amendment: hate speech is fully protected by the First Amendment, and so is most speech that might colloquially be considered “harassment” or “bullying.”

In Texas v. Johnson (1989), the Supreme Court struck down a ban on flag burning: “if there is a bedrock principle underlying the First Amendment, it is that the government may not prohibit the expression of an idea simply because society finds the idea itself offensive or disagreeable.” In Matal v. Tam (2017), the Supreme Court reaffirmed this principle and struck down a prohibition on offensive trademark registrations: “Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express the thought that we hate.”

Most famously, in 1978, the American Nazi Party won the right to march down the streets of Skokie, Illinois, a majority-Jewish town where ten percent of the population had survived the Holocaust. The town had refused to issue a permit to march. Displaying the swastika, Skokie’s lawyers argued, amounted to “fighting words”—which the Supreme Court had ruled, in 1942, could be forbidden if they had a “direct tendency to cause acts of violence by the persons to whom, individually, the remark is addressed.” The Illinois Supreme Court disagreed: “The display of the swastika, as offensive to the principles of a free nation as the memories it recalls may be, is symbolic political speech intended to convey to the public the beliefs of those who display it”—not “fighting words.” Even the revulsion of “the survivors of the Nazi persecutions, tormented by their recollections … does not justify enjoining defendants’ speech.”

Protection of “freedom for the thought we hate” in the literal town square is sacrosanct. The American Civil Liberties Union lawyers who defended the Nazis’ right to march in Skokie were Jews as passionately committed to the First Amendment as was Justice Holmes (post-Schenck). But they certainly wouldn’t have insisted the Nazis be invited to join in a Jewish community day parade. Indeed, the Court has since upheld the right of parade organizers to exclude messages they find abhorrent.

Does Musk really intend Twitter to host Nazis and white supremacists? Perhaps. There are, after all, principled reasons for not banning speech, even in a private forum, just because it is hateful. But there are unavoidable trade-offs. Musk will have to decide what balance will optimize user engagement and keep advertisers (and those financing his purchase) satisfied. It’s unlikely that those lines will be drawn entirely consistent with the First Amendment; at most, it can provide a very general guide.

Harassment & Threats. Often, users are banned by social media platforms for “threatening behavior” or “targeted abuse” (e.g., harassment, doxing). The first category may be easier to apply, but even then, a true public forum would be sharply limited in which threats it could restrict. “True threats,” explained the Court in Virginia v. Black (2003), “encompass those statements where the speaker means to communicate a serious expression of an intent to commit an act of unlawful violence to a particular individual or group of individuals.” But courts split on whether the First Amendment requires that a speaker have the subjective intent to threaten the target, or if it suffices that a reasonable recipient would have felt threatened. Maximal protection for free speech means a subjective requirement, lest the law punish protected speech merely because it might be interpreted as a threat. But in most cases, it would be difficult—if not impossible—to establish subjective intent without the kind of access to witnesses and testimony courts have. These are difficult enough issues even for courts; content moderators will likely find it impossible to adhere strictly, or perhaps even approximately, to First Amendment standards.

Targeted abuse and harassment policies present even thornier issues; what is (or should be) prohibited in this area remains among the most contentious aspects of content moderation. While social media sites vary in how they draw lines, all the major sites “[go] far beyond,” as Musk put it, what the First Amendment would permit a public forum to proscribe.

Mere offensiveness does not suffice to justify restricting speech as harassment; such content-based regulation is generally unconstitutional. Many courts have upheld harassment laws insofar as they target not speech but conduct, such as placing repeated telephone calls to a person in the middle of the night or physically stalking someone. Some scholars argue instead that the consistent principle across cases is that proscribable harassment involves an unwanted physical intrusion into a listener’s private space (whether their home or a physical radius around the person) for the purposes of unwanted one-on-one communication. Either way, neatly and consistently applying legal standards of harassment to content moderation would be no small lift.

Some lines are clear. Ranting about a group hatefully is not itself harassment, while sending repeated unwanted direct messages to an individual user might well be. But Twitter isn’t the telephone network. Line-drawing is more difficult when speech is merely about a person, or occurs in the context of a public, multi-party discussion. Is it harassment to be the “reply guy” who always has to have the last word on everything? What about tagging a person in a tweet about them, or even simply mentioning them by name? What if tweets about another user are filled with pornography or violent imagery? First Amendment standards protect similar real-world speech, but how many users want to be party to such conversations?

Again, Musk may well want to err on the side of more permissiveness when it comes to moderation of “targeted abuse” or “harassment.”  We all want words to keep their power to motivate; that remains their most important function. As the Supreme Court said in 1949: “free speech… may indeed best serve its high purpose when it induces a condition of unrest … or even stirs people to anger. Speech is often provocative and challenging. It may strike at prejudices and preconceptions and have profound unsettling effects as it presses for the acceptance of an idea.” 

But Musk’s goal is ultimately, in part, to attract users and keep them engaged. To do that, Twitter will have to moderate some content that the First Amendment would not allow the government to punish. Content moderators have long struggled with how to balance these competing interests. The only certainty is that this is, and will continue to be, an extremely difficult tightrope to walk—especially for Musk.

Obscenity & Pornography. Twitter already allows pornography involving consenting adults. Yet even this is more complicated than simply following the First Amendment. On the one hand, child sexual abuse material (CSAM) is considered obscenity, which the First Amendment simply doesn’t protect. All social media sites ban CSAM (and all mainstream sites proactively filter for, and block, it). On the other hand, nonconsensual pornography involving adults isn’t obscene, and therefore is protected by the First Amendment. Some courts have nonetheless upheld state “revenge porn” laws, but those laws are actually much narrower than Twitter’s flat ban (“You may not post or share intimate photos or videos of someone that were produced or distributed without their consent.”) 

Critical to the Vermont Supreme Court’s decision to uphold the state’s revenge porn law were two features that made the law “narrowly tailored.” First, it required intent to “harm, harass, intimidate, threaten, or coerce the person depicted.” Such an intent standard is a common limiting feature of speech restrictions upheld by courts. Yet none of Twitter’s policies turn on intent. Again, it would be impossible to meaningfully apply intent-based standards at the scale of the Internet and outside the established procedures of courtrooms. Intent is a complex inquiry unto itself; content moderators would find it nearly impossible to make these decisions with meaningful accuracy. Second, the Vermont law excluded “[d]isclosures of materials that constitute a matter of public concern,” and those “made in the public interest.” Twitter does have a public-interest exception to its policies, yet, Twitter notes:

At present, we limit exceptions to one critical type of public-interest content—Tweets from elected and government officials—given the significant public interest in knowing and being able to discuss their actions and statements. 

It’s unlikely that Twitter would actually allow public officials to post pornographic images of others without consent today, simply because they were public officials. But to “follow the First Amendment,” Twitter would have to go much further than this: it would have to allow anyone to post such images, in the name of the “public interest.” Is that really what Musk means?

Gratuitous Gore. Twitter bans depictions of “dismembered or mutilated humans; charred or burned human remains; exposed internal organs or bones; and animal torture or killing.” All of these are protected speech. Violence is not obscenity, the Supreme Court ruled in Brown v. Entertainment Merchants Association (2011), and neither is animal cruelty, ruled the Court in U.S. v. Stevens (2010). Thus, the Court struck down a California law barring the sale of “violent” video games to minors and requiring that they be labeled “18,” and a federal law criminalizing “crush videos” and other depictions of the torture and killing of animals.

The Illusion of Constitutionalizing Content Moderation

The problem isn’t just that the “bounds of the law” aren’t where Musk may think they are. For many kinds of speech, identifying those bounds and applying them to particular facts is a far more complicated task than any social media site is really capable of. 

It’s not as simple as whether “the First Amendment protects” certain kinds of speech. Only three things we’ve discussed fall outside the protection of the First Amendment altogether: CSAM, non-expressive conduct, and speech integral to criminal conduct. In other cases, speech may be protected in some circumstances, and unprotected in others.

Musk is far from the only person who thinks the First Amendment can provide clear, easy answers to content moderation questions. But invoking First Amendment concepts without doing the kind of careful analysis courts do in applying complex legal doctrines to facts means hiding the ball: it conceals subjective value judgments behind an illusion of faux-constitutional objectivity.

This doesn’t mean Twitter couldn’t improve how it makes content moderation decisions, or that it couldn’t come closer to doing something like what courts do in sussing out the “bounds of the law.” Musk would do well to start by considering Facebook’s initial efforts to create a quasi-judicial review of the company’s most controversial, or precedent-setting, moderation decisions. In 2018, Facebook funded the creation of an independent Oversight Board, which appointed a diverse panel of stakeholders to assess complaints. The Board has issued 23 decisions in little more than a year, including one on Facebook’s suspension of Donald Trump for posts he made during the January 6 storming of the Capitol, expressing support for the rioters.

Trump’s lawyers argued the Board should “defer to the legal principles of the nation state in which the leader is, or was governing.” The Board responded that its “decisions do not concern the human rights obligations of states or application of national laws, but focus on Facebook’s content policies, its values and its human rights responsibilities as a business.” The Oversight Board’s charter makes this point very clear. Twitter could, of course, tie its policies to the First Amendment and create its own oversight board, chartered with enforcing the company’s adherence to First Amendment principles. But by now, it should be clear how much more complicated that would be than it might seem. While constitutional protection of speech is clearly established in some areas, new law is constantly being created on the margins—by applying complex legal standards to a never-ending kaleidoscope of new fact patterns. The complexities of these cases keep many lawyers busy for years; it would be naïve to presume that an extra-judicial board will be able to meaningfully implement First Amendment standards.

At a minimum, any serious attempt at constitutionalizing content moderation would require hiring vastly more humans to process complaints, make decisions, and issue meaningful reports—even if Twitter did less content moderation overall. And Twitter’s oversight board would have to be composed of bona fide First Amendment experts. Even then, such a board’s decisions might later be undercut by actual court decisions involving similar facts. This doesn’t mean that attempting to hew to the First Amendment is a bad idea; in some areas, it might make sense, but it will be far more difficult than Musk imagines.

In Part II, we’ll ask what principles, if not the First Amendment, should guide content moderation, and what Musk could do to make Twitter more of a “de facto town square.”

Berin Szóka (@BerinSzoka) is President of TechFreedom. Ari Cohn (@AriCohn) is Free Speech Counsel at TechFreedom. Both are lawyers focused on the First Amendment’s application to the Internet.

Posted on Techdirt - 10 February 2022 @ 03:30pm

The Top Ten Mistakes Senators Made During Today's EARN IT Markup

Today, the Senate Judiciary Committee unanimously approved the EARN IT Act and sent that legislation to the Senate floor. As drafted, the bill will be a disaster. Only by monitoring what users communicate could tech services avoid vast new liability, and only by abandoning or compromising end-to-end encryption could they implement such monitoring. Thus, the bill poses a dire threat to the privacy, security and safety of law-abiding Internet users around the world, especially those whose lives depend on having messaging tools that governments cannot crack. Aiding such dissidents is precisely why it was the U.S. government that initially funded the development of the end-to-end encryption (E2EE) now found in Signal, WhatsApp and other such tools. Even worse, the bill will do the opposite of what it claims: instead of helping law enforcement crack down on child sexual abuse material (CSAM), the bill will actually help the most odious criminals walk free.

As with the July 2020 markup of the last Congress’s version of this bill, the vote was unanimous. This time, no amendments were adopted; indeed, none were even put up for a vote. We knew there wouldn’t be much time for discussion because Sen. Dick Durbin kicked off the discussion by noting that Sen. Lindsey Graham would have to leave soon for a floor vote. 

The Committee didn’t bother holding a hearing on the bill before rushing it to markup. The one and only hearing on the bill occurred just six days after its introduction back in March 2020. The Committee thereafter made extensive (but largely cosmetic) changes to the bill, leaving its Members more confused than ever about what the bill actually does. Today’s markup was a singular low point in the history of what is supposed to be one of the most serious bodies in Congress. It showed that there is nothing remotely judicious about the Judiciary Committee; that most of its members have little understanding of the Internet and even less of how the, ahem, judiciary actually works; and, saddest of all, that they simply do not care.

Here are the top ten legal and technical mistakes the Committee made today.

Mistake #1: “Encryption Is Not Threatened by This Bill”

Strong encryption is essential to online life today. It protects our commerce and our communications from the prying eyes of criminals, hostile authoritarian regimes and other malicious actors.

Sen. Richard Blumenthal called encryption a “red herring,” relying on his work with Sen. Leahy’s office to incorporate language from Leahy’s 2020 amendment to the previous version of EARN IT (even as he admitted to a reporter that encryption was a target). Leahy’s 2020 amendment aimed to preserve companies’ ability to offer secure encryption in their products by providing that a company could not be found in violation of the law because it utilizes secure encryption, lacks the ability to decrypt communications, or declines to undermine the security of its encryption (for example, by building in a backdoor for use by law enforcement).

But while the 2022 EARN IT Act contains the same list of protected activities, the authors snuck in new language that undermines that very protection. This version of the bill says that those activities can’t be an independent basis of liability, but that they can be considered as evidence in proving the civil and criminal claims the bill permits. That’s a big deal. EARN IT opens the door to liability under an enormous number of state civil and criminal laws, some of which require (or could require, if state legislatures so choose) a showing that a company was merely reckless in its actions—a far lower bar than federal law’s requirement that a defendant have acted “knowingly.” If a court can consider the use of encryption, or the refusal to build security flaws into that encryption, as evidence that a company was “reckless,” that is effectively the same as imposing liability for encryption itself. No sane company would take the chance of being found liable for transmitting CSAM; they’ll just stop offering strong encryption instead.

Mistake #2: The Bill’s Sponsors Readily Conceded that EARN IT Would Coerce Monitoring for CSAM

EARN IT’s sponsors repeatedly complained that tech companies aren’t doing enough to monitor for CSAM—and that their goal was to force them to do more. As Sen. Blumenthal noted, freely available software (PhotoDNA) makes it easy to detect CSAM, and it’s simply outrageous that some sites aren’t even using it. He didn’t get specific but we will: both Parler and Gettr, the alternative social networks favored by the MAGA right, have refused to use PhotoDNA. When asked about it, Parler’s COO told The Washington Post: “I don’t look for that content, so why should I know it exists?” The Stanford Internet Observatory’s David Thiel responded:

We agree completely—morally. So why, as Berin asked when EARN IT was first introduced, doesn’t Congress just directly mandate the use of such easy filtering tools? The answer lies in understanding why Parler and Gettr can get away with this today. Back in 2008, Congress required tech companies that become aware of CSAM to report it immediately to NCMEC, the quasi-governmental clearinghouse that administers the database of CSAM hashes used by PhotoDNA to identify known CSAM. Instead of requiring companies to monitor for CSAM, Congress said exactly the opposite: nothing in 18 U.S.C. § 2258A “shall be construed to require a provider to monitor [for CSAM].”
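To make the mechanics concrete, here is a minimal sketch of the kind of hash-matching that PhotoDNA-style scanning performs. It is illustrative only: the real PhotoDNA computes a proprietary perceptual hash that survives resizing and re-encoding, whereas this sketch uses SHA-256 (which catches only byte-identical copies), and every name in it is hypothetical.

```python
import hashlib

# Hypothetical stand-in for a clearinghouse hash list like NCMEC's. The real
# PhotoDNA uses a proprietary perceptual hash that tolerates resizing and
# re-encoding; SHA-256 is used here only to keep the sketch self-contained.
KNOWN_CSAM_HASHES: set[str] = set()


def sha256_of_file(path: str) -> str:
    """Hash an uploaded file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def should_report(path: str) -> bool:
    """True if an upload matches a known hash and so must be reported to NCMEC."""
    return sha256_of_file(path) in KNOWN_CSAM_HASHES
```

The point is simply that matching uploads against a known-hash list is cheap and automatic, which is why Parler’s and Gettr’s refusal to do it is so striking.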

Why? Was Congress soft on child predators back then? Obviously not. Just the opposite: they understood that requiring tech companies to conduct searches for CSAM would make them state actors subject to the Fourth Amendment’s warrant requirement—and they didn’t want to jeopardize criminal prosecutions. 

Conceding that the purpose of the EARN IT Act is to coerce searches for CSAM is a mistake, and a colossal one, because it invites courts to rule that such searches were never truly voluntary.

Mistake #3: The Leahy Amendment Alone Won’t Protect Privacy & Security, or Avoid Triggering the Fourth Amendment

While Sen. Leahy’s 2020 amendment was a positive step towards protecting the privacy and security of online communications, and Lee’s proposal today to revive it is welcome, it was always an incomplete solution. While it protected companies against liability for offering encryption or failing to undermine the security of their encryption, it did not protect the refusal to conduct monitoring of user communications. A company offering E2EE products might still be coerced into compromising the security of its devices by scanning user communications “client-side” (i.e., on the device) prior to encrypting sent communications or after decrypting received communications. 

Apple recently proposed just such a client-side scanning technology, raising concerns from privacy advocates and civil society groups. For its part, Apple insisted that safeguards would limit the system to known CSAM and prevent the capability from being abused by foreign governments or rogue actors. But the capacity to conduct such surveillance presents an inherent risk of exploitation by malicious actors. Some companies may be able to successfully safeguard such surveillance architecture from misuse; resources and approaches will vary across companies, however, and it is a virtual certainty that not all of them will succeed. And if such scanning is done under coercion, there is a real risk that it will be ruled state action requiring a warrant under the Fourth Amendment.
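To see why client-side scanning sits so uneasily alongside E2EE, consider the ordering involved: the check runs on the user’s device before anything is encrypted, so matches are visible to whoever controls the scanning list even though only ciphertext ever crosses the network. The sketch below is ours, not Apple’s design (which used a perceptual hash called NeuralHash plus additional cryptographic safeguards); every name in it is hypothetical.

```python
import hashlib
from typing import Callable

# Placeholder hash list pushed to the device; a bare SHA-256 set stands in for
# the perceptual-hash machinery a real system would use.
FLAGGED_HASHES: set[str] = set()


def scan_on_device(data: bytes) -> bool:
    """Client-side check that runs before any encryption is applied."""
    return hashlib.sha256(data).hexdigest() in FLAGGED_HASHES


def send_attachment(
    data: bytes,
    encrypt: Callable[[bytes], bytes],   # the messenger's real E2EE layer
    transmit: Callable[[bytes], None],
    report_match: Callable[[bytes], None],
) -> None:
    """Scan, then encrypt, then send: any match is observed in the clear."""
    if scan_on_device(data):
        report_match(data)       # happens on-device, pre-encryption
    transmit(encrypt(data))      # only ciphertext leaves the device
```

Nothing about the encryption itself is weakened, yet the privacy promise is: the provider (or a government that compels it) learns about matches before the message is ever protected, which is exactly the coercion problem at issue here.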

Our letter to the Committee proposes an easy way to expand the Leahy amendment to ensure that companies won’t be held liable for not monitoring user content: borrow language directly from Section 2258A(f).

Mistake #4: EARN IT’s Sponsors Just Don’t Understand the Fourth Amendment Problem

Sen. Blumenthal insisted, repeatedly, that EARN IT contained no explicit requirement not to use encryption. The original version of the bill would, indeed, have allowed a commission to develop “best practices” that would be “required” as conditions of “earning” back the Section 230 immunity tech companies need to operate—hence the bill’s name. But dropping that concept didn’t really make the bill less coercive because the commission and its recommendations were always a sideshow. The bill has always coerced monitoring of user communications—and, to do that, the abandonment or bypassing of strong encryption—indirectly, through the threat of vast legal liability for not doing enough to stop the spread of CSAM. 

Blumenthal simply misunderstands how the courts assess whether a company is conducting unconstitutional warrantless searches as a “government actor.” “Even when a search is not required by law, … if a statute or regulation so strongly encourages a private party to conduct a search that the search is not ‘primarily the result of private initiative,’ then the Fourth Amendment applies.” U.S. v. Stevenson, 727 F.3d 826, 829 (8th Cir. 2013) (quoting Skinner v. Railway Labor Executives’ Assn, 489 U.S. 602, 615 (1989)). In that case, the court found that AOL was not a government actor because it “began using the filtering process for business reasons: to detect files that threaten the operation of AOL’s network, like malware and spam, as well as files containing what the affidavit describes as “reputational” threats, like images depicting child pornography.” AOL insisted that it “operate[d] its file-scanning program independently of any government program designed to identify either sex-offenders or images of child pornography, and the government never asked AOL to scan Stevenson’s e-mail.” Id. By contrast, every time EARN IT’s supporters explain their bill, they make clear that they intend to force companies to search user communications in ways they’re not doing today.

Mistake #2 Again: EARN IT’s Sponsors Make Clear that Coercion Is the Point

In his opening remarks today, Sen. Graham didn’t hide the ball:

“Our goal is to tell the social media companies ‘get involved and stop this crap. And if you don’t take responsibility for what’s on your platform, then Section 230 will not be there for you.’ And it’s never going to end until we change the game.”

Sen. Chris Coons added that he is “hopeful that this will send a strong signal that technology companies … need to do more.” And so on and so forth.

If they had any idea what they were doing, if they understood the Fourth Amendment issue, these Senators would never admit that they’re using liability as a cudgel to force companies to take affirmative steps to combat CSAM. By making their intentions unmistakable, they’ve given the most vile criminals exactly what they need to challenge the admissibility of CSAM evidence resulting from companies “getting involved” and “doing more.” Though some companies, concerned with negative publicity, may tell courts that they conducted searches of user communications for “business reasons,” we know what defendants will argue: the companies’ “business reason” is avoiding the broad new liability to which EARN IT exposes them. EARN IT’s sponsors said so.

Mistake #5: EARN IT’s Sponsors Misunderstand How Liability Would Work

Except for Sen. Mike Lee, no one on the Committee seemed to understand what kind of liability rolling back Section 230 immunity, as EARN IT does, would create. Sen. Blumenthal repeatedly claimed that the bill requires actual knowledge. One of the bill’s amendments (the new Section 230(e)(6)(A)) would, indeed, require actual knowledge by enabling civil claims under 18 U.S.C. § 2255 “if the conduct underlying the claim constitutes a violation of section 2252 or section 2252A,” both of which contain knowledge requirements. This amendment is certainly an improvement over the original version of EARN IT, which would have explicitly allowed 2255 claims under a recklessness standard. 

But the two other changes to Section 230 clearly don’t require knowledge. As Sen. Lee pointed out today, a church could be sued, or even prosecuted, simply because someone posted CSAM on its bulletin board. Multiple existing state laws already create liability based on something less than actual knowledge of CSAM. As Lee noted, a state could pass a law creating strict liability for hosting CSAM. Allowing states to hold websites liable for recklessness (or even less) while claiming that the bill requires actual knowledge is simply dishonest. All these less-than-knowledge standards will have the same result: coercing sites into monitoring user communications, and into abandoning strong encryption as an obstacle to such monitoring. 

Blumenthal made it clear that this is precisely what he intends, saying: “Other states may wish to follow [those using the “recklessness” standard]. As Justice Brandeis said, states are the laboratories of democracy … and as a former state attorney general I welcome states using that flexibility. I would be loath to straightjacket them in their adoption of different standards.”

Mistake #6: “This Is a Criminal Statute, This Is Not Civil Liability”

So said Sen. Lindsey Graham, apparently forgetting what his own bill says. Sen. Dianne Feinstein added her own misunderstanding, saying that she “didn’t know that there was a blanket immunity in this area of the law.” But if either of those statements were true, the EARN IT Act wouldn’t really do much at all. Section 230 has always explicitly carved out federal criminal law from its immunities; companies can already be charged for knowing distribution of child sexual abuse material (CSAM) or child sexual exploitation (CSE) under federal criminal statutes. Indeed, Backpage and its founders were criminally prosecuted even without SESTA’s 2018 changes to Section 230. If the federal government needs assistance in enforcing those laws, Congress could adopt Sen. Mike Lee’s amendment to permit state criminal prosecutions when the conduct would constitute a violation of federal law. Better yet, the Attorney General could use an existing federal law (28 U.S.C. § 543) to deputize state, local, and tribal prosecutors as “special attorneys” empowered to prosecute violations of federal law. Why no AG has bothered to do so yet is unclear.

What is clear is that EARN IT isn’t just about criminal law. EARN IT expressly strips Section 230 protection from civil claims under certain federal statutes, and also from claims under whatever state laws arguably relate to “the advertisement, promotion, presentation, distribution, or solicitation of child sexual abuse material” as defined by federal law. Those laws can and do vary, not only with respect to the substance of what is prohibited, but also the mental state required for liability. This expansive breadth of potential civil liability is part of what makes the bill so dangerous in the first place.

Mistake #7: “If They Can Censor Conservatives, They Can Stop CSAM!”

As at the 2020 markup, Sen. Lee seemed to understand most clearly how EARN IT would work, the Fourth Amendment problems it raises, and how to fix at least some of them. A former Supreme Court clerk, Lee has a sharp legal mind, but he still seems to misunderstand much of how the bill would play out in practice, and how content moderation works more generally.

Lee complained that, if Big Tech companies can be so aggressive in “censoring” speech they don’t like, surely they can do the same for CSAM. He’s mixing apples and oranges in two ways. First, CSAM is the digital equivalent of radioactive waste: if a platform gains knowledge of it, it must take it down immediately and report it to NCMEC, and it faces stiff criminal penalties if it doesn’t. And while “free speech” platforms like Parler and Gettr refuse to proactively monitor for CSAM (as discussed below), every mainstream service goes out of its way to stamp out CSAM on its unencrypted services. Like AOL in the Stevenson case, they do so for business and reputational reasons.

By contrast, no website even tries to block all “conservative” speech; rather, mainstream platforms must make difficult judgment calls about politically charged content, such as suspending Trump’s account only after he incited an insurrection in an attempted coup, or removing misinformation claiming the 2020 election was stolen. Republicans are mad about where tech companies draw such lines.

Second, social media platforms can only moderate content that they can monitor. Signal can’t moderate user content, and that is precisely the point: end-to-end encryption means that no one other than the parties to a communication can see it. Unlike with ordinary communications, which may be protected by lesser forms of “encryption” (such as encryption in transit), the provider of an E2EE service isn’t standing in the middle of the conversation and doesn’t hold the keys to unlock the messages it passes back and forth. Yes, some users will abuse E2EE to share CSAM, but the alternative is to ban it for everyone. There simply isn’t a middle ground.
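For readers unfamiliar with the mechanics, here is a minimal illustration of the property Lee’s comparison misses, using the PyNaCl library as a simplified stand-in for a real messaging protocol (Signal’s actual protocol is far more elaborate, but the core property is the same): the provider only ever relays ciphertext and holds neither private key, so there is nothing for it to inspect or moderate.

```python
# pip install pynacl -- a simplified stand-in for a real E2EE messaging protocol
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device; private keys never leave it.
alice = PrivateKey.generate()
bob = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")

# The provider relays `ciphertext` but holds neither private key, so it cannot
# read, scan, or moderate the message. That is the design goal, not a bug.

# Bob decrypts with his private key and Alice's public key.
assert Box(bob, alice.public_key).decrypt(ciphertext) == b"meet at noon"
```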

There may indeed be more that some tech companies could do about content they can see—both public content like social media posts and private content like messages (protected by something less than E2EE). But their being aggressive about, say, misinformation about COVID or the 2020 election has nothing whatsoever to do with the cold, hard reality that they can’t moderate content protected by strong encryption.

It’s hard to tell whether Lee understands these distinctions. Maybe not. Maybe he’s just looking to wave the bloody shirt of “censorship” again. Maybe he’s saying the same thing everyone else is saying, essentially: “Ah, yes, but if only Facebook, Apple and Google didn’t use end-to-end encryption for their messaging services, then they could monitor those for CSAM just like they monitor and moderate other content!” Proposing to amend the bill to require actual knowledge under both state and federal law suggests he doesn’t want this result, but who knows?

Mistake #8: Assuming the Fourth Amendment Won’t Require Warrants If It Applies

Visibility to the provider relates to one important legal distinction not discussed at all today—but that may well explain why the bill’s sponsors don’t seem to care about Fourth Amendment concerns. It’s an argument Senate staffers have used to defend the bill since its introduction. Even if compulsion through vast legal liability did make tech companies government actors, the Fourth Amendment requires a warrant only for searches of material for which users have a reasonable expectation of privacy. Kyllo v. United States, 533 U.S. 27, 33 (2001); see Katz v. United States, 389 U.S. 347, 361 (1967) (Harlan, J., concurring). Courts long held that users had no such expectations for digital messages like email held by third parties. 

But that began to change in 2010. If searches of emails trigger the Fourth Amendment—and U.S. v. Warshak, 631 F.3d 266 (6th Cir. 2010) said they do—searches of private messaging certainly would. The entire purpose of E2EE is to give users rock-solid expectations of privacy in their communications. More recently, the Supreme Court has said that, “given the unique nature of cell phone location records, the fact that the information is held by a third party does not by itself overcome the user’s claim to Fourth Amendment protection.” Carpenter v. United States, 138 S. Ct. 2206, 2217 (2018). These cases draw the line Sen. Lee is missing: no, of course users don’t have reasonable expectations of privacy in public social media posts—which is what he’s talking about when he points to “censorship” of conservative speech. EARN IT could avoid the Fourth Amendment by focusing on content providers can see, but it doesn’t, because it’s intended to force companies to be able to see all user communications.

Mistake #9: What They Didn’t Discuss: Anonymous Speech

The Committee didn’t discuss how EARN IT would affect speech protected by the First Amendment. No, of course CSAM isn’t protected speech, but the bill would affect lawful speech by law-abiding citizens—primarily by restricting anonymous speech. Critically, EARN IT doesn’t just create liability for trafficking in CSAM. The bill also creates liability for failing to stop communications that “solicit” or “promote” CSAM. Software like PhotoDNA can flag CSAM (by matching perceptual hashes against NCMEC’s database of known images), but identifying “solicitation” or “promotion” is infinitely more complicated. Every flirtatious conversation between two adult users could be “solicitation” of CSAM—or it might be two adults doing adult things. (Adults sext each other—a lot. Get over it!) But “on the Internet, nobody knows you’re a dog”—and there’s no sure way to distinguish between adults and children.

The federal government tried to do just that in the Communications Decency Act (CDA) of 1996 (nearly all of which, except Section 230, was struck down) and the Child Online Protection Act (COPA) of 1998. Both laws were struck down as infringing on the First Amendment right to access lawful content anonymously. EARN IT accomplishes much the same thing indirectly, the same way it attacks encryption: basing liability on anything less than knowledge means you can be sued for not actively monitoring, or for not age-verifying users, especially when the risks are particularly high (such as when you “should have known” you were dealing with minor users).

Indeed, EARN IT is even more constitutionally suspect. At least COPA focused on content deemed “harmful to minors.” Instead of requiring age-gating only for sites offering porn and other sex-related content (a category that swept in things like LGBTQ teen health resources), EARN IT would affect all users of private communications services, regardless of the nature of the content they access or exchange. Again, the point of E2EE is that the service provider has no way of knowing whether messages are innocent chatter or CSAM.

EARN IT could raise other novel First Amendment problems. Companies could be held liable not only for failing to age-verify all users—a clear First Amendment violation—but also for failing to bar minors from using E2EE services (so that their communications can be monitored), for failing to use client-side monitoring on minors’ devices, and even for failing to segregate adults from minors so they can’t communicate with each other.

Without the Lee amendment, EARN IT leaves states free to premise liability on explicit requirements that platforms age-verify users or restrict what minors can do.

Mistake #10: Claiming the Bill Is “Narrowly Crafted”

If you’ve read this far, Sen. Blumenthal’s stubborn insistence that this bill is a “narrowly targeted approach” should make you laugh—or sigh. If he truly believes that, either he hasn’t adequately thought about what this bill really does or he’s so confident in his own genius that he can simply ignore the chorus of protest from civil liberties groups, privacy advocates, human rights activists, minority groups, and civil society—all of whom are saying that this bill is bad policy.

If he doesn’t truly believe what he’s saying, well… that’s another problem entirely.

Bonus Mistake!: A Postscript About the Real CSAM Problem

Lee never mentioned that the only significant social media services that don’t take basic measures to identify and block CSAM are Parler, Gettr and other fringe sites celebrated by Republicans as “neutral public fora” for “free speech.” Has any Congressional Republican sent letters to these sites asking why they refuse to use PhotoDNA? 

Instead, Lee did join Rep. Ken Buck in March 2021 to interrogate Apple about its decision to take down the Parler app. The answer: Parler hadn’t bothered to set up any meaningful content moderation system. Only after Parler agreed to start doing some moderation of what appeared in its Apple app (but not on its website) did Apple reinstate the app.

Posted on Techdirt - 9 July 2021 @ 10:41am

Section 230 Continues To Not Mean Whatever You Want It To

In the annals of Section 230 crackpottery, the “publisher or platform” canard reigns supreme. Like the worst (or perhaps best) game of “Broken Telephone” ever, it has morphed into a series of increasingly bizarre theories about a law that is actually fairly short and straightforward.

Last week, this fanciful yarn took an even more absurd turn. It began on Friday, when Facebook began to roll out test warnings about extremism as part of its anti-radicalization efforts and in response to the Christchurch Call for Action campaign. There appear to be two iterations of the warnings: one asks the user whether they are concerned that someone they know is becoming an extremist; the other warns the user that they may have been exposed to extremist content (allegedly appearing while users were viewing specific types of content). Both warnings provide a link to support resources to combat extremism.

As it is wont to do, the Internet quickly erupted into an indiscriminate furor. Talking heads and politicians raged about the “Orwellian environment” and “snitch squads” that Facebook is creating, and the conservative media eagerly lapped it up (ignoring, of course, that nobody is forced to use Facebook or to pay any credence to its warnings). That’s not to say there is no valid criticism to be lodged; surely the propriety of the warnings and the definition of “extremist” are matters on which people can reasonably disagree, and those are conversations worth having in a reasoned fashion.

But then someone went there. It was inevitable, really, given that Section 230 has become a proxy for “things social media platforms do that I don’t like.” And Section 230 Truthers never miss an opportunity to make something wrongly about the target of their eternal ire.

Notorious COVID (and all-around) crank Alex Berenson led the charge, boosted by the usual media crowd, tweeting:

Yeah, I’m becoming an extremist. An anti-@Facebook extremist. “Confidential help is available?” Who do they think they are?

Either they’re a publisher and a political platform legally liable for every bit of content they host, or they need to STAY OUT OF THE WAY. Zuck’s choice.

That is, to be diplomatic, deeply stupid.

Like decent toilet paper, the inanity of this tweet is two-ply. First (setting aside the question of what exactly “political platform” means) is the mundane reality, explained ad nauseam, that Facebook need not, in fact, make any such choice. It bears repeating: Section 230 provides that websites are not liable as the publishers of content provided by others. There are no conditions or requirements. Period. End of story. The law would make no sense otherwise; the entire point of Section 230 was to let websites engage in “publisher” activities (including deciding what content to carry or not carry) without the threat of innumerable lawsuits over every piece of content on their sites.

Of course, that’s exactly what grinds 230 Truthers’ gears: they don’t like that platforms can choose which content to permit or prohibit. But social media platforms would have a First Amendment right to do that even without Section 230, and thus what the anti-230 crowd really wants is to punish platforms for exercising their own First Amendment rights.

Which leads us to the second ply, where Berenson gives up the game in spectacular fashion, because Section 230 isn’t even relevant. Facebook’s warnings are its own content, which is not immunized under Section 230 in the first place. Facebook is liable as the publisher of content it creates; always has been, always will be. If Facebook’s extremism warnings were somehow actionable (as rather nonspecific opinions, they aren’t), Facebook would be forced to defend a lawsuit on the merits.

It simply makes no sense at all. Even if you (very wrongly) believe that Section 230 requires platforms to host all content without picking and choosing, that is entirely unrelated to a platform’s right to use its own speech to criticize or distance itself from certain content. And that’s all Facebook did. It didn’t remove or restrict access to content; Facebook simply added its own additional speech. If there’s a more explicit admission that the real goal is to curtail platforms’ own expression, it’s difficult to think of one.

Punishing speakers for their expression is, of course, anathema to the First Amendment. In halting enforcement of Florida’s new social media law, U.S. District Judge Robert Hinkle noted that Florida would prohibit platforms from appending their own speech to users’ posts, compounding the statute’s constitutional infirmities. Conditioning Section 230 immunity on a platform’s forfeiture of its completely separate First Amendment right to use its own voice would fare no better.

Suppose Democrats introduced a bill that conditioned the immunity provided to the firearms industry by the PLCAA on industry members refraining from speaking out or lobbying against gun control legislation. Inevitably, and without a hint of irony, many of the people urging fundamentally the same thing for social media platforms would find newfound outrage at the brazen attack on First Amendment rights.

At the end of the day, despite all their protestations, what people like Berenson want is not freedom of speech. Quite the opposite. They want to dragoon private websites into service as their free publishing house and silence any criticism by those websites with the threat of financial ruin. It’s hard to think of anything less free speech-y, or intellectually honest, than that.

Ari Cohn is Free Speech Counsel at TechFreedom
