ari.cohn's Techdirt Profile

Posted on Techdirt - 12 July 2023 @ 10:46am

Republican AGs Decide That Coercive Jawboning Is Good, Actually (When They Do It)

It will surprise nobody to learn that when politicians trumpet the First Amendment, they are generally referring only to expression that they agree with. But occasionally, they demonstrate their hypocrisy in a fashion so outrageously transparent that it shocks even the most cynical and jaded First Amendment practitioners. Last week, we were treated to just such an instance, courtesy of seven Republican Attorneys General. They deserve to be named, ignominiously: Todd Rokita (IN), Andrew Bailey (MO), Tim Griffin (AR), Daniel Cameron (KY), Raul Labrador (ID), Lynn Fitch (MS), and Alan Wilson (SC).

One of those names might stick out: Missouri AG Andrew Bailey. Last week, Bailey took a victory lap in Missouri’s lawsuit against the Biden administration: U.S. District Judge Terry Doughty engaged in some judicial theatrics, releasing a 155-page ruling on July 4 finding that an assortment of government actors likely violated the First Amendment by discussing content moderation with social media platforms.1

That ruling was a very mixed bag, and is outside the scope of this article (Mike Masnick has a good writeup here). The important thing to remember is that Missouri sued government officials, asserting that their pressure on social media platforms over content was unconstitutional—and a judge agreed.

The very next day, Bailey turned around and joined these other AGs in a ham-fisted, legally and factually inaccurate letter threatening Target over the sale of Pride Month merchandise and its support of an LGBT organization—all of which happens to be, you guessed it, protected expression. Let’s dig in.

The Merchandise

It’s worth reviewing exactly what products the AGs complained about:

  1. LGBT-themed onesies, bibs, and overalls
  2. T-shirts labeled “Girls Gays Theys,” “Pride Adult Drag Queen Katya”
  3. “Girls’ swimsuits with ‘tuck-friendly construction’ and ‘extra crotch coverage’ for male genitalia”
    1. I’m going to stop them right here: The use of “girls” in this sentence is clearly intended to insinuate that the complained-of swimsuits are for children. But as it so (not surprisingly) happens, that was false: these swimsuits were available in adult sizes only.
  4. “Merchandise by the self-declared ‘Satanist-Inspired’ brand Abprallen” which “include the phrases ‘We Bash Back’ with a heart-shaped mace in the trans-flag colors, ‘Transphobe Collector’ with a skull, and ‘Homophobe Headrest’ with skulls beside a pastel guillotine.”
  5. “[P]roducts with anti-Christian designs such as pentagrams, horned skulls, and other Satanic products . . . [including] the phrase ‘Satan Respects Pronouns’ with a horned ram representing Baphomet—a half-human, half-animal, hermaphrodite worshipped by the occult.”

It would be difficult to come up with a clearer example of government targeting expression on the basis of viewpoint—the most fundamental First Amendment violation possible. You don’t see them going after “daddy’s little girl” shirts or “Jesus Calling” books, and I’d bet my life that they wouldn’t pursue the seller of a shirt that says “there are only two genders.” The AGs’ complaint is, by its own admission, directed at the messages contained within certain products.

You may not need reminding, but apparently these inept AGs do: the First Amendment’s protection is quite broad.

It envelops expression conveyed via clothing (or other products) the same as it protects the words written in a book: the government cannot ban “Satanist” shirts any more than it could ban the sale of bibles.

And it protects the sale, distribution, and reception of expression no less than the right to create the expression: the government cannot punish the seller of a book any more than it could prohibit writing it in the first place.

So What’s These AGs’ Problem, Exactly?

As a general matter, that’s a question better directed to their therapists—there’s probably a lot going on there.

But specific to these products, our merry band of hapless censors really had to heave an (entirely unconvincing) Hail Mary to try getting around the First Amendment:

Our concerns entail the company’s promotion and sale of potentially harmful products to minors [and] related interference with parental authority in matters of sex and gender identity [].

State child-protection laws penalize the “sale or distribution . . . of obscene matter.” A matter is considered “obscene” if “the dominant theme of the matter . . . appeals to the prurient interest in sex,” including “material harmful to minors.” Indiana, as well as other states, have passed laws to protect children from harmful content meant to sexualize them and prohibit gender transitions of children.

Obscenity and “Harmful to Minors”

Threshold note: Obscenity doctrine is a complete mess, and for various reasons obscenity prosecutions are extremely difficult in this day and age. But historically, obscenity law has been a favorite tool of government actors seeking to suppress LGBT speech. These AGs are following in that ignoble, censorious, and bigoted tradition.

Let’s start with the definition of obscenity that Indiana AG Todd Rokita (who authored the letter) provides:

A matter is considered obscene “if the dominant theme of the matter . . . appeals to the prurient interest in sex,” including material harmful to minors.

First, Rokita actually gets his own state’s law wrong. Obscenity does not include “material harmful to minors” under Indiana law. The latter is its own separate category.2 Perhaps that’s a minor quibble, but if you’re going to issue bumptious threats under the color of law, you should at least describe the law correctly.

Second, Rokita conveniently leaves out the three other requirements for matter to be “harmful to minors”:

Sec. 2. A matter or performance is harmful to minors for purposes of this article if:

(1) it describes or represents, in any form, nudity, sexual conduct, sexual excitement, or sado-masochistic abuse;

(2) considered as a whole, it appeals to the prurient interest in sex of minors;

(3) it is patently offensive to prevailing standards in the adult community as a whole with respect to what is suitable matter for or performance before minors; and

(4) considered as a whole, it lacks serious literary, artistic, political, or scientific value for minors.

He leaves them out, of course, because it’s obvious that none of the products discussed describe or represent “nudity, sexual conduct, sexual excitement, or sado-masochistic abuse” and the inquiry properly ends at Step One.

But even under his truncated definition, you would have to be incompetent to stand trial—let alone practice law—to conclude that any merchandise the letter complains of, “considered as a whole . . . appeals to the prurient interest in sex of minors.” The Supreme Court defined “prurient interest” as “a shameful or morbid interest in nudity, sex, or excretion.” As with all Supreme Court attempts to define sex-related things, this definition is somewhat clunky and unsatisfying; yet it still demonstrates how asinine these sorry excuses for lawyers are.

Recall some of the products named in the letter:

LGBT-themed onesies, bibs, and overalls. The inclusion of “bibs” indicates to me that they’re referring to…clothes for infants? First of all, that very young child wearing their Pride bib over their Pride onesie while chucking Cheerios across the room from their highchair has no knowledge of “nudity, sex, or excretion,” let alone the capacity for a shameful interest in it. Second, if these AGs look at an infant wearing a Pride bib and their mind immediately goes to SEX, I would urge them to seek immediate mental health care and stay at least 1000 feet away from any child, ever.

I’m also curious how either of these insanely benign shirts (made for adults, by the way) could possibly appeal to the prurient interest of anyone:

Aha, they will say. What about the tuck-friendly swimwear? Set aside the fact that they were apparently only available in adult sizes. Do they appeal to a shameful interest in nudity? Considering that it’s clothing, quite the opposite. What about sex? No, not really: sex means sex acts or sexual behavior, not mere gender expression. If a statute defining “prurient interest” as “incit[ing] lasciviousness or lust” was held unconstitutionally overbroad, there is no question that defining gender expression as “a shameful interest in sex” is not going to work. Excretion? Well, unless you’re the type of person that pees in the pool and gets off on it (way to tell on yourselves), that’s not going to work either.

And obviously the “Satanist” and “anti-Christian” merchandise they complain about in such a delicate, snowflake-like fashion has absolutely nothing to do with sex.

The only possible way that the AGs could believe (other than by reason of sheer incompetence) that these products are legally “harmful to minors” is if they believe that anything LGBT-related is ipso facto sexual. That’s a belief that is both shockingly prejudiced, and so stupid that even the Fifth Circuit wouldn’t likely accept it. During oral arguments in the litigation over Texas’ content moderation law, Judge Andy Oldham found it “extraordinary” that social media platforms affirmed that under their view of the First Amendment, they could ban all pro-LGBT content if they so desired. If all such content is “harmful to minors,” I have a hard time believing he would have found the proposition so troubling.

None of these products are even close calls. They are emphatically and unquestionably protected by the First Amendment.

Parental Rights

The AGs cite as another concern “potential interference with parental authority in matters of sex and gender identity.” Footnote 3 provides citations to a bevy of state laws about school libraries and gender-affirming care (several of which have been enjoined). Which, of course, have nothing to do with anything, as the footnote even acknowledges: “all of these laws may not be implicated by Target’s recent campaign.”

But even after acknowledging that these laws are irrelevant, the letter continues to say “they nevertheless demonstrate that our States have a strong interest in protecting children and the interests of parental rights.”

That’s great, I’m happy for them, but also…no. What they demonstrate is that your state legislatures passed some bills. What they don’t demonstrate is that you have the constitutionally valid interest you think you do. The merchandise is clearly protected by the First Amendment for both adults and minors. And “[s]peech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.”

California, too, tried the “parental rights” argument when it banned the sale of violent video games to minors. The Supreme Court was not impressed:

Such laws do not enforce parental authority over children’s speech . . . they impose governmental authority, subject only to a parental veto. In the absence of any precedent for state control, uninvited by the parents, over a child’s speech . . . and in the absence of any justification for such control that would satisfy strict scrutiny, those laws must be unconstitutional.

The law is clear: government may not place limits on (or punish) the distribution of constitutionally protected materials to minors by shouting “parental rights.” Parents are free to parent, but the government is not free to enforce its version of “good parenting” (guffaw) on everyone by law.

Target’s Donations to GLSEN

If you thought that was the end of the stupidity, buckle up. The AGs also complain about Target’s donations to GLSEN, an LGBT education advocacy group which the letter, for no apparent reason, instructs readers on how to pronounce (“glisten,” if you’re curious). Because GLSEN advocates that educators should not reveal students’ gender identity to their parents without consent, the AGs claim that the donations “raise concerns” under “child-protection and parental-rights laws.”

Nonsense.

First things first: GLSEN has a First Amendment right to advocate for what it believes school policies should be,3 no matter what a state’s law says. The AGs’ insinuation that advocacy against their states’ laws is somehow unlawful is startling and dangerous.

Second, Target has a First Amendment right to support GLSEN through its partnership. This thinly-veiled threat that Target could face prosecution if it doesn’t stop donating to advocacy that government officials don’t like is wholly beneath contempt, and should be repulsive to every American. I’m not sure how much there is to say about this; it’s a dark sign that the attorneys general of seven states would so readily declare their opposition to fundamental liberties.

“But this speech we don’t like”

Simply put, the government “is not permitted to employ threats to squelch the free speech of private citizens.” Backpage.com, 807 F.3d at 235. “The mere fact that [the private party] might have been willing to act without coercion makes no difference if the government did coerce.” Mathis, 891 F.2d at 1434. “[S]uch a threat is actionable and thus can be enjoined even if it turns out to be empty…. But the victims in this case yielded to the threat.” Backpage.com, 807 F.3d at 230-31. Further, even a vaguely worded threat can constitute government coercion. See Okwedy, 333 F.3d at 341-42. But here, the threats have been repeated and explicit, and “the threats ha[ve] worked.” Backpage.com, 807 F.3d at 232.

The threats in this case . . . include a threat of criminal prosecution . . . Even an “implicit threat of retaliation” can constitute coercion, Okwedy, 333 F.3d at 344, and here the threats are open and explicit.

You could be forgiven for thinking that this came from a draft complaint or motion for a preliminary injunction aimed at the attorneys general who signed this letter.

But in fact, it is from Missouri’s own motion for a preliminary injunction in Missouri v. Biden, arguing that the federal government coerced social media platforms into censoring users.

What was the “threat of criminal prosecution” so explicit and coercive, in Missouri’s view, to render the government responsible for platforms’ content moderation decisions? Then-candidate Biden

threatened that Facebook CEO Mark Zuckerberg should be subject to civil liability, and possibly even criminal prosecution, for not censoring core political speech: “He should be submitted to civil liability and his company to civil liability…. Whether he engaged in something and amounted to collusion that in fact caused harm that would in fact be equal to a criminal offense, that’s a different issue. That’s possible. That’s possible – it could happen.”

So, according to Missouri, the blustering of a candidate who, if elected, would not himself even have the power to actually prosecute is sufficiently explicit and coercive. And that’s in a case about whether the government can be held responsible for private action against third-party speech.

This argument leaves precisely no room for the notion that a letter from states’ top prosecutors, citing various criminal statutes, to the speaker of the targeted, protected speech itself, is anything but an even more obvious First Amendment violation. It would be so even had Missouri not made this argument. But the rank hypocrisy here is so brazen that it cannot escape notice.

Spaghetti at the Wall

In the second half of the letter, the AGs shift gears to say they are also writing as the representatives of their states in their capacity as shareholders of Target. They allege that Target’s management “may have acted negligently” in its Pride campaign, due to the backlash and falling stock price. They write:

Target’s management has no duty to fill stores with objectionable goods, let alone endorse or feature them in attention-grabbing displays at the behest of radical activists. However, Target management does have fiduciary duties to its shareholders to prudently manage the company and act loyally in the company’s best interests. Target’s board and its management may not lawfully dilute their fiduciary duties to satisfy the Board’s (or left-wing activists’) desires to foist contentious social or political agendas upon families and children at the expense of the company’s hard-won good will and against its best interests.

They aren’t even trying to hide their perverse inversion of the First Amendment, turning the company’s right to decide what expressive products to sell into a threat of liability for deciding to sell the expressive products they disfavor.

Perhaps the AGs think that framing it as a “shareholder” concern makes the First Amendment magically go away. They are wrong.

Regardless of how they try to obfuscate it, the AGs are using the coercive authority of the state to silence views they disagree with. Whether the states are shareholders is irrelevant, and I suspect Missouri would have said as much had the federal government defendants in Missouri v. Biden been daft enough to attempt this argument.

Dig into the investments of FERS, the U.S. Railroad Retirement Board, etc., and I’ll bet good money that you’ll find investments in companies that own social media platforms. If the federal government communicated concerns as a “shareholder” of those companies, threatening that they may be breaching their fiduciary duty/duty of care by not removing noxious content, what do you suppose the reaction from the Right would be? You know exactly what it would be.

To paraphrase the Supreme Court, very recently, “When a state [business regulation] and the Constitution collide, there can be no question which must prevail. U.S. Const., Art. VI, cl. 2.” Purporting to write as government “shareholders” is not an invisibility cloak against the First Amendment: state governments cannot simply purchase stock in a company and declare that they now have the right to threaten the company over their protected expression.

Implicitly Condoning Violence Against Speech (Provided it’s Against the People We Don’t Like)

To round off its unrelenting hypocrisy, the letter concludes by warning Target to “not yield” to “threats of violence.” But only some threats, apparently:

Some activists have recently pressured Target [to backtrack on its removal/relocation of Pride merchandise] by making threats of violence . . . Target’s board and management should not use such threats as a pretext . . . to promote collateral political and social agendas.

“You hear that, Target? You better not use anything as an excuse to say things we don’t like!”

Conspicuously absent is any note of the fact that it was threats of violence against Target employees that caused the merchandise to be removed or relocated in the first place. That, perhaps unsurprisingly, doesn’t seem to bother them so much—the violent threats, and Target caving to them, are just fine if these AGs agree with the perpetrators of the violence. Because for them, the First Amendment is about their own power, and nothing else.


Whatever one thinks of Target’s decisions, having even the slightest shred of honesty and principle when it comes to the First Amendment should leave you thoroughly disgusted by this letter.

But these AGs are not principled, honest, ethical, or competent attorneys (I’d wager that they aren’t those things as people either), and they deserve neither respect nor the offices they hold despite their manifest unfitness.

They are con-artists engaging in the familiar ploy of using the First Amendment as a partisan cudgel to claim expression they like is being censored, while actively working to censor speech they disagree with. Their view of the First Amendment is clear and pernicious: you can say whatever they think you should be allowed to say.

It’s nothing new, of course. But it’s always worthy of scorn and condemnation. And maybe a lawsuit or two.


1 It also bears mentioning that five of these seven state AGs’ offices also signed on to an amicus brief asking the Fifth Circuit to uphold Texas’ content moderation law, arguing that platforms do not have a First Amendment right to decide for themselves what content to allow on their services.

2 Rokita also pulls the “dominant theme” language from the obscenity statute rather than the “harmful to minors” statute, so that’s another strike against his having a firm grasp on his own state’s law, but I suppose “considered as a whole” does similar (though not exactly the same) work.

3 In their zeal to glom on to culture war nonsense, the AGs also failed to recognize that this advocacy is contained in GLSEN’s model policy. That is, the ideal policy that they provide on their website for any school, anywhere to use or adapt.

Republished with permission from Ari Cohn’s Substack.

Posted on Techdirt - 20 June 2023 @ 12:06pm

Texas Legislature Convinced First Amendment Simply Does Not Exist

Over the past two years, there has been a concerted push by state legislatures to regulate the Internet, the likes of which has not been seen since the late 90s/early aughts. Content moderation, financial relationships between journalists and platforms, social media design and transparency, “national security,” kids being exposed to “bad” Internet speech—you name it, a state legislature has introduced an unconstitutional bill about it. So it’s no surprise that the anti-porn crowd seized the moment to once again exhibit a creepy and unhealthy interest in what other people do with their pants off.

The Texas legislature, also unsurprisingly, was all too happy to help out. Last week, Texas Governor Greg Abbott signed into law HB 1181, which regulates websites that publish or distribute “material harmful to minors,” i.e., porn.

Start from the premise that pornography is protected by the First Amendment, but that it may be restricted for minors where it could not be for adults under variable obscenity jurisprudence.

The law’s requirements apply to any “commercial entity,” explicitly including social media platforms, that “intentionally publishes or distributes material on an Internet website… more than one-third of which” is porn. That’s a problematic criterion in the first place. I don’t know that there’s an easy (or even feasible) way for a social media platform to know precisely how much porn is on it (perhaps there is, though). And what about a non-social media website—what is the denominator? If a website has articles (which is definitely the reason you’re on it, I know) plus naughty pictures, is the percentage calculated by comparing the number of porn-y things to the number of articles? Words? Pages? Who knows—the law sure doesn’t say.
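
To make that ambiguity concrete, here’s a minimal arithmetic sketch with entirely invented numbers (the statute defines none of these terms, so every figure and assumption below is hypothetical): the same site can land on either side of the one-third line depending on which denominator you pick.

```python
# Purely hypothetical illustration of HB 1181's undefined "more than one-third"
# threshold; no numbers here come from the statute or any real website.
adult_images = 60
articles = 100

# Denominator = individual pieces of content (each image and each article counts once)
share_by_item = adult_images / (adult_images + articles)      # 60/160 = 0.375 -> over one-third

# Denominator = pages, assuming each article runs three pages and each image fills one
article_pages = articles * 3
image_pages = adult_images
share_by_page = image_pages / (image_pages + article_pages)   # 60/360 ~= 0.167 -> under one-third

print(f"by item: {share_by_item:.3f}, by page: {share_by_page:.3f}")
```

Same site, same content, opposite answers; that is precisely the problem with leaving the denominator to guesswork.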

But that’s the least of the law’s problems. HB 1181 requires qualifying entities (however determined) to do two things, both of which clear First Amendment hurdles about as well as a rhinoceros competing in a steeplechase.

Age-Verifying Users

This has been a recurring theme in state and federal legislation recently. HB 1181 requires covered entities to “use reasonable age verification methods” to ensure that users are 18 or older before allowing access.

We’ve been here before, and explaining this over and over again is getting exhausting. But I’ll do it again, louder, for the people in the back.

Age Verification Laws: A Brief History

In the beginning (of the web) there was porn. And the Government saw that it was “icky” and said “let there be laws.”

In 1996, Congress passed the Communications Decency Act, prohibiting the knowing transmission or display of “obscene or indecent” messages to minors using the Internet. A unanimous Supreme Court struck down the law (with the exception of Section 230) in Reno v. ACLU, holding that it chilled protected speech, in part because there was no way for users in chat rooms, newsgroups, etc. to know the age of other users—and even if there was, a heckler’s veto could be easily imposed by

any opponent of indecent speech who might simply log on and inform the would-be discoursers that his 17-year-old child…would be present.

The Court rejected the government’s argument that affirmative defenses for use of age-verification methods (in particular credit card verification) saved the law, noting that not every adult has a credit card, and that existing age verification methods did not “actually preclude minors from posing as adults.”

So Congress tried again, passing the Child Online Protection Act (COPA) in 1998, ostensibly narrowed to only commercial enterprises, and again containing affirmative defenses for using age-verification. Again, the courts were not buying it: in a pair of decisions, the Third Circuit struck down COPA.

With respect to the viability of age verification, the court found that the affirmative defense was “effectively unavailable” because, again, entering a credit or debit card number does precisely nothing to verify a user’s age.

But more importantly, the court ruled that the entire idea of conditioning access to material on a government-imposed age verification scheme violates the First Amendment. Noting Supreme Court precedent “disapprov[ing] of content-based restrictions that require recipients to identify themselves affirmatively before being granted access to disfavored speech,” the Third Circuit ruled in 2003 that age-verification would chill protected speech:

We agree with the District Court’s determination that COPA will likely deter many adults from accessing restricted content, because many Web users are simply unwilling to provide identification information in order to gain access to content, especially where the information they wish to access is sensitive or controversial. People may fear to transmit their personal information, and may also fear that their personal, identifying information will be collected and stored in the records of various Web sites or providers of adult identification numbers.

In its second decision, coming in 2008, the court again agreed that “many users who are not willing to access information non-anonymously will be deterred from accessing the desired information.” And thus, after the Supreme Court denied cert, COPA—and the notion that government could force websites to age-verify users—died.

Until now.

Age Verification Today

Has anything changed that would render these laws newly constitutional? One might argue that age-verification technologies have improved, and are no longer as crude as “enter a credit card number.” I suppose that’s true in a sense, but not a meaningful one. HB 1181 requires age verification by either (a) a user providing “digital identification” (left undefined), or (b) use of a commercial age-verification system that uses either government-issued ID or “a commercially reasonable method that relies on public or private transactional data.”

It stands to reason that if a minor can swipe a parent’s credit card for long enough to enter it into a verification service, they can do the same with a form of Government ID. Or even easier, they could just borrow one from an older friend or relative. And like entering a credit card number, simply entering (or photographing) a government ID does not ensure that the person doing so is the owner of that ID. And what of verification solutions that rely on selfies or live video? There is very good reason to doubt that they are any more reliable: the first page of Google search results for “trick selfie verification” turns up numerous methods for bypassing verification using free, easy-to-use software. Even the French, who very much want online age-verification to be a thing, have acknowledged that all current methods “are circumventable and intrusive.”

But even assuming that there was a reliable way to do age verification, the First Amendment problem remains: HB 1181 requires adult users to sacrifice their anonymity in order to access content disfavored by the government, and First Amendment jurisprudence on that point has not changed since 2008. Texas might argue that because HB 1181 prohibits websites or verification services from retaining any identifying information, the chilling harm is mitigated. But there are two problems with that argument:

First, on a practical level, I don’t know how that prohibition can work. A Texas attorney general suing a platform for violating the law will have to point to specific instances where an entity failed to age-verify. But how, exactly, is an entity to prove that it indeed did perform adequate verification, if it must delete all the proof? Surely just keeping a record that verification occurred wouldn’t be acceptable to Texas—otherwise companies could simply create the record for each user and Texas would have no way of disproving it.

Second, whether or not entities retain identification information is entirely irrelevant. The chilling effect isn’t dependent on whether or not a user’s browsing history or personal information is ultimately revealed. It occurs because the user is asked for their identifying information in the first place. Few if any users are likely even to know about the data retention prohibition. All they will know is that they are being asked to hand over ID to access content that they might not want associated with their identity—and many will likely refrain as a result. Being de-anonymized to anyone, for any amount of time, is what causes the First Amendment harm.

Technology has changed, but humans and the First Amendment…not so much. Age verification remains a threat to user privacy and security, and to protected First Amendment activity.

Anti-Porn Disclaimers

HB 1181 also requires covered entities to display three conspicuous notices on their home page (and any advertising for their website):

TEXAS HEALTH AND HUMAN SERVICES WARNING: Pornography is potentially biologically addictive, is proven to harm human brain development, desensitizes brain reward circuits, increases conditioned responses, and weakens brain function.

TEXAS HEALTH AND HUMAN SERVICES WARNING: Exposure to this content is associated with low self-esteem and body image, eating disorders, impaired brain development, and other emotional and mental illnesses.

TEXAS HEALTH AND HUMAN SERVICES WARNING: Pornography increases the demand for prostitution, child exploitation, and child pornography.

It’s obvious what Texas is trying to do here. And it’s also obvious what Texas will argue: “The government often forces companies to place warnings on dangerous products, just look at cigarette packages. That’s what we’re doing here too!”

You can likely anticipate what I think about that, but it’s worth interrogating in some depth to see exactly why it’s so very wrong.

What Kind of Speech Regulation is This?

Obviously, HB 1181 compels speech. In First Amendment jurisprudence, compelled speech is generally anathema, and subject to strict scrutiny. But the government has more leeway to regulate (or compel) “commercial speech,” that is, non-misleading speech that “does no more than propose a commercial transaction” or “relate[s] solely to the economic interests of the speaker and its audience.”

At the outset, I am skeptical that this is a commercial speech regulation. True, it applies only to “commercial entities” (defined effectively as any legally recognized business entity), but speech by a business entity is not ipso facto commercial speech, nor does a profit motive automatically render speech “commercial.” Imagine, for example, that 30% of Twitter content was found to be pornographic. Twitter makes money through its Twitter Blue subscriptions and advertisements. But does that make Twitter as a whole, and every piece of content on it, “commercial speech?” Certainly not. See Riley v. National Federation of the Blind, 487 U.S. 781, 796 (1988) (when commercial speech is “inextricably intertwined with otherwise fully protected speech,” the relaxed standards for commercial speech are inapplicable).

And even as applied to commercial pornography websites in the traditional sense1 (presuming that in this application, courts would view the notice requirement as a commercial speech regulation), HB 1181 might be in trouble. In International Outdoor, Inc. v. City of Troy, the Sixth Circuit persuasively reasoned that even commercial speech regulations are subject to strict scrutiny when they are content based (as HB 1181 plainly is), particularly where they also regulate noncommercial speech (as HB 1181 plainly does). If strict scrutiny is the applicable constitutional standard, the law is certainly dead.

But let’s assume for the sake of argument that we are in Commercial Speech Land, because either way the notice requirement is unconstitutional.

Constitutional Standards for Compelled Commercial Speech

For a commercial speech regulation to be constitutional, it must directly advance a substantial government interest and be narrowly tailored so as not to be more extensive than necessary to further that interest—known as the Central Hudson test.

But there’s another wrinkle: certain compelled commercial disclosures are subjected to the lower constitutional standard articulated in Zauderer v. Office of Disciplinary Counsel. Under Zauderer, compelled disclosures of “purely factual and uncontroversial information” must only “reasonably relate” to a substantial government interest and not be unjustified or unduly burdensome. What type of government interest suffices has been a matter of controversy: Zauderer (and Supreme Court cases applying it) have, on their face, related to remedying or preventing consumer deception in advertising.2 But multiple appellate courts have held that the government interest need not be related to consumer deception.

Would HB 1181 Receive the More Permissive Zauderer Analysis?

Setting aside the question of government interest for just a moment, the HB 1181 notices are clearly not governed by the lower Zauderer standard because in no way are they “purely factual and uncontroversial.”

In 2015, the U.S. Court of Appeals for the D.C. Circuit struck down a regulation requiring (to simplify) labeling of “conflict minerals.” While the origin of minerals might be a factual matter, the court found that the “not conflict free” label was not “non-ideological” (i.e., uncontroversial): it conveyed “moral responsibility for the Congo war” and required sellers to “publicly condemn [themselves]” and tell consumers that their products are “ethically tainted.”

Dissenting, Judge Srinivasan would have read “uncontroversial” as relating to “factual”—that is, disclosures are uncontroversial if they disclose facts that are indisputably accurate. Even under Judge Srinivasan’s more permissive construction, the HB 1181 notices are not factual and uncontroversial. They are, quite simply, standard hysterical anti-porn lobby talking points—some rejected by science, and the rest hotly disputed by professionals and the scientific literature.

And then the Supreme Court decided National Institute of Family & Life Advocates v. Becerra (NIFLA), striking down a California regulation requiring family planning clinics to disseminate a government notice regarding state-provided family-planning services, including abortion—”anything but an ‘uncontroversial’ topic,” the Court noted. In a later case, the Ninth Circuit explained that the notices in NIFLA were not “uncontroversial” under Zauderer because they “took sides in a heated political controversy, forcing [clinics opposed to abortion] to convey a message fundamentally at odds with its mission.”

However you look at it, these notices are not “factual and uncontroversial.” They make claims that are by no means established facts (one might even call them opinions), put the government’s thumb on the scale in support of them, and force speakers to promote controversial hot-button views that condemn their own constitutionally protected speech. They are simply not the type of disclosures that Zauderer contemplates.

Do the Notices Satisfy the Central Hudson Test?

I’ll admit to hiding the ball a little in order to talk about Zauderer. Regardless of whether Zauderer or Central Hudson controls, the first step of the analysis would remain the same: does the government have a substantial interest?

It seems clear to me that the answer is “no,” so the notice requirement would fail scrutiny either way.

Texas may argue that its interest is “protecting the physical and psychological well-being of minors,” as the federal government asserted when defending the CDA and COPA. While the Supreme Court has held that interest to be compelling, I’m not sure Texas can plausibly claim it here. If the harm to minors comes from viewing porn, but the age verification requirement prevents them from seeing the porn while they are minors, is there a substantial government interest in telling them that the porn they can’t even access is “bad?” To my mind, it doesn’t adequately square. (Admittedly, this may be more of a question of whether the notices “directly advance” the government interest.)

The plain language of the notices evinces a much broader theme. To the extent that Texas is trying to protect minors, it seems that it is also trying to protect them from the “harms” of porn even once they are no longer minors—that is, to keep them from getting “hooked on porn” ever. In that sense, the notice requirement is aimed as much at adults as it is at minors. The message is clear: porn is harmful and bad—no matter what age you are—and you should abstain from consuming it.

Here’s where Texas will invariably analogize HB 1181 to mandated warning labels on cigarettes. “It’s constitutionally permissible to force companies to label dangerous products, and that’s all we’re doing,” Texas will say. But the government interest there is to reduce smoking rates—thereby protecting consumer and public health from a physical product that definitively causes serious and deadly physical disease.

HB 1181 is different in every respect, by a country mile. Distilled to its core, the government interest that Texas must be asserting is: generally reducing the consumption of protected expression disfavored by a government that considers it psychologically harmful to readers/viewers. HB 1181 seeks to protect citizens not from a product with physical effects,3 but rather, from ideas and how they make us think and feel.4 Can that be any government interest at all, let alone a substantial one?

It’s a startling proposition that would give government the power to shape the contours of public discourse in ways entirely at odds with First Amendment principles. Could the government invoke an interest in protecting the public from the psychological harms of hateful speech and demand that any commercial entity distributing it affix a warning label dissuading readers from consuming it? What about the damaging effects (including on health) of political polarization? Could the government rely on those harms and force “partisan media” to issue warnings about the dangers of their content? Must gun-related periodicals warn readers that “gun culture” leads to mass shootings at the government’s demand? Or can fashion magazines be forced to tell readers that looking at skinny people causes low self-esteem and eating disorders? You get the picture.

Consider New York’s “Hateful Conduct Law,” recently struck down by a federal district court in a challenge brought by Eugene Volokh and two social media platforms. That law requires any commercial operator of a service that allows users to share content to establish a mechanism for users to complain about “hateful conduct” and post a policy detailing how such reports will be addressed. (Notably, the court rejected New York’s assertion that the law only compelled commercial speech.) While the court ultimately accepted “reducing instances of hate-fueled mass shootings” as a compelling government interest (and then held the law not narrowly tailored), it explained in a footnote that “a state’s desire to reduce [constitutionally protected speech] from the public discourse cannot be a compelling government interest.”

And that is clearly the aim of the HB 1181 notices: to reduce porn consumption. To my mind, this is no different than the Supreme Court’s rejection in Matal v. Tam of a government interest in “preventing speech…that offend[s].” Offense, after all, is a psychological impact that can affect mental well-being. But the First Amendment demands that government stay out of the business of deciding whether protected speech is “good” or “bad” for us.

The wholly unestablished nature of the claims made in HB 1181’s notices also cuts against the sufficiency of Texas’s interest. In Brown v. Entertainment Merchants Association, California could not draw a direct link between violent video games and “harm to minors,” so it instead relied on “predictive judgments” based on “competing psychological studies” to establish a compelling government interest. But the Supreme Court demanded more than “ambiguous proof,” noting that the case California relied on for a lower burden “applied intermediate scrutiny to a content-neutral regulation.” (emphasis in original)

While (presuming again that this is in fact a commercial speech regulation) we may be in Intermediate Scrutiny Land, we are also in Unquestionably Content-Based Land—and I think that counts for something. In all respects, HB 1181’s notice requirement is a content-based regulation justified by the (state’s theorized) reaction of listeners. See Boos v. Barry, 485 U.S. 312, 321 (1988) (“[I]f the ordinance…was justified by the city’s desire to prevent the psychological damage it felt was associated with viewing adult movies, then analysis of the measure as a content-based statute would have been appropriate.”). While I am doubtful that Texas can ultimately assert any substantial interest here, at the very least any asserted interest must be solidly supported rather than moralistic cherry picking.

In sum, I do not see how any state interest in reducing the consumption (and thus ultimately proliferation) of entirely protected speech can itself be a legitimate one. By extension, I think that invalidates any government interest in protecting recipients of that speech from the psychological effects of that speech—the entire point of expression is to have some kind of impact. Speech can of course have harmful effects at times, and the government is free to use its own speech, on its own time, to encourage citizens to make healthy decisions. But it can’t force speakers to warn recipients that their speech ought not be listened to.


So why do state legislatures keep introducing and passing laws that are undercut by such clear lines of precedent? The “innocent” answer is that they simply do not care: once they’ve completed the part where they “do something,” they can get the media spots and do the chest-pounding and fundraising—whether the law is ultimately struck down is immaterial. The more sinister answer is that, believing that they have a sympathetic Supreme Court, they are actively manufacturing cases in the hopes that they can remake the First Amendment to their liking. Here’s hoping they fail.


1 In contrast, I think that a porn site that provides content (especially if user-uploaded) for free and relies on revenue from advertising is more akin to Twitter than it is to a pay-for-access site for commercial speech purposes.

2 For a good treatment of the Supreme Court’s Zauderer jurisprudence and analysis of its applicability to content moderation transparency laws, see Eric Goldman, Zauderer and Compelled Editorial Transparency: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4246090

3 Notably, some courts have expressed skepticism (without deciding) that a government could even assert “a substantial interest in discouraging consumers from purchasing a lawful product, even one that has been conclusively linked to adverse health consequences [i.e., cigarettes].”

4 Unlike cigarettes, the ideas and expression contained within books, films, music, etc (as opposed to the physical medium) are not considered “products” for products liability purposes, and courts have rejected invitations to hold otherwise on First Amendment grounds. See, e.g., Winter v. G.P. Putnam’s Sons, 938 F.2d 1033 (9th Cir. 1991); Gorran v. Atkins Nutritionals, Inc., 464 F. Supp. 2d 315 (S.D.N.Y. 2006).

Originally posted to Ari Cohn’s Substack.

Posted on Techdirt - 18 January 2023 @ 12:08pm

If You Believe In Free Speech, The GOP’s “Weaponization” Subcommittee Is Not Your Friend

“Politics,” the writer Auberon Waugh liked to say, “is for social and emotional misfits.” Its purpose is “to help them overcome these feelings of inferiority and compensate for their personal inadequacies in the pursuit of power.” You could accuse old Bron of painting with a rather broad brush, and you would be right. But he plainly understood the likes of Kevin McCarthy. As the Washington Post’s Ruth Marcus observed last week, two aspects of McCarthy’s bid to become Speaker of the House stand out. First, that he “seems to crave power for power’s sake, not for any higher purposes.” And second, that he “is willing to debase himself so completely to obtain it.”

Of the many concessions McCarthy made to his far-right flank to obtain the Speaker’s gavel, one of the most straightforward was to create a new Select Subcommittee on the Weaponization of the Federal Government. The desire for such an entity “percolat[ed] on the edges of the [party] conference and conservative media,” Politico reported last month, and the calls for it then quickly spread, “getting harder for the speaker hopeful to ignore.” But the hardliners were pushing at an open door: McCarthy had already been promising sweeping investigations of the Department of Justice and the FBI.

It’s amusing that the subcommittee is simply “on” weaponization, leaving onlookers the latitude to decide for themselves whether the body’s position is “pro” or “con.” The subcommittee will likely seek to disrupt the executive branch’s probes of Donald Trump’s interference in the 2020 election, role in the Capitol attack, and defiant mishandling of classified documents. It might also seek to hinder the government’s efforts to prosecute Jan. 6 rioters. In attempting to obstruct federal law enforcement, the House GOP would be engaging in its own forms of “weaponization.” It would be trying to “weaponize” its own authority—which, under our Constitution’s separation of powers, does not extend to meddling in ongoing criminal investigations. And it would be trying to “weaponize” the federal government by compelling it not to enforce the law. A better label might have been the “Select Subcommittee on Weaponizing the Federal Government Our Way.” Or, for brevity’s sake, perhaps “Partisan Hacks Against the Rule of Law.”

It is in this light that we must view another of the subcommittee’s main goals—getting “to the very bottom” (McCarthy’s words) of the federal government’s relationship with Big Tech. Last month Rep. Jim Jordan, the incoming chair of the House Judiciary Committee—and, now, of its “weaponization” subcommittee as well—accused the major tech firms of being “out to get conservatives.” He demanded that those firms preserve records of their “‘collusion’ with the Biden administration to censor conservatives on their platforms.” According to Axios, the subcommittee “will demand copies of White House emails, memos and other communications with Big Tech companies.”

There is nothing inherently wrong with setting up a congressional committee to investigate whether and how the government is influencing online speech and content moderation. After all, Congress has good reason to care about what the government itself is saying, especially if the government is using its own speech to violate the Free Speech Clause. Congress has a constitutional duty to oversee (though not intrude on) the executive branch’s faithful execution of the laws Congress has passed.

Lately, moreover, the executive branch has indeed displayed an unhealthy desire to control constitutionally protected expression. Government officials now routinely jawbone social media platforms over content moderation. There were Surgeon General Vivek Murthy’s guidelines on “health misinformation,” issued—the platforms may have noticed—amid a push by the Biden administration to expose platforms to litigation over “misinformation” by paring back their Section 230 protection. Biden’s then-Press Secretary Jen Psaki announced that the administration was flagging posts for platforms to remove. What’s worse, she declared that a ban from one social media platform should trigger a ban from all platforms. And then there was the notorious “Disinformation Governance Board”—a body whose name was dystopian, whose powers were ill-defined, whose rollout was ham-fisted, and whose brief existence unsettled all but the most sanguine proponents of government power. It can hardly be said that there’s nothing worth investigating.

The First Amendment bars the government from censoring speech it doesn’t like—even speech that might be called “misinformation.” The state may try to influence speech indirectly—it is allowed, within limits, to express its opinion about others’ speech—but that doesn’t mean doing so is a good idea. The government shouldn’t be telling social media platforms what content to allow, much as it shouldn’t be telling newspapers what stories to print.

Misguided though they may be, however, none of the government’s efforts—to this point—have violated the First Amendment. The government has not ordered platforms to remove or ban specific content. It has not issued threats that rise to the level of government coercion. And it has not co-opted the platforms in a manner that would turn them into state actors. If anything, the right’s ongoing lawsuits alleging otherwise have helped reveal a quite different problem: that the platforms are all too receptive to government input. But agreeing with the government does not make one’s actions attributable to the government.

The “Twitter Files”—which helped inspire, and will drive much of, the subcommittee’s investigation—change precisely none of this. Much misunderstood and even more misrepresented, the information released via Elon Musk’s surrogates actually undercuts the narrative that the federal government is dictating the platforms’ editorial decisions. 

We were promised evidence that the FBI and the federal government conspired with platforms to squash the Hunter Biden laptop story. Instead, we learned—as “Twitter Files” player Matt Taibbi himself put it—that “there’s no evidence … of any government involvement.” Messages to Twitter sent by the Biden campaign, we were told, amounted to a bona fide First Amendment violation. But a non-state actor lobbying a non-state actor does not a state action make. Such lobbying by political campaigns is common—and, in many instances, even proper. (Many of the tweets the Biden campaign flagged contained links to leaked nude photos of Hunter Biden. Even political candidates may try to defend their families’ privacy.)

Yet another “Twitter Files” document dump showed Twitter receiving payments from the FBI. This, we heard, definitively revealed the Grand Conspiracy to Censor Conservatives. Except that the payments were simply statutorily mandated reimbursements for expenses Twitter incurred replying to court-ordered requests for investigatory information.

So although there might well be issues regarding government jawboning worth investigating, you can be forgiven for doubting that the House GOP, proceeding through its “weaponization” subcommittee, is up to the task of seriously investigating them. Judging from past performance, the Republicans who control the body will use its hearings to emit great waves of impotent, performative, largely unintelligible sound. “The yells and animal noises” of parliamentary debates, Auberon Waugh wrote, have nothing to do with principles or policy. “They are cries of pain and anger, mingled with hatred and envy, at the spectacle of another group exercising the ‘power’ which the first group covets.” That will describe Republican-run Big Tech hearings to a tee.

The GOP is not fighting to stop so-called “censorship”; it’s fighting to stop so-called “censorship” performed by those they dislike. When Musk suspended some journalists from Twitter—on trumped up charges, no less—many on the right responded with whoops of glee. That Musk had just engaged in precisely the sort of conduct those pundits had long denounced was of no consequence. Indeed, when some on the left pointed out that the suspensions were arbitrary, impulsive, and imposed under false pretenses, their remarks launched a thousand conservative op-eds crowing about progressive hypocrisy. (There should be a long German word for shouting “Hypocrite!” at someone as you pass by him on the flip-flop road.)

Choking on outrage, the contemporary political right has descended into practicing “Who, whom?” politics of the crassest sort. House Republicans have no problem with “weaponizing” the government, so long as they’re the ones doing the “weaponizing.” This explains how they can rail against a government campaign to reduce COVID misinformation on social media while also arguing that Section 230, the law that gives social media platforms the legal breathing room to host sketchy content to begin with, should be scrapped.

If you believe for one moment that Kevin McCarthy, Jim Jordan, and their myrmidons truly support free speech on the Internet, we’ve got beachfront property in Kansas to sell you. There was no limit to Waugh’s disdain for such men. Until the public “accepts that the urge to power is a personality disorder in its own right,” he said, “like the urge to sexual congress with children or the taste for rubber underwear, there will always be a danger of circumstances arising which persuade ordinary people to start listening to politicians … and taking them seriously.” A bit over the top, to be sure—though not in this case.

Posted on Techdirt - 4 May 2022 @ 12:00pm

Musk, Twitter, Bluesky & The Future Of Content Moderation (Part II)

In Part I, we explained why the First Amendment doesn’t get Musk to where he seemingly wants to be: If Twitter were truly, legally the “town square” (i.e., public forum) he wants it to be, it couldn’t do certain things Musk wants (cracking down on spam, authenticating users, banning things equivalent to “shouting fire in a crowded theatre,” etc.). Twitter also couldn’t do the things it clearly needs to do to continue to attract the critical mass of users that make the site worth buying, let alone attract those—eight times as many Americans—who don’t use Twitter every day. 

So what, exactly, should Twitter do to become a more meaningful “de facto town square,” as Musk puts it?

What Objectives Should Guide Content Moderation?

Even existing alternative social media networks claim to offer the kind of neutrality that Musk contemplates—but have failed to deliver. In June 2020, John Matze, Parler’s founder and then its CEO, proudly declared the site to be “a community town square, an open town square, with no censorship,” adding, “if you can say it on the street of New York, you can say it on Parler.” Yet that same day, Matze also bragged of “banning trolls” from the left.

Likewise, GETTR’s CEO has bragged about tracking, catching, and deleting “left-of-center” content, with little clarity about what that might mean. Musk promises to avoid such hypocrisy.

Let’s take Musk at his word. The more interesting thing about GETTR, Parler and other alternative apps that claim to be “town squares” is just how much discretion they allow themselves to moderate content—and how much content moderation they do. 

Even in mid-2020, Parler reserved the right to “remove any content and terminate your access to the Services at any time and for any reason or no reason,” adding only a vague aspiration: “although Parler endeavors to allow all free speech that is lawful and does not infringe the legal rights of others.” Today, Parler forbids any user to “harass, abuse, insult, harm, defame, slander, disparage, intimidate, or discriminate based on gender, sexual orientation, religion, ethnicity, race, age, national origin, or disability.” Despite claiming that it “defends free speech,” GETTR bans racial slurs such as those by Miller as well as white nationalist codewords.

Why do these supposed free-speech-absolutist sites remove perfectly lawful content? Would you spend more or less time on a site that turned a blind eye to racial slurs? By the same token, would you spend more or less time on Twitter if the site stopped removing content denying the Holocaust, advocating new genocides, promoting violence, showing animals being tortured, encouraging teenagers to cut or even kill themselves, and so on? Would you want to be part of such a community? Would any reputable advertiser want to be associated with it? That platforms ostensibly starting with the same goal as Musk have reserved broad discretion to make these content moderation decisions underscores the difficulty in drawing these lines and balancing competing interests.

Musk may not care about alienating advertisers, but all social media platforms moderate some lawful content because it alienates potential users. Musk implicitly acknowledges this user-engagement imperative, at least when it comes to the other half of content moderation: deciding which content to recommend to users algorithmically—an essential feature of any social media site. (Few Twitter users activate the option to view their feeds in reverse-chronological order.) When TED’s Chris Anderson asked him about a tweet many people have flagged as “obnoxious,” Musk hedged: “obviously in a case where there’s perhaps a lot of controversy, that you would not want to necessarily promote that tweet.” Why? Because, presumably, it could alienate users. What is “obvious” is that the First Amendment would not allow the government to disfavor content merely because it is “controversial” or “obnoxious.”

Today, Twitter lets you block and mute other users. Some claim user empowerment should be enough to address users’ concerns—or that user empowerment just needs to work better. A former Twitter employee tells the Washington Post that Twitter has considered an “algorithm marketplace” in which users can choose different ways to view their feeds. Such algorithms could indeed make user-controlled filtering easier and more scalable. 
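To make the idea concrete, here is a minimal sketch of what such an “algorithm marketplace” might look like; the function names, scoring rules, and toxicity flag are all hypothetical, not anything Twitter has actually built. Each “algorithm” is just a different way of ordering (and perhaps filtering) the same underlying pool of tweets.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Tweet:
    author: str
    text: str
    likes: int
    age_hours: float
    flagged_toxic: bool  # hypothetical output of some toxicity classifier

# An "algorithm" is just a function that orders the same underlying pool of tweets.
RankingAlgorithm = Callable[[List[Tweet]], List[Tweet]]

def reverse_chronological(tweets: List[Tweet]) -> List[Tweet]:
    # The feed few users turn on: newest first.
    return sorted(tweets, key=lambda t: t.age_hours)

def engagement_ranked(tweets: List[Tweet]) -> List[Tweet]:
    # Recent tweets with lots of likes float to the top.
    return sorted(tweets, key=lambda t: t.likes / (1 + t.age_hours), reverse=True)

def calm_mode(tweets: List[Tweet]) -> List[Tweet]:
    # Same ranking, but anything flagged as toxic is filtered out of *this* view only.
    return engagement_ranked([t for t in tweets if not t.flagged_toxic])

# The "marketplace": users pick which lens to view the same content through.
MARKETPLACE: Dict[str, RankingAlgorithm] = {
    "latest": reverse_chronological,
    "popular": engagement_ranked,
    "calm": calm_mode,
}

def build_timeline(tweets: List[Tweet], choice: str) -> List[Tweet]:
    return MARKETPLACE[choice](tweets)
```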

But such controls offer only “out of sight, out of mind” comfort. That won’t be enough if a harasser hounds your employer, colleagues, family, or friends—or organizes others, or creates new accounts, to harass you. Even sophisticated filtering won’t change the reality of what content is available on Twitter.

And herein lies the critical point: advertisers don’t want their content to be associated with repugnant content even if their ads don’t appear next to that content. Likewise, most users care what kind of content a site allows even if they don’t see it. Remember, by default, everything said on Twitter is public—unlike the phone network. Few, if any, would associate the phone company with what’s said in private telephone communications. But every tweet that isn’t posted to the rare private account can be seen by anyone. Reporters embed tweets in news stories. Broadcasters include screenshots in the evening news. If Twitter allows odious content, most Twitter users will see some of that one way or another—and they’ll hold Twitter responsible for deciding to allow it.

If you want to find such lawful but awful content, you can find it online somewhere. But is that enough? Should you be able to find it on Twitter, too? These are undoubtedly difficult questions on which many disagree; but they are unavoidable.

What, Exactly, Is the Virtual Town Square?

The idea of a virtual town square isn’t new, but what, precisely, that means has always been fuzzy, and lofty talk in a recent Supreme Court ruling greatly exacerbated that confusion. 

“Through the use of chat rooms,” proclaimed the Supreme Court in Reno v. ACLU (1997), “any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of Web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer.” The Court wasn’t saying that digital media were public fora without First Amendment rights. Rather, it said the opposite: digital publishers have the same First Amendment rights as traditional publishers. Thus, the Court struck down Congress’s first attempt to regulate online “indecency” to protect children, rejecting analogies to broadcasting, which rested on government licensing of a “‘scarce’ expressive commodity.” Unlike broadcasting, the Internet empowers anyone to speak; it just doesn’t guarantee them an audience.

In Packingham v. North Carolina (2017), citing Reno’s “town crier” language, the Court waxed even more lyrical: “By prohibiting sex offenders from using [social media], North Carolina with one broad stroke bars access to what for many are the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge.” This rhetorical flourish launched a thousand conservative op-eds—all claiming that social media were legally public fora like town squares. 

Of course, Packingham doesn’t address that question; it merely said governments can’t deny Internet access to those who have completed their sentences. Manhattan Community Access Corp. v. Halleck (2019) essentially answers the question, albeit in the slightly different context of public access cable channels: “merely hosting speech by others” doesn’t “transform private entities into” public fora. 

The question facing Musk now is harder: what part, exactly, of the Internet should be treated as if it were a public forum—where anyone can say anything “within the bounds of the law”? The easiest way to understand the debate is the Open Systems Interconnection (OSI) model, which has guided the understanding of the Internet since the 1970s.
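In rough strokes, the model looks like this (a simplified sketch of the standard seven-layer stack; the annotations about who operates at which layer track the discussion below, not the formal standard):

```python
# A simplified rendering of the seven OSI layers, annotated for this debate.
OSI_LAYERS = {
    1: ("Physical",     "cables and radio (the ISP's domain)"),
    2: ("Data Link",    "Ethernet, Wi-Fi (the ISP's domain)"),
    3: ("Network",      "IP routing (the ISP's domain; net neutrality rules applied here)"),
    4: ("Transport",    "TCP/UDP"),
    5: ("Session",      "managing connections"),
    6: ("Presentation", "encoding and encryption (e.g., TLS)"),
    7: ("Application",  "the services users actually see: Twitter, email, the web"),
}

for number, (name, note) in sorted(OSI_LAYERS.items()):
    print(f"Layer {number}: {name:<12} {note}")
```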

Long before “net neutrality” was a policy buzzword, it described the longstanding operational state of the Internet: Internet service (broadband) providers won’t block, throttle or discriminate against lawful Internet content. The sky didn’t fall when the Republican FCC repealed net neutrality rules in 2018. Indeed, nothing really changed: You can still send or receive lawful content exactly as before. ISPs promise to deliver connectivity to all lawful content. The Federal Trade Commission enforces those promises, as do state attorneys general. And, in upholding the FCC’s 2015 net neutrality rules over then-Judge Brett Kavanaugh’s arguments that they violated the First Amendment, the D.C. Circuit noted that the rules applied only to providers that “sell retail customers the ability to go anywhere (lawful) on the Internet.” The rules simply didn’t apply to “an ISP making sufficiently clear to potential customers that it provides a filtered service involving the ISP’s exercise of ‘editorial intervention.’”

In essence, Musk is talking about applying something like net neutrality principles, developed to govern the uncurated service ISPs offer at layers 1-3, to Twitter, which operates at layer 7—but with a major difference: Twitter can monitor all content, which ISPs can’t do. This means embroiling Twitter in trying to decide what content is lawful in a far, far deeper way than any ISP has ever attempted.

Implementing Twitter’s existing plans to offer users an “algorithm marketplace” would essentially mean creating a new layer of user control on top of Twitter. But Twitter has also been working on a different idea: creating a layer below Twitter, interconnecting all the Internet’s “soapboxes” into one, giant virtual town square while still preserving Twitter as a community within that square that most people feel comfortable participating in.

“Bluesky”: Decentralization While Preserving Twitter’s Brand

Jack Dorsey, former Twitter CEO, has been talking about “decentralizing” social media for over three years—leading some reporters to conclude that Dorsey and Musk “share similar views … promoting more free speech online.” In fact, their visions for Twitter seem to be very different: unlike Musk, Dorsey saw Twitter as a community that, like any community, requires curation.

In late 2019, Dorsey announced that Twitter would fund Bluesky, an independent project intended “to develop an open and decentralized standard for social media.” Bluesky “isn’t going to happen overnight,” Dorsey warned in 2019. “It will take many years to develop a sound, scalable, and usable decentralized standard for social media.” The project’s latest update detailed the many challenges still facing the effort, but also real progress.

Twitter has a strong financial incentive to shake up social media: Bluesky would “allow us to access and contribute to a much larger corpus of public conversation.” That’s lofty talk for an obvious business imperative. Recall Metcalfe’s Law: a network’s impact is proportional to the square of the number of nodes in the network. Twitter (330 million active users worldwide) is a fraction as large as its “Big Tech” rivals: Facebook (2.4 billion), Instagram (1 billion), YouTube (1.9 billion) and TikTok. So it’s not surprising that Twitter’s market cap is a much smaller fraction of theirs—just 1/16 that of Facebook. Adopting Bluesky should dramatically increase the value of Twitter and smaller companies like Reddit (330 million users) and LinkedIn (560 million users) because Bluesky would allow users of each participating site to interact easily with content posted on other participating sites. Each site would be more an application or a “client” than a “platform”—just as Gmail and Outlook both use the same email protocols.

Dorsey also framed Bluesky as a way to address concerns about content moderation. Days after the January 6 insurrection, Dorsey defended Trump’s suspension from Twitter yet noted concerns about content moderation.

Dorsey acknowledged the need for more “transparency in our moderation operations,” but pointed to Bluesky as a more fundamental, structural solution.

Adopting Bluesky won’t change how each company does its own content moderation, but it would make those decisions much less consequential. Twitter could moderate content on Twitter, but not on the “public conversation layer.” No central authority could control that, just as with email protocols and Bitcoin. Twitter and other participating social networks would no longer be “platforms” for speech so much as applications (or “clients”) for viewing the public conversation layer,  the universal “corpus” of social content.
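To make that architecture concrete, here is a minimal, hypothetical sketch; the class names, the “labels” field, and the example policies are ours, and the real protocol Bluesky has been developing is far more involved. The key idea is that every post lives in a shared public layer, while each app is just a filtered view of that layer.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Set

@dataclass
class Post:
    author: str
    text: str
    labels: Set[str] = field(default_factory=set)  # e.g., {"harassment"}

# The shared "public conversation layer": one corpus, no central gatekeeper.
PUBLIC_LAYER: List[Post] = []

def publish(post: Post) -> None:
    """Any participating app can append to the shared corpus."""
    PUBLIC_LAYER.append(post)

@dataclass
class App:
    """A client (Twitter, Gab, Parler, ...) is just a moderated view of the layer."""
    name: str
    allows: Callable[[Post], bool]

    def timeline(self) -> List[Post]:
        return [p for p in PUBLIC_LAYER if self.allows(p)]

# Hypothetical moderation policies: same corpus, different filters.
curated_app = App("curated", allows=lambda p: "harassment" not in p.labels)
anything_goes_app = App("anything_goes", allows=lambda p: True)

publish(Post("alice@example.social", "hello world"))
publish(Post("troll@example.social", "targeted abuse", labels={"harassment"}))

assert len(anything_goes_app.timeline()) == 2  # the unfiltered town square
assert len(curated_app.timeline()) == 1        # a curated community atop the same layer
```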

Four years ago, Twitter banned Alex Jones for repeatedly violating rules against harassment. The conspiracy theorist par excellence moved to Gab, an alternative social network launched in 2017 that claims 15 million monthly visitors (an unverified number). On Gab, Jones now has only a quarter as many followers as he once had on Twitter. And because the site is much smaller overall, he gets much less engagement and attention than he once did. Metcalfe’s Law means fewer people talk about him.

Bluesky won’t get Alex Jones or his posts back on Twitter or other mainstream social media sites, but it might ensure that his content is available on the public conversation layer, where users of any app that doesn’t block him can see it. Thus, Jones could use his Gab account to seamlessly reach audiences on Parler, GETTR, Truth Social, or any other site using Bluesky that doesn’t ban him. Each of these sites, in turn, would have a strong incentive to adopt Bluesky because the protocol would make them more viable competitors to mainstream social media. Bluesky would turn Metcalfe’s Law to their advantage: no longer separate, tiny town squares, these sites would be ways of experiencing the same town square—only with a different set of filters.

But Metcalfe’s Law cuts both ways: even if Twitter and other social media sites implemented Bluesky, so long as Twitter continues to moderate the likes of Alex Jones, the portion of the Bluesky-enabled “town square” that Jones can actually reach will remain limited. Twitter would remain a curated community, a filter (or set of filters) for experiencing the “public conversation layer.” When first announcing Bluesky, Dorsey said the effort would be good for Twitter not only for allowing the company “to access and contribute to a much larger corpus of public conversation” but also because Twitter could “focus our efforts on building open recommendation algorithms which promote healthy conversation.” With user-generated content becoming more interchangeable across services—essentially a commodity—Twitter and other social media sites would compete on user experience.

Given this divergence in visions, it shouldn’t be surprising that Musk has never mentioned Bluesky. If he merely wanted to make Bluesky happen faster, he could pour money into the effort—an independent, open source project—without buying Twitter. He could help implement proposals to run the effort as a decentralized autonomous organization (DAO) to ensure its long-term independence from any effort to moderate content. Instead, Musk is focused on cutting back Twitter’s moderation of content—except where he wants more moderation. 

What Does Political Neutrality Really Mean?

Much of the popular debate over content moderation revolves around the perception that moderation practices are biased against certain political identities, beliefs, or viewpoints. Jack Dorsey responded to such concerns in a 2018 congressional hearing, telling lawmakers: “We don’t consider political viewpoints—period. Impartiality is our guiding principle.” Dorsey was invoking the First Amendment, which bars discrimination based on content, speakers, or viewpoints. Musk has said something that sounds similar, but isn’t quite the same.

The First Amendment doesn’t require neutrality as to outcomes. If user behavior varies across the political spectrum, neutral enforcement of any neutral rule will produce what might look like politically “biased” results.
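A toy simulation makes the point; the group names, sizes, and violation rates below are invented for illustration, not drawn from any platform’s data. Enforce one neutral rule against two groups whose members violate it at different rates, and the resulting suspension counts will look lopsided even though the rule is blind to politics.

```python
import random

random.seed(0)

def simulate(group_sizes, violation_rates, trials=100_000):
    """Apply the same neutral rule (suspend anyone who violates it) to every group."""
    suspensions = {group: 0 for group in group_sizes}
    groups = list(group_sizes)
    weights = list(group_sizes.values())
    for _ in range(trials):
        group = random.choices(groups, weights=weights)[0]  # pick a random user
        if random.random() < violation_rates[group]:        # neutral rule, neutrally applied
            suspensions[group] += 1
    return suspensions

# Hypothetical inputs: two equal-sized groups, but group B breaks the rule 4x as often.
print(simulate(group_sizes={"A": 50, "B": 50},
               violation_rates={"A": 0.01, "B": 0.04}))
# Roughly {'A': 500, 'B': 2000}: a 4:1 suspension ratio produced by a perfectly neutral rule.
```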

Take, for example, a study routinely invoked by conservatives that purportedly shows Twitter’s political bias in the 2016 election. Richard Hanania, a political scientist at Columbia University, concluded that Twitter suspended Trump supporters more often than Clinton supporters at a ratio of 22:1. Hanania postulated that this meant Trump supporters would have to be at least four times as likely to violate neutrally applied rules to rule out Twitter’s political bias—and dismissed such a possibility as implausible. But Hanania’s study was based on a tiny sample of only reported (i.e., newsworthy) suspensions—just a small percentage of overall content moderation. And when one bothers to actually look at Hanania’s data—something none of the many conservatives who have since invoked his study seem to have done—one finds exactly those you’d expect to be several times more likely to violate neutrally-applied rules: the American Nazi Party, leading white supremacists including David Duke, Richard Spencer, Jared Taylor, Alex Jones, Charlottesville “Unite the Right” organizer James Allsup, and various Proud Boys. 

Was Twitter non-neutral because it didn’t ban an equal number of “far left” and “far right” users? Or because the “right” was incensed by endless reporting in leading outlets like The Wall Street Journal of a study purporting to show that “conservatives” were being disproportionately “censored”?

There’s no way to assess Musk’s outcome-based conception of neutrality without knowing a lot more about objectionable content on the site. We don’t know how many accounts were reported, for what reasons, and what happened to those complaints. There is no clear denominator that allows for meaningful measurements—leaving only self-serving speculation about how content moderation is or is not biased. This is one problem Musk can do something about.

Greater Transparency Would Help, But…

After telling Anderson “I’m not saying that I have all the answers here,” Musk fell back on something simpler than line-drawing in content moderation: increased transparency. If Twitter should “make any changes to people’s tweets, if they’re emphasized or de-emphasized, that action should be made apparent so anyone can see that action’s been taken, so there’s no behind the scenes manipulation, either algorithmically or manually.” Such tweet-by-tweet reporting sounds appealing in principle, but it’s hard to know what it will mean in practice. What kind of transparency will users actually find useful? After all, all tweets are “emphasized or de-emphasized” to some degree; that is simply what Twitter’s recommendation algorithm does.

Greater transparency, implemented well, could indeed increase trust in Twitter’s impartiality. But ultimately, only large-scale statistical analysis can resolve claims of systemic bias. Twitter could certainly help to facilitate such research by providing data—and perhaps funding—to bona fide researchers.

More problematic is Musk’s suggestion that Twitter’s content moderation algorithm should be “open source” so anyone could see it. There is an obvious reason why such algorithms aren’t open source: revealing precisely how a site decides what content to recommend would make it easy to manipulate the algorithm. This is especially true for those most determined to abuse the site: the spambots on whom Musk has declared war. Making Twitter’s content moderation less opaque will have to be done carefully, lest it foster the abuses that Musk recognizes as making Twitter a less valuable place for conversation.
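A trivial, hypothetical example shows why; the formula and weights below are invented, not Twitter’s. Once the scoring function is public, a spammer no longer has to guess what the algorithm rewards; it can simply optimize against it.

```python
# A hypothetical, published ranking formula; the weights are invented for illustration.
WEIGHTS = {"likes": 1.0, "replies": 2.0, "trending_hashtags": 5.0}

def score(tweet: dict) -> float:
    return sum(weight * tweet.get(signal, 0) for signal, weight in WEIGHTS.items())

organic_post = {"likes": 40, "replies": 5}
# Knowing the exact weights, a spambot simply stuffs whatever the formula rewards most.
gamed_post = {"likes": 0, "replies": 0, "trending_hashtags": 12}

print(score(organic_post))  # 50.0
print(score(gamed_post))    # 60.0: the gamed post outranks the organic one
```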

Public Officials Shouldn’t Be Able to Block Users

Making Twitter more like a public forum is, in short, vastly more complicated than Musk suggests. But there is one easy thing Twitter could do to, quite literally, enforce the First Amendment. Courts have repeatedly found that government officials can violate the First Amendment by blocking commenters on their official accounts. After then-President Trump blocked several users from replying to his tweets, the users sued. The Second Circuit held that Trump violated the First Amendment by blocking users because Trump’s Twitter account was, with respect to what he could do, a public forum. The Supreme Court vacated the Second Circuit’s decision—Trump left office, so the case was moot—but Justice Thomas indicated that some aspects of government officials’ accounts seem like constitutionally protected spaces. Unless a user’s conduct constitutes harassment, government accounts likely can’t block them without violating the First Amendment. Whatever courts ultimately decide, Twitter could easily implement this principle.

Conclusion

Like Musk, we definitely “don’t have all the answers here.” In introducing what we know as the “marketplace of ideas” to First Amendment doctrine, Justice Holmes’s famous dissent in Abrams v. United States (1919) said this of the First Amendment: “It is an experiment, as all life is an experiment.” The same could be said of the Internet, Twitter, and content moderation. 

The First Amendment may help guide Musk’s experimentation with content moderation, but it simply isn’t the precise roadmap he imagines—at least, not for making Twitter the “town square” in which everyone wants to participate actively. Bluesky offers the best of both worlds: a much more meaningful town square where anyone can say anything, but also a community that continues to thrive.

Berin Szóka (@BerinSzoka) is President of TechFreedom. Ari Cohn (@AriCohn) is Free Speech Counsel at TechFreedom. Both are lawyers focused on the First Amendment’s application to the Internet.

Posted on Techdirt - 4 May 2022 @ 09:30am

Musk, Twitter, Why The First Amendment Can’t Resolve Content Moderation (Part I)

“Twitter has become the de facto town square,” proclaims Elon Musk. “So, it’s really important that people have both the reality and the perception that they’re able to speak freely within the bounds of the law.” When pressed by TED’s Chris Anderson, he hedged: “I’m not saying that I have all the answers here.” Now, after striking a deal to buy Twitter, his position is less clear: “I am against censorship that goes far beyond the law.” Does he mean either position literally?

Musk wants Twitter to stop making contentious decisions about speech. “[G]oing beyond the law is contrary to the will of the people,” he declares. Just following the First Amendment, he imagines, is what the people want. Is it, though? The First Amendment is far, far more absolutist than Musk realizes. 

Remember the neo-Nazis with burning torches screaming “the Jews will not replace us!”? The First Amendment required Charlottesville to allow that demonstration. Some of the marchers were later arrested and prosecuted for committing acts of violence. One even killed a counter-protester with his car. The First Amendment permits the government to punish violent conduct but—contrary to what Musk believes—almost none of the speech associated with it.

The Constitution protects “freedom for the thought that we hate,” as Justice Oliver Wendell Holmes declared in a 1929 dissent that has become the bedrock of modern First Amendment jurisprudence. In most of the places where we speak, the First Amendment does not set limits on what speech the host, platform, proprietor, station, or publication may block or reject. The exceptions are few: actual town squares, company-owned towns, and the like—but not social media, as every court to decide the issue has held.

Musk wants to treat Twitter as if it were legally a public forum. A laudable impulse—and of course Musk has every legal right to do that. But does he really want to? His own statements indicate not. And on a practical level, it would not make much sense. Allowing anyone to say anything lawful, or even almost anything lawful, would make Twitter a less useful, less vibrant virtual town square than it is today. It might even set the site on a downward spiral from which it never recovers.

Can Musk have it both ways? Can Twitter help ensure that everyone has a soapbox, however appalling their speech, without alienating both users and the advertisers who sustain the site?  Twitter is already working on a way to do just that—by funding Bluesky—but Musk doesn’t seem interested. Nor does he seem interested in other technical and institutional improvements Twitter could make to address concerns about arbitrary content moderation. None of these reforms would achieve what seems to be Musk’s real goal: politically neutral outcomes. We’ll discuss all this in Part II.

How Much Might Twitter’s Business Model Change?

A decade ago, a Twitter executive famously described the company as “the free speech wing of the free speech party.” Musk may imagine returning to some purer, freer version of Twitter when he says “I don’t care about the economics at all.” But in fact, increasing Twitter’s value as a “town square” will require Twitter to continue striking a careful balance between what individual users can say and maintaining an environment that many people want to use regularly.

User Growth. A traditional public forum (like Lee Park in Charlottesville) is indifferent to whether people choose to use it. Its function is simply to provide a space for people to speak. But if Musk didn’t care how many people used Twitter, he’d buy an existing site like Parler or build a new one. He values Twitter for the same reason any network is valuable: network effects. Digital markets have always been ruled by Metcalfe’s Law: the impact of any network is proportional to the square of the number of nodes in the network.

No, not all “nodes” are equal. Twitter is especially popular among journalists, politicians and certain influencers. Yet the site has only 39.6 million active daily U.S. users. That may make Twitter something like ten times larger than Parler, but it’s only one-seventh the size of Facebook—and only the world’s fifteenth-largest social network. To some in the “very online” set, Twitter may seem like everything, but 240 million Americans age 13+ don’t use Twitter every day. Quadrupling Twitter’s user base would make the site still only a little more than half as large as Facebook, but Metcalfe’s law suggests that would make Twitter roughly sixteen times more impactful than it is today. 
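The arithmetic is straightforward (using the approximate daily-user figure cited above, and treating Metcalfe’s Law as the rough heuristic it is):

```python
def metcalfe_impact(users: float) -> float:
    """Metcalfe's Law: a network's impact grows roughly with the square of its user count."""
    return users ** 2

twitter_daily_us = 39.6e6  # approximate daily active U.S. users cited above
quadrupled = 4 * twitter_daily_us

print(metcalfe_impact(quadrupled) / metcalfe_impact(twitter_daily_us))  # 16.0
```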

Of course, trying to maximize user growth is exactly what Twitter has been doing since 2006. It’s a much harder challenge than for Facebook or other sites premised on existing connections. Getting more people engaged on Twitter requires making them comfortable with content from people they don’t know offline. Twitter moderates harmful content primarily to cultivate a community where the timid can express themselves, where moms and grandpas feel comfortable, too. Very few Americans want to be anywhere near anything like the Charlottesville rally—whether offline or online.

User Engagement. Twitter’s critics allege the site highlights the most polarizing, sensationalist content because it drives engagement on the site. It’s certainly possible that a company less focused on its bottom line might change its algorithms to focus on more boring content. Whether that would make the site more or less useful as a town square is the kind of subjective value judgment that would be difficult to justify under the First Amendment if the government attempted to legislate it.

But maximizing Twitter’s “town squareness” means more than maximizing “time on site”—the gold standard for most sites. Musk will need to account for users’ willingness to actually engage in dialogue on the site. 

https://twitter.com/ARossP/status/1519062065490673670

Short of leaving Twitter altogether, overwhelmed and disgusted users may turn off notifications for “mentions” of them, or limit who can reply to their tweets. As Aaron Ross Powell notes, such a response “effectively turns Twitter from an open conversation to a set of private group chats the public can eavesdrop on.” It might be enough, if Musk truly doesn’t care about the economics, for Twitter to be a place where anything lawful goes and users who don’t like it can go elsewhere. But the realities of running a business are obviously different from those of traditional, government-owned public fora. If Musk wants to keep or grow Twitter’s user base, and maintain high engagement levels, there are a plethora of considerations he’ll need to account for.

Revenue. Twitter makes money by making users comfortable with using the site—and advertisers comfortable being associated with what users say. This is much like the traditional model of any newspaper. No reputable company would buy ads in a newspaper willing to publish everything lawful. These risks are much, much greater online. Newspapers carefully screen both writers before they’re hired and content before it’s published. Digital publishers generally can’t do likewise without ruining the user experience. Instead, users help a mixture of algorithms and human content moderators flag content potentially toxic to users and advertisers. 

Even without going as far as Musk says he wants to, alternative “free speech” platforms like Gab and Parler have failed to attract any mainstream advertisers. By taking Twitter private, Musk could relieve pressure to maximize quarterly earnings. He might be willing to lose money but the lenders financing roughly half the deal definitely aren’t. The interest payments on their loans could exceed Twitter’s 2021 earnings before interest, taxes, depreciation, and amortization. How will Twitter support itself? 

Protected Speech That Musk Already Wants To Moderate

As Musk’s analysts examine whether the purchase is really worth doing, the key question they’ll face is just what it would mean to cut back on content moderation. Ultimately, Musk will find that the First Amendment just doesn’t offer the roadmap he thinks it does. Indeed, he’s already implicitly conceded that by saying he wants to moderate certain kinds of content in ways the First Amendment wouldn’t allow. 

Spam. “If our twitter bid succeeds,” declared Musk in announcing his takeover plans, “we will defeat the spam bots or die trying!” The First Amendment, if he were using it as a guide for moderation, would largely thwart him.

Far from banning spam, as Musk proposes, the 2003 CAN-SPAM Act merely requires email senders to, most notably, include unsubscribe options, honor unsubscribe requests, and accurately label both subject and sender. Moreover, the law defines spam narrowly: “the commercial advertisement or promotion of a commercial product or service.” Why such a narrow approach? 

Even unsolicited commercial messages are protected by the First Amendment so long as they’re truthful. Because truthful commercial speech receives only “intermediate scrutiny,” it’s easier for the government to justify regulating it. Thus, courts have also protected the constitutional right of public universities to block commercial solicitations. 

But, as courts have noted, “the more general meaning” of “spam” “does not (1) imply anything about the veracity of the information contained in the email, (2) require that the entity sending it be properly identified or authenticated, or (3) require that the email, even if true, be commercial in character.” Check any spam folder and you’ll find plenty of messages that don’t obviously qualify as commercial speech, which the Supreme Court has defined as speech which does “no more than propose a commercial transaction.” 

Some emails in your spam folder come from non-profits, political organizations, or other groups. Such non-commercial speech is fully protected by the First Amendment. Some messages you signed up for may inadvertently wind up in your spam filter; plaintiffs regularly sue when their emails get flagged as spam. When it’s private companies like ISPs and email providers making such judgments, the case is easy: the First Amendment broadly protects their exercise of editorial judgment. Challenges to public universities’ email filters have been brought by commercial spammers, so the courts have dodged deciding whether email servers constituted public fora. These courts have implied, however, that if such taxpayer-funded email servers were public fora, email filtering of non-commercial speech would have to be content- and viewpoint-neutral, which may be impossible.

Anonymity. After declaring his intention to “defeat the spam bots,” Musk added a second objective of his plan for Twitter: “And authenticate all real humans.” After an outpouring of concern, Musk qualified his position.

Whatever “balance” Musk has in mind, the First Amendment doesn’t tell him how to strike it. Authentication might seem like a content- and viewpoint-neutral way to fight tweet-spam, but it implicates a well-established First Amendment right to anonymous and pseudonymous speech.

Fake accounts plague most social media sites, but they’re a bigger problem for Twitter since, unlike Facebook, it’s not built around existing offline connections and Twitter doesn’t even try to require users to use their real names. A 2021 study estimated that “between 9% and 15% of active Twitter accounts are bots” controlled by software rather than individual humans. Bots can have a hugely disproportionate impact online. They’re more active than humans and can coordinate their behavior, as that study noted, to “manufacture fake grassroots political support, promote terrorist propaganda and recruitment, manipulate the stock market, and disseminate rumors and conspiracy theories.” Given Musk’s concerns about “cancel culture,” he should recognize online harassment, especially harassment targeting employers and intimate personal connections, as a way that lawful speech can be wielded against lawful speech.

When Musk talks about “authenticating” humans, it’s not clear what he means. Clearly, “authentication” means more than simply requiring captchas to make it harder for machines to create Twitter accounts. Those have been shown to be defeatable by spambots. Surely, he doesn’t mean making real names publicly visible, as on Facebook. After all, pseudonymous publications have always been a part of American political discourse. Presumably, Musk means Twitter would, instead of merely requiring an email address, somehow verify and log the real identity behind each account. This isn’t really a “middle ground”: pseudonyms alone won’t protect vulnerable users from governments, Twitter employees, or anyone else who might be able to access Twitter’s logs. However such logs are protected, the mere fact of collecting such information would necessarily chill speech by those who fear being persecuted for what they say. Such authentication would clearly be unconstitutional if a government were to do it.

“Anonymity is a shield from the tyranny of the majority,” ruled the Supreme Court in McIntyre v. Ohio Elections Comm’n (1995). “It thus exemplifies the purpose behind the Bill of Rights and of the First Amendment in particular: to protect unpopular individuals from retaliation . . . at the hand of an intolerant society.” As one lower court put it, “the free exchange of ideas on the Internet is driven in large part by the ability of Internet users to communicate anonymously.” 

We know how these principles apply to the Internet because Congress has already tried to require websites to “authenticate” users. The Child Online Protection Act (COPA) of 1998 required websites to age-verify users before they could access material that could be “harmful to minors.” In practice, this meant providing a credit card, which supposedly proved the user was likely an adult. Courts blocked the law and, after a decade of litigation, the U.S. Court of Appeals for the Eighth Circuit finally struck it down in 2008. The court held that “many users who are not willing to access information non-anonymously will be deterred from accessing the desired information.” The Supreme Court let that decision stand. The United Kingdom now plans to implement its own version of COPA, but First Amendment scholars broadly agree: age verification and user authentication are constitutional non-starters in the United States.

What kind of “balance” might the First Amendment allow Twitter to strike? Clearly, requiring all users to identify themselves wouldn’t pass muster. But suppose Twitter required authentication only for those users who exhibit spambot-like behavior—say, coordinating tweets with other accounts that behave like spambots. This would be different from COPA, but would it be constitutional? Probably not. Courts have explicitly recognized a right to send non-commercial spam (unsolicited messages), for example: “were the Federalist Papers just being published today via e-mail,” warned the Virginia Supreme Court in striking down a Virginia anti-spam law, “that transmission by Publius would violate the statute.”

Incitement. In his TED interview, Musk readily agreed with Anderson that “crying fire in a movie theater” “would be a crime.” No metaphor has done more to sow confusion about the First Amendment. It comes from the Supreme Court’s 1919 Schenck decision, which upheld the conviction of the general secretary of the Socialist Party of America for distributing pamphlets criticizing the military draft. Advocating obstructing military recruiting, held the Court, constituted a “clear and present danger.” Justice Oliver Wendell Holmes mentioned “falsely shouting fire in a theatre” as a rhetorical flourish to drive the point home.

But Holmes revised his position just months later when he dissented in a similar case, Abrams v. United States. “[T]he best test of truth,” he wrote, “is the power of the thought to get itself accepted in the competition of the market.” That concept guides First Amendment decisions to this day—not Schenck’s vivid metaphor. Musk wants the open marketplace of ideas Holmes lauded in Abrams—yet also, somehow, Schenck’s much lower standard.

In Brandenburg v. Ohio (1969), the Court finally abandoned Schenck’s approach: the First Amendment does not “permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” Thus, a Klansman’s openly racist speech and calls for a march on Washington were protected by the First Amendment. The Brandenburg standard has proven almost impossible to satisfy when speakers are separated from their listeners in both space and time. Even the Unabomber Manifesto wouldn’t qualify—which is why The New York Times and The Washington Post faced no legal liability when they agreed to publish the essay back in 1995 (to help law enforcement stop the serial mail-bomber).

Demands that Twitter and other social media remove “harmful” speech—such as COVID misinformation—frequently invoke Schenck. Indeed, while many expect Musk will reinstate Trump on Twitter, his embrace of Schenck suggests the opposite: Trump could easily have been convicted of incitement under Schenck’s “clear and present danger” standard.

Self-Harm. Musk’s confusion over incitement may also extend to its close cousin: speech encouraging, or about, self-harm. Like incitement, “speech integral to criminal conduct” isn’t constitutionally protected, but, also like incitement, courts have defined that term so narrowly that the vast majority of content that Twitter currently moderates under its suicide and self-harm policy is protected by the First Amendment.

William Francis Melchert-Dinkel, a veteran nurse with a suicide fetish, claimed to have encouraged dozens of strangers to kill themselves and to have succeeded at least five times. Using fake profiles, Melchert-Dinkel entered into fake suicide pacts (“i wish [we both] could die now while we are quietly in our homes tonite:)”), invoked his medical experience to advise hanging over other methods (“in 7 years ive never seen a failed hanging that is why i chose that”), and asked to watch his victims hang themselves. He was convicted of violating Minnesota’s assisted suicide law in two cases, but the Minnesota Supreme Court voided the statute’s prohibitions on “advis[ing]” and “encourag[ing]” suicide. Only for providing “step-by-step instructions” on hanging could Melchert-Dinkel ultimately be convicted.

In another case, the Massachusetts Supreme Judicial Court upheld the manslaughter conviction of Michelle Carter; “she did not merely encourage the victim,” her boyfriend, also a teenager, “but coerced him to get back into the truck, causing his death” from carbon monoxide poisoning. Like Melchert-Dinkel, Carter provided specific instructions on completing suicide: “knowing the victim was inside the truck and that the water pump was operating — … she could hear the sound of the pump and the victim’s coughing — [she] took no steps to save him.”

Such cases are the tiniest tip of a very large iceberg of self-harm content. With nearly one in six teens intentionally hurting themselves annually, researchers found 1.2 million Instagram posts in 2018 containing “one of five popular hashtags related to self-injury: #cutting, #selfharm, #selfharmmm, #hatemyself and #selfharmawareness.” More troubling, the rate of such posts nearly doubled across that year. Unlike assisted suicide, self-harm, even by teenagers, isn’t illegal, so even supplying direct instructions about how to do it would be constitutionally protected speech. With the possible exception of direct user-to-user instructions about suicide, the First Amendment would require a traditional public forum to allow all this speech. It wouldn’t even allow Twitter to restrict access to self-harm content to adults—for the same reasons COPA’s age-gating requirement for “harmful-to-minors” content was unconstitutional.

Trade-Offs in Moderating Other Forms of Constitutionally Protected Content

So it’s clear that Musk doesn’t literally mean Twitter users should be able to “speak freely within the bounds of the law.” He clearly wants to restrict some speech in ways that the government could not in a traditional public forum. His invocation of the First Amendment likely refers primarily to moderation of speech considered by some to be harmful—which the government has very limited authority to regulate. Such speech presents one of the most challenging content moderation issues: how a business should balance a desire for free discourse with the need to foster an environment that most people will want to use for discourse. That has to matter to Musk, however much money he’s willing to lose on supporting a Twitter that alienates advertisers.

Hateful & Offensive Speech. Two leading “free speech” networks moderate, or even ban, hateful or otherwise offensive speech. “GETTR defends free speech,” the company said in January after banning former Blaze TV host Jon Miller, “but there is no room for racial slurs on our platform.” Likewise, Gab bans “doxing,” the exposure of someone’s private information with the intent to encourage others to harass them. These policies clearly aren’t consistent with the First Amendment: hate speech is fully protected by the First Amendment, and so is most speech that might colloquially be considered “harassment” or “bullying.”

In Texas v. Johnson (1989), the Supreme Court struck down a ban on flag burning: “if there is a bedrock principle underlying the First Amendment, it is simply that the government may not prohibit the expression of an idea simply because society finds the idea itself offensive or disagreeable.” In Matal v. Tam (2017), the Supreme Court reaffirmed this principle and struck down a prohibition on offensive trademark registrations: “Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express the thought that we hate.” 

Most famously, in 1978, the American Nazi Party won the right to march down the streets of Skokie, Illinois, a majority-Jewish town where ten percent of the population had survived the Holocaust. The town had refused to issue a permit to march. Displaying the swastika, Skokie’s lawyers argued, amounted to “fighting words”—which the Supreme Court had ruled, in 1942, could be forbidden if they had a “direct tendency to cause acts of violence by the persons to whom, individually, the remark is addressed.” The Illinois Supreme Court disagreed: “The display of the swastika, as offensive to the principles of a free nation as the memories it recalls may be, is symbolic political speech intended to convey to the public the beliefs of those who display it”—not “fighting words.” Even the revulsion of “the survivors of the Nazi persecutions, tormented by their recollections … does not justify enjoining defendants’ speech.”

Protection of “freedom for the thought we hate” in the literal town square is sacrosanct. The American Civil Liberties Union lawyers who defended the Nazis’ right to march in Skokie were Jews as passionately committed to the First Amendment as was Justice Holmes (post-Schenck). But they certainly wouldn’t have insisted the Nazis be invited to join in a Jewish community day parade. Indeed, the Court has since upheld the right of parade organizers to exclude messages they find abhorrent.

Does Musk really intend Twitter to host Nazis and white supremacists? Perhaps. There are, after all, principled reasons for not banning speech, even in a private forum, just because it is hateful. But there are unavoidable trade-offs. Musk will have to decide what balance will optimize user engagement and keep advertisers (and those financing his purchase) satisfied. It’s unlikely that those lines will be drawn entirely consistent with the First Amendment; at most, it can provide a very general guide.

Harassment & Threats. Often, users are banned by social media platforms for “threatening behavior” or “targeted abuse” (e.g., harassment, doxing). The first category may be easier to apply, but even then, a true public forum would be sharply limited in which threats it could restrict. “True threats,” explained the Court in Virginia v. Black (2003), “encompass those statements where the speaker means to communicate a serious expression of an intent to commit an act of unlawful violence to a particular individual or group of individuals.” But courts split on whether the First Amendment requires that a speaker have the subjective intent to threaten the target, or if it suffices that a reasonable recipient would have felt threatened. Maximal protection for free speech means a subjective requirement, lest the law punish protected speech merely because it might be interpreted as a threat. But in most cases, it would be difficult—if not impossible—to establish subjective intent without the kind of access to witnesses and testimony courts have. These are difficult enough issues even for courts; content moderators will likely find it impossible to adhere strictly, or perhaps even approximately, to First Amendment standards.

Targeted abuse and harassment policies present even thornier issues; what is (or should be) prohibited in this area remains among the most contentious aspects of content moderation. While social media sites vary in how they draw lines, all the major sites “[go] far beyond,” as Musk put it, what the First Amendment would permit a public forum to proscribe.

Mere offensiveness does not suffice to justify restricting speech as harassment; such content-based regulation is generally unconstitutional. Many courts have upheld harassment laws insofar as they target not speech but conduct, such as placing repeated telephone calls to a person in the middle of the night or physically stalking someone. Some scholars argue instead that the consistent principle across cases is that proscribable harassment involves an unwanted physical intrusion into a listener’s private space (whether their home or a physical radius around the person) for the purposes of unwanted one-on-one communication. Either way, neatly and consistently applying legal standards of harassment to content moderation would be no small lift.

Some lines are clear. Ranting about a group hatefully is not itself harassment, while sending repeated unwanted direct messages to an individual user might well be. But Twitter isn’t the telephone network. Line-drawing is more difficult when speech is merely about a person, or occurs in the context of a public, multi-party discussion. Is it harassment to be the “reply guy” who always has to have the last word on everything? What about tagging a person in a tweet about them, or even simply mentioning them by name? What if tweets about another user are filled with pornography or violent imagery? First Amendment standards protect similar real-world speech, but how many users want to be party to such conversations?

Again, Musk may well want to err on the side of more permissiveness when it comes to moderation of “targeted abuse” or “harassment.”  We all want words to keep their power to motivate; that remains their most important function. As the Supreme Court said in 1949: “free speech… may indeed best serve its high purpose when it induces a condition of unrest … or even stirs people to anger. Speech is often provocative and challenging. It may strike at prejudices and preconceptions and have profound unsettling effects as it presses for the acceptance of an idea.” 

But Musk’s goal is ultimately, in part, to attract users and keep them engaged. To do that, Twitter will have to moderate some content that the First Amendment would not allow the government to punish. Content moderators have long struggled with how to balance these competing interests. The only certainty is that this is, and will continue to be, an extremely difficult tightrope to walk—especially for Musk.

Obscenity & Pornography. Twitter already allows pornography involving consenting adults. Yet even this is more complicated than simply following the First Amendment. On the one hand, child sexual abuse material (CSAM) is considered obscenity, which the First Amendment simply doesn’t protect. All social media sites ban CSAM (and all mainstream sites proactively filter for, and block, it). On the other hand, nonconsensual pornography involving adults isn’t obscene, and therefore is protected by the First Amendment. Some courts have nonetheless upheld state “revenge porn” laws, but those laws are actually much narrower than Twitter’s flat ban (“You may not post or share intimate photos or videos of someone that were produced or distributed without their consent.”) 

Critical to the Vermont Supreme Court’s decision to uphold the state’s revenge porn law were two features that made the law “narrowly tailored.” First, it required intent to “harm, harass, intimidate, threaten, or coerce the person depicted.” Such an intent standard is a common limiting feature of speech restrictions upheld by courts. Yet none of Twitter’s policies turn on intent. Again, it would be impossible to meaningfully apply intent-based standards at the scale of the Internet and outside the established procedures of courtrooms. Intent is a complex inquiry unto itself; content moderators would find it nearly impossible to make these decisions with meaningful accuracy. Second, the Vermont law excluded  “[d]isclosures of materials that constitute a matter of public concern,” and those “made in the public interest.” Twitter does have a public-interest exception to its policies, yet, Twitter notes:

At present, we limit exceptions to one critical type of public-interest content—Tweets from elected and government officials—given the significant public interest in knowing and being able to discuss their actions and statements. 

It’s unlikely that Twitter would actually allow public officials to post pornographic images of others without consent today, simply because they were public officials. But to “follow the First Amendment,” Twitter would have to go much further than this: it would have to allow anyone to post such images, in the name of the “public interest.” Is that really what Musk means?

Gratuitous Gore. Twitter bans depictions of “dismembered or mutilated humans; charred or burned human remains; exposed internal organs or bones; and animal torture or killing.” All of these are protected speech. Violence is not obscenity, the Supreme Court ruled in Brown v. Entertainment Merchants Association (2011), and neither is animal cruelty, ruled the Court in U.S. v. Stevens (2010). Thus, the Court struck down a California law barring the sale of “violent” video games to minors and requiring that they be labeled “18,” and a federal law criminalizing “crush videos” and other depictions of the torture and killing of animals.

The Illusion of Constitutionalizing Content Moderation

The problem isn’t just that the “bounds of the law” aren’t where Musk may think they are. For many kinds of speech, identifying those bounds and applying them to particular facts is a far more complicated task than any social media site is really capable of. 

It’s not as simple as whether “the First Amendment protects” certain kinds of speech. Only three things we’ve discussed fall outside the protection of the First Amendment altogether: CSAM, non-expressive conduct, and speech integral to criminal conduct. In other cases, speech may be protected in some circumstances, and unprotected in others.

Musk is far from the only person who thinks the First Amendment can provide clear, easy answers to content moderation questions. But invoking First Amendment concepts without doing the kind of careful analysis courts do in applying complex legal doctrines to facts means hiding the ball: it  conceals subjective value judgments behind an illusion of faux-constitutional objectivity. 

This doesn’t mean Twitter couldn’t improve how it makes content moderation decisions, or that it couldn’t come closer to doing something like what courts do in sussing out the “bounds of the law.” Musk would want to start by considering Facebook’s initial efforts to create a quasi-judicial review of the company’s most controversial, or precedent-setting, moderation decisions. In 2018, Facebook funded the creation of an independent Oversight Board, which appointed a diverse panel of stakeholders to assess complaints. The Board has issued 23 decisions in little more than a year, including one on Facebook’s suspension of Donald Trump for posts he made during the January 6 storming of the Capitol, expressing support for the rioters. 

Trump’s lawyers argued the Board should “defer to the legal principles of the nation state in which the leader is, or was governing.” The Board responded that its “decisions do not concern the human rights obligations of states or application of national laws, but focus on Facebook’s content policies, its values and its human rights responsibilities as a business.” The Oversight Board’s charter makes this point very clear. Twitter could, of course, tie its policies to the First Amendment and create its own oversight board, chartered with enforcing the company’s adherence to First Amendment principles. But by now, it should be clear how much more complicated that would be than it might seem. While constitutional protection of speech is clearly established in some areas, new law is constantly being created on the margins—by applying complex legal standards to a never-ending kaleidoscope of new fact patterns. The complexities of these cases keep many lawyers busy for years; it would be naïve to presume that an extra-judicial board will be able to meaningfully implement First Amendment standards.

At a minimum, any serious attempt at constitutionalizing content moderation would require hiring vastly more humans to process complaints, make decisions, and issue meaningful reports—even if Twitter did less content moderation overall. And Twitter’s oversight board would have to be composed of bona fide First Amendment experts. Even then, the decisions of such a board might later be undercut by actual court decisions involving similar facts. This doesn’t mean that attempting to hew to the First Amendment is a bad idea; in some areas, it might make sense, but it will be far more difficult than Musk imagines.

In Part II, we’ll ask what principles, if not the First Amendment, should guide content moderation, and what Musk could do to make Twitter more of a “de facto town square.”

Berin Szóka (@BerinSzoka) is President of TechFreedom. Ari Cohn (@AriCohn) is Free Speech Counsel at TechFreedom. Both are lawyers focused on the First Amendment’s application to the Internet.

Posted on Techdirt - 10 February 2022 @ 03:30pm

The Top Ten Mistakes Senators Made During Today's EARN IT Markup

Today, the Senate Judiciary Committee unanimously approved the EARN IT Act and sent that legislation to the Senate floor. As drafted, the bill will be a disaster. Only by monitoring what users communicate could tech services avoid vast new liability, and only by abandoning, or compromising, end-to-end encryption, could they implement such monitoring. Thus, the bill poses a dire threat to the privacy, security and safety of law-abiding Internet users around the world, especially those whose lives depend on having messaging tools that governments cannot crack. Aiding such dissidents is precisely why it was the U.S. government that initially funded the development of the end-to-end encryption (E2EE) now found in Signal, WhatsApp and other such tools. Even worse, the bill will do the opposite of what it claims: instead of helping law enforcement crack down on child sexual abuse material (CSAM), the bill will actually help the most odious criminals walk free.

As with the July 2020 markup of the last Congress’s version of this bill, the vote was unanimous. This time, no amendments were adopted; indeed, none were even put up for a vote. We knew there wouldn’t be much time for discussion because Sen. Dick Durbin kicked off the markup by noting that Sen. Lindsey Graham would have to leave soon for a floor vote.

The Committee didn’t bother holding a hearing on the bill before rushing it to markup. The one and only hearing on the bill occurred just six days after its introduction back in March 2020. The Committee thereafter made major (but largely cosmetic) changes to the bill, leaving its Members more confused than ever about what the bill actually does. Today’s markup was a singular low-point in the history of what is supposed to be one of the most serious bodies in Congress. It showed that there is nothing remotely judicious about the Judiciary Committee; that most of its members have little understanding of the Internet and even less of how the, ahem, judiciary actually works; and, saddest of all, that they simply do not care.

Here are the top ten legal and technical mistakes the Committee made today.

Mistake #1: “Encryption Is Not Threatened by This Bill”

Strong encryption is essential to online life today. It protects our commerce and our communications from the prying eyes of criminals, hostile authoritarian regimes, and other malicious actors.

Sen. Richard Blumenthal called encryption a “red herring,” relying on his work with Sen. Leahy’s office to implement language from Leahy’s 2020 amendment to the previous version of EARN IT (even as he admitted to a reporter that encryption was a target). Leahy’s 2020 amendment aimed to preserve companies’ ability to offer secure encryption in their products by providing that a company could not be found in violation of the law because it uses secure encryption, lacks the ability to decrypt communications, or declines to undermine the security of its encryption (for example, by building in a backdoor for use by law enforcement).

But while the 2022 EARN IT Act contains the same list of protected activities, the authors snuck in new language that undermines that very protection. This version of the bill says that those activities can’t serve as an independent basis of liability, but that courts can consider them as evidence in proving the civil and criminal claims the bill permits. That’s a big deal. EARN IT opens the door to liability under an enormous number of state civil and criminal laws, some of which require (or could require, if state legislatures so choose) a showing that a company was merely reckless—a far lower showing than federal law’s requirement that a defendant have acted “knowingly.” If a court can consider the use of encryption, or the refusal to build security flaws into that encryption, as evidence that a company was “reckless,” that is effectively the same as imposing liability for encryption itself. No sane company would take the chance of being found liable for transmitting CSAM; they’ll just stop offering strong encryption instead.

Mistake #2: The Bill’s Sponsors Readily Conceded that EARN IT Would Coerce Monitoring for CSAM

EARN IT’s sponsors repeatedly complained that tech companies aren’t doing enough to monitor for CSAM—and that their goal was to force them to do more. As Sen. Blumenthal noted, free software (PhotoDNA) makes it easy to detect CSAM, and it’s simply outrageous that some sites aren’t even using it. He didn’t get specific but we will: both Parler and Gettr, the alternative social networks favored by the MAGA right, have refused to use PhotoDNA. When asked about it, Parler’s COO told The Washington Post: “I don’t look for that content, so why should I know it exists?” The Stanford Internet Observatory’s David Thiel responded:

We agree completely—morally. So why, as Berin asked when EARN IT was first introduced, doesn’t Congress just directly mandate the use of such easy filtering tools? The answer lies in understanding why Parler and Gettr can get away with this today. Back in 2008, Congress required tech companies that become aware of CSAM to report it immediately to NCMEC, the quasi-governmental clearinghouse that administers the database of CSAM hashes used by PhotoDNA to identify known CSAM. Instead of requiring companies to monitor for CSAM, Congress said exactly the opposite: nothing in 18 U.S.C. § 2258A “shall be construed to require a provider to monitor [for CSAM].”
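For readers who haven’t seen how these tools work, the mechanism is simpler than it sounds: the provider hashes each uploaded image and checks it against the NCMEC-administered list of hashes of known CSAM. The rough sketch below illustrates only that general idea; PhotoDNA itself uses a proprietary perceptual hash that tolerates resizing and re-encoding, so the SHA-256 exact match and the placeholder database here are stand-ins, not the real system.

```python
# Minimal sketch of hash-based known-image matching (the general technique,
# not PhotoDNA itself). The database entries are invented placeholders.
import hashlib

KNOWN_HASH_DATABASE = {
    "placeholder-hash-1",  # hypothetical entries standing in for the NCMEC hash list
    "placeholder-hash-2",
}

def hash_image(image_bytes: bytes) -> str:
    """Return a hex digest for an uploaded image (exact-match stand-in)."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_image(image_bytes: bytes) -> bool:
    """True if the upload's hash appears in the known-hash database."""
    return hash_image(image_bytes) in KNOWN_HASH_DATABASE
```

The point of the sketch is that matching against a curated hash list is cheap and mechanical, which is why the refusal of sites like Parler and Gettr to run it is so conspicuous.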

Why? Was Congress soft on child predators back then? Obviously not. Just the opposite: they understood that requiring tech companies to conduct searches for CSAM would make them state actors subject to the Fourth Amendment’s warrant requirement—and they didn’t want to jeopardize criminal prosecutions. 

Conceding that the purpose of the EARN IT Act is to coerce searches for CSAM is a mistake, and a colossal one, because it invites courts to rule that such searches weren’t voluntary.

Mistake #3: The Leahy Amendment Alone Won’t Protect Privacy & Security, or Avoid Triggering the Fourth Amendment

While Sen. Leahy’s 2020 amendment was a positive step towards protecting the privacy and security of online communications, and Lee’s proposal today to revive it is welcome, it was always an incomplete solution. While it protected companies against liability for offering encryption or failing to undermine the security of their encryption, it did not protect the refusal to conduct monitoring of user communications. A company offering E2EE products might still be coerced into compromising the security of its devices by scanning user communications “client-side” (i.e., on the device) prior to encrypting sent communications or after decrypting received communications. 

Apple recently proposed just such a client-side scanning technology, raising concerns from privacy advocates and civil society groups. For its part, Apple insisted that safeguards would limit use of the system to known CSAM and prevent the capability from being abused by foreign governments or rogue actors. But the very capacity to conduct such surveillance presents an inherent risk of exploitation by malicious actors. Some companies may be able to successfully safeguard such surveillance architecture from misuse; resources and approaches will vary across companies, however, and it is a virtual certainty that not all of them will succeed. And if such scanning is done under government coercion, there is a real risk that courts will rule it state action requiring a warrant under the Fourth Amendment.
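To make the architecture concrete, here is a deliberately simplified sketch of what “client-side” scanning means: the check runs on the user’s device, against plaintext, before end-to-end encryption ever happens. This is not Apple’s actual design (which used NeuralHash perceptual hashing plus private set intersection and threshold reporting); every name below is a placeholder.

```python
# Illustrative-only sketch of client-side scanning layered on top of E2EE.
import hashlib

ON_DEVICE_HASH_LIST: set[str] = set()  # hypothetical hash list shipped to the device

def flagged_by_scanner(attachment: bytes) -> bool:
    # SHA-256 exact match is only a stand-in for a perceptual hash like NeuralHash.
    return hashlib.sha256(attachment).hexdigest() in ON_DEVICE_HASH_LIST

def send_attachment(attachment: bytes, encrypt, transmit, report) -> None:
    # The scan happens on the device, before encryption.
    if flagged_by_scanner(attachment):
        report(attachment)           # the surveillance hook exists pre-encryption
    transmit(encrypt(attachment))    # the provider still only relays ciphertext
```

The structural worry is visible even in this toy version: once a `report` hook exists on the device, what it reports, and to whom, is a policy choice that can be changed or coerced later.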

Our letter to the Committee proposes an easy way to expand the Leahy amendment to ensure that companies won’t be held liable for not monitoring user content: borrow language directly from Section 2258A(f).

Mistake #4: EARN IT’s Sponsors Just Don’t Understand the Fourth Amendment Problem

Sen. Blumenthal insisted, repeatedly, that EARN IT contained no explicit requirement not to use encryption. The original version of the bill would, indeed, have allowed a commission to develop “best practices” that would be “required” as conditions of “earning” back the Section 230 immunity tech companies need to operate—hence the bill’s name. But dropping that concept didn’t really make the bill less coercive because the commission and its recommendations were always a sideshow. The bill has always coerced monitoring of user communications—and, to do that, the abandonment or bypassing of strong encryption—indirectly, through the threat of vast legal liability for not doing enough to stop the spread of CSAM. 

Blumenthal simply misunderstands how the courts assess whether a company is conducting unconstitutional warrantless searches as a “government actor.” “Even when a search is not required by law, … if a statute or regulation so strongly encourages a private party to conduct a search that the search is not ‘primarily the result of private initiative,’ then the Fourth Amendment applies.” U.S. v. Stevenson, 727 F.3d 826, 829 (8th Cir. 2013) (quoting Skinner v. Railway Labor Executives’ Ass’n, 489 U.S. 602, 615 (1989)). In that case, the court found that AOL was not a government actor because it “began using the filtering process for business reasons: to detect files that threaten the operation of AOL’s network, like malware and spam, as well as files containing what the affidavit describes as ‘reputational’ threats, like images depicting child pornography.” AOL insisted that it “operate[d] its file-scanning program independently of any government program designed to identify either sex-offenders or images of child pornography, and the government never asked AOL to scan Stevenson’s e-mail.” Id. By contrast, every time EARN IT’s supporters explain their bill, they make clear that they intend to force companies to search user communications in ways they’re not doing today.

Mistake #2 Again: EARN IT’s Sponsors Make Clear that Coercion Is the Point

In his opening remarks today, Sen. Graham didn’t hide the ball:

“Our goal is to tell the social media companies ‘get involved and stop this crap. And if you don’t take responsibility for what’s on your platform, then Section 230 will not be there for you.’ And it’s never going to end until we change the game.”

Sen. Chris Coons added that he is “hopeful that this will send a strong signal that technology companies … need to do more.” And so on and so forth.

If they had any idea what they were doing, if they understood the Fourth Amendment issue, these Senators would never admit that they’re using liability as a cudgel to force companies to take affirmative steps to combat CSAM. By making their intentions unmistakable, they’ve given the most vile criminals exactly what they need to challenge the admissibility of CSAM evidence resulting from companies “getting involved” and “doing more.” Though some companies, concerned with negative publicity, may tell courts that they conducted searches of user communications for “business reasons,” we know what defendants will argue: the companies’ real “business reason” is avoiding the wide, loose liability that EARN IT would subject them to. EARN IT’s sponsors said so.

Mistake #5: EARN IT’s Sponsors Misunderstand How Liability Would Work

Except for Sen. Mike Lee, no one on the Committee seemed to understand what kind of liability rolling back Section 230 immunity, as EARN IT does, would create. Sen. Blumenthal repeatedly claimed that the bill requires actual knowledge. One of the bill’s amendments (the new Section 230(e)(6)(A)) would, indeed, require actual knowledge by enabling civil claims under 18 U.S.C. § 2255 “if the conduct underlying the claim constitutes a violation of section 2252 or section 2252A,” both of which contain knowledge requirements. This amendment is certainly an improvement over the original version of EARN IT, which would have explicitly allowed 2255 claims under a recklessness standard. 

But the two other changes to Section 230 clearly don’t require knowledge. As Sen. Lee pointed out today, a church could be sued, or even prosecuted, simply because someone posted CSAM on its bulletin board. Multiple existing state laws already create liability based on something less than actual knowledge of CSAM. As Lee noted, a state could pass a law creating strict liability for hosting CSAM. Allowing states to hold websites liable for recklessness (or even less) while claiming that the bill requires actual knowledge is simply dishonest. All these less-than-knowledge standards will have the same result: coercing sites into monitoring user communications, and into abandoning strong encryption as an obstacle to such monitoring. 

Blumenthal made it clear that this is precisely what he intends, saying: “Other states may wish to follow [those using the “recklessness” standard]. As Justice Brandeis said, states are the laboratories of democracy … and as a former state attorney general I welcome states using that flexibility. I would be loath to straightjacket them in their adoption of different standards.”

Mistake #6: “This Is a Criminal Statute, This Is Not Civil Liability”

So said Sen. Lindsey Graham, apparently forgetting what his own bill says. Sen. Dianne Feinstein added her own misunderstanding, saying that she “didn’t know that there was a blanket immunity in this area of the law.” But if either of those statements were true, the EARN IT Act wouldn’t really do much at all. Section 230 has always explicitly carved out federal criminal law from its immunities; companies can already be charged for knowing distribution of child sexual abuse material (CSAM) or child sexual exploitation (CSE) under federal criminal statutes. Indeed, Backpage and its founders were criminally prosecuted even without SESTA’s 2018 changes to Section 230. If the federal government needs assistance in enforcing those laws, Congress could adopt Sen. Mike Lee’s amendment to permit state criminal prosecutions when the conduct would constitute a violation of federal law. Better yet, the Attorney General could use an existing federal law (28 U.S.C. § 543) to deputize state, local, and tribal prosecutors as “special attorneys” empowered to prosecute violations of federal law. Why no AG has bothered to do so yet is unclear.

What is clear is that EARN IT isn’t just about criminal law. EARN IT expressly carves out from Section 230’s immunity civil claims under certain federal statutes, and also claims under whatever state laws arguably relate to “the advertisement, promotion, presentation, distribution, or solicitation of child sexual abuse material” as defined by federal law. Those laws can and do vary, not only in the substance of what they prohibit but also in the mental state required for liability. This expansive breadth of potential civil liability is part of what makes the bill so dangerous in the first place.

Mistake #7: “If They Can Censor Conservatives, They Can Stop CSAM!”

As at the 2020 markup, Sen. Lee seemed to understand most clearly how EARN IT would work, the Fourth Amendment problems it raises, and how to fix at least some of them. A former Supreme Court clerk, Lee has a sharp legal mind, but he still seems to misunderstand much about how the bill would play out in practice, and about how content moderation works more generally.

Lee complained that, if Big Tech companies can be so aggressive in “censoring” speech they don’t like, surely they can do the same for CSAM. He’s mixing apples and oranges in two ways. First, CSAM is the digital equivalent of radioactive waste: if a platform gains knowledge of it, it must take it down immediately and report it to NCMEC, and it faces stiff criminal penalties if it doesn’t. And while “free speech” platforms like Parler and Gettr refuse to proactively monitor for CSAM (as discussed below), every mainstream service goes out of its way to stamp out CSAM on its unencrypted services. Like AOL in the Stevenson case, they do so for business and reputational reasons.

By contrast, no website even tries to block all “conservative” speech; rather, mainstream platforms must make difficult judgment calls about politically charged content, such as suspending Trump’s account only after he incited an insurrection in an attempted coup, or acting against misinformation claiming the 2020 election was stolen. Republicans are mad about where tech companies draw such lines.

Second, social media platforms can only moderate content that they can monitor. Signal can’t moderate user content, and that is precisely the point: end-to-end encryption means that no one other than the parties to a communication can see it. Unlike with ordinary communications, which may be protected by lesser forms of “encryption,” the provider of an E2EE service isn’t standing in the middle of the conversation and doesn’t hold the keys to unlock the messages it passes back and forth. Yes, some users will abuse E2EE to share CSAM, but the alternative is to ban it for everyone. There simply isn’t a middle ground.
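For the technically curious, that property can be shown in a few lines. The toy example below uses the PyNaCl library, not Signal’s actual protocol (which adds the Double Ratchet, forward secrecy, and much more); it only illustrates the core point that private keys live on the endpoints, so a relay in the middle has nothing it could decrypt.

```python
# Toy end-to-end encryption demo using PyNaCl (pip install pynacl).
from nacl.public import PrivateKey, Box

# Each private key is generated on, and never leaves, its owner's device.
alice, bob = PrivateKey.generate(), PrivateKey.generate()

# Alice encrypts to Bob; a relay server only ever sees this ciphertext blob.
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")

# The server holds neither private key, so it cannot decrypt. Bob can.
assert Box(bob, alice.public_key).decrypt(ciphertext) == b"meet at noon"
```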

There may indeed be more that some tech companies could do about content they can see—both public content like social media posts and private content like messages (protected by something less than E2EE). But their being aggressive about, say, misinformation about COVID or the 2020 election has nothing whatsoever to do with the cold, hard reality that they can’t moderate content protected by strong encryption.

It’s hard to tell whether Lee understands these distinctions. Maybe not. Maybe he’s just looking to wave the bloody shirt of “censorship” again. Maybe he’s saying the same thing everyone else is saying, essentially: “Ah, yes, but if only Facebook, Apple and Google didn’t use end-to-end encryption for their messaging services, then they could monitor those for CSAM just like they monitor and moderate other content!” His proposal to amend the bill to require actual knowledge under both state and federal law suggests he doesn’t want that result, but who knows?

Mistake #8: Assuming the Fourth Amendment Won’t Require Warrants If It Applies

Visibility to the provider bears on one important legal distinction that wasn’t discussed at all today—but that may well explain why the bill’s sponsors don’t seem to care about Fourth Amendment concerns. It’s an argument Senate staffers have used to defend the bill since its introduction: even if compulsion through vast legal liability did make tech companies government actors, the Fourth Amendment requires a warrant only for searches of material for which users have a reasonable expectation of privacy. Kyllo v. United States, 533 U.S. 27, 33 (2001); see Katz v. United States, 389 U.S. 347, 361 (1967) (Harlan, J., concurring). Courts long held that users had no such expectation for digital messages, like email, held by third parties.

But that began to change in 2010. If searches of emails trigger the Fourth Amendment—and U.S. v. Warshak, 631 F.3d 266 (6th Cir. 2010) said they do—searches of private messaging certainly would. The entire purpose of E2EE is to give users rock-solid expectations of privacy in their communications. More recently, the Supreme Court has said that, “given the unique nature of cell phone location records, the fact that the information is held by a third party does not by itself overcome the user’s claim to Fourth Amendment protection.” Carpenter v. United States, 138 S. Ct. 2206, 2217 (2018). These cases draw the line Sen. Lee is missing: no, of course users don’t have reasonable expectations of privacy in public social media posts—which is what he’s talking about when he points to “censorship” of conservative speech. EARN IT could avoid the Fourth Amendment by focusing on content that providers can see, but it doesn’t, because it’s intended to force companies to be able to see all user communications.

Mistake #9: What They Didn’t Discuss: Anonymous Speech

The Committee didn’t discuss how EARN IT would affect speech protected by the First Amendment. No, of course CSAM isn’t protected speech, but the bill would affect lawful speech by law-abiding citizens—primarily by restricting anonymous speech. Critically, EARN IT doesn’t just create liability for trafficking in CSAM. The bill also creates liability for failing to stop communications that “solicit” or “promote” CSAM. Software like PhotoDNA can flag known CSAM (by matching perceptual hashes against NCMEC’s database), but identifying “solicitation” or “promotion” is infinitely more complicated. Every flirtatious conversation between two users could look like “solicitation” of CSAM—or it might be two adults doing adult things. (Adults sext each other—a lot. Get over it!) But “on the Internet, nobody knows you’re a dog”—and there’s no sure way to distinguish between adults and children.

The federal government tried to do just that in the Communications Decency Act (CDA) of 1996 (nearly all of which, except Section 230, was struck down) and the Child Online Protection Act (COPA) of 1998. Both laws were struck down as infringing on the First Amendment right to access lawful content anonymously. EARN IT accomplishes much the same thing indirectly, the same way it attacks encryption: basing liability on anything less than knowledge means you can be sued for not actively monitoring, or for not age-verifying users, especially when the risks are particularly high (such as when you “should have known” you were dealing with minor users).

Indeed, EARN IT is even more constitutionally suspect. At least COPA focused on content deemed “harmful to minors” and required age-gating only for sites offering porn and other sex-related content (a category that swept in things like LGBTQ teen health resources). EARN IT, by contrast, would affect all users of private communications services, regardless of the nature of the content they access or exchange. Again, the point of E2EE is that the service provider has no way of knowing whether messages are innocent chatter or CSAM.

EARN IT could raise other novel First Amendment problems. Companies could be held liable not only for failing to age-verify all users (a clear First Amendment violation), but also for failing to bar minors from using E2EE services so that their communications can be monitored, for failing to use client-side monitoring on minors’ devices, and even for failing to segregate adults from minors so they can’t communicate with each other.

Without the Lee Amendment, EARN IT leaves states free to explicitly require age verification or limits on what minors can do, and to base liability on a failure to comply.

Mistake #10: Claiming the Bill Is “Narrowly Crafted”

If you’ve read this far, Sen. Blumenthal’s stubborn insistence that this bill is a “narrowly targeted approach” should make you laugh—or sigh. If he truly believes that, either he hasn’t adequately thought about what this bill really does or he’s so confident in his own genius that he can simply ignore the chorus of protest from civil liberties groups, privacy advocates, human rights activists, minority groups, and civil society—all of whom are saying that this bill is bad policy.

If he doesn’t truly believe what he’s saying, well… that’s another problem entirely.

Bonus Mistake!: A Postscript About the Real CSAM Problem

Lee never mentioned that the only significant social media services that don’t take basic measures to identify and block CSAM are Parler, Gettr and other fringe sites celebrated by Republicans as “neutral public fora” for “free speech.” Has any Congressional Republican sent letters to these sites asking why they refuse to use PhotoDNA? 

Instead, Lee did join Rep. Ken Buck in March 2021 to interrogate Apple about its decision to take down the Parler app. The answer: Parler hadn’t bothered to set up any meaningful content moderation system. Only after Parler agreed to start doing some moderation of what appeared in its Apple app (but not on its website) did Apple reinstate the app.

Posted on Techdirt - 9 July 2021 @ 10:41am

Section 230 Continues To Not Mean Whatever You Want It To

In the annals of Section 230 crackpottery, the “publisher or platform” canard reigns supreme. Like the worst (or perhaps best) game of “Broken Telephone” ever, it has morphed into a series of increasingly bizarre theories about a law that is actually fairly short and straightforward.

Last week, this fanciful yarn took an even more absurd turn. It began on Friday, when Facebook began to roll out test warnings about extremism as part of its anti-radicalization efforts and in response to the Christchurch Call for Action campaign. There appear to be two iterations of the warnings: one asks the user whether they are concerned that someone they know is becoming an extremist; the other warns the user that they may have been exposed to extremist content (allegedly appearing while users were viewing specific types of content). Both warnings provide a link to support resources to combat extremism.

As it is wont to do, the Internet quickly erupted into an indiscriminate furor. Talking heads and politicians raged about the “Orwellian environment” and “snitch squads” that Facebook is creating, and the conservative media eagerly lapped it up (ignoring, of course, that nobody is forced to use Facebook or to give any credence to its warnings). That’s not to say there is no valid criticism to be lodged: surely the propriety of the warnings and the definition of “extremist” are matters on which people can reasonably disagree, and those are conversations worth having in a reasoned fashion.

But then someone went there. It was inevitable, really, given that Section 230 has become a proxy for “things social media platforms do that I don’t like.” And Section 230 Truthers never miss an opportunity to make something wrongly about the target of their eternal ire.

Notorious COVID (and all-around) crank Alex Berenson led the charge, boosted by the usual media crowd, tweeting:

Yeah, I’m becoming an extremist. An anti-@Facebook extremist. “Confidential help is available?” Who do they think they are?

Either they’re a publisher and a political platform legally liable for every bit of content they host, or they need to STAY OUT OF THE WAY. Zuck’s choice.

That is, to be diplomatic, deeply stupid.

Like decent toilet paper, the inanity of this tweet is two-ply. First (setting aside the question of what exactly “political platform” means) is the mundane reality, explained ad nauseam, that Facebook need not, in fact, make any such choice. It bears repeating: Section 230 provides that websites are not liable as the publishers of content provided by others. There are no conditions or requirements. Period. End of story. The law would make no sense otherwise; the entire point of Section 230 was to facilitate websites’ ability to engage in “publisher” activities (including deciding what content to carry or not carry) without the threat of innumerable lawsuits over every piece of content on their sites.

Of course, that’s exactly what grinds 230 Truthers’ gears: they don’t like that platforms can choose which content to permit or prohibit. But social media platforms would have a First Amendment right to do that even without Section 230, and thus what the anti-230 crowd really wants is to punish platforms for exercising their own First Amendment rights.

Which leads us to the second ply, where Berenson gives up the game in spectacular fashion, because Section 230 isn’t even relevant. Facebook’s warnings are its own content, which is not immunized under Section 230 in the first place. Facebook is liable as the publisher of content it creates; always has been, always will be. If Facebook’s extremism warnings were somehow actionable (as rather nonspecific opinions, they aren’t), it would be forced to defend a lawsuit on the merits.

It simply makes no sense at all. Even if you (very wrongly) believe that Section 230 requires platforms to host all content without picking and choosing, that is entirely unrelated to a platform’s right to use its own speech to criticize or distance itself from certain content. And that’s all Facebook did. It didn’t remove or restrict access to content; Facebook simply added its own additional speech. It’s difficult to think of a more explicit admission that the real goal is to curtail platforms’ own expression.

Punishing speakers for their expression is, of course, anathema to the First Amendment. In halting enforcement of Florida’s new social media law, U.S. District Judge Robert Hinkle noted that Florida would prohibit platforms from appending their own speech to users’ posts, compounding the statute’s constitutional infirmities. Conditioning Section 230 immunity on a platform’s forfeiture of its completely separate First Amendment right to use its own voice would fare no better.

Suppose Democrats introduced a bill that conditioned the immunity provided to the firearms industry by the PLCAA on industry members refraining from speaking out or lobbying against gun control legislation. Inevitably, and without a hint of irony, many of the people urging fundamentally the same thing for social media platforms would find newfound outrage at the brazen attack on First Amendment rights.

At the end of the day, despite all their protestations, what people like Berenson want is not freedom of speech. Quite the opposite. They want to dragoon private websites into service as their free publishing house and silence any criticism by those websites with the threat of financial ruin. It’s hard to think of anything less free speech-y, or intellectually honest, than that.

Ari Cohn is Free Speech Counsel at TechFreedom

More posts from ari.cohn >>