Mike Masnick's Techdirt Profile

About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Twitter at www.twitter.com/mmasnick

Posted on Techdirt - 23 May 2022 @ 03:46pm

Jehovah’s Witnesses Run Away From Bogus Copyright Claims After Judge Realizes They’re Just Trying To Intimidate Critics

Earlier this year, we wrote about how the Watch Tower Bible and Tract Society, better known as the Jehovah’s Witnesses — despite a long history of litigating many, many important 1st Amendment cases — had thrown away its reputation as a staunch defender of free speech rights by massively abusing the DMCA 512(h) subpoena process to attempt to identify (and intimidate) critics of the religion.

In the last few years, Watch Tower had filed over 70 of these DMCA subpoenas, insisting that it needed to identify the alleged infringers in order to bring a copyright lawsuit. Of course, the real reason was almost certainly just to find out the names of the organization’s critics. This belief is supported by the fact that Watch Tower basically never followed up any of these subpoenas with the actual lawsuit it claimed it needed the information to bring.

Indeed, the case that we’re talking about here is the exception to the rule, and it appeared that Watch Tower only sued because a court agreed to quash the subpoena. Public Citizen Litigation Group’s Paul Levy represented the target of the lawsuit, who goes by the pseudonym Kevin McFree, and the latest update is that the Jehovah’s Witnesses have turned tail and run right out of court once they realized they were likely about to be in serious trouble for their abusive efforts to intimidate and silence a critic.

The full story is really quite a read. This is only a part of it but should give you plenty of reasons to want to read the whole thing.

During the hearing, Watch Tower’s counsel made the outrageous statement that Watch Tower’s litigation strategies were confined by a lack of “significant funds,” and that its approach to the litigation was guided by “significant economic motivations.” (bottom of page 18 of the transcript).  Because Watch Tower’s 990T forms are publicly available as required by law, it is a matter of public knowledge that Watch Tower has more than a billion dollars in assets. Watch Tower is fortunate that it never made this representation about limited resources in a signed document.

At our suggestion, the Court asked the parties to agree on a briefing schedule for the planned motion to quash, but that proved not to be possible because Watch Tower made clear that it was going to try to pursue discovery having nothing to do with its copyright claims. Rather, Watch Tower told us that it planned to use the infringement action to pursue the question of how McFree had obtained the previously unpublished videos. The Watch Tower headquarters is a leaky sieve and it wants to identify the leakers.  Beyond that, there may have been a massive hack of Watch Tower’s computer systems several years ago.  Watch Tower made clear that it was planning to seek discovery on those issues as part of its opposition to the planned motion to quash. It demanded a briefing schedule  that would have allowed it to postpone explaining how it could obtain McFree’s identifying information despite the res judicata defense until it had had the opportunity to pursue discovery.  At the same time, it told us that it was willing to drop its lawsuit with prejudice so long as McFree was willing to agree that he would never use any of Watch Tower’s materials before Watch Tower’s own publication of those materials without Watch Tower’s consent.

Of course, under the Supreme Court’s decision in Bartnicki v. Vopper, McFree has every right to use leaked unpublished materials, even if obtained from people violating a confidentiality contract, and even unpublished materials obtained by illegal hacking, so long as McFree had no involvement in the hacking. And although the possible hack of Watch Tower’s computer might well have been actionable under the Computer Fraud and Abuse Act, the statute of limitations on that cause of action expired years ago. So it became apparent that Watch Tower was trying to leverage a barred copyright claim, and the threat of identifying McFree, to obtain relief and/or discovery on a different subject entirely – a possible abuse of process. McFree rejected this proposed settlement outright, and we warned Watch Tower that if it persisted in the litigation, we might file a document blocking it from a voluntary dismissal without prejudice, thus locking it into litigation that it was sure to lose. We urged it to drop the case immediately.

And yet, interestingly, Watch Tower was not pursuing identical copyright claims against another YouTube user, Lloyd Evans (blogging as John Cedars), who used the same unpublished videos that McFree had used (and many more). Watch Tower did not go after Evans because it knew that he was not going to take any guff. Watch Tower represented that the reason it had pursued the anonymous Kevin McFree for his use of the unpublished 2018 video instead of filing an infringement action against Lloyd Evans was that it had not known of his use of such unpublished videos. In fact, Watch Tower’s in-house counsel submitted an affidavit averring that Watch Tower did not learn about a particular Cedars video until September 2020, when McFree mentioned the video in his papers.

But we learned, in the course of investigating the case, that Watch Tower sent Evans a demand letter in 2018 pertaining to his use of leaked and unpublished convention videos from that year. That letter cited the URL for Evans’ YouTube channel and made clear that Watch Tower was monitoring its content. It is hard to believe Watch Tower’s assertion that it did not know until 2020 about Evans’ use of the same material on which it was pursuing McFree.

And finally, in the course of investigating the case to prepare for briefing, we obtained useful information to address Watch Tower’s false assertion that it wanted to identify alleged infringers only for the purpose of pursuing copyright claims against them. Watch Tower succeeded in using a DMCA subpoena to obtain the identity of a previously identified blogger who specialized in attacking child abuse within the group, and Watch Tower’s refusal to report abuse to local authorities. Shortly thereafter, it initiated disfellowship proceedings against him. It is quite possible that Watch Tower did not need the information it obtained under the DMCA (because this blogger’s identifying information had become available elsewhere), but even so it never sued him for copyright infringement and it never otherwise used his identity to enforce its copyright. Watch Tower had got what it wanted — revenge.

The end result of all this, however, is that Watch Tower agreed to dismiss the lawsuit with prejudice, which also allows it to avoid claims to recover attorney’s fees. And there was a very real risk of having to pay those, as copyright is one realm in which the Supreme Court has been open to making those who file frivolous lawsuits pay up.

While it may be disappointing that this case didn’t lead to a final ruling from the judge admonishing the Jehovah’s Witnesses for their longstanding abuse of the DMCA subpoena process, one hopes that how this case played out has been something of a warning to the group not to continue abusing copyright law to attack and silence critics. I assume, at least, that Paul Levy will continue watching.

Posted on Techdirt - 23 May 2022 @ 11:22am

11th Circuit Disagrees With The 5th Circuit (But Actually Explains Its Work): Florida’s Social Media Bill Still (Mostly) Unconstitutional

Well, well. As we still wait to see what the Supreme Court will do about the 5th Circuit’s somewhat bizarre and reasonless reinstatement of Texas’ ridiculously bad social media content moderation bill, the 11th Circuit has come out with what might be a somewhat rushed decision going mostly in the other direction, saying that most of Florida’s content moderation bill is, as the lower court said, unconstitutional. It’s worth reading the entire decision, which may take a bit longer than the 5th Circuit’s one-sentence reinstatement of the Texas law, as it makes a lot of good points. I still think that the court is missing some important points about the parts of the law that it has reinstated (around transparency), but we’ll have another post on that shortly (and I hope those mistakes may be fixed with more briefing).

As for what the court got right: it tossed the key parts of the law around moderation, saying that, just as the lower court found, those were an easy call to strike as unconstitutional. The government cannot mandate how a website handles content moderation. The ruling opens strong:

Not in their wildest dreams could anyone in the Founding generation have imagined Facebook, Twitter, YouTube, or TikTok. But “whatever the challenges of applying the Constitution to ever-advancing technology, the basic principles of freedom of speech and the press, like the First Amendment’s command, do not vary when a new and different medium for communication appears.” Brown v. Ent. Merchs. Ass’n, 564 U.S. 786, 790 (2011) (quotation marks omitted). One of those “basic principles”—indeed, the most basic of the basic—is that “[t]he Free Speech Clause of the First Amendment constrains governmental actors and protects private actors.” Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1926 (2019). Put simply, with minor exceptions, the government can’t tell a private person or entity what to say or how to say it.

The court effectively laughs off Florida’s argument that social media companies should no longer be considered “private actors,” and mocks the state’s claims that “the ‘big tech’ oligarchs in Silicon Valley” are trying to “silence conservative speech in favor of a ‘radical leftist’ agenda.” The 1st Amendment protects companies’ right to moderate how they see fit:

We hold that it is substantially likely that social-media companies—even the biggest ones—are “private actors” whose rights the First Amendment protects, Manhattan Cmty., 139 S. Ct. at 1926, that their so-called “content-moderation” decisions constitute protected exercises of editorial judgment, and that the provisions of the new Florida law that restrict large platforms’ ability to engage in content moderation unconstitutionally burden that prerogative. We further conclude that it is substantially likely that one of the law’s particularly onerous disclosure provisions—which would require covered platforms to provide a “thorough rationale” for each and every content-moderation decision they make—violates the First Amendment. Accordingly, we hold that the companies are entitled to a preliminary injunction prohibiting enforcement of those provisions.

As noted above, the court also says that a few disclosure/transparency provisions it finds “far less burdensome” are “unlikely” to violate the 1st Amendment, and it vacates that part of the lower court ruling. I still think this is incorrect, but, as noted, we’ll explain that part in another post.

For the most part, this is a fantastic ruling, explaining clearly why content moderation is protected by the 1st Amendment. And, because I know that some supporters of Florida in our comments kept insisting that the lower court decision was only because it was a “liberal activist” judge, I’ll note that this ruling was written by Judge Kevin Newsom, who was appointed to the court by Donald Trump (and the other two judges on the panel were also nominated by Republican Presidents).

The ruling kicks off by noting, correctly, that social media is mostly made up of speech by third parties, and also (thankfully!) recognizing that it’s not just the giant sites, but smaller sites as well:

At their core, social-media platforms collect speech created by third parties—typically in the form of written text, photos, and videos, which we’ll collectively call “posts”—and then make that speech available to others, who might be either individuals who have chosen to “follow” the “post”-er or members of the general public. Social-media platforms include both massive websites with billions of users—like Facebook, Twitter, YouTube, and TikTok— and niche sites that cater to smaller audiences based on specific interests or affiliations—like Roblox (a child-oriented gaming network), ProAmericaOnly (a network for conservatives), and Vegan Forum (self-explanatory)

It’s good that they recognize that these kinds of laws impact smaller companies as well.

From there the court makes “three important points”: private websites are not the government, social media is different from a newspaper, and social media platforms are not “dumb pipes” like traditional telecom services:

Three important points about social-media platforms: First—and this would be too obvious to mention if it weren’t so often lost or obscured in political rhetoric—platforms are private enterprises, not governmental (or even quasi-governmental) entities. No one has an obligation to contribute to or consume the content that the platforms make available. And correlatively, while the Constitution protects citizens from governmental efforts to restrict their access to social media, see Packingham v. North Carolina, 137 S. Ct. 1730, 1737 (2017), no one has a vested right to force a platform to allow her to contribute to or consume social-media content.

Second, a social-media platform is different from traditional media outlets in that it doesn’t create most of the original content on its site; the vast majority of “tweets” on Twitter and videos on YouTube, for instance, are created by individual users, not the companies that own and operate Twitter and YouTube. Even so, platforms do engage in some speech of their own: A platform, for example, might publish terms of service or community standards specifying the type of content that it will (and won’t) allow on its site, add addenda or disclaimers to certain posts (say, warning of misinformation or mature content), or publish its own posts.

Third, and relatedly, social-media platforms aren’t “dumb pipes”: They’re not just servers and hard drives storing information or hosting blogs that anyone can access, and they’re not internet service providers reflexively transmitting data from point A to point B. Rather, when a user visits Facebook or Twitter, for instance, she sees a curated and edited compilation of content from the people and organizations that she follows. If she follows 1,000 people and 100 organizations on a particular platform, for instance, her “feed”—for better or worse—won’t just consist of every single post created by every single one of those people and organizations arranged in reverse-chronological order. Rather, the platform will have exercised editorial judgment in two key ways: First, the platform will have removed posts that violate its terms of service or community standards—for instance, those containing hate speech, pornography, or violent content. See, e.g., Doc. 26-1 at 3–6; Facebook Community Standards, Meta, https://transparency.fb.com/policies/community-standards (last accessed May 15, 2022). Second, it will have arranged available content by choosing how to prioritize and display posts—effectively selecting which users’ speech the viewer will see, and in what order, during any given visit to the site.

Each of these points is important and effectively dispenses with much of the nonsense we’ve seen people claim in the past. First, it tosses aside the incorrect and misleading argument that some have read into the Packingham decision’s observation that the internet is a “public square.” Here, the judges correctly note that Packingham stands only for the rule that the government cannot restrict people’s access to social media, not that it can force private companies to host them.

Also, I love the fact that the court makes the “not a dumb pipe” argument, and even uses the line “reflexively transmitting data from point A to point B.” That’s nearly identical to the language that I’ve used in explaining why it makes no sense to call social media a common carrier.

Next, the court points out, again accurately, that the purpose of a social media website is not only to act as an “intermediary” between users, but also (and this is important) to craft different types of online communities, including ones focused on niches:

Accordingly, a social-media platform serves as an intermediary between users who have chosen to partake of the service the platform provides and thereby participate in the community it has created. In that way, the platform creates a virtual space in which every user—private individuals, politicians, news organizations, corporations, and advocacy groups—can be both speaker and listener. In playing this role, the platforms invest significant time and resources into editing and organizing—the best word, we think, is curating—users’ posts into collections of content that they then disseminate to others. By engaging in this content moderation, the platforms develop particular market niches, foster different sorts of online communities, and promote various values and viewpoints.

This is also an important point that is regularly ignored or overlooked. It’s the point that the authors of Section 230 have tried to drive home in explaining why they wrote the law in the first place. When they talk about “diversity of political discourse” in the law, they never meant “all on the same site,” but rather giving websites the freedom to cater to different audiences. It’s fantastic that this panel recognizes that fact.

When we get to the meat of the opinion, explaining the decision, the court again makes a bunch of very strong and very correct points about the impact of a law like Florida’s.

Social-media platforms like Facebook, Twitter, YouTube, and TikTok are private companies with First Amendment rights, see First Nat’l Bank of Bos. v. Bellotti, 435 U.S. 765, 781–84 (1978), and when they (like other entities) “disclos[e],” “publish[],” or “disseminat[e]” information, they engage in “speech within the meaning of the First Amendment.” Sorrell v. IMS Health Inc., 564 U.S. 552, 570 (2011) (quotation marks omitted). More particularly, when a platform removes or deprioritizes a user or post, it makes a judgment about whether and to what extent it will publish information to its users—a judgment rooted in the platform’s own views about the sorts of content and viewpoints that are valuable and appropriate for dissemination on its site. As the officials who sponsored and signed S.B. 7072 recognized when alleging that “Big Tech” companies harbor a “leftist” bias against “conservative” perspectives, the companies that operate social-media platforms express themselves (for better or worse) through their content-moderation decisions. When a platform selectively removes what it perceives to be incendiary political rhetoric, pornographic content, or public-health misinformation, it conveys a message and thereby engages in “speech” within the meaning of the First Amendment.

Laws that restrict platforms’ ability to speak through content moderation therefore trigger First Amendment scrutiny. Two lines of precedent independently confirm this commonsense conclusion: first, and most obviously, decisions protecting exercises of “editorial judgment”; and second, and separately, those protecting inherently expressive conduct.

The key point here: the court recognizes that content moderation is about “editorial judgment” and, as such, easily gets 1st Amendment protection. It cites case after case holding this, focusing heavily on the ruling in Turner v. FCC. This is actually important, as some people trying to tear down Section 230’s protections (notably FCC Commissioner Brendan Carr) have ridiculously argued that the ruling in Turner supports their views. But those people are wrong, as the court clearly notes:

So too, in Turner Broadcasting Systems, Inc. v. FCC, the Court held that cable operators—companies that own cable lines and choose which stations to offer their customers—“engage in and transmit speech.” 512 U.S. at 636. “[B]y exercising editorial discretion over which stations or programs to include in [their] repertoire,” the Court said, they “seek to communicate messages on a wide variety of topics and in a wide variety of formats.” Id. (quotation marks omitted); see also Ark. Educ. TV Comm’n v. Forbes, 523 U.S. 666, 674 (1998) (“Although programming decisions often involve the compilation of the speech of third parties, the decisions nonetheless constitute communicative acts.”). Because cable operators’ decisions about which channels to transmit were protected speech, the challenged regulation requiring operators to carry broadcast-TV channels triggered First Amendment scrutiny

(Just as an aside, this also applies to all the nonsense we’ve heard people claim in trying to argue that OAN can force DirecTV to continue to carry it).

Either way, the court drives home: content moderation is editorial judgment.

Social-media platforms’ content-moderation decisions are, we think, closely analogous to the editorial judgments that the Supreme Court recognized in Miami Herald, Pacific Gas, Turner, and Hurley. Like parade organizers and cable operators, social-media companies are in the business of delivering curated compilations of speech created, in the first instance, by others. Just as the parade organizer exercises editorial judgment when it refuses to include in its lineup groups with whose messages it disagrees, and just as a cable operator might refuse to carry a channel that produces content it prefers not to disseminate, social-media platforms regularly make choices “not to propound a particular point of view.” Hurley, 515 U.S. at 575. Platforms employ editorial judgment to convey some messages but not others and thereby cultivate different types of communities that appeal to different groups. A few examples:

  • YouTube seeks to create a “welcoming community for viewers” and, to that end, prohibits a wide range of content, including spam, pornography, terrorist incitement, election and public-health misinformation, and hate speech.
  • Facebook engages in content moderation to foster “authenticity,” “safety,” “privacy,” and “dignity,” and accordingly, removes or adds warnings to a wide range of content—for example, posts that include what it considers to be hate speech, fraud or deception, nudity or sexual activity, and public-health misinformation
  • Twitter aims “to ensure all people can participate in the public conversation freely and safely” by removing content, among other categories, that it views as embodying hate, glorifying violence, promoting suicide, or containing election misinformation.
  • Roblox, a gaming social network primarily for children, prohibits “[s]ingling out a user or group for ridicule or abuse,” any sort of sexual content, depictions of and support for war or violence, and any discussion of political parties or candidates.
  • Vegan Forum allows non-vegans but “will not tolerate members who promote contrary agendas.”

It also notes that this 1st Amendment right protects forums focused on specific political agendas:

And to be clear, some platforms exercise editorial judgment to promote explicitly political agendas. On the right, ProAmericaOnly promises “No Censorship | No Shadow Bans | No BS | NO LIBERALS.” And on the left, The Democratic Hub says that its “online community is for liberals, progressives, moderates, independent[s] and anyone who has a favorable opinion of Democrats and/or liberal political views or is critical of Republican ideology.”

All such decisions about what speech to permit, disseminate, prohibit, and deprioritize—decisions based on platforms’ own particular values and views—fit comfortably within the Supreme Court’s editorial-judgment precedents.

As for Florida’s argument that since most content on social media is not vetted first, there is no editorial judgment in content moderation, the court says that’s obviously incorrect.

With respect, the State’s argument misses the point. The “conduct” that the challenged provisions regulate—what this entire appeal is about—is the platforms’ “censorship” of users’ posts—i.e., the posts that platforms do review and remove or deprioritize. The question, then, is whether that conduct is expressive. For reasons we’ve explained, we think it unquestionably is.

There’s also a good footnote debunking the claim that content moderation isn’t expressive because platforms aren’t intending to “convey a particularized message.” As the court notes, that’s just silly:

To the extent that the states argue that social-media platforms lack the requisite “intent” to convey a message, we find it implausible that platforms would engage in the laborious process of defining detailed community standards, identifying offending content, and removing or deprioritizing that content if they didn’t intend to convey “some sort of message.” Unsurprisingly, the record in this case confirms platforms’ intent to communicate messages through their content-moderation decisions—including that certain material is harmful or unwelcome on their sites. See, e.g., Doc. 25-1 at 2 (declaration of YouTube executive explaining that its approach to content moderation “is to remove content that violates [its] policies (developed with outside experts to prevent real-world harms), reduce the spread of harmful misinformation . . . and raise authoritative and trusted content”); Facebook Community Standards, supra (noting that Facebook moderates content “in service of” its “values” of “authenticity,” “safety,” “privacy,” and “dignity”).

From there, the court digs into whether the two favorite cases cited regularly by both Florida and Texas in defense of these laws carry any weight here. The two cases are Rumsfeld v. FAIR (regarding military recruiters’ access to law schools) and PruneYard v. Robins (regarding a shopping mall where people wanted to circulate petitions). We’ve explained in detail in the past why neither case works here, but we’ll let the 11th Circuit panel handle the details:

We begin with the “hosting” cases. The first decision to which the State points, PruneYard, is readily distinguishable. There, the Supreme Court affirmed a state court’s decision requiring a privately owned shopping mall to allow members of the public to circulate petitions on its property. 447 U.S. at 76–77, 88. In that case, though, the only First Amendment interest that the mall owner asserted was the right “not to be forced by the State to use [its] property as a forum for the speech of others.” Id. at 85. The Supreme Court’s subsequent decisions in Pacific Gas and Hurley distinguished and cabined PruneYard. The Pacific Gas plurality explained that “[n]otably absent from PruneYard was any concern that access to this area might affect the shopping center owner’s exercise of his own right to speak: the owner did not even allege that he objected to the content of the pamphlets.” 475 U.S. at 12 (plurality op.); see also id. at 24 (Marshall, J., concurring in the judgment) (“While the shopping center owner in PruneYard wished to be free of unwanted expression, he nowhere alleged that his own expression was hindered in the slightest.”); Hurley, 515 U.S. at 580 (noting that the “principle of speaker’s autonomy was simply not threatened in” PruneYard). Because NetChoice asserts that S.B. 7072 interferes with the platforms’ own speech rights by forcing them to carry messages that contradict their community standards and terms of service, PruneYard is inapposite.

Nice, simple, and straightforward. As for Rumsfeld v. FAIR, that is also easily different:

FAIR may be a bit closer, but it, too, is distinguishable. In that case, the Supreme Court upheld a federal statute—the Solomon Amendment—that required law schools, as a condition to receiving federal funding, to allow military recruiters the same access to campuses and students as any other employer. 547 U.S. at 56. The schools, which had restricted recruiters’ access because they opposed the military’s “Don’t Ask, Don’t Tell” policy regarding gay servicemembers, protested that requiring them to host recruiters and post notices on their behalf violated the First Amendment. Id. at 51. But the Court held that the law didn’t implicate the First Amendment because it “neither limit[ed] what law schools may say nor require[d] them to say anything.” Id. at 60. In so holding, the Court rejected two arguments for why the First Amendment should apply—(1) that the Solomon Amendment unconstitutionally required law schools to host the military’s speech, and (2) that it restricted the law schools’ expressive conduct. Id. at 60–61.

[….]

FAIR isn’t controlling here because social-media platforms warrant First Amendment protection on both of the grounds that the Court held that law-school recruiting services didn’t.

First, S.B. 7072 interferes with social-media platforms’ own “speech” within the meaning of the First Amendment. Social-media platforms, unlike law-school recruiting services, are in the business of disseminating curated collections of speech. A social-media platform that “exercises editorial discretion in the selection and presentation of” the content that it disseminates to its users “engages in speech activity.” Ark. Educ. TV Comm’n, 523 U.S. at 674; see Sorrell, 564 U.S. at 570 (explaining that the “dissemination of information” is “speech within the meaning of the First Amendment”); Bartnicki v. Vopper, 532 U.S. 514, 527 (2001) (“If the acts of ‘disclosing’ and ‘publishing’ information do not constitute speech, it is hard to imagine what does fall within that category.” (cleaned up)). Just as the must-carry provisions in Turner “reduce[d] the number of channels over which cable operators exercise[d] unfettered control” and therefore triggered First Amendment scrutiny, 512 U.S. at 637, S.B. 7072’s content-moderation restrictions reduce the number of posts over which platforms can exercise their editorial judgment. Because a social-media platform itself “spe[aks]” by curating and delivering compilations of others’ speech—speech that may include messages ranging from Facebook’s promotion of authenticity, safety, privacy, and dignity to ProAmericaOnly’s “No BS | No LIBERALS”—a law that requires the platform to disseminate speech with which it disagrees interferes with its own message and thereby implicates its First Amendment rights.

Second, social-media platforms are engaged in inherently expressive conduct of the sort that the Court found lacking in FAIR. As we were careful to explain in FLFNB I, FAIR “does not mean that conduct loses its expressive nature just because it is also accompanied by other speech.” 901 F.3d at 1243–44. Rather, “[t]he critical question is whether the explanatory speech is necessary for the reasonable observer to perceive a message from the conduct.” Id. at 1244. And we held that an advocacy organization’s food-sharing events constituted expressive conduct from which, “due to the context surrounding them, the reasonable observer would infer some sort of message”—even without reference to the words “Food Not Bombs” on the organization’s banners. Id. at 1245. Context, we held, is what differentiates “activity that is sufficiently expressive [from] similar activity that is not”—e.g., “the act of sitting down” from “the sit-in by African Americans at a Louisiana library” protesting segregation. Id. at 1241 (citing Brown v. Louisiana, 383 U.S. 131, 141–42 (1966)).

Unlike the law schools in FAIR, social-media platforms’ content-moderation decisions communicate messages when they remove or “shadow-ban” users or content. Explanatory speech isn’t “necessary for the reasonable observer to perceive a message from,” for instance, a platform’s decision to ban a politician or remove what it perceives to be misinformation. Id. at 1244. Such conduct—the targeted removal of users’ speech from websites whose primary function is to serve as speech platforms—conveys a message to the reasonable observer “due to the context surrounding” it. Id. at 1245; see also Coral Ridge, 6 F.4th at 1254. Given the context, a reasonable observer witnessing a platform remove a user or item of content would infer, at a minimum, a message of disapproval. Thus, social-media platforms engage in content moderation that is inherently expressive notwithstanding FAIR

The court then takes a further hatchet to both FAIR and Pruneyard:

The State asserts that Pruneyard and FAIR—and, for that matter, the Supreme Court’s editorial-judgment decisions—establish three “guiding principles” that should lead us to conclude that S.B. 7072 doesn’t implicate the First Amendment. We disagree.

The first principle—that a regulation must interfere with the host’s ability to speak in order to implicate the First Amendment— does find support in FAIR. See 547 U.S. at 64. Even so, the State’s argument—that S.B. 7072 doesn’t interfere with platforms’ ability to speak because they can still affirmatively dissociate themselves from the content that they disseminate—encounters two difficulties. As an initial matter, in at least one key provision, the Act defines the term “censor” to include “posting an addendum,” i.e., a disclaimer—and thereby explicitly prohibits the very speech by which a platform might dissociate itself from users’ messages. Fla. Stat. § 501.2041(1)(b). Moreover, and more fundamentally, if the exercise of editorial judgment—the decision about whether, to what extent, and in what manner to disseminate third-party content—is itself speech or inherently expressive conduct, which we have said it is, then the Act does interfere with platforms’ ability to speak. See Pacific Gas, 475 U.S. at 10–12, 16 (plurality op.) (noting that if the government could compel speakers to “propound . . . messages with which they disagree,” the First Amendment’s protection “would be empty, for the government could require speakers to affirm in one breath that which they deny in the next”).

The State’s second principle—that in order to trigger First Amendment scrutiny a regulation must create a risk that viewers or listeners might confuse a user’s and the platform’s speech—finds little support in our precedent. Consumer confusion simply isn’t a prerequisite to First Amendment protection. In Miami Herald, for instance, even though no reasonable observer would have mistaken a political candidate’s statutorily mandated right-to-reply column for the newspaper reversing its earlier criticism, the Supreme Court deemed the paper’s editorial judgment to be protected. See 418 U.S. at 244, 258. Nor was there a risk of consumer confusion in Turner: No reasonable person would have thought that the cable operator there endorsed every message conveyed by every speaker on every one of the channels it carried, and yet the Court stated categorically that the operator’s editorial discretion was protected. See 512 U.S. at 636–37. Moreover, it seems to us that the State’s confusion argument boomerangs back around on itself: If a platform announces a community standard prohibiting, say, hate speech, but is then barred from removing or even disclaiming posts containing what it perceives to be hate speech, there’s a real risk that a viewer might erroneously conclude that the platform doesn’t consider those posts to constitute hate speech.

The State’s final principle—that in order to receive First Amendment protection a platform must curate and present speech in such a way that a “common theme” emerges—is similarly flawed. Hurley held that “a private speaker does not forfeit constitutional protection simply by combining multifarious voices, or by failing to edit their themes to isolate an exact message as the exclusive subject matter of the speech.” 515 U.S. at 569–70; see FLFNB I, 901 F.3d at 1240 (citing Hurley for the proposition that a “particularized message” isn’t required for conduct to qualify for First Amendment protection). Moreover, even if one could theoretically attribute a common theme to a parade, Turner makes clear that no such theme is required: It seems to us inconceivable that one could ascribe a common theme to the cable operator’s choice there to carry hundreds of disparate channels, and yet the Court held that the First Amendment protected the operator’s editorial discretion….

In short, the State’s reliance on PruneYard and FAIR and its attempts to distinguish the editorial-judgment line of cases are unavailing.

How about the “common carrier” argument? Nope. Not at all.

The first version of the argument fails because, in point of fact, social-media platforms are not—in the nature of things, so to speak—common carriers. That is so for at least three reasons.

First, social-media platforms have never acted like common carriers. “[I]n the communications context,” common carriers are entities that “make a public offering to provide communications facilities whereby all members of the public who choose to employ such facilities may communicate or transmit intelligence of their own design and choosing”—they don’t “make individualized decisions, in particular cases, whether and on what terms to deal.” FCC v. Midwest Video Corp., 440 U.S. 689, 701 (1979) (cleaned up). While it’s true that social-media platforms generally hold themselves open to all members of the public, they require users, as preconditions of access, to accept their terms of service and abide by their community standards. In other words, Facebook is open to every individual if, but only if, she agrees not to transmit content that violates the company’s rules. Social-media users, accordingly, are not freely able to transmit messages “of their own design and choosing” because platforms make—and have always made—“individualized” content- and viewpoint-based decisions about whether to publish particular messages or users.

Second, Supreme Court precedent strongly suggests that internet companies like social-media platforms aren’t common carriers. While the Court has applied less stringent First Amendment scrutiny to television and radio broadcasters, the Turner Court cabined that approach to “broadcast” media because of its “unique physical limitations”—chiefly, the scarcity of broadcast frequencies. 512 U.S. at 637–39. Instead of “comparing cable operators to electricity providers, trucking companies, and railroads—all entities subject to traditional economic regulation”—the Turner Court “analogized the cable operators [in that case] to the publishers, pamphleteers, and bookstore owners traditionally protected by the First Amendment.” U.S. Telecom Ass’n v. FCC, 855 F.3d 381, 428 (D.C. Cir. 2017) (Kavanaugh, J., dissental); see Turner, 512 U.S. at 639. And indeed, the Court explicitly distinguished online from broadcast media in Reno v. American Civil Liberties Union, emphasizing that the “vast democratic forums of the Internet” have never been “subject to the type of government supervision and regulation that has attended the broadcast industry.” 521 U.S. 844, 868–69 (1997). These precedents demonstrate that social-media platforms should be treated more like cable operators, which retain their First Amendment right to exercise editorial discretion, than traditional common carriers.

Finally, Congress has distinguished internet companies from common carriers. The Telecommunications Act of 1996 explicitly differentiates “interactive computer services”—like social-media platforms—from “common carriers or telecommunications services.” See, e.g., 47 U.S.C. § 223(e)(6) (“Nothing in this section shall be construed to treat interactive computer services as common carriers or telecommunications carriers.”). And the Act goes on to provide protections for internet companies that are inconsistent with the traditional common-carrier obligation of indiscriminate service. In particular, it explicitly protects internet companies’ ability to restrict access to a plethora of material that they might consider “objectionable.” Id. § 230(c)(2)(A). Federal law’s recognition and protection of social-media platforms’ ability to discriminate among messages—disseminating some but not others—is strong evidence that they are not common carriers with diminished First Amendment rights.

Okay, but what if Florida just declares them to be common carriers? No, no, that’s not how any of this works either:

If social-media platforms are not common carriers either in fact or by law, the State is left to argue that it can force them to become common carriers, abrogating or diminishing the First Amendment rights that they currently possess and exercise. Neither law nor logic recognizes government authority to strip an entity of its First Amendment rights merely by labeling it a common carrier. Quite the contrary, if social-media platforms currently possess the First Amendment right to exercise editorial judgment, as we hold it is substantially likely they do, then any law infringing that right—even one bearing the terminology of “common carri[age]”—should be assessed under the same standards that apply to other laws burdening First-Amendment-protected activity. See Denver Area Educ. Telecomm. Consortium, Inc. v. FCC, 518 U.S. 727, 825 (1996) (Thomas, J., concurring in the judgment in part and dissenting in part) (“Labeling leased access a common carrier scheme has no real First Amendment consequences.”); Cablevision Sys. Corp. v. FCC, 597 F.3d 1306, 1321–22 (D.C. Cir. 2010) (Kavanaugh, J., dissenting) (explaining that because video programmers have a constitutional right to exercise editorial discretion, “the Government cannot compel [them] to operate like ‘dumb pipes’ or ‘common carriers’ that exercise no editorial control”); U.S. Telecom Ass’n, 855 F.3d at 434 (Kavanaugh, J., dissental) (“Can the Government really force Facebook and Google . . . to operate as common carriers?”)

Okay, then, how about the argument that these websites are somehow so important that the state can magically regulate speech on them? Lol, nope, says the court:

The State seems to argue that even if platforms aren’t currently common carriers, their market power and public importance might justify their “legislative designation . . . as common carriers.” Br. of Appellants at 36; see Knight, 141 S. Ct. at 1223 (Thomas, J., concurring) (noting that the Court has suggested that common-carrier regulations “may be justified, even for industries not historically recognized as common carriers, when a business . . . rises from private to be a public concern” (quotation marks omitted)). That might be true for an insurance or telegraph company, whose only concern is whether its “property” becomes “the means of rendering the service which has become of public interest.” Knight, 141 S. Ct. at 1223 (Thomas, J., concurring) (quoting German All. Ins. Co. v. Lewis, 233 U.S. 389, 408 (1914)). But the Supreme Court has squarely rejected the suggestion that a private company engaging in speech within the meaning of the First Amendment loses its constitutional rights just because it succeeds in the marketplace and hits it big. See Miami Herald, 418 U.S. at 251, 258.

In short, because social-media platforms exercise—and have historically exercised—inherently expressive editorial judgment, they aren’t common carriers, and a state law can’t force them to act as such unless it survives First Amendment scrutiny.

So many great quotes in all of this.

Anyway, even once the court has made it clear that content moderation is protected by the 1st Amendment, that’s not the end of the analysis, because there are still some cases in which the state can regulate, so long as the law survives the appropriate level of First Amendment scrutiny. And here, the court says, Florida’s law isn’t even close:

We’ll start with S.B. 7072’s content-moderation restrictions. While some of these provisions are likely subject to strict scrutiny, it is substantially likely that none survive even intermediate scrutiny. When a law is subject to intermediate scrutiny, the government must show that it “is narrowly drawn to further a substantial governmental interest . . . unrelated to the suppression of free speech.” FLFNB II, 11 F.4th at 1291. Narrow tailoring in this context means that the regulation must be “no greater than is essential to the furtherance of [the government’s] interest.” O’Brien, 391 U.S. at 377.

We think it substantially likely that S.B. 7072’s content-moderation restrictions do not further any substantial governmental interest—much less any compelling one. Indeed, the State’s briefing doesn’t even argue that these provisions can survive heightened scrutiny. (The State seems to have wagered pretty much everything on the argument that S.B. 7072’s provisions don’t trigger First Amendment scrutiny at all.) Nor can we discern any substantial or compelling interest that would justify the Act’s significant restrictions on platforms’ editorial judgment. We’ll briefly explain and reject two possibilities that the State might offer.

As for the argument that the state has to protect those poor, poor conservatives against “unfair” censorship, the court points out that’s not how this works:

The State might theoretically assert some interest in counteracting “unfair” private “censorship” that privileges some viewpoints over others on social-media platforms. See S.B. 7072 § 1(9). But a state “may not burden the speech of others in order to tilt public debate in a preferred direction,” Sorrell, 564 U.S. at 578–79, or “advance some points of view,” Pacific Gas, 475 U.S. at 20 (plurality op.). Put simply, there’s no legitimate—let alone substantial—governmental interest in leveling the expressive playing field. Nor is there a substantial governmental interest in enabling users—who, remember, have no vested right to a social-media account—to say whatever they want on privately owned platforms that would prefer to remove their posts: By preventing platforms from conducting content moderation—which, we’ve explained, is itself expressive First-Amendment-protected activity—S.B. 7072 “restrict[s] the speech of some elements of our society in order to enhance the relative voice of others”—a concept “wholly foreign to the First Amendment.” Buckley v. Valeo, 424 U.S. 1, 48–49 (1976). At the end of the day, preventing “unfair[ness]” to certain users or points of view isn’t a substantial governmental interest; rather, private actors have a First Amendment right to be “unfair”—which is to say, a right to have and express their own points of view. Miami Herald, 418 U.S. 258.

How about enabling more speech? That’s not the government’s job either:

The State might also assert an interest in “promoting the widespread dissemination of information from a multiplicity of sources.” Turner, 512 U.S. at 662. Just as the Turner Court held that the must-carry provisions served the government’s substantial interest in ensuring that American citizens were able to access their “local broadcasting outlets,” id. at 663–64, the State could argue that S.B. 7072 ensures that political candidates and journalistic enterprises are able to communicate with the public, see Fla. Stat. §§ 106.072(2); 501.2041(2)(f), (j). But it’s hard to imagine how the State could have a “substantial” interest in forcing large platforms—and only large platforms—to carry these parties’ speech: Unlike the situation in Turner, where cable operators had “bottleneck, or gatekeeper control over most programming delivered into subscribers’ homes,” 512 U.S. at 623, political candidates and large journalistic enterprises have numerous ways to communicate with the public besides any particular social-media platform that might prefer not to disseminate their speech—e.g., other more-permissive platforms, their own websites, email, TV, radio, etc. See Reno, 521 U.S. at 870 (noting that unlike the broadcast spectrum, “the internet can hardly be considered a ‘scarce’ expressive commodity” and that “[t]hrough the use of Web pages, mail exploders, and newsgroups, [any] individual can become a pamphleteer”). Even if other channels aren’t as effective as, say, Facebook, the State has no substantial (or even legitimate) interest in restricting platforms’ speech—the messages that platforms express when they remove content they find objectionable—to “enhance the relative voice” of certain candidates and journalistic enterprises. Buckley, 424 U.S. at 48–49

Another nice bit of language: the court says the government likely can’t force websites to display content chronologically rather than ranked by an algorithm (which is a big deal, as many states are trying to mandate just that):

Finally, there is likely no governmental interest sufficient to justify forcing platforms to show content to users in a “sequential or chronological” order, see § 501.2041(2)(f), (g)—a requirement that would prevent platforms from expressing messages through post-prioritization and shadow banning.

Finally, there’s a great footnote that acknowledges the problem we pointed out with regard to Texas’ law and the livestream of the mass murderer in Buffalo, noting how the same issue could arise under Florida’s law:

Even worse, S.B. 7072 would seemingly prohibit Facebook or Twitter from removing a video of a mass shooter’s killing spree if it happened to be reposted by an entity that qualifies for “journalistic enterprise” status.

And, that’s basically it. As noted up top, there are a few fairly minor provisions that the court says should not be subject to the injunction, and we’ll have another post on that shortly. But for now, this is a pretty big win for the 1st Amendment, actual free speech, and the rights of private companies to moderate as they see fit.

Hilariously, Florida is pretending it won the ruling, because of the few smaller provisions that are no longer subject to the injunction. But this is a near complete loss for the state, and a huge win for free speech.

Posted on Techdirt - 20 May 2022 @ 12:22pm

No, Twitter Doesn’t Want To ‘Censor’ Anyone, It Just Wants Everyone To Stop Attacking Each Other

Last month I wrote about how, contrary to the weird narrative, Twitter has actually been among the most aggressive companies fighting for free speech online. Many people who criticize it are wrong, or just uninformed. Mostly, they think (falsely) that because Twitter doesn’t want some speech that you like on its site, it somehow means the company is against free speech. The reality is a lot more complicated, of course. As we pointed out, former Reddit CEO Yishan Wong’s long thread about content moderation highlighted that people doing content moderation generally aren’t making decisions based on politics; they just want people to stop fighting all the time.

Recently, the Washington Post published an excellent article about Vijaya Gadde, Twitter’s top lawyer, who also runs the company’s trust and safety efforts. The article describes how she is a strong defender of free speech who also recognizes that, to support free speech, you have to come up with plans to deal with abusive, malignant users. That doesn’t mean automatically banning them, but rather exploring the solution space to see what kinds of programs you can put in place to limit the destructive nature of some users.

I recognize that this is a space filled with people who insist their emotional beliefs are the be-all and end-all when it comes to content moderation, but it would be nice if at least some of those people were willing to actually read through articles like this one, which highlight how many different trade-offs and nuances there are in these discussions.

Twitter colleagues describe Gadde’s work as difficult but necessary and unmotivated by political ideology. Defenders say her team, known as the trust and safety organization, has worked painstakingly to rein in coronavirus misinformation, bullying and other harmful speech on the site, moves that necessarily limit some forms of expression. They have also disproportionately affected right-leaning accounts.

But Gadde also has tried to balance the desire to protect users with the values of a company built on the principle of radical free speech, they say. She pioneered strategies for flagging harmful content without removing it, adopting warning labels and “interstitials,” which cover up tweets that break Twitter’s rules and give people control over what content they see — strategies copied by Twitter’s much larger rival, Facebook.

The article also details how she has led the company’s aggressive pushback against foreign laws that are real attacks on free speech:

For years, she has been the animating force pushing Twitter to champion free expression abroad. In India and Turkey, for example, her team has resisted demands to remove content critical of repressive governments. In 2014, Gadde made Twitter the only Silicon Valley company to sue the U.S. government over gag orders on what tech companies could say publicly about federal requests for user data related to national security. (Five other companies settled.)

Contrast that with Elon Musk, who quickly endorsed the EU’s approach to platform regulation at a time when Twitter, under Gadde’s leadership, was pushing back against parts of that plan by noting how it conflicts with basic free speech concepts.

The article highlights, as we have tried to do for years, that content moderation is a complicated and nuanced topic that doesn’t fit neatly into the arguments around “free speech.” Part of this is that social media isn’t just about speech, but about being able to get your speech in front of a specific audience. People mostly don’t care if you spout bullshit nonsense on your own website, where only those who seek it out can find it. But because of the nature of Twitter and how it connects users, it allows people to inject their speech into the notifications of others — and that creates openings for abuse and harassment that actually harm free speech, by driving people out of the wider discussion entirely.

There is, obviously, some level of balance here. Not all criticism is abusive or harassing; indeed, most criticism isn’t, even if it may feel that way to those on the receiving end of it. But anyone trying to build an inclusive and trustworthy forum needs to recognize that bad actors push thoughtful users away, and at least some plan needs to be in place to deal with that.

But part of that is that Twitter’s DNA has always been to favor more speech over less, and the company really only takes action in fairly extreme cases, when it is pushed to the edge and no other decision is reasonably tolerable if the site wants to keep its users.

Even as the company took action to limit hate speech and harassment, Gadde resisted calls to police mere misinformation and falsehoods — including by the new president.

“As much as we and many of the individuals might have deeply held beliefs about what is true and what is factual and what’s appropriate, we felt that we should not as a company be in the position of verifying truth,” Gadde said on a 2018 Slate podcast, responding to a question about right-wing media host Alex Jones, who had promoted the falsehood on his show, Infowars, that the Sandy Hook school shooting was staged.

The company was slammed for statements like this at the time, but it believed strongly that it was drawing the line in a place that made the most sense to be broadly inclusive. Of course, that line moves over time as the context and the world around us change. In the early days of the pandemic, with people dropping dead everywhere, at some point most people were going to realize that spreading misinformation that leads to more people dying feels morally disturbing.

It’s not out of any political beliefs, or a desire to “censor” viewpoints. It’s just a basic moral stance on how to help the public stay alive.

The company, also under her leadership, pushed for alternative tools for dealing with misinformation, rather than the go-to move of taking down content:

Meanwhile, Gadde and her team were working with engineers to develop a warning label to cover up tweets — even from world leaders such as Trump — if they broke the company’s rules. Users would see the tweet only if they chose to click on it. They saw it as a middle ground between banning accounts and removing content and leaving it up.

In May 2020, as Trump’s reelection campaign got underway, Twitter decided to slap a fact-checking label on a Trump tweet that falsely claimed that mail-in ballots are fraudulent — the first action by a technology company to punish Trump for spreading misinformation. Days later, the company acted again, covering up a Trump tweet about protests over the death of George Floyd that warned “when the looting starts, the shooting starts.” More such actions followed.

And while some people insisted that this was a form of “censorship,” it was actually the opposite. It was literally “more speech” responding to speech that Twitter felt was problematic. Twitter was one of the first companies to use this approach as an alternative to removing speech… and yet it still resulted in very angry people insisting it was proof of censorship.

Anyway, there’s a lot more in the article, but it’s a really good and thorough look not just at the various tradeoffs and nuances at play, but also how Twitter’s current management made some of those decisions, not to try to silence voices, but quite the opposite.

Posted on Techdirt - 20 May 2022 @ 09:33am

If You Think Free Speech Is Defined By Your Ability To Be An Asshole Without Consequence, You Don’t Understand Free Speech (But You Remain An Asshole)

One of the more frustrating things about the various “debates” regarding “free speech” lately is how little they are actually about free speech. Quite often, they are actually about people who are quite upset about having to face social consequences for their own free speech. But facing social consequences has always been part of free speech. Indeed, it’s part of the vaunted “marketplace of ideas.” If people think your ideas aren’t worth shit, they may ignore or shun you… or encourage others to do the same.

Over at The Bulwark, Prof. Nicholas Grossman has a really good article exploring Elon Musk’s attempt at reframing the debate over free speech. It is well worth reading. The crux of the argument that Grossman makes (in great detail that you should go read to have it all make sense) is that when you break down what Musk actually seems to be thinking about free speech, his definition hews closely to what a lot of trolls think free speech means: the right to be a total asshole without consequence.

The article highlights what many of us have said before (disclaimer, it does link to some of my writing on the subject), that the real underlying question is not actually about free speech, but where society should draw the line on what is, and what is not, acceptable in public company. And that’s really what this is all about. Free speech, as a concept, has to fall back on whether or not the government suppresses speech. For all the talk about social consequences of free speech, or whether or not there is a “culture of free speech” or “principles of free speech,” everyone has some level of internal voice that notes what kind of speech they feel goes too far for polite company — even if they don’t think such speech should be illegal.

But, then, the question becomes, if there is some speech that I, personally, don’t wish to associate with, should others be forced to do so? And that’s where the debates over content moderation actually live. In that space that says “where should the line be drawn” for what is acceptable and what is not. And when you look closely at the actual debate, it always comes down to “I want to be a disrespectful asshole to people I don’t like, and I don’t want to face any consequences for it.”

As Grossman aptly notes, a private company deciding whether or not to host your content isn’t really a free speech issue at all. Every platform agrees that some moderation is necessary. Every platform that has tried to do otherwise has changed course, often within days.

Multiple Twitter alternatives have been tried, all vowing to be “free speech” platforms that don’t moderate content. Every one of them—Gab, Parler, Gettr, etc.—has ended up moderating speech and enforcing rules, because what their “unfettered free speech” resulted in was doxxing, promotion of violence, and various other depravities that underscored why content moderation became the norm on the internet in the first place. And all these alternative platforms have flopped as businesses because “Twitter for people who want to post things you can’t post on Twitter” isn’t appealing to most users.

For business reasons, if nothing else, Twitter under Elon Musk would still moderate content. It might, however, change which users it prioritizes.

On top of that, he demolishes the idea that content moderation is about “leftists” trying to “censor” conservative voices:

This supposed bias is an article of faith for large swaths of the right, but when serious researchers have gone looking for it, they don’t find empirical support. A 2021 study found that, across seven advanced democratic countries, Twitter’s algorithm boosts posts by right-wing politicians and parties a little more than posts by left-wing politicians and parties. Another 2021 study set loose some politically neutral “drifter bots” on Twitter and found strong evidence of conservative bias, but “from user interactions (and abuse) rather than platform algorithms.”

Content moderation decisions can be haphazard, not least because the Big Tech business model means a small number of employees rely on algorithms and user reporting to oversee far more content than they can possibly handle. Public perception of these decisions often derives from a few anecdotes repeated by interested parties, and doesn’t match the data. For example, a 2022 paper found strong support in the U.S.—from both Democrats and Republicans—for social media companies taking action against misinformation. Of accounts banned or suspended for misinformation, more were conservative than liberal, but there was no evidence of political bias in enforcement decisions. Every banned or suspended account had clearly violated terms of service, it’s just that people on the right happened to break misinformation rules more often.

So, if there’s no actual evidence of bias, and everyone (even Musk) recognizes that there needs to be some level of moderation, what is this “debate” really about? As Grossman highlights, it basically all comes down to whether or not you can be a total asshole without having a social media site say “that crosses our line of what we feel is appropriate here.” He uses the example of the Babylon Bee, whose Twitter suspension for misgendering someone has been pointed to as the catalyst for Musk to decide to buy Twitter.

But is that actually a “free speech” issue?

Of course not. You can be an asshole all you want, and you can disrespect people in obnoxious ways, proudly highlighting your own moral degeneracy all you want. You just can’t expect everyone else to support you in doing it, and not tell you when they feel your behavior has crossed their specific line, their terms of providing service.

So, yes, Elon Musk can take over Twitter, and then he has every right to change the rules to whatever he wants. Just like Gab and Parler and GETTR and Truth Social and others have every right to set their own rules as well. But none of those are actually battles about “free speech.” They’re battles about where private entities draw the line of what they feel is and is not appropriate on their own property.

And when you look at it that way, you realize that none of Musk’s arguments are actually for free speech. They’re for his desire to redraw the line to allow more assholes on one site, without consequence. And, as Grossman notes, this insistence that it’s about free speech, really really distorts the underlying principles of free speech.

Twitter is a private company, and its rules are up to its owners, whether that’s Elon Musk or anyone else. As a supporter of the First Amendment, I accept that, even if I don’t agree with their choices. But as someone who greatly values free speech—not just legal protections from government, but a culture that fosters expression and dialogue—I refuse to cede the concept of free speech to those who think a defining feature is trolls trying to drive trans people and other minorities off social media.

And that’s exactly right. I’ll fight more than anyone to actually protect the 1st Amendment, and your rights to say what you want and to be an asshole on your own property. But there is nothing “free speech” about just demanding that private entities draw the line for “what level of asshole do we allow” somewhere more assholish.

Posted on Techdirt - 19 May 2022 @ 01:38pm

Homeland Security Once Again Demonstrates Its Own Incompetence, ‘Pauses’ Orwellian Named Disinfo Board

All of this was easily predictable for, well, basically anyone. The already Orwellian-named Department of Homeland Security last month announced the even more Orwellian-named Disinformation Governance Board, with no details, no explanation, and no nothing, other than naming a somewhat controversial researcher to lead it. We called out just how ridiculous the whole thing was at the time, for a variety of reasons, but just to recap:

  1. In theory, a government commission to better understand the nature and flow of disinformation and how to counter it could be a useful thing, but the details absolutely matter. Any attempt to actually limit protected speech (and, yes, disinformation is protected speech) would be an obvious 1st Amendment violation.
  2. Launching anything around disinformation without explaining, in great detail, all of those important details just leaves the field wide open for, well, disinformation to flood in and fill the gaps. Which is exactly what happened. Tons of people (some with good motivations, but plenty with laughably bad ones) immediately filled the void with claims that this board was about censoring speech. Again, that would be unconstitutional if true, but DHS’s failure to explain itself left the claim open to speculation.
  3. Whoever decided to name the damn thing the Disinformation Governance Board deserves to be fired, and should never have anything to do with disinformation ever again. Everything about the name screams that it would be about controlling speech.

In the weeks that followed, DHS continued to insist that everyone screaming at them was getting it all wrong, but then refused to ever explain what the board was going to do. Once again, this is like step one in countering disinformation: knowing that if official sources refuse to explain things, conspiracy theories and nonsense will always step in to fill the void.

I am still flabbergasted that the people setting up a board about disinformation didn’t understand or expect any of this.

Of course, if the purpose of the board was actually to educate the DHS itself on its own complete lack of comprehension about disinformation, maybe that would have been useful, because it’s now quite obvious that DHS was completely incompetent here and in way over its head. Of course, this is the DHS we’re talking about and “completely incompetent” and “in way over its head” are descriptors that can be applied quite frequently to the Department over the past two decades of its existence.

Either way, the Disinformation Governance Board is now on pause, and, even in trying to walk back the project, DHS has flubbed basically everything yet again. The whole thing sounds like a clusterfuck of incompetence:

Now, just three weeks after its announcement, the Disinformation Governance Board is being “paused,” according to multiple employees at DHS, capping a back-and-forth week of decisions that changed during the course of reporting of this story. On Monday, DHS decided to shut down the board, according to multiple people with knowledge of the situation. By Tuesday morning, Jankowicz had drafted a resignation letter in response to the board’s dissolution.

But Tuesday night, Jankowicz was pulled into an urgent call with DHS officials who gave her the choice to stay on, even as the department’s work was put on hold because of the backlash it faced, according to multiple people with knowledge of the call. Working groups within DHS focused on mis-, dis- and mal-information have been suspended. The board could still be shut down pending a review from the Homeland Security Advisory Council. On Wednesday morning, Jankowicz officially resigned from her role within the department.

Incredibly, in this Washington Post article about the Board effectively shutting down, three weeks after the Board was announced, we get the first statements from DHS finally trying to explain what the Board was supposed to do, which is the kind of thing that anyone with any understanding of anything would have, maybe, been prepared to explain on day one.

The board was created to study best practices in combating the harmful effects of disinformation and to help DHS counter viral lies and propaganda that could threaten domestic security. Unlike the “Ministry of Truth” in George Orwell’s “1984” that became a derogatory comparison point, neither the board nor Jankowicz had any power or ability to declare what is true or false, or compel Internet providers, social media platforms or public schools to take action against certain types of speech. In fact, the board itself had no power or authority to make any operational decisions.

“The Board’s purpose has been grossly mischaracterized; it will not police speech,” the DHS spokesperson said. “Quite the opposite, its focus is to ensure that freedom of speech is protected.”

So, um, if it wasn’t designed to police speech, then WHY THE FUCK did you call it a “governance” board, and why did you not have CLEAR, DETAILED, AND THOROUGH explanations for what the board was set up to do, and what authority it had, ON DAY ONE?

Also, all of this really only served to demonstrate how DHS has no fucking clue how to counter disinformation (which, ironically, supports the reason it needed a board, not for “governance” but to educate its own ignorant self):

As she endured the attacks, Jankowicz herself was told to stay silent. After attempting to defend herself on Twitter April 27, she was told by DHS officials to not issue any further public statements, according to multiple people close to her.

Democratic lawmakers, legislative staff and other administration employees who sought to defend Jankowicz were caught flat-footed. Administration officials did not brief the relevant congressional staff and committees ahead of the board’s launch, and members of Congress who had expressed interest in disinformation weren’t given a detailed explanation about how it would operate. A fact sheet released by DHS on May 2 did nothing to quell the outrage that had been building on the Internet, nor did it clarify much of what the board would actually be doing or Jankowicz’s role in it.

DHS staffers have also grown frustrated. With the department’s suspension of intra-departmental working groups focused on mis-, dis- and mal-information, some officials said it was an overreaction that gave too much credence to bad-faith actors. A 15-year veteran of the department, who spoke on the condition of anonymity because he was not authorized to comment publicly, called the DHS response to the controversy “mind-boggling.” “I’ve never seen the department react like this before,” he said.

Indeed, the rest of the WaPo article contains way more information about how to counter disinformation campaigns, including effectively detailing how incredibly, predictably incompetent DHS was in just about every move it made here.

“The irony is that Nina’s role was to come up with strategies for the department to counter this type of campaign, and now they’ve just succumbed to it themselves,” said one Hill staffer with knowledge of the situation who spoke on the condition of anonymity because they were not authorized to speak on the issue. “They didn’t even fight, they just rolled over.”

And, of course, as the article also notes, this is going to make it that much more difficult for DHS to even educate itself to correct this kind of error, because who is going to be willing to come work for DHS to explain to it how not to get played this way when the person they brought in last time is now facing disingenuous death threats, all because of DHS’s own botched rollout?

My only complaint with the WaPo article is that it argues, incorrectly, that it was only disingenuous Trumpists who complained about the Board. That was the vocal part of it, for sure, but plenty of people who simply believe in basic civil rights, like us, also complained about the setup of the board and the lack of any clear explanation of what it was there to do, noting how problematic it could be; without details, we had no way to know.

Posted on Techdirt - 19 May 2022 @ 09:32am

NY Launches Ridiculous, Blatantly Unconstitutional ‘Investigations’ Into Twitch, Discord; Deflecting Blame From NY’s Own Failings

I recognize that lots of people are angry and frustrated over the mass murdering jackass who killed ten people at a Buffalo grocery store last weekend. I’m angry and frustrated about it as well. But the problem with anger and frustration is that it often leads people to lash out in irrational ways, and to “do something” even if that “something” is counterproductive and destructive. In this case, we’ve already seen politicians and the media trying to drive the conversation away from larger issues around racism, mental health, law enforcement, social safety nets and more… and look for something to blame.

While they seem to recognize that they can’t actually blame news outlets that have fanned the flames of divisiveness and bigotry and hatred — because of the 1st Amendment — for whatever reason, they refuse to apply that basic recognition to newer media, such as video games and the internet.

We already discussed how NY’s governor, Kathy Hochul, seemed really focused on blaming internet companies for her own state’s failures to stop the shooter, and now her Attorney General, Letitia James, has made it official: she’s opening investigations into Twitch, 4chan, 8chan, and Discord, claiming that those were the platforms used by the murderer. James notes that she’s doing this directly in response to a request from Hochul.

“The terror attack in Buffalo has once again revealed the depths and danger of the online forums that spread and promote hate,” said Attorney General James. “The fact that an individual can post detailed plans to commit such an act of hate without consequence, and then stream it for the world to see is bone-chilling and unfathomable. As we continue to mourn and honor the lives that were stolen, we are taking serious action to investigate these companies for their roles in this attack. Time and time again, we have seen the real-world devastation that is borne of these dangerous and hateful platforms, and we are doing everything in our power to shine a spotlight on this alarming behavior and take action to ensure it never happens again.”

It has been reported that the shooter posted online for months about his hatred for specific groups, promoted white supremacist theories, and even discussed potential plans to terrorize an elementary school, church, and other locations he believed would have a considerable community of Black people to attack. Those postings included detailed information about plans to carry out an attack in a predominantly Black neighborhood in Buffalo and his visits to the site of the shooting in the weeks prior. The shooter also streamed the attack on another social media platform, which was accessible to the public, and posted a 180-page manifesto online about his bigoted views.

She claims that these investigations are authorized by the very broad law granting the AG the power to investigate issues related to “the public peace, public safety and public justice.” Except the 1st Amendment does not allow the government to regulate speech, and that is what this investigation actually amounts to: an attempt to regulate speech.

Imagine the (quite reasonable) outrage if James announced she was opening an investigation into Fox News. Or, if you’re on the other side of the political aisle, imagine if Texas AG Ken Paxton announced an investigation into MSNBC. You’d immediately argue that those were politically motivated intimidation techniques, designed to suppress the free speech rights of those organizations.

The same is true here.

Or, if you’re going to argue that these websites are somehow different than news channels, let’s try this on for size. If you’re okay with James doing this investigation, are you similarly okay with Paxton investigating Discord, Facebook, Twitter and other such sites for hosting groups formed to help women get an abortion? Or how would you feel if Florida’s Ashley Moody began investigating these sites for helping schoolchildren get access to books that are being banned?

You’d be correctly outraged, as you should be in either case.

Anything that you could possibly “blame” any of these sites for is obviously protected by the 1st Amendment. First off, it’s almost guaranteed that none of these organizations had detailed knowledge of this one terrible person’s screeds and planning. Even in a world absent Section 230, the lack of actual knowledge by these platforms would mean that they could not be held liable, under the 1st Amendment.

Then again, we’re in a world where we do have Section 230, and that further makes this plan for an investigation ridiculous, because it seems quite clear that this investigation is an attempt to hold websites liable for the speech of one of their users. And that’s not allowed under 230.

Of course, you might argue that it’s not an attempt to hold them liable for his speech, but his murderous actions. But you’d still be wrong, because he didn’t use any of these websites to murder people. He may have used them to talk about whatever hateful ideology he has, and his plans, but that’s not (in any way) the same thing.

Meanwhile, it’s difficult to look at this and not think that AG James and Governor Hochul are hyping this all up to deflect from their own government’s failings. It’s now been widely reported that the shooter had made previous threats that law enforcement investigated. It’s also been reported that the weapon he used in the shooting included a high capacity magazine that is illegal in NY. Also, and this may be the most damning of all: there are reports that someone in the grocery store called 911 and the dispatcher HUNG UP ON THEM.

In other words, there appear to be multiple examples of how NY’s own law enforcement failed here. And I guess it’s not surprising that the Governor and the highest law enforcement officer of the state would rather pin the blame elsewhere, than reflect on how they, themselves, failed.

But, that lack of introspection is how we continue failing.

Posted on Techdirt - 18 May 2022 @ 10:45am

Everyone’s Got Terrible Regulations For Internet Companies: Senator Bennet Wants A ‘Digital Platform Commission’

Everyone these days seems to want to regulate social media. I mean, the reality is that social media is such a powerful tool for expression that everyone wants to regulate that expression. Sure, they can couch it in whatever fancy justifications they want, but at the end of the day, they’re still trying to regulate speech. And this is across the political spectrum. While, generally speaking, the Republican proposals are more unhinged, it’s not like the Democratic proposals are actually any more reasonable or constitutional.

The latest on the Democratic side to throw his hat in the ring is Colorado Senator Michael Bennet, who has a bizarre proposal for a new Digital Platform Commission. He compares it to the FDA, the FCC, and the more recent Consumer Financial Protection Bureau (CFPB). Except, this is different. Regulating social media means regulating speech. So the FDA and the CFPB examples are not relevant. The only one that actually involves regulating businesses engaged in the transmission of expression is the FCC, and that’s part of the reason why the FCC’s mandate has always been quite narrow, covering specific areas where the government can get involved: e.g., in regulating scarce spectrum.

Defenders of the bill claim that it’s not there to regulate speech, but so much of what the bill tiptoes around is that Bennet is unhappy with what is clearly 1st Amendment protected activity by these websites. The bill kicks off by arguing that these websites have disseminated disinformation and hate speech, both of which are protected speech. It complains about them “abetting the collapse of trusted local journalism,” which is a weird way to say “local journalism outfits failed to adapt.” It blames the websites for “radicalizing individuals to violence.” But you could just as easily say that about Fox News or OAN, and hopefully most people recognize how promulgating regulations in response to those organizations would be a serious 1st Amendment problem. It trots out a line claiming that social media has “enabled addiction,” a claim that is often made, but without any real support or evidence.

It’s basically one big moral panic.

Anyway, what would this new Commission regulate? Well, a lot of stuff that touches on speech, even as it tries to pretend otherwise. Among other things, this Commission would be asked to issue rules on:

requirements for recommendation systems and other algorithmic processes of systemically important digital platforms to ensure that the algorithmic processes are fair, transparent, and without harmful, abusive, anticompetitive, or deceptive bias;

So, recommendations are opinions, and opinions are speech. That’s regulating speech. Also, given what we’ve seen with things like Texas’ social media law, which uses similar language, it’s not at all difficult to predict how a commission like this under a future Trump administration would push rules about “deceptive bias.”

I am perplexed at how a Democratic Senator could possibly write a law like this and not consider how a Trump administration would abuse it.

There’s a lot more in the bill, including other ideas that wouldn’t directly impact speech, but the whole thing is ridiculous. It’s setting up an entire new regulatory agency over social media. We know what happens in situations like this. You get regulatory capture (witness how often the FCC is controlled by telecom interests leading to a lack of competition).

You also get a death of innovation. Regulated industries are slow, lumbering, wasteful entities where it’s difficult, if not impossible, to generate new startups and competition. Effectively this bill would hand over most internet innovation to foreign companies.

It’s a ridiculously dangerous move.

I am perplexed. For the last few years we’ve seen non-stop unhinged moral panic from both parties about the internet. As we noted last year, both parties are playing into this, because both parties are trying to twist the internet to their own interests. That’s not what the internet is for. The internet is supposed to be an open network for the public, not one managed by captured bureaucrats.

Posted on Techdirt - 17 May 2022 @ 10:47am

Author Of Texas’ Social Media Law Admits That He Meant The Law To Exempt Any Moderation Decisions Protected By Section 230 (That’s Everything)

Well, this is awkward. Yesterday I wrote about how there was a strong argument that Twitch’s removal of the Buffalo mass murderer’s livestream of his killing spree violated Texas’s ridiculous social media law. The main saving grace for Twitch would be that it was possible (though unclear) that its userbase was just under the 50 million average monthly US users required to trigger the law. However, even if the law didn’t reach Twitch, it definitely reaches Facebook and Twitter, two other platforms that have been trying (and not always succeeding) to remove the video.

That said, it was a bit surprising when the main author of the bill, Briscoe Cain, showed up in my Twitter mentions to insist that the bill does not prevent Twitch from removing the video. His answer was revealing, though not in the way he meant it to be.

If you can’t see the image, Cain says that “HB20 specifically authorizes social media platforms to censor that kind of content.” Then he posts a screenshot of two laws. First he posts the part of HB20 (Section 143A.006) that says:

This chapter does not prohibit a social media platform from censoring expression that:

(1) the social media platform is specifically authorized to censor by federal law;

And he highlights the “federal law” part. Then he, somewhat amazingly, posts a screen shot of the Good Samaritan section of Section 230, and specifically highlights the “excessively violent” part of 230(c)(2).

No provider or user of an interactive computer service shall be held liable on account of–

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.

So, there are many, many, many problems with this, but let’s get to the biggest one. Mainly: he is admitting that any moderation choices that are protected under Section 230 are exempt from his law, because he’s claiming that his law incorporates Section 230. Which is all moderation choices. Which means he is admitting that his law actually does nothing at all. Or, at best, that it’s a kind of “trigger law” that really only matters if Section 230 is repealed or massively reformed.

Considering that, in defending the law, the State of Texas explicitly claimed that HB20 is not preempted by Section 230, this is quite an admission. Here was the argument the state made, which the author of the bill now concedes as false:

Section 230 simply does not preempt H.B. 20. This is so for two reasons. Preemption is a specific concept: “Congress enacts a law that imposes restrictions or confers rights on private actors; a state law confers rights or imposes restrictions that conflict with the federal law; and therefore the federal law takes precedence and the state law is preempted.” The “restrictions” that H.B. 20 imposes on interactive computer services do not conflict with the “rights”—immunity from damages liability for third party content hosted— Section 230 confers on them.

So, HB20 supposedly is not preempted by Section 230, but since Section 230 protects these moderation choices and HB20 gives way to whatever federal law authorizes… it is?

Anyway, Cain’s argument is even dumber. Note what HB20 says: that it does not prohibit moderation choices (which he falsely calls censorship) if the website is “specifically authorized to censor by federal law.” The implication of his claim, then, is that he thinks (incorrectly) that moderation only exists on social media platforms because 230 “authorizes” them to moderate.

That is very, very wrong. The 1st Amendment is what allows websites to moderate. They have their own 1st Amendment rights that allow for editorial discretion and a right not to associate with anyone or any idea. Section 230 simply provides a procedural setup that allows bogus mistargeted lawsuits to get kicked out of court quickly.

But just the fact that Briscoe Cain thinks that social media websites need 230 to “authorize” them to moderate raises questions about his competence as an actual legislator to understand literally any of this.

Of course, when people started to confront him over this, he refused to give a direct answer, and started claiming that people had trouble reading his law. I don’t believe that’s true. The actual problem is that Cain apparently doesn’t even understand the law he has written, and how it intersects with both Section 230 and the 1st Amendment.

Yet another reminder: we should elect fewer stupid people.

Posted on Techdirt - 16 May 2022 @ 12:20pm

Blaming Social Media And Section 230 For Mass Shootings Is Ridiculous; Stop It

In the past, we’ve talked about how much of politicians’ obsession with regulating internet companies seems to stem from it being an easy way to deflect attention from their own policy failings. So many aspects of the complaints about social media are really just because social media has shined an extraordinarily bright light on the inability of the government to actually deal with underlying societal issues around mental health, social safety nets, criminal law… that then bubble up elsewhere. And it’s a lot easier for politicians to just point the finger at social media, rather than to admit their own failings.

This past weekend’s mass murder in Buffalo is just the latest example of this. We had already mentioned this, in passing, in our story on how Twitch taking down the live stream likely violated Texas’ social media content moderation law, but NY Governor Kathy Hochul seems to be doing everything possible to deflect any responsibility for the horrific incident and to point all the blame at social media.

Even though Twitch apparently took down the livestream in about two minutes, that wasn’t good enough for Hochul, who said that if it wasn’t down in a second, it was a problem:

The governor blasted social media platforms following the shooting, demanding companies be more vigilant in monitoring their content.

“This execution of innocent human beings could be live-streamed on social media platforms and not taken down within a second says to me that there is a responsibility out there,” she said.

She then went on Meet the Press this weekend to continue to deflect attention from the other issues around mental health, law enforcement, etc., all of which are clearly much more central to this. But all of those implicate her actual failures. So instead, she focused on the evils of social media. Of course, it was Chuck Todd who brought it up, pointing the finger at Section 230.

CHUCK TODD: Well, let’s talk about holding these internet companies responsible. Obviously, there’s this law on the books that allows the internet to, sort of, escape liability on so many things that, frankly, we, as television broadcasters, cannot escape the same liability. Do you think they should be held responsible for the easy spread of this propaganda?

So, first of all, this entire line of questioning is bullshit. He’s obviously referring to Section 230, but he’s wrong. There is no law that holds TV broadcasters liable for the spread of propaganda. Propaganda is protected under the 1st Amendment, and lots of people are noting that many of the shooter’s ideas were, in fact, mainstreamed not on social media, but by people like Tucker Carlson.

So even if there were no Section 230, there is no cause of action for spreading propaganda.

Hochul, of course, is happy to take the lifeline and use it to blame social media for her state government’s own failings:

GOV. KATHY HOCHUL: I hold them responsible for not monitoring and alerting law enforcement. That’s exactly the issue here, is that it is fomenting. People are sharing these ideas. They’re sharing videos of other attacks. And they’re all copycat. They all want to be the next great white hope that’s going to inspire the next attack. We can’t let that continue. And we know where it’s occurring. It’s not happening in the basement of a KKK meeting anymore where you have a limited number of people who are succumbing to these evil influences. This is happening globally. They’re looking at what happened in New Zealand and what happened in Pittsburgh and what happened in South– they read this. They absorb this. This becomes part of their mentality. And they share it with others through the internet. And that’s the responsibility of the internet and of the individuals who are responsible are the ones who own these companies. And I’m going to be talking to them directly.

Look, lots of us can agree that this kind of speech is troubling, and the ability of these ideas to catch hold speaks poorly to a lot of things. But again, the speech is protected by the 1st Amendment, and you can’t just magically make that disappear. The real issue, again, gets back to things that actually are under Governor Hochul’s mandate: improving mental health care, and improving education to make people less susceptible to this kind of nonsense.

But rather than talking about that, it’s easier to point blame at the internet. Bizarrely, Chuck Todd (after insisting, falsely, in his previous question that TV broadcasters can be held liable for spreading propaganda) then points out that TV commentators can’t in fact be held liable for spreading propaganda, because of this pesky free speech thing.

CHUCK TODD: We also have TV commentators and some political figures that, sort of, appease this right-wing extremism. Sort of, you know, anybody that pushes back, maybe they come after it on speech grounds, freedom of speech or things like this, that it certainly seems as if there is a growing virus on the far right here that is spreading dangerously.

So, now you admit that the 1st Amendment is, indeed, what prevents people or companies from being held liable for propaganda (but you still got your false dig in at the internet). But Hochul then pulls out basically all the ridiculous 1st Amendment tropes, including “I support the 1st Amendment, but…” and “fire in a crowded theater.”

GOV. KATHY HOCHUL: And they need to be held accountable as well. And any government leader that does not condemn this and condemn it today is a coward, and they’re also partially responsible. So let’s just be real honest about the role of elected leaders. And what they need to be doing is calling this out and not coddling this behavior and saying that, “Well, that’s just young people and they’re sharing their ideas.” Yeah, I’ll protect the First Amendment any day of the week. But you don’t protect hate speech. You don’t protect incendiary speech. You’re not allowed to scream “fire” in a crowded theater. There are limitations on speech. And right now, we have seen this run rampant. And as a result, I have ten dead neighbors in this community. And it hurts. And we’re going to do something about it.

Whether you like it or not, hate speech is absolutely protected under the 1st Amendment. And that’s probably for a good reason, because elsewhere we see time and time again how hate speech laws are abused to silence people criticizing the government or the police.

If you want to do something about this, focus on things you actually can do: mental health, education, social safety nets so people don’t feel abandoned. These are the things you’re supposed to be doing as government officials. Helping society. Not blaming speech you don’t like.

And it’s not just Kathy Hochul trying to deflect and point the blame finger elsewhere. Rep. Debbie Wasserman Schultz’s first instinct was to blame Section 230.

Same with Senator Tim Kaine who blamed… “Big Tech” even though the shooter himself said he was radicalized on 4chan, which I don’t recall being included with the big tech companies in any listing.

Again, all of this is deflection. Big tech is an easy punching bag, even if there is no evidence it has anything to do with anything. And, it ignores that the 1st Amendment protects even speech we dislike.

These politicians have failed us, more broadly, by failing to protect the most vulnerable in society. They’ve failed to put in place the kind of educational resources, mental health care, and societal safety nets to help those who most need it. And, now, when the results of those failures explode like this, they want to blame social media, because it’s a hell of a lot easier than looking at their own failings.

Posted on Techdirt - 16 May 2022 @ 10:46am

Did Twitch Violate Texas’ Social Media Law By Removing Mass Murderer’s Live Stream Of His Killing Spree?

As you’ve no doubt heard, on Saturday there was yet another horrific shooting, this one in Buffalo, killing 10 people and wounding more. From all current evidence, the shooter, a teenager, was a brainwashed white nationalist, spewing nonsense and hate in a long manifesto that repeated bigoted propaganda found in darker corners of the internet… and on Fox News’ evening shows. He also streamed the shooting rampage live on Twitch, and apparently communicated some of his plans via Discord and 4chan.

Twitch quickly took down the stream and Discord is apparently investigating. All of this is horrible, of course. But, it seems worth noting that it’s quite possible Twitch’s removal could violate Texas’ ridiculously fucked up social media law. Honestly, the only thing that might save the two companies (beyond the fact that it’s unlikely someone would go to court over this… we think) is that both Twitch and Discord might be just ever so slightly below the 50 million average monthly US users required to trigger the law. But that’s not entirely clear (another reason why this law is stupid: it’s not even clear who is covered by it).

A year ago, Discord reported having 150 million monthly active users, though that’s worldwide. The question is how many of them are in the US. Is it more than a third? Twitch apparently has a very similar 140 million monthly active users globally. At least one report says that approximately 21% of Twitch’s viewership is in the US. That same report says that Twitch’s US MAUs are at 44 million.

Of course, the Texas law, HB20, defines “user” quite broadly, and also says that once you have over 50 million US monthly users in any single month, you’re covered. So it’s quite possible both companies are covered.
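For anyone who wants to see how rough this back-of-the-envelope math is, here is a minimal sketch in Python of the threshold check. It is purely illustrative: the global user counts and the ~21% US viewership figure come from the reports cited above, the one-third US share for Discord is just the guess floated above, and the variable names (like HB20_THRESHOLD) are mine, not anything from the statute.

# Rough, illustrative check of whether a platform might clear HB20's
# 50 million US monthly-user threshold. Figures are the publicly
# reported estimates discussed above, not official data.

HB20_THRESHOLD = 50_000_000  # US users in a single calendar month

platforms = {
    # name: (reported global monthly active users, assumed US share)
    "Discord": (150_000_000, 1 / 3),   # US share unknown; one-third is a guess
    "Twitch": (140_000_000, 0.21),     # one report puts US viewership at ~21%
}

for name, (global_mau, us_share) in platforms.items():
    estimated_us_mau = global_mau * us_share
    covered = estimated_us_mau >= HB20_THRESHOLD
    print(f"{name}: ~{estimated_us_mau / 1e6:.0f}M estimated US monthly users "
          f"-> {'possibly covered' if covered else 'possibly below the threshold'}")

Note that even the higher 44 million US figure that one report gives for Twitch still lands under the 50 million line, which is exactly why it is unclear whether the law reaches it, and because HB20 counts coverage from any single month over the threshold, these rough averages don’t settle the question either way.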

Focusing on Twitch: taking down the streamer’s account might violate the law. Remember that the law says that you cannot “censor” based on viewpoint. And anyone in the state of Texas can bring a lawsuit claiming they were deprived of content based on viewpoint. Some will argue back that a livestream of a killing spree isn’t about viewpoint, but remember, this idiot teenager made it clear he was doing this as part of his political views. At the very least, there’s a strong argument that any effort to take down his manifesto (if not the livestream) could be seen as violating the law.

And just to underline that this is what the Texas legislature wanted, you may recall that we wrote about a series of amendments that were proposed when this law was being debated. And one of the amendments said that the law would not block the removal of content that “directly or indirectly promotes or supports any international or domestic terrorist group or any international or domestic terrorist acts.” AND THE LEGISLATURE VOTED IT DOWN.

So, yes, the Texas legislature made it abundantly clear that this law should block the ability of websites to remove such content.

And, due to the way the law is structured, it’s not just those who were moderated who can sue, but anyone who feels their “ability to receive the expression of another person” was denied over the viewpoint of the speaker. So, it appears that a white nationalist in Texas could (right now) sue Twitch and demand that it reinstate the video, and Twitch would have to defend its reasons for removing the video, and convince a court it wasn’t over “viewpoints” (or that Twitch still has fewer than 50 million monthly average users, and that it has never passed that threshold).

Seems kinda messed up either way.

Of course, I should also note that NY’s governor is already suggesting (ridiculously) that Twitch should be held liable for not taking the video down fast enough.

Gov. Hochul said the fact that the live-stream was not taken down sooner demonstrates a responsibility those who provide the platforms have, morally and ethically, to ensure hate cannot exist there. She also said she hopes it will also demonstrate a legal responsibility for those providers.

“The fact that this act of barbarism, this execution of innocent human beings could be live-streamed on social media platforms and not taken down within a second says to me that there is a responsibility out there … to ensure that such hate cannot populate these sites.”

So, it’s possible that Twitch could face legal fights in New York for being too slow to take down the video and in Texas for taking down the video at all.

It would be kind of nice if politicians on both sides of the political aisle remembered how the 1st Amendment actually works, and focused the blame on those actually responsible, not the social media tools that are used to communicate.
