Musk, Twitter, Bluesky & The Future Of Content Moderation (Part II)

from the there-are-ways... dept

In Part I, we explained why the First Amendment doesn’t get Musk to where he seemingly wants to be: If Twitter were truly, legally the “town square” (i.e., public forum) he wants it to be, it couldn’t do certain things Musk wants (cracking down on spam, authenticating users, banning things equivalent to “shouting fire in a crowded theatre,” etc.). Twitter also couldn’t do the things it clearly needs to do to continue to attract the critical mass of users that make the site worth buying, let alone attract those—eight times as many Americans—who don’t use Twitter every day. 

So what, exactly, should Twitter do to become a more meaningful “de facto town square,” as Musk puts it?

What Objectives Should Guide Content Moderation?

Even existing alternative social media networks claim to offer the kind of neutrality that Musk contemplates—but have failed to deliver. In June 2020, John Matze, Parler’s founder and then its CEO, proudly declared the site to be “a community town square, an open town square, with no censorship,” adding, “if you can say it on the street of New York, you can say it on Parler.” Yet that same day, Matze also bragged of “banning trolls” from the left.

Likewise, GETTR’s CEO has bragged about tracking, catching, and deleting “left-of-center” content, with little clarity about what that might mean. Musk promises to avoid such hypocrisy:

Let’s take Musk at his word. The more interesting thing about GETTR, Parler and other alternative apps that claim to be “town squares” is just how much discretion they allow themselves to moderate content—and how much content moderation they do. 

Even in mid-2020, Parler reserved the right to “remove any content and terminate your access to the Services at any time and for any reason or no reason,” adding only a vague aspiration: “although Parler endeavors to allow all free speech that is lawful and does not infringe the legal rights of others.” Today, Parler forbids any user to “harass, abuse, insult, harm, defame, slander, disparage, intimidate, or discriminate based on gender, sexual orientation, religion, ethnicity, race, age, national origin, or disability.” Despite claiming that it “defends free speech,” GETTR bans racial slurs such as those by Miller as well as white nationalist codewords.

Why do these supposed free-speech-absolutist sites remove perfectly lawful content? Would you spend more or less time on a site that turned a blind eye to racial slurs? By the same token, would you spend more or less time on Twitter if the site stopped removing content denying the Holocaust, advocating new genocides, promoting violence, showing animals being tortured, encouraging teenagers to cut or even kill themselves, and so on? Would you want to be part of such a community? Would any reputable advertiser want to be associated with it? That platforms ostensibly starting with the same goal as Musk have reserved broad discretion to make these content moderation decisions underscores the difficulty in drawing these lines and balancing competing interests.

Musk may not care about alienating advertisers, but all social media platforms moderate some lawful content because it alienates potential users. Musk implicitly acknowledges this imperative on user engagement, at least when it comes to the other half of content moderation: deciding which content to recommend to users algorithmically—an essential feature of any social media site. (Few Twitter users activate the option to view their feeds in reverse-chronological order.) When TED’s Chris Anderson asked him about a tweet many people have flagged as “obnoxious,” Musk hedged: “obviously in a case where there’s perhaps a lot of controversy, that you would not want to necessarily promote that tweet.” Why? Because, presumably, it could alienate users. What is “obvious” is that the First Amendment would not allow the government to disfavor content merely because it is “controversial” or “obnoxious.”

Today, Twitter lets you block and mute other users. Some claim user empowerment should be enough to address users’ concerns—or that user empowerment just needs to work better. A former Twitter employee tells the Washington Post that Twitter has considered an “algorithm marketplace” in which users can choose different ways to view their feeds. Such algorithms could indeed make user-controlled filtering easier and more scalable. 
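An “algorithm marketplace” is straightforward to sketch: the same pool of tweets, ranked by whichever function the user selects. The following is a minimal, hypothetical illustration—the names and data structures are invented here and reflect nothing about Twitter’s actual code:

```python
# Hypothetical "algorithm marketplace": users pick a ranking function
# for the same underlying pool of tweets.
from typing import Callable

tweets = [
    {"id": 1, "time": 100, "likes": 5},
    {"id": 2, "time": 200, "likes": 50},
    {"id": 3, "time": 300, "likes": 1},
]

def reverse_chronological(ts: list[dict]) -> list[dict]:
    """Newest first -- the option few Twitter users activate."""
    return sorted(ts, key=lambda t: t["time"], reverse=True)

def most_engaged(ts: list[dict]) -> list[dict]:
    """Engagement-ranked -- roughly what recommendation feeds do."""
    return sorted(ts, key=lambda t: t["likes"], reverse=True)

# The "marketplace" is just a registry of interchangeable rankers.
FEED_ALGORITHMS: dict[str, Callable] = {
    "latest": reverse_chronological,
    "popular": most_engaged,
}

def render_feed(choice: str) -> list[dict]:
    return FEED_ALGORITHMS[choice](tweets)
```

The point of the sketch: the underlying content never changes; only the user-chosen view of it does—which is exactly why such controls can’t remove content, only reorder it.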

But such controls offer only “out of sight, out of mind” comfort. That won’t be enough if a harasser hounds your employer, colleagues, family, or friends—or organizes others, or creates new accounts, to harass you. Even sophisticated filtering won’t change the reality of what content is available on Twitter.

And herein lies the critical point: advertisers don’t want their content to be associated with repugnant content even if their ads don’t appear next to that content. Likewise, most users care what kind of content a site allows even if they don’t see it. Remember, by default, everything said on Twitter is public—unlike the phone network. Few, if any, would associate the phone company with what’s said in private telephone communications. But every Tweet that isn’t posted to the rare private account can be seen by anyone. Reporters embed tweets in news stories. Broadcasters include screenshots in the evening news. If Twitter allows odious content, most Twitter users will see some of that one way or another—and they’ll hold Twitter responsible for deciding to allow it.

If you want to find such lawful but awful content, you can find it online somewhere. But is that enough? Should you be able to find it on Twitter, too? These are undoubtedly difficult questions on which many disagree; but they are unavoidable.

What, Exactly, Is the Virtual Town Square?

The idea of a virtual town square isn’t new, but what, precisely, that means has always been fuzzy, and lofty talk in a recent Supreme Court ruling greatly exacerbated that confusion. 

“Through the use of chat rooms,” proclaimed the Supreme Court in Reno v. ACLU (1997), “any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of Web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer.” The Court wasn’t saying that digital media were public fora without First Amendment rights. Rather, it said the opposite: digital publishers have the same First Amendment rights as traditional publishers. Thus, the Court struck down Congress’s first attempt to regulate online “indecency” to protect children, rejecting analogies to broadcasting, which rested on government licensing of a “‘scarce’ expressive commodity.” Unlike broadcasting, the Internet empowers anyone to speak; it just doesn’t guarantee them an audience.

In Packingham v. North Carolina (2017), citing Reno’s “town crier” language, the Court waxed even more lyrical: “By prohibiting sex offenders from using [social media], North Carolina with one broad stroke bars access to what for many are the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge.” This rhetorical flourish launched a thousand conservative op-eds—all claiming that social media were legally public fora like town squares. 

Of course, Packingham doesn’t address that question; it merely said governments can’t deny Internet access to those who have completed their sentences. Manhattan Community Access Corp. v. Halleck (2019) essentially answers the question, albeit in the slightly different context of public access cable channels: “merely hosting speech by others” doesn’t “transform private entities into” public fora. 

The question facing Musk now is harder: what part, exactly, of the Internet should be treated as if it were a public forum—where anyone can say anything “within the bounds of the law”? The easiest way to understand the debate is the Open Systems Interconnection model, which has guided the understanding of the Internet since the 1970s:
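For readers who haven’t seen the OSI model, its seven layers can be expressed as a simple lookup table (an illustration only; the parenthetical examples are common textbook associations):

```python
# The seven layers of the OSI reference model, bottom to top.
OSI_LAYERS = {
    1: "Physical",      # cables, radio, electrical signaling
    2: "Data Link",     # Ethernet / Wi-Fi framing
    3: "Network",       # IP routing
    4: "Transport",     # TCP / UDP
    5: "Session",
    6: "Presentation",
    7: "Application",   # HTTP, email protocols -- and services like Twitter
}

def layer_name(n: int) -> str:
    """Return the OSI layer name for a layer number 1-7."""
    return OSI_LAYERS[n]

print(layer_name(7))  # Application
```

ISPs provide the bottom of this stack; Twitter sits at the very top.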

Long before “net neutrality” was a policy buzzword, it described the longstanding operational state of the Internet: Internet service (broadband) providers won’t block, throttle or discriminate against lawful Internet content. The sky didn’t fall when the Republican FCC repealed net neutrality rules in 2018. Indeed, nothing really changed: You can still send or receive lawful content exactly as before. ISPs promise to deliver connectivity to all lawful content. The Federal Trade Commission enforces those promises, as do state attorneys general. And, in upholding the FCC’s 2015 net neutrality rules over then-Judge Brett Kavanaugh’s arguments that they violated the First Amendment, the D.C. Circuit noted that the rules applied only to providers that “sell retail customers the ability to go anywhere (lawful) on the Internet.” The rules simply didn’t apply to “an ISP making sufficiently clear to potential customers that it provides a filtered service involving the ISP’s exercise of ‘editorial intervention.’”

In essence, Musk is talking about applying something like net neutrality principles, developed to govern the uncurated service ISPs offer at layers 1-3, to Twitter, which operates at layer 7—but with a major difference: Twitter can monitor all content, which ISPs can’t do. This means embroiling Twitter in trying to decide what content is lawful in a far, far deeper way than any ISP has ever attempted.
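The difference can be sketched in a few lines (a hypothetical illustration, not any real ISP’s or Twitter’s logic): a neutral carrier at layers 1–3 forwards payloads it never inspects, while a layer-7 application sees every post in full and must decide whether to host it.

```python
def isp_deliver(packet: bytes) -> bytes:
    """A neutral carrier (layers 1-3) forwards lawful traffic as-is;
    it does not inspect or alter the payload."""
    return packet

def app_moderate(post: str, banned_terms: set[str]) -> bool:
    """An application (layer 7) sees full content and can decide
    whether to host each post. Returns True if the post is allowed."""
    return not any(term in post.lower() for term in banned_terms)
```

The carrier’s job is content-blind by construction; the application’s job is content-aware by construction. Applying “carry everything lawful” rules at layer 7 therefore forces the application to adjudicate lawfulness post by post—the deeper entanglement the article describes.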

Implementing Twitter’s existing plans to offer users an “algorithm marketplace” would essentially mean creating a new layer of user control on top of Twitter. But Twitter has also been working on a different idea: creating a layer below Twitter, interconnecting all the Internet’s “soapboxes” into one, giant virtual town square while still preserving Twitter as a community within that square that most people feel comfortable participating in.

“Bluesky”: Decentralization While Preserving Twitter’s Brand

Jack Dorsey, former Twitter CEO, has been talking about “decentralizing” social media for over three years—leading some reporters to conclude that Dorsey and Musk “share similar views … promoting more free speech online.” In fact, their visions for Twitter seem to be very different: unlike Musk, Dorsey saw Twitter as a community that, like any community, requires curation.

In late 2019, Dorsey announced that Twitter would fund Bluesky, an independent project intended “to develop an open and decentralized standard for social media.” Bluesky “isn’t going to happen overnight,” Dorsey warned. “It will take many years to develop a sound, scalable, and usable decentralized standard for social media.” The project’s latest update detailed the many significant challenges facing the effort, but also real progress.

Twitter has a strong financial incentive to shake up social media: Bluesky would “allow us to access and contribute to a much larger corpus of public conversation.” That’s lofty talk for an obvious business imperative. Recall Metcalfe’s Law: a network’s value is roughly proportional to the square of its number of users. Twitter (330 million active users worldwide) is a fraction as large as its “Big Tech” rivals: Facebook (2.4 billion), Instagram (1 billion), YouTube (1.9 billion) and TikTok. So it’s not surprising that Twitter’s market cap is a much smaller fraction of theirs—just 1/16 that of Facebook. Adopting Bluesky should dramatically increase the value of Twitter and smaller companies like Reddit (330 million users) and LinkedIn (560 million users) because Bluesky would allow users of each participating site to interact easily with content posted on other participating sites. Each site would be more an application or a “client” than a “platform”—just as Gmail and Outlook both use the same email protocols.
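The math behind that imperative is simple. A back-of-the-envelope illustration of Metcalfe’s Law using the article’s own user counts (this is a rough heuristic, not a valuation model):

```python
# Metcalfe's Law: a network's value grows roughly as the square of
# its number of users. User counts from the article, in millions.
users = {"Twitter": 330, "Facebook": 2400, "Instagram": 1000, "YouTube": 1900}

def metcalfe_value(n: int) -> int:
    """Relative network 'value' under Metcalfe's Law (arbitrary units)."""
    return n * n

ratio = metcalfe_value(users["Facebook"]) / metcalfe_value(users["Twitter"])
print(round(ratio, 1))  # 52.9 -- Facebook's network "value" is ~53x Twitter's
```

By this crude measure Facebook’s network is worth about fifty times Twitter’s, which is why interconnecting into one shared corpus—rather than competing as an isolated small network—is so attractive to the smaller players.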

Dorsey also framed Bluesky as a way to address concerns about content moderation. Days after the January 6 insurrection, Dorsey defended Trump’s suspension from Twitter yet noted concerns about content moderation:

Dorsey acknowledged the need for more “transparency in our moderation operations,” but pointed to Bluesky as a more fundamental, structural solution:

Adopting Bluesky won’t change how each company does its own content moderation, but it would make those decisions much less consequential. Twitter could moderate content on Twitter, but not on the “public conversation layer.” No central authority could control that, just as with email protocols and Bitcoin. Twitter and other participating social networks would no longer be “platforms” for speech so much as applications (or “clients”) for viewing the public conversation layer, the universal “corpus” of social content.
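The resulting architecture can be sketched as one shared corpus with per-client filters. All names here are hypothetical; this is an illustration of the concept, not Bluesky’s actual protocol or API:

```python
# Hypothetical sketch of the "public conversation layer": one shared
# corpus of posts, with each client app applying its own moderation.
from typing import Callable

Post = dict  # e.g. {"author": "...", "text": "..."}

# The decentralized layer: no single company controls this corpus.
shared_corpus: list[Post] = [
    {"author": "alice", "text": "hello world"},
    {"author": "jones", "text": "conspiracy content"},
]

def client_feed(corpus: list[Post], allows: Callable[[Post], bool]) -> list[Post]:
    """A client (Twitter, Gab, etc.) is just a filtered view over the
    shared layer; its moderation decisions bind only its own users."""
    return [p for p in corpus if allows(p)]

# One app bans a specific author; another bans no one.
moderated = client_feed(shared_corpus, lambda p: p["author"] != "jones")
open_view = client_feed(shared_corpus, lambda p: True)
```

Both clients read the same corpus; each decides independently what its users see—which is exactly why banning a user from one client no longer removes him from the underlying “town square.”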

Four years ago, Twitter banned Alex Jones for repeatedly violating rules against harassment. The conspiracy theorist par excellence moved to Gab, an alternative social network launched in 2017 that claims 15 million monthly visitors (an unverified number). On Gab, Jones now has only a quarter as many followers as he once had on Twitter. And because the site is much smaller overall, he gets much less engagement and attention than he once did. Metcalfe’s Law means fewer people talk about him.

Bluesky won’t get Alex Jones or his posts back on Twitter or other mainstream social media sites, but it might ensure that his content is available on the public conversation layer, where users of any app that doesn’t block him can see it. Thus, Jones could use his Gab account to seamlessly reach audiences on Parler, GETTR, Truth Social, or any other site using Bluesky that doesn’t ban him. Each of these sites, in turn, would have a strong incentive to adopt Bluesky because the protocol would make them more viable competitors to mainstream social media. Bluesky would turn Metcalfe’s Law to their advantage: no longer separate, tiny town squares, these sites would be ways of experiencing the same town square—only with a different set of filters.

But Metcalfe’s Law cuts both ways: even if Twitter and other social media sites implemented Bluesky, so long as Twitter continues to moderate the likes of Alex Jones, the portion of the “town square” enabled by Bluesky that Jones has access to will be limited. Twitter would remain a curated community, a filter (or set of filters) for experiencing the “public conversation layer.” When first announcing Bluesky, Dorsey said the effort would be good for Twitter not only for allowing the company “to access and contribute to a much larger corpus of public conversation” but also because Twitter could “focus our efforts on building open recommendation algorithms which promote healthy conversation.” With user-generated content becoming more interchangeable across services—essentially a commodity—Twitter and other social media sites would compete on user experience.

Given this divergence in visions, it shouldn’t be surprising that Musk has never mentioned Bluesky. If he merely wanted to make Bluesky happen faster, he could pour money into the effort—an independent, open source project—without buying Twitter. He could help implement proposals to run the effort as a decentralized autonomous organization (DAO) to ensure its long-term independence from any effort to moderate content. Instead, Musk is focused on cutting back Twitter’s moderation of content—except where he wants more moderation. 

What Does Political Neutrality Really Mean?

Much of the popular debate over content moderation revolves around the perception that moderation practices are biased against certain political identities, beliefs, or viewpoints. Jack Dorsey responded to such concerns in a 2018 congressional hearing, telling lawmakers: “We don’t consider political viewpoints—period. Impartiality is our guiding principle.” Dorsey was invoking the First Amendment, which bars the government from discriminating based on content, speaker, or viewpoint. Musk has said something that sounds similar, but isn’t quite the same:

The First Amendment doesn’t require neutrality as to outcomes. If user behavior varies across the political spectrum, neutral enforcement of any neutral rule will produce what might look like politically “biased” results.

Take, for example, a study routinely invoked by conservatives that purportedly shows Twitter’s political bias in the 2016 election. Richard Hanania, a political scientist at Columbia University, concluded that Twitter suspended Trump supporters more often than Clinton supporters at a ratio of 22:1. Hanania postulated that this meant Trump supporters would have to be at least four times as likely to violate neutrally applied rules to rule out Twitter’s political bias—and dismissed such a possibility as implausible. But Hanania’s study was based on a tiny sample of only reported (i.e., newsworthy) suspensions—just a small percentage of overall content moderation. And when one bothers to actually look at Hanania’s data—something none of the many conservatives who have since invoked his study seem to have done—one finds exactly those you’d expect to be several times more likely to violate neutrally-applied rules: the American Nazi Party, leading white supremacists including David Duke, Richard Spencer, Jared Taylor, Alex Jones, Charlottesville “Unite the Right” organizer James Allsup, and various Proud Boys. 

Was Twitter non-neutral because it didn’t ban an equal number of “far left” and “far right” users? Or because the “right” was incensed by endless reporting in leading outlets like The Wall Street Journal of a study purporting to show that “conservatives” were being disproportionately “censored”?

There’s no way to assess Musk’s outcome-based conception of neutrality without knowing a lot more about objectionable content on the site. We don’t know how many accounts were reported, for what reasons, and what happened to those complaints. There is no clear denominator that allows for meaningful measurement—leaving only self-serving speculation about how content moderation is or is not biased. This is one problem Musk can do something about.

Greater Transparency Would Help, But…

After telling Anderson “I’m not saying that I have all the answers here,” Musk fell back on something simpler than line-drawing in content moderation: increased transparency. If Twitter were to “make any changes to people’s tweets, if they’re emphasized or de-emphasized, that action should be made apparent so anyone can see that action’s been taken, so there’s no behind the scenes manipulation, either algorithmically or manually.” Such tweet-by-tweet reporting sounds appealing in principle, but it’s hard to know what it would mean in practice. What kind of transparency will users actually find useful? After all, all tweets are “emphasized or de-emphasized” to some degree; that is simply what Twitter’s recommendation algorithm does.

Greater transparency, implemented well, could indeed increase trust in Twitter’s impartiality. But ultimately, only large-scale statistical analysis can resolve claims of systemic bias. Twitter could certainly help to facilitate such research by providing data—and perhaps funding—to bona fide researchers.

More problematic is Musk’s suggestion that Twitter’s content moderation algorithm should be “open source” so anyone could see it. There is an obvious reason why such algorithms aren’t open source: revealing precisely how a site decides which content to recommend would make it easy to manipulate the algorithm. This is especially true for those most determined to abuse the site: the spambots on which Musk has declared war. Making Twitter’s content moderation less opaque will have to be done carefully, lest it foster the abuses that Musk recognizes as making Twitter a less valuable place for conversation.

Public Officials Shouldn’t Be Able to Block Users

Making Twitter more like a public forum is, in short, vastly more complicated than Musk suggests. But there is one easy thing Twitter could do to, quite literally, enforce the First Amendment. Courts have repeatedly found that government officials can violate the First Amendment by blocking commenters on their official accounts. After then-President Trump blocked several users from replying to his tweets, the users sued. The Second Circuit held that Trump violated the First Amendment by blocking users because the interactive spaces of his account functioned as a public forum. The Supreme Court vacated the Second Circuit’s decision—Trump left office, so the case was moot—but Justice Thomas indicated that some aspects of government officials’ accounts seem like constitutionally protected spaces. Unless a user’s conduct constitutes harassment, government accounts likely can’t block them without violating the First Amendment. Whatever courts ultimately decide, Twitter could easily implement this principle.


Like Musk, we definitely “don’t have all the answers here.” In introducing what we know as the “marketplace of ideas” to First Amendment doctrine, Justice Holmes’s famous dissent in Abrams v. United States (1919) said this of the First Amendment: “It is an experiment, as all life is an experiment.” The same could be said of the Internet, Twitter, and content moderation. 

The First Amendment may help guide Musk’s experimentation with content moderation, but it simply isn’t the precise roadmap he imagines—at least, not for making Twitter the “town square” everyone wants to participate in actively. Bluesky offers the best of both worlds: a much more meaningful town square where anyone can say anything, but also a community that continues to thrive.

Berin Szóka (@BerinSzoka) is President of TechFreedom. Ari Cohn (@AriCohn) is Free Speech Counsel at TechFreedom. Both are lawyers focused on the First Amendment’s application to the Internet.

Companies: bluesky, twitter


Comments on “Musk, Twitter, Bluesky & The Future Of Content Moderation (Part II)”

Stephen T. Stone (profile) says:

Bluesky won’t get Alex Jones or his posts back on Twitter or other mainstream social media sites, but it might ensure that his content is available on the public conversation layer, where users of any app that doesn’t block him can see it.

This is how the Fediverse works. When Gab moved to the Fediverse, it was near-instantly defederated from the vast majority of the Fediverse. Anyone on Gab-friendly instances can still follow people on Gab and such, but everyone else is spared all their bullshit.

(Anecdotally: I’ve seen a few recent Fediverse newcomers talk about how much nicer their social media experience is on their preferred instance/the Fediverse in general.)

Stephen T. Stone (profile) says:

Re: Re:

You say that as if Twitter DMs aren’t open for reading by Twitter admins.

Whataboutism aside: Yes, messages like that have gone around the Fediverse with each new activity/sign-up spike. The fact that they can read DMs is no guarantee that they’re actively reading DMs for any reason other than administrative duties⁠—and from what I hear, even trying to read DMs is a pain in the ass.

If you can’t trust an instance admin to not read your DMs, a simple solution exists: Don’t DM people on the Fediverse. End-to-end encrypted messaging apps exist, after all.


Anonymous Coward says:


A shame techdirt doesn’t have the same rules.

Considering you posted as an AC, how do you propose that Techdirt does that?

Considering you never created a user account to post here, how could your account be banned?

Christenson says:

Re: Techdirt Rules

Techdirt doesn’t need an explicit anti-harassment/abuse/discrimination policy, because enough of the community tends to flag such comments so they get hidden. This makes the audience for the bad posts very small, and the psychological rewards for making such comments are largely removed. Not that we don’t have a few trolls somewhere (Hamilton, Out-Of-The-Blue), but I claim those are unusually persistent trolls.

There’s lots of ways to skin this particular cat, particularly on a relatively small, focused website like Techdirt.


Anonymous Coward says:

Re: Re: Re:

If this site has degraded to a point where you can’t stand it on sight, why not let it die a natural death? For someone who thinks that the dwindling is meritorious of mockery, you seem to insist on letting it occupy office space in your head that you keep coming back to read things you don’t want to.

That Anonymous Coward (profile) says:

“For Twitter to deserve public trust, it must be politically neutral, which effectively means upsetting the far right and the far left equally”

The problem is that the far right is upset that the sun rose this morning and will scream about it for hours.
The amount of upset isn’t tied to reality.

Waves at Ari again

Thad (profile) says:


Yeah, it’s the usual false-equivalence bullshit.

The right is upset because trans people exist. The left is upset because the right bullies and intimidates trans people. If you think the solution to this problem is to upset both sides equally, then congratulations, you’re probably a cishet white dude who’s not used to people telling him he’s wrong.

Stephen T. Stone (profile) says:

Re: Re: Re:

The right doesn’t hate trans people.

They’re the ones making laws that ban one or maybe two trans athletes from competing in all of high school athletics in an entire state. They’re the ones making laws that ban even the merest mention of trans people existing in public schools. They’re the ones banning books written by/about/for trans people from libraries both school and public. They’re the ones (who were) passing bills to stop trans people from using certain public bathrooms out of the fear of a crime wave that doesn’t exist and never existed before those bills were passed.

But sure, tell me again how the political right in the United States doesn’t hate trans people. 🤨

Naughty Autie says:

Re: Re: Re:

Actually, you’ve met plenty of trans people, but because ‘gender critical’ folk like you insist on calling them ‘men in skirts’ and ‘deluded autistic girls’ alongside not recognising the existence of non-binary people, you’ve just never seen them. That’s not at all the same thing.

kwe (profile) says:

Seven Layer Cake

I can’t believe you dragged the ISO seven layer cake model out of the closet to explain “platforms” versus “applications”.
The Internet has never been coded or properly explained using this model. I know of only one attempt, over thirty years ago, to develop code according to this model. Yes, there is a physical, network, and internet layer, but all else is just “application”. Creating a “platform” is creating an application under another application. Creating platforms is better understood as a process of standardization and decentralization of control. Like the difference between Twitter and a Twitter API. The layer cake doesn’t help or explain anything.

Lostinlodos (profile) says:

Basic commentary


While I admit I don’t fully understand the methodology here, it sounds very much like a return to the cross platform methods used by CIS and in the later days of AOL.
Offering somewhat direct access to another service offsite.
If true this could finally be a replacement for the wide open and occasionally wild early internet and pre-internet days.
That’s a very good thing.
The one thing people of my generation and experience miss, and the reason we bytch so much about “just hide it” is because it worked. Very well.

Nobody had anything forced upon them. You search. You go where you want! On site. Off site.
Usenet systems, BBS diallers, repositories!
Multiple search methods including (legally questionable) absolutely uncensored search.

Filter farming is a wise choice when coupled with free search.
Let the user choose what to see.

“would make it easy to manipulate the algorithm”

I’ve discussed this before, at length, here and elsewhere. What this breaks down to is a raw opinion. Which is more important to someone: the full and complete transparency, and the volume of constant changes to such requirements, or the reduced work of a closed system.
From what I can tell, Musk is truly all-in on the former.
Tell people exactly what they can’t do and update the can’ts as they are worked around.

blocking government

This is a difficult idea. Because, as is so often mentioned here, you have other choices. I fully support the rights of any user, even government users, to block others. Mainly because you can write a letter. Make a phone call. Attend a town hall or meeting, or go to their office. A block on twitter is not a block on speech by the government, but a decision on location of speech by an official. You should be allowed to block on twitter just as much as you should be allowed to block people from your front yard.
