The boiling frog syndrome suggests that if a frog jumps into a pot of boiling water, it immediately jumps out — but if a frog jumps into a slowly heating pot, it senses no danger and gets cooked. Mark Zuckerberg’s Facebook has been gradually coming to a boil of dysfunction for a decade – some are horrified, but many fail to see any serious problem. Now Elon Musk has jumped into a Twitter that he may quickly bring to a boil. Many expect either him – or hordes of non-extremist Twitter users – to jump out.
The frog syndrome may not be true of frogs, and Musk may not bring Twitter to an immediate boil, but the deeper problem that could boil us all is “platform law”: Social media, notably Twitter, have become powerful platforms that are bringing our new virtual “public square” to a raging boil. Harmful and polarizing disinformation and hate speech are threatening democracy here, and around the world.
The apparent problem is censorship versus free speech (whatever those may mean) — but the deeper problem is who sets the rules for what can be said, and to what audience. We now face a regime of platform law, in which these private platforms have nearly unlimited power to set and enforce rules for censoring who can say what, with little transparency or oversight, even though they are fast becoming essential services. Are we to trust that to a few billionaire owners or Wall Street? Pseudo-independent oversight boards? The slowly and erratically turning wheels of government? “Self-sovereign” users or communities of users that may self-organize, but may also run wild as mobs? Or some new hybrid of some or all of those that can offer both freedom and order?
Musk now brings this problem to a boil for all users to see. Either democracies will see the urgency and act, or they will die. Even if the boiling is slow and takes decades, leaving this power to control speech in this new public square in the hands of private businesses or governments will leave “a loaded gun on the table,” ready to be picked up by any would-be authoritarian.
It will take time and much sorting out, but some hybrid control is the only feasible solution that can preserve democracy. There are many ideas leading toward that rebirth — the optimistic scenario is that Musk could foster that.
Twitter has already begun to consider a step in that direction with Bluesky, an independent project funded by Twitter at Jack Dorsey’s instigation, and consistent with Mike Masnick’s proposals for “Protocols, Not Platforms.” Variations include Cory Doctorow’s adversarial interoperability and Ethan Zuckerman’s Digital Public Infrastructure. A “middleware” architecture proposed by Francis Fukuyama’s Stanford group would let users select from an open market of delegated filtering services to work as their agents, feeding them what they want from the platforms. Any of these would shift power from the platforms to each user, to control what each sees – a variation on ideas also proposed by Stephen Wolfram, Ben Thompson, and me, among others.
Interestingly, it has been largely forgotten that the much-debated 1996 law that enabled the current legal regime, Section 230, also said “It is the policy of the United States… to encourage the development of technologies which maximize user control over what information is received by individuals.” True, there are significant challenges in this approach. The most fundamental is that doing filtering (ranking and recommending) well requires access to sensitive personal data from the platforms. But promising solutions are emerging.
A path to achieving this is outlined in a series in Tech Policy Press by Chris Riley and me. The central idea is to put primary control of what each of us sees in our own hands, choosing from an open market of composable sets of filtering services that suit our individual desires. Complementing that would be a light hand of regulation to ensure minimal constraints on illegal content, while leaving the criteria for handling “lawful but awful” content to services that the users choose.
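To make that architecture concrete, here is a minimal sketch, in Python, of how composable, user-delegated filtering services might fit together. The interface is hypothetical (no one has standardized one), but it shows the key shift: the services act as the user’s agents, and the user, not the platform, sets the threshold.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Post:
    author: str
    text: str

# A delegated filtering service scores a post from 0 (block) to 1 (fine);
# None means "no opinion." The interface is hypothetical, purely to show
# how composition could work.
FilterService = Callable[[Post], Optional[float]]

def civility_service(post: Post) -> Optional[float]:
    # Stand-in for a third-party service specializing in, say, harassment.
    return 0.1 if "idiot" in post.text.lower() else None

def compose(services: List[FilterService], threshold: float = 0.5):
    """Build a user agent that keeps a post only if every service the
    user has chosen scores it at or above the user's own threshold."""
    def agent(feed: List[Post]) -> List[Post]:
        kept = []
        for post in feed:
            scores = [s for svc in services if (s := svc(post)) is not None]
            if all(score >= threshold for score in scores):
                kept.append(post)
        return kept
    return agent

my_agent = compose([civility_service])
print(my_agent([Post("a", "Great point!"), Post("b", "You idiot.")]))
# -> keeps only the first post
```

In a real deployment the hard part is not the composition logic but giving such services privacy-preserving access to the signals they need, which is exactly the data-access challenge noted above.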
But that alone is not enough. What traditionally kept “awful” content from us was neither a censoring authority nor direct user control — but a rich ecosystem of mediating services that did filtering the old-fashioned way: publishers, communities, and other institutions served as an open network of curators serving more or less specific audiences — curators we were free to choose or bypass. Now that open mediating infrastructure is being disintermediated by the social media platforms. We had freedom of impression — but are now losing it to platform control.
Real freedom of speech requires re-mediating that kind of infrastructure for indirect user control. There are already legislative efforts in the US and Europe to mandate interoperability — some including user “delegatability” — to open the platforms and break up monopolies of platform law. Creating a layer of delegated user agents can open the way for an open infrastructure of mediating services to support filtering, as well as other aspects of social media propagation. That would enable traditional mediating institutions to re-integrate into the online ecosystem and regain their important role — for those who value what they offer. It would also enable platforms to support new breeds of mediating services that could find an important place in our media ecosystem. Some fear that this user control might worsen filter-bubble echo chambers, but how many of us really want to close our eyes and remain ignorant and stupid? Individual agency in choosing from a diversity of information sources has always been the hallmark of successful societies.
In this way social media can restore the original promise of the internet as a generative base for a vibrant and open next level of society.
Observers have dismissed Musk as a “mischievous trickster god” and naïve about freedom of speech. Maybe we are all cooked. But maybe (depending on how much pot he smokes?), he might support the nascent potential of Twitter to change the game for the better – or spur the rest of us to take the pot off the burner.
In Part I, we explained why the First Amendment doesn’t get Musk to where he seemingly wants to be: If Twitter were truly, legally the “town square” (i.e., public forum) he wants it to be, it couldn’t do certain things Musk wants (cracking down on spam, authenticating users, banning things equivalent to “shouting fire in a crowded theatre,” etc.). Twitter also couldn’t do the things it clearly needs to do to continue to attract the critical mass of users that make the site worth buying, let alone attract those—eight times as many Americans—who don’t use Twitter every day.
So what, exactly, should Twitter do to become a more meaningful “de facto town square,” as Musk puts it?
What Objectives Should Guide Content Moderation?
Even existing alternative social media networks claim to offer the kind of neutrality that Musk contemplates—but have failed to deliver. In June 2020, John Matze, Parler’s founder and then its CEO, proudly declared the site to be “a community town square, an open town square, with no censorship,” adding, “if you can say it on the street of New York, you can say it on Parler.” Yet that same day, Matze also bragged of “banning trolls” from the left.
Likewise, GETTR’s CEO has bragged about tracking, catching, and deleting “left-of-center” content, with little clarity about what that might mean. Musk promises to avoid such hypocrisy:
For Twitter to deserve public trust, it must be politically neutral, which effectively means upsetting the far right and the far left equally
Let’s take Musk at his word. The more interesting thing about GETTR, Parler and other alternative apps that claim to be “town squares” is just how much discretion they allow themselves to moderate content—and how much content moderation they do.
Even in mid-2020, Parler reserved the right to “remove any content and terminate your access to the Services at any time and for any reason or no reason,” adding only a vague aspiration: “although Parler endeavors to allow all free speech that is lawful and does not infringe the legal rights of others.” Today, Parler forbids any user to “harass, abuse, insult, harm, defame, slander, disparage, intimidate, or discriminate based on gender, sexual orientation, religion, ethnicity, race, age, national origin, or disability.” Despite claiming that it “defends free speech,” GETTR bans racial slurs such as those by Miller as well as white nationalist codewords.
Why do these supposed free-speech-absolutist sites remove perfectly lawful content? Would you spend more or less time on a site that turned a blind eye to racial slurs? By the same token, would you spend more or less time on Twitter if the site stopped removing content denying the Holocaust, advocating new genocides, promoting violence, showing animals being tortured, encouraging teenagers to cut or even kill themselves, and so on? Would you want to be part of such a community? Would any reputable advertiser want to be associated with it? That platforms ostensibly starting with the same goal as Musk have reserved broad discretion to make these content moderation decisions underscores the difficulty in drawing these lines and balancing competing interests.
Musk may not care about alienating advertisers, but all social media platforms moderate some lawful content because it alienates potential users. Musk implicitly acknowledges this imperative of user engagement, at least when it comes to the other half of content moderation: deciding which content to recommend to users algorithmically—an essential feature of any social media site. (Few Twitter users activate the option to view their feeds in reverse-chronological order.) When TED’s Chris Anderson asked him about a tweet many people have flagged as “obnoxious,” Musk hedged: “obviously in a case where there’s perhaps a lot of controversy, that you would not want to necessarily promote that tweet.” Why? Because, presumably, it could alienate users. What is “obvious” is that the First Amendment would not allow the government to disfavor content merely because it is “controversial” or “obnoxious.”
Today, Twitter lets you block and mute other users. Some claim user empowerment should be enough to address users’ concerns—or that user empowerment just needs to work better. A former Twitter employee tells the Washington Post that Twitter has considered an “algorithm marketplace” in which users can choose different ways to view their feeds. Such algorithms could indeed make user-controlled filtering easier and more scalable.
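To make the idea concrete, here is a minimal sketch of what an algorithm marketplace might look like. Twitter has published no such API; the algorithm names and data shapes here are invented for illustration.

```python
from datetime import datetime
from typing import Callable, Dict, List

Post = dict  # e.g. {"text": str, "posted_at": datetime, "likes": int}
RankingAlgorithm = Callable[[List[Post]], List[Post]]

# A hypothetical marketplace: named algorithms that anyone could publish
# and any user could select in place of the platform's default.
MARKETPLACE: Dict[str, RankingAlgorithm] = {
    "reverse_chronological": lambda feed: sorted(
        feed, key=lambda p: p["posted_at"], reverse=True),
    "most_engaged": lambda feed: sorted(
        feed, key=lambda p: p["likes"], reverse=True),
}

def render_feed(feed: List[Post], choice: str) -> List[Post]:
    # The platform supplies the raw feed; the user's chosen algorithm,
    # not the platform, decides the ordering.
    return MARKETPLACE[choice](feed)

feed = [
    {"text": "older but popular", "posted_at": datetime(2022, 4, 1), "likes": 900},
    {"text": "brand new", "posted_at": datetime(2022, 4, 25), "likes": 3},
]
print(render_feed(feed, "reverse_chronological")[0]["text"])  # brand new
print(render_feed(feed, "most_engaged")[0]["text"])           # older but popular
```

The design point is that ranking becomes a pluggable, user-selected component rather than a single opaque default.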
But such controls offer only “out of sight, out of mind” comfort. That won’t be enough if a harasser hounds your employer, colleagues, family, or friends—or organizes others, or creates new accounts, to harass you. Even sophisticated filtering won’t change the reality of what content is available on Twitter.
And herein lies the critical point: advertisers don’t want their content to be associated with repugnant content even if their ads don’t appear next to that content. Likewise, most users care what kind of content a site allows even if they don’t see it. Remember, by default, everything said on Twitter is public—unlike the phone network. Few, if any, would associate the phone company with what’s said in private telephone communications. But every tweet that isn’t posted to the rare private account can be seen by anyone. Reporters embed tweets in news stories. Broadcasters include screenshots in the evening news. If Twitter allows odious content, most Twitter users will see some of it one way or another—and they’ll hold Twitter responsible for deciding to allow it.
If you want to find such lawful but awful content, you can find it online somewhere. But is that enough? Should you be able to find it on Twitter, too? These are undoubtedly difficult questions on which many disagree; but they are unavoidable.
What, Exactly, Is the Virtual Town Square?
The idea of a virtual town square isn’t new, but what, precisely, that means has always been fuzzy, and lofty talk in a recent Supreme Court ruling greatly exacerbated that confusion.
“Through the use of chat rooms,” proclaimed the Supreme Court in Reno v. ACLU (1997), “any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of Web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer.” The Court wasn’t saying that digital media were public fora without First Amendment rights. Rather, it said the opposite: digital publishers have the same First Amendment rights as traditional publishers. Thus, the Court struck down Congress’s first attempt to regulate online “indecency” to protect children, rejecting analogies to broadcasting, which rested on government licensing of a “‘scarce’ expressive commodity.” Unlike broadcasting, the Internet empowers anyone to speak; it just doesn’t guarantee them an audience.
In Packingham v. North Carolina (2017), citing Reno’s “town crier” language, the Court waxed even more lyrical: “By prohibiting sex offenders from using [social media], North Carolina with one broad stroke bars access to what for many are the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge.” This rhetorical flourish launched a thousand conservative op-eds—all claiming that social media were legally public fora like town squares.
Of course, Packingham didn’t address that question; it merely said governments can’t deny Internet access to those who have completed their sentences. Manhattan Community Access Corp. v. Halleck (2019) essentially answers the question, albeit in the slightly different context of public access cable channels: “merely hosting speech by others” doesn’t “transform private entities into” public fora.
The question facing Musk now is harder: what part, exactly, of the Internet should be treated as if it were a public forum—where anyone can say anything “within the bounds of the law”? The easiest way to understand the debate is the Open Systems Interconnection model, which has guided thinking about the Internet since the late 1970s: ISPs operate at the lower layers (1-3: physical, data link, and network), which simply move packets, while applications like Twitter sit at the top, at layer 7.
Long before “net neutrality” was a policy buzzword, it described the longstanding operational state of the Internet: Internet service (broadband) providers won’t block, throttle, or discriminate against lawful Internet content. The sky didn’t fall when the Republican FCC repealed net neutrality rules in 2018. Indeed, nothing really changed: You can still send or receive lawful content exactly as before. ISPs promise to deliver connectivity to all lawful content. The Federal Trade Commission enforces those promises, as do state attorneys general. And, in upholding the FCC’s 2015 net neutrality rules over then-Judge Brett Kavanaugh’s arguments that they violated the First Amendment, the D.C. Circuit noted that the rules applied only to providers that “sell retail customers the ability to go anywhere (lawful) on the Internet.” The rules simply didn’t apply to “an ISP making sufficiently clear to potential customers that it provides a filtered service involving the ISP’s exercise of ‘editorial intervention.’”
In essence, Musk is talking about applying something like net neutrality principles, developed to govern the uncurated service ISPs offer at layers 1-3, to Twitter, which operates at layer 7—but with a major difference: Twitter can monitor all content, which ISPs can’t do. This means embroiling Twitter in trying to decide what content is lawful in a far, far deeper way than any ISP has ever attempted.
Implementing Twitter’s existing plans to offer users an “algorithm marketplace” would essentially mean creating a new layer of user control on top of Twitter. But Twitter has also been working on a different idea: creating a layer below Twitter, interconnecting all the Internet’s “soapboxes” into one, giant virtual town square while still preserving Twitter as a community within that square that most people feel comfortable participating in.
“Bluesky”: Decentralization While Preserving Twitter’s Brand
Jack Dorsey, former Twitter CEO, has been talking about “decentralizing” social media for over three years—leading some reporters to conclude that Dorsey and Musk “share similar views … promoting more free speech online.” In fact, their visions for Twitter seem to be very different: unlike Musk, Dorsey saw Twitter as a community that, like any community, requires curation.
In late 2019, Dorsey announced that Twitter would fund Bluesky, an independent project intended “to develop an open and decentralized standard for social media.” Bluesky “isn’t going to happen overnight,” Dorsey warned in 2019. “It will take many years to develop a sound, scalable, and usable decentralized standard for social media.” The project’s latest update detailed the many significant challenges facing the effort—but also the significant progress it has made.
Twitter has a strong financial incentive to shake up social media: Bluesky would “allow us to access and contribute to a much larger corpus of public conversation.” That’s lofty talk for an obvious business imperative. Recall Metcalfe’s Law: a network’s impact is the square of the number of nodes in the network. Twitter (330 million active users worldwide) is a fraction of the size of its “Big Tech” rivals: Facebook (2.4 billion), Instagram (1 billion), YouTube (1.9 billion) and TikTok. So it’s not surprising that Twitter’s market cap is a much smaller fraction of theirs—just 1/16 that of Facebook. Adopting Bluesky should dramatically increase the value of Twitter and smaller companies like Reddit (330 million users) and LinkedIn (560 million users) because Bluesky would allow users of each participating site to interact easily with content posted on other participating sites. Each site would be more an application or a “client” than a “platform”—just as Gmail and Outlook both use the same email protocols.
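A toy calculation shows why, under Metcalfe’s Law, Twitter’s size deficit matters so much (user counts are the rough figures cited above; the law is a heuristic, not a valuation model):

```python
# Toy illustration of Metcalfe's Law: network value scales roughly with
# n^2, so a network about a seventh the size is worth far less than a
# seventh as much. Rough user counts as cited above; purely illustrative.
users = {"Twitter": 330e6, "Facebook": 2.4e9, "YouTube": 1.9e9}

def metcalfe_ratio(n_a: float, n_b: float) -> float:
    """Relative value of network A versus network B under n^2 scaling."""
    return n_a ** 2 / n_b ** 2

print(f"{metcalfe_ratio(users['Twitter'], users['Facebook']):.3f}")
# -> 0.019, roughly 1/53. Real market caps (about 1/16) track the
# heuristic only loosely, but the direction is the same.
```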
Dorsey also framed Bluesky as a way to address concerns about content moderation. Days after the January 6 insurrection, Dorsey defended Trump’s suspension from Twitter while acknowledging the need for more “transparency in our moderation operations”—and pointed to Bluesky as a more fundamental, structural solution.
Adopting Bluesky won’t change how each company does its own content moderation, but it would make those decisions much less consequential. Twitter could moderate content on Twitter, but not on the “public conversation layer.” No central authority could control that, just as with email protocols and Bitcoin. Twitter and other participating social networks would no longer be “platforms” for speech so much as applications (or “clients”) for viewing the public conversation layer, the universal “corpus” of social content.
Four years ago, Twitter banned Alex Jones for repeatedly violating rules against harassment. The conspiracy theorist par excellence moved to Gab, an alternative social network launched in 2017 that claims 15 million monthly visitors (an unverified number). On Gab, Jones now has only a quarter as many followers as he once had on Twitter. And because the site is much smaller overall, he gets much less engagement and attention than he once did. Metcalfe’s Law means fewer people talk about him.
Bluesky won’t get Alex Jones or his posts back on Twitter or other mainstream social media sites, but it might ensure that his content is available on the public conversation layer, where users of any app that doesn’t block him can see it. Thus, Jones could use his Gab account to seamlessly reach audiences on Parler, GETTR, Truth Social, or any other site using Bluesky that doesn’t ban him. Each of these sites, in turn, would have a strong incentive to adopt Bluesky because the protocol would make them more viable competitors to mainstream social media. Bluesky would turn Metcalfe’s Law to their advantage: no longer separate, tiny town squares, these sites would be ways of experiencing the same town square—only with a different set of filters.
But Metcalfe’s Law cuts both ways: even if Twitter and other social media sites implemented Bluesky, so long as Twitter continues to moderate the likes of Alex Jones, the portion of the Bluesky-enabled “town square” that Jones can reach will be limited. Twitter would remain a curated community, a filter (or set of filters) for experiencing the “public conversation layer.” When first announcing Bluesky, Dorsey said the effort would be good for Twitter not only for allowing the company “to access and contribute to a much larger corpus of public conversation” but also because Twitter could “focus our efforts on building open recommendation algorithms which promote healthy conversation.” With user-generated content becoming more interchangeable across services—essentially a commodity—Twitter and other social media sites would compete on user experience.
Given this divergence in visions, it shouldn’t be surprising that Musk has never mentioned Bluesky. If he merely wanted to make Bluesky happen faster, he could pour money into the effort—an independent, open source project—without buying Twitter. He could help implement proposals to run the effort as a decentralized autonomous organization (DAO) to ensure its long-term independence from any effort to moderate content. Instead, Musk is focused on cutting back Twitter’s moderation of content—except where he wants more moderation.
What Does Political Neutrality Really Mean?
Much of the popular debate over content moderation revolves around the perception that moderation practices are biased against certain political identities, beliefs, or viewpoints. Jack Dorsey responded to such concerns in a 2018 congressional hearing, telling lawmakers: “We don’t consider political viewpoints—period. Impartiality is our guiding principle.” Dorsey was invoking the First Amendment, which bars discrimination based on content, speakers, or viewpoints. Musk has said something that sounds similar, but isn’t quite the same:
For Twitter to deserve public trust, it must be politically neutral, which effectively means upsetting the far right and the far left equally
The First Amendment doesn’t require neutrality as to outcomes. If user behavior varies across the political spectrum, neutral enforcement of any neutral rule will produce what might look like politically “biased” results.
Take, for example, a study routinely invoked by conservatives that purportedly shows Twitter’s political bias in the 2016 election. Richard Hanania, a political scientist at Columbia University, concluded that Twitter suspended Trump supporters more often than Clinton supporters at a ratio of 22:1. Hanania postulated that this meant Trump supporters would have to be at least four times as likely to violate neutrally applied rules to rule out Twitter’s political bias—and dismissed such a possibility as implausible. But Hanania’s study was based on a tiny sample of only reported (i.e., newsworthy) suspensions—just a small percentage of overall content moderation. And when one bothers to actually look at Hanania’s data—something none of the many conservatives who have since invoked his study seem to have done—one finds exactly those you’d expect to be several times more likely to violate neutrally-applied rules: the American Nazi Party, leading white supremacists including David Duke, Richard Spencer, Jared Taylor, Alex Jones, Charlottesville “Unite the Right” organizer James Allsup, and various Proud Boys.
Was Twitter non-neutral because it didn’t ban an equal number of “far left” and “far right” users? Or because the “right” was incensed by endless reporting in leading outlets like The Wall Street Journal of a study purporting to show that “conservatives” were being disproportionately “censored”?
There’s no way to assess Musk’s outcome-based conception of neutrality without knowing a lot more about objectionable content on the site. We don’t know how many accounts were reported, for what reasons, and what happened to those complaints. There is no clear denominator that allows for meaningful measurements—leaving only self-serving speculation about how content moderation is or is not biased. This is one problem Musk can do something about.
Greater Transparency Would Help, But…
After telling Anderson “I’m not saying that I have all the answers here,” Musk fell back on something simpler than line-drawing in content moderation: increased transparency. If Twitter should “make any changes to people’s tweets, if they’re emphasized or de-emphasized, that action should be made apparent so anyone can see that action’s been taken, so there’s no behind the scenes manipulation, either algorithmically or manually.” Such tweet-by-tweet reporting sounds appealing in principle, but it’s hard to know what it will mean in practice. What kind of transparency will users actually find useful? After all, all tweets are “emphasized or de-emphasized” to some degree; that is simply what Twitter’s recommendation algorithm does.
Greater transparency, implemented well, could indeed increase trust in Twitter’s impartiality. But ultimately, only large-scale statistical analysis can resolve claims of systemic bias. Twitter could certainly help to facilitate such research by providing data—and perhaps funding—to bona fide researchers.
More problematic is Musk’s suggestion that Twitter’s content moderation algorithm should be “open source” so anyone could see it. There is an obvious reason why such algorithms aren’t open source: revealing precisely how a site decides what content to recommend would make it easy to manipulate the algorithm. This is especially true for those most determined to abuse the site: the spambots on whom Musk has declared war. Making Twitter’s content moderation less opaque will have to be done carefully, lest it foster the abuses that Musk recognizes as making Twitter a less valuable place for conversation.
Public Officials Shouldn’t Be Able to Block Users
Making Twitter more like a public forum is, in short, vastly more complicated than Musk suggests. But there is one easy thing Twitter could do to, quite literally, enforce the First Amendment. Courts have repeatedly found that government officials can violate the First Amendment by blocking commenters on their official accounts. After then-President Trump blocked several users from replying to his tweets, the users sued. The Second Circuit held that Trump violated the First Amendment by blocking users because the interactive space of his account functioned, for that purpose, as a public forum. The Supreme Court vacated the Second Circuit’s decision—Trump left office, so the case was moot—but Justice Thomas indicated that some aspects of government officials’ accounts seem like constitutionally protected spaces. Unless a user’s conduct constitutes harassment, government accounts likely can’t block them without violating the First Amendment. Whatever courts ultimately decide, Twitter could easily implement this principle.
Conclusion
Like Musk, we definitely “don’t have all the answers here.” In introducing what we know as the “marketplace of ideas” to First Amendment doctrine, Justice Holmes’s famous dissent in Abrams v. United States (1919) said this of the First Amendment: “It is an experiment, as all life is an experiment.” The same could be said of the Internet, Twitter, and content moderation.
The First Amendment may help guide Musk’s experimentation with content moderation, but it simply isn’t the precise roadmap he imagines—at least, not for making Twitter the “town square” everyone wants to go participate in actively. Bluesky offers the best of both worlds: a much more meaningful town square where anyone can say anything, but also a community that continues to thrive.
Berin Szóka (@BerinSzoka) is President of TechFreedom. Ari Cohn (@AriCohn) is Free Speech Counsel at TechFreedom. Both are lawyers focused on the First Amendment’s application to the Internet.
With Elon Musk now Twitter’s largest shareholder, and joining the company’s board, there have been some (perhaps reasonable) concerns about the influence he would have on the platform — mainly based on his childlike understanding of free speech, in which speech that he likes should obviously be allowed, and speech that he dislikes should obviously be punished. That’s not to say he won’t have some good ideas for the platform. Before his infamous poll about free speech on Twitter, he had done another poll asking whether or not Twitter’s algorithm should be open sourced.
And that’s a lot more interesting, because it’s an idea that many people have discussed for a while, including Twitter founder Jack Dorsey, who has talked a lot about creating algorithmic choice for users of the website, based in part on Dorsey and Twitter’s decision to embrace my vision of a world of protocols over platforms.
Of course, it’s not nearly as easy as just “open sourcing” the algorithm. Once again, Musk’s simplification of a complex issue is a bit on the childlike side of things, even if the underlying idea is valuable. You can’t just open source the algorithm without a whole bunch of other things being in place. Simply throwing the doors open (1) wouldn’t accomplish much on its own, and (2) without other steps taken first, would open the system up to gaming by trolls and malicious users.
Either way, I’ve continued to follow what’s been happening with Project Bluesky, the Twitter-created project to try to build a protocol-based system. Last month, the NY Times had a good (if brief) update on the project, noting how Twitter could have gone down that route initially, but chose not to. Reversing course is a tricky move, but one that is doable.
What’s been most interesting to me is how Bluesky has been progressing. Some have complained that it’s basically done nothing, but watching over things, it appears what’s actually happening is that the people working on it are being deliberate and careful, rather than rushing in and breaking things in typical Silicon Valley fashion. There are lots of other projects out there that haven’t truly caught on. And whenever I mention things like Bluesky, people quickly rush in to point to things like Mastodon or other projects — which, to me, are only partial steps towards the vision of a protocol-based future, rather than really driving the effort forward in a way that is widely adopted.
We’re building on existing protocols and technologies but are not committed to any stack in its entirety. We see use cases for blockchains, but Bluesky is not a blockchain, and we believe the adoption of social web protocols should be independent of any blockchain.
And, after recently announcing its key initial hires, the Bluesky team has revealed some aspects of the plan, in what it’s calling a self-authenticating social protocol. As it notes, of all the existing projects out there, none truly matches the protocols-not-platforms vision. But that doesn’t mean they can’t work within that ecosystem, or that there aren’t useful things to build on and connect with:
There are many projects that have created protocols for decentralizing discourse, including ActivityPub and SSB for social, Matrix and IRC for chat, and RSS for blogging. While each of these are successful in their own right, none of them fully met the goals we had for a network that enables global long-term public conversations at scale.
The focus of Bluesky is to fill in the gaps, to make a protocol-based system a reality. The Bluesky team sees the main gaps as portability, scalability, and trust. To close them, it sees the key initial need as the self-authenticating piece:
The conceptual framework we’ve adopted for meeting these objectives is the “self-authenticating protocol.” In law, a “self-authenticating” document requires no extrinsic evidence of authenticity. In computer science, an “authenticated data structure” can have its operations independently verifiable. When resources in a network can attest to their own authenticity, then that data is inherently live – that is, canonical and transactable – no matter where it is located. This is a departure from the connection-centric model of the Web, where information is host-certified and therefore becomes dead when it is no longer hosted by its original service. Self-authenticating data moves authority to the user and therefore preserves the liveness of data across every hosting service.
As they note, this self-authenticating protocol can help provide that missing portability, scalability and trust:
Portability is directly satisfied by self-authenticating protocols. Users who want to switch providers can transfer their dataset at their convenience, including to their own infrastructure. The UX for how to handle key management and username association in a system with cryptographic identifiers has come a long way in recent years, and we plan to build on emerging standards and best practices. Our philosophy is to give users a choice: between self-sovereign solutions where they have more control but also take on more risk, and custodial services where they gain convenience but give up some control.
Self-authenticating data provides a scalability advantage by enabling store-and-forward caches. Aggregators in a self-authenticating network can host data on behalf of smaller providers without reducing trust in the data’s authenticity. With verifiable computation, these aggregators will even be able to produce computed views – metrics, follow graphs, search indexes, and more – while still preserving the trustworthiness of the data. This topological flexibility is key for creating global views of activity from many different origins.
Finally, self-authenticating data provides more mechanisms that can be used to establish trust. Self-authenticated data can retain metadata, like who published something and whether it was changed. Reputation and trust-graphs can be constructed on top of users, content, and services. The transparency provided by verifiable computation provides a new tool for establishing trust by showing precisely how the results were produced. We believe verifiable computation will present huge opportunities for sharing indexes and social algorithms without sacrificing trust, but the cryptographic primitives in this field are still being refined and will require active research before they work their way into any products.
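In code, the core idea is small, even if the engineering around it is not. Here is a minimal sketch of self-authenticating data in Python, using the widely available `cryptography` package; the actual Bluesky design is more elaborate and still unsettled, so treat this as an illustration of the concept, not the protocol.

```python
# A minimal sketch of "self-authenticating" data, assuming an Ed25519
# keypair serves as the user's cryptographic identifier. The real
# Bluesky design is more elaborate; this shows only the core idea.
# Requires the third-party `cryptography` package.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

signing_key = Ed25519PrivateKey.generate()  # held privately by the user
author_key = signing_key.public_key()       # the user's public identity

post = b'{"author": "alice", "text": "hello world"}'
signature = signing_key.sign(post)
content_id = hashlib.sha256(post).hexdigest()  # content-addressed ID

def verify(data: bytes, sig: bytes, key: Ed25519PublicKey, cid: str) -> bool:
    """Any host, cache, or aggregator can prove the post is authentic and
    unmodified; no trust in the server that stored it is required."""
    try:
        key.verify(sig, data)  # raises InvalidSignature if tampered with
    except Exception:
        return False
    return hashlib.sha256(data).hexdigest() == cid

print(verify(post, signature, author_key, content_id))  # True, anywhere
```

Because verification needs only the data, the signature, and the author’s public key, the aggregators and caches described above can re-serve content without becoming trusted intermediaries.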
There’s some more in the links above, but the project is moving forward, and I’m glad to see that it’s doing so in a thoughtful, deliberate manner, focused on filling in the gaps to build a protocol-based world, rather than trying to reinvent the wheel entirely.
It’s that kind of approach that will move things forward successfully, rather than simplistic concepts like “just open source the algorithm.” The end result of this may (and perhaps hopefully will) be open sourced algorithms (many of them) helping to moderate the Twitter experience, but there’s a way to get there thoughtfully, and the Bluesky team appears to be taking that path.
We’re excited today to announce that we’ve received a grant from Grant for the Web to create a content series on Techdirt exploring the history (and future?) of web monetization, entitled “Correcting Error 402.” We’ll get more into this once the series launches, but lots of people are aware of the HTTP 404 Not Found error code — and some people are at least vaguely aware of 403 Forbidden. What most people probably don’t know about is the Error Code 402: Payment Required. It’s been in the HTTP spec going back decades, with “This code is reserved for future use.” But no one’s ever actually done anything with it.
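For the curious, here is roughly what putting 402 to work might look like, sketched with Python’s standard library. The “X-Payment-Token” header is invented: the spec never defined how payment should actually happen, which is exactly the gap this series is about.

```python
# A hypothetical sketch of a server using the reserved 402 status code.
# The payment check is a placeholder; no standard payment flow exists.
from http.server import BaseHTTPRequestHandler, HTTPServer

class PaywalledHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if not self.headers.get("X-Payment-Token"):  # invented header
            self.send_response(402, "Payment Required")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"402: payment required to view this page\n")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Payment received; here is the content.\n")

if __name__ == "__main__":
    # Port chosen arbitrarily for the demo.
    HTTPServer(("localhost", 8402), PaywalledHandler).serve_forever()
```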
And, arguably, the lack of standardization there has created some ancillary issues — including a few giant, dominant payment processor companies, high transaction fees, and the more recent mad scramble to fill the gap by building a zillion different kinds of cryptocurrencies, most of which are fluff and nonsense, without much understanding of what makes the most sense for an open internet.
Grant for the Web is a project of the Interledger Foundation. Interledger is an attempt to create an open protocol and web monetization standard for handling internet payments. We’ve talked a little about all this in the past, when we started experimenting with Coil (a provider of tools to help enable web monetization), and on the podcast we did with Coil founder and Interledger co-creator Stefan Thomas.
But we wanted to dig deeper into the questions of what the web might look like if monetization was built on an open standard as part of the web, and that’s what this content series will entail. Expect the series to start up in about a month, and to explore the history, present, and future of monetization. And, just to answer a few questions you might have: this series is not going to be about cryptocurrency (though it may get mentioned in passing), because that’s not central to the questions here, and it’s also not going to be just about Interledger/Coil’s vision of the future. It’s designed to be a deeper exploration of the question of monetization online and how it should work.
As you know by now, much of the tech news cycle yesterday was dominated by the fact that Facebook appeared to erase itself from the internet via a botched BGP configuration. Hilarity ensued — including my favorite bit about how Facebook’s office badges weren’t working because they relied on connecting to a Facebook server that could no longer be found (also, in borking its own BGP, Facebook basically knocked out its own ability to fix the problem until people who knew what to do could get physical access to the routers).
But in talking to people who were upset about being cut off from Facebook, Instagram, WhatsApp, or Facebook Messenger, it was a good moment to remind them that another benefit of a protocols, not platforms approach is resilience. If Messenger is down but you can easily swap in a different tool and continue to communicate, that’s a much better, more resilient solution than relying on Facebook not to mess up. And that’s on top of all the other benefits I laid out in my paper.
In fact, a protocols approach also creates stronger incentives for better uptime, since continually screwing up for extended periods of time doesn’t just mean losing ad revenue for a few hours; it makes it much more likely that people will permanently switch to an alternative provider.
Indeed, a key part of the value of the internet, originally, was in its resiliency of being highly distributed, rather than centralized, and how it could continue to work well if one part fell off the network. The increasing centralization/silo-ization of the internet has taken away much of that benefit. So, if anything, yesterday’s mess should be seen as another reason to look more closely at a protocols-based approach to building new internet services.
Techdirt’s coverage of open access — the idea that the fruits of publicly-funded scholarship should be freely available to all — shows that the results so far have been mixed. On the one hand, many journals have moved to an open access model. On the other, the overall subscription costs for academic institutions have not gone down, and neither have the excessive profit margins of academic publishers. Having largely fended off this attempt to re-invent the way academic work is disseminated, publishers want more. In particular, they want more money and more power. In an important new paper, a group of researchers warn that companies now aim to own the entire academic publishing stack:
Over the last decade, the four leading publishing houses have all acquired or developed a range of services aiming to develop vertical integration over the entire scientific process from literature search to data acquisition, analysis, writing, publishing and outreach. User profiles inform the corporations in real time on who is currently working on which problems and where. This information allows them to offer bespoke packaged workflow solutions to institutions. For any institution buying such a workflow package, the risk of vendor lock-in is very real: without any standards, it becomes technically and financially nearly impossible to substitute a chosen service provider with another one. In the best case, this non-substitutability will lead to a practically irreversible fragmentation of research objects and processes as long as a plurality of service providers would be maintained. In the worst case, it will lead to complete dependence on a single, dominant commercial provider.
Commenting on this paper, a post on the MeaseyLab blog calls this “academic capture”:
For those of us who have lived through state capture, we felt powerless and could only watch as institutions were plundered. Right now, we are willing participants in the capture of our own academic freedom.
Academic capture: when the institutions’ policies are significantly influenced by publishing companies for their profit.
Fortunately, there is a way to counter this growing threat, as the authors of the paper explain: adopt open standards.
To prevent commercial monopolization, to ensure cybersecurity, user/patient privacy, and future development, these standards need to be open, under the governance of the scholarly community. Open standards enable switching from one provider to another, allowing public institutions to develop tender or bidding processes, in which service providers can compete with each other with their services for the scientific workflow.
Techdirt readers will recognize this as exactly the idea that lies at the heart of Mike’s influential essay “Protocols, Not Platforms: A Technological Approach to Free Speech”. Activist and writer Cory Doctorow has also been pushing for the same thing — what he calls “adversarial interoperability”. It seems like an idea whose time has come, not just for academic publishing, but for every aspect of today’s digital world.
Last week we announced that we wanted to write a paper exploring the NFT phenomenon, and specifically what it meant with regards to the economics around scarce and infinitely available goods. To run this crowdfund, we’re testing out a cool platform called Mirror that lets us mix crowdfunding and NFTs as part of the process (similarly, we’re now experimenting with NFTs with our Plagiarism by Techdirt collection).
We were overwhelmed by the support for the paper, which surpassed what we expected. The “podium” feature — which gave special NFTs to our three biggest backers — has closed with the winners being declared, but the rest of the crowdfund will remain open until this Thursday evening. We also offered up a special “Protocols, Not Platforms” NFT for the first 15 people who backed us at 1 ETH or above. So far, ten of those have been claimed, but five remain.
If anyone is interested in supporting this paper and our work exploring scarcity and abundance, please check it out.
It has been nearly two years since Jack Dorsey announced plans to explore switching Twitter from its current setup as a centralized platform controlled by one company to a distributed protocol project that anyone can build on — called Bluesky. This was especially exciting to me, since some of Jack’s thoughts were inspired by my “Protocols, not Platforms” paper. There hasn’t been that much news on Bluesky since then — leading many to insist that the project was going nowhere. However, there have been plenty of things happening behind the scenes — at least somewhat complicated by the whole pandemic thing. In January of this year, an “Ecosystem Review” document was published.
At the time, I saw some people mocking it as a pointless whitepaper, rather than anything concrete, but to me it was actually a really important step. When Dorsey first announced Bluesky, many people complained that he was trying to reinvent the wheel, when there were a lot of already ongoing projects trying to create distributed and decentralized protocols for social media. Understanding the actual ecosystem, what works, what is limited, what can still be done, and how to build something that will be (1) effective, (2) compelling, and (3) will last, takes some actual thought and consideration.
Since then, Twitter went through a process of interviewing a number of possible leads for the project — and, as a disclaimer, I will note that Twitter invited me to take part in interviewing each of the finalists and to submit my feedback and thoughts on them. The candidates all had strong ideas and attributes for leading the project, but to me, one stood out well beyond the others: Jay Graber, who has now been named to lead the project. For what it’s worth, Jay was the author of that original ecosystem paper.
I’m excited to announce that I’ll be leading @bluesky, an initiative started by @Twitter to decentralize social media. Follow updates on Twitter and at https://t.co/Sg4MxK1zwl
This is an exciting announcement, as I felt that Jay’s vision for the project was not just more complete and thorough than anyone else’s, but also the most compelling. Seeing her vision got me more excited about the possibility of actually moving forward to a world of protocols over platforms than I had been in a while. There are, of course, many, many challenges to making this a reality. And there remains a high likelihood of failure. But one of the key opportunities for making a protocol future a reality — short of some sort of major catastrophe — is for a large enough player in the space to embrace the concept and bring millions of users along. Twitter can do that. And Jay is exactly the right person both to present the vision and to lead the team that makes it a reality.
I know that Bluesky is now actively looking to expand the team and hire a few developers. If you have the skills necessary, please check it out. This really is an amazing opportunity to shape the future and move us towards a more open web, rather than one controlled by a few dominant companies.
As I’m sure most people are aware, last week the House Energy & Commerce Committee held yet another hearing on “big tech” and its content moderation practices. This one was ostensibly on “disinformation,” and had Facebook’s Mark Zuckerberg, Google’s Sundar Pichai, and Twitter’s Jack Dorsey as the panelists. It went on for five and a half hours, which appears to be the norm for these things. Last week, I wrote about both Zuckerberg’s and Pichai’s released opening remarks, in which both focused on various efforts they had made to combat disinfo. Of course, the big difference between the two was that Zuckerberg then suggested 230 should be reformed, while Pichai said it was worth defending.
If you actually want to watch all five and a half hours of this nonsense, you can do so here:
As per usual — and as was totally expected — you got a lot more of the same. You had very angry-looking Representatives practically screaming about awful stuff online. You had Democrats complaining about the platforms failing to take down info they disliked, while equally angry Republicans complained about the platforms taking down content they liked (often the same, or related, content). Amusingly, often just after saying that websites took down content they shouldn’t have (bias!), the very same Representatives would whine “but how dare you not take down this other content.” It was the usual mess of “why don’t you moderate exactly the way I want you to moderate,” which is always a silly, pointless exercise. There was also a lot of “think of the children!” moral panic.
However, Jack Dorsey’s testimony was somewhat different from Zuckerberg’s and Pichai’s. While it also talked about how Twitter has dealt with disinformation, it went significantly further, describing real, fundamental changes that Twitter is exploring that go way beyond the way most people think about this debate. Rather than focusing on the power that Twitter has to decide how, whom, and what to moderate, Dorsey’s testimony talked about various ways in which Twitter is seeking to give more control to end users and empower them, rather than leaving Twitter as the final arbiter. He talked about “algorithmic choice,” so that rather than having Twitter control everything, different users could opt in to different algorithmic options, and different providers could create their own. And he mentioned the Bluesky project, and potentially moving Twitter to a protocol-based system, rather than one that Twitter fully controls:
Twitter is also funding Bluesky, an independent team of open source architects, engineers, and designers, to develop open and decentralized standards for social media. This team has already created an initial review of the ecosystem around protocols for social media to aid this effort. Bluesky will eventually allow Twitter and other companies to contribute to and access open recommendation algorithms that promote healthy conversation and ultimately provide individuals greater choice. These standards will support innovation, making it easier for startups to address issues like abuse and hate speech at a lower cost. Since these standards will be open and transparent, our hope is that they will contribute to greater trust on the part of the individuals who use our service. This effort is emergent, complex, and unprecedented, and therefore it will take time. However, we are excited by its potential and will continue to provide the necessary exploratory resources to push this project forward.
All of this showed that Dorsey and Twitter are thinking about actual ways to deal with many of the complaints that our elected officials insist are the fault of social media — including the fact that no two politicians seem to agree on what the “proper” level of moderation is. By moving to something like protocols and algorithmic choice, you could allow different individuals, groups, organizations and others to set their own standards and rules.
And, yes, I’m somewhat biased here, because I have suggested this approach (as have many others). That doesn’t mean I’m convinced it will absolutely work, but I do think it’s worth experimenting with.
And what I had hoped was that perhaps, if Congress were actually interested in solving the perceived problems they declared throughout the hearing, then they would perhaps explore these initiatives, and ask Jack to explain how they might impact questions around disinformation or harm or “censorship” or “think of the children.” Because there are lots of interesting discussions to be had over whether or not this approach will help deal with many of those issues.
But as far as I can tell, not one single elected official ever asked Jack about any of this. Not one. Now, I will admit that I missed some of the hearing to take a few meetings, but I asked around, and others I know who watched the entire thing through could not recall it coming up beyond Jack mentioning it a few times.
What I did hear a lot of, however, was members of the House insisting, angrily (always angrily), that none of the CEOs presenting were willing to “offer solutions” and that’s why “Congress must and will act!”
All it did was drive home the key point that this was not a serious hearing in which Congress hoped to learn something. This was yet another grandstanding dog and pony show, in which members of Congress got the clips and headlines they can post on the very same social media platforms they insist are destroying America. But when they demanded to hear “solutions” to the supposed problems they raised, and when one of the CEOs on the panel put forth some ideas on better ways to approach this… every single one of those elected officials ignored it. Entirely. Over five and a half hours, not one asked him to explain what he meant, or to explore how it might help.
This is not Congress trying to fix the “problems” of social media. This is Congress wanting to grandstand on social media while pretending to do real work.