from the decentralization-as-the-new-frontier dept
In his post kicking off this series, Mike notes that, “the biggest concern with moving moderation decisions down the stack is that most infrastructure players only have a sledge hammer to deal with these questions, rather than a scalpel.” And, I agree with Jonathan Zittrain and other contributors that governments, activists, and others will increasingly reach down the stack to push for takedowns—and will probably get them.
So, should we expect more blunt-force infrastructure-layer takedowns, or will infrastructure companies invest in more precise moderation tools? And which outcome is worse?
Given the choice to build infrastructure now, would you start with a scalpel? How about many scalpels? Or maybe something less severe but distributed and transparent, like clear plastic spoons everywhere! Will the moderation hurt less if we’re all in it together? With the distributed web, we may get to ask all these questions, and have a chance to make things better (or worse). How?
Let me back up a moment for some mostly accurate natural history. In the 90s, to vastly oversimplify, there was web 1.0: static, server-side pages, often more manual than you’d like, maybe not so easy to search or monetize at scale, but fundamentally decentralized and open. We had webrings and manually curated search lists. Listening to Nirvana in my dorm room I read John Perry Barlow’s announcement that “We are forming our own Social Contract. This governance will arise according to the conditions of our world, not yours. Our world is different,” in a green IRC window and believed.
Ok, not every feature was that simple or open or decentralized. The specter of content moderation has haunted the Internet since the early days of email and bulletin boards. In 1978, a marketer for DEC sent out the first unsolicited commercial message on ARPANET and a few hundred people told him to knock it off, Gary! Voila, community moderation.
Service providers like AOL and Prodigy offered portals through which users accessed the web and associated chat rooms, and the need to protect the brand led to predictable interventions. There’s a Rosetta Stone of AOL content moderation guidelines from 1994 floating around to remind us that as long as there have been people expressing themselves online, there have been other people doing their best to create workable rule sets to govern that expression and endlessly failing in comic and tragic ways (“‘F— you’ is vulgar, but ‘my *** hurts’ is ok”).
Back in the Lascaux Cave there was probably someone identifying naughty animal parts and sneaking over with charcoal to darken them out, for the community, and storytellers who blamed all the community’s ills on that person.
We all started saying a lot of stuff and never really stopped. If you’re a fan of expression in general, and especially of people who previously didn’t have great access to distribution channels expressing themselves more, that’s a win. But let’s be honest: 500 million tweets a day? We’ve been on an expression bender for years. And that means companies spending billions, and tens of thousands of enablers—paid and unpaid—supporting our speech bender. Are people happy with the moderation we’re getting? Generally not. Try running a platform. The moderation is terrible and the portions are so large!
Who’s asking for moderation? Virtually everyone in different ways. Governments want illegal content (CSAM, terrorist content) restricted on behalf of the people, and some also want harmful but legal content restricted in ways that are still unclear, also for the people. Many want harmful content restricted, which means different things depending on which people, which place, which culture, which content, which coffee roast you had this morning. Civil society groups generally want content restricted related to their areas of expertise and concern (except EFF, who will party like it’s 1999 forever I hope).
There are lots of types of expression where at least some people think moderation is appropriate, for different reasons; misinformation is different from doxxing is different from harassment is different from copyright infringement is different from spam. Often, the same team deals with election protection and kids eating Tide Pods (and does both surprisingly well, considering). There’s a lot to moderate and lots of mutually inconsistent demand to do it coming from every direction.
Ok, so let’s make a better internet! Web 3 is happening and it is good. More specifically, as Chris Dixon recently put it, “We are now at the beginning of the Web 3 era, which combines the decentralized, community-governed ethos of Web 1 with the advanced, modern functionality of Web 2.” Don’t forget the blockchain. Assume that over the next few years, Web 3 infrastructure gets built out and flourishes—projects like Arweave, Filecoin, Polkadot, Sia, and Storj. And applications eventually proliferate; tools for expression, creativity, communication, all the things humans do online, all built in ways that embody the values of the DWeb.
But wait, the social web experiment of the past 15 years led us to build multi-billion dollar institutions within companies aimed at mitigating harms (to individuals, groups, societies, cultural values) associated with online expression and conduct, and increasingly, complying with new regulations. Private courts. Private Supreme Courts. Teams for safeguarding public health and democratic elections. Tens of thousands poring over photos of nipples, asking, where do we draw the line? Are we going to do all that again? One tempting answer is, let’s not. Let’s fire all the moderators. What’s the worst that could happen?
Another way of asking this question is — what do we mean when we talk about “censorship resistant” distributed technologies? This has been an element of the DWeb since the early days, but it’s not very clear (to me at least) how resistant, resistant to which censorship, and in what ways.
My hunch is that censorship resistance—in the purest sense of defaulting to immutable content with no possible later interventions affecting its availability—is probably not realistic, given how people and governments currently respond to Web 2.0. The likely outcome is quick escalation to intense conflict with the majority of governments.
And even for people who still favor a marketplace-of-ideas-grounded “rights” framework, I think they know better than to argue that the cure for CSAM is more speech. There will either have to be ways of intervening or the DWeb is going to be a bumpy ride. But “censorship resistant” in the sense of, “how do we build a system where it is not governments, or a small number of powerful, centralized companies, that control the levers at the important choke points for expression?” Now we’re talking. Or as Paul Frazee of Beaker Browser and other distributed projects put it: “The question isn’t ‘how do we make moderation impossible?’ The question is, how do we make moderation trustworthy?”
So, when it comes to expression and by extension content moderation, how exactly are we going to do better? What could content moderation look like if done consistent with the spirit, principles, and architecture of Web 3? What principles can we look to as a guide?
I think the broad principles will come as no surprise to anyone following this space over the past few years (and are not so different from those outlined in Corynne McSherry’s post). They include notice, transparency, due process, the availability of multiple venues for expression, and robust competition between options on many axes—including privacy and community norms, as well as the ability of users to structure their own experience as much as possible.
Here are some recurring themes:
- Transparency: Clear, broadly available, and auditable information about what interventions are being made, including at the infrastructure level, is a precondition to empowering more people on the web. Rather than building in transparency as an adjunct to moderation (like the essential Lumen database), what if we baked it in from the start? With accommodations and exceptions for privacy where needed; a moderation system that defaults to publicly viewable records as part of every act of intervention.
- Multiplicity: More nodes with smaller power differentials between them. More storage providers, more ISPs, more ways of accessing expression, more contexts for encountering it, and more communities with attendant norms, controlling more layers of the stack down to the infra.
- Subsidiarity: How do we give users as much control over their own experience as possible? Allow users to customize their applications so that, if desired, certain infrastructure is favored or disfavored (as opposed to blocked or not blocked), through tools like shareable open source lists of disfavored files, actors, institutions, with reliable information about their provenance. Push decisions to the edge.
- Identity & Reputation: Can we structure more experience based on a person’s chosen, persistent, potentially pseudonymous identity? Ideally, assembled from multiple, decentralized sources, with multiple implementations parsing that data rather than one or a few arbiters who store data and make a broadly effective determination.
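To make the subsidiarity idea concrete, here is a minimal sketch of what “favored or disfavored, as opposed to blocked or not blocked” could look like in code. Everything here is hypothetical—the names (`ModerationList`, `rank_items`), the curators, and the content hashes are illustrative, not drawn from any real DWeb project:

```python
# Hypothetical sketch: instead of a binary block, a user subscribes to
# community-curated lists, each with clear provenance, that nudge content
# up or down in their own client. Decisions move to the edge.
from dataclasses import dataclass, field

@dataclass
class ModerationList:
    curator: str  # provenance: who published and maintains this list
    # content identifier -> score in [-1.0, 1.0]; negative = disfavored
    weights: dict = field(default_factory=dict)

def rank_items(item_ids, subscriptions):
    """Order content by the summed weights of the user's chosen lists.

    Disfavored items sink rather than disappear; the user, not the
    infrastructure provider, decides which curators to trust.
    """
    def score(item_id):
        return sum(sub.weights.get(item_id, 0.0) for sub in subscriptions)
    return sorted(item_ids, key=score, reverse=True)

# Illustrative usage with made-up curators and content IDs:
spam_list = ModerationList(curator="example-antispam-collective",
                           weights={"item-spam": -1.0})
quality_list = ModerationList(curator="example-community",
                              weights={"item-good": 0.8})
ordered = rank_items(["item-spam", "item-good", "item-neutral"],
                     [spam_list, quality_list])
# "item-good" rises, "item-spam" sinks, unlisted content stays neutral
```

The point of the sketch is that unsubscribing from a list, or weighting it differently, is a user-level choice with visible provenance—no single actor holds the sledgehammer.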
We’re in the early days, so the question of how we build decentralized systems resilient to broad attempts to censor—in ways that are familiar from Web 2.0—is still a work in progress. But the applications are here. The Mastodon project and Element have been at it since 2016. Twitter’s Bluesky project aims to build decentralized standards for social media with a leader in Jay Graber and a rocking Discord. If Web 2.0 has got you down, consider the possibility that Web 3 might be worth another shot.
This time around, let’s not have a small number of actors (commercial or otherwise) making decisions for a large number of people about what expression they can or can’t publish or access. Let’s have more of everything. More decision makers, more venues for expression, more chances for users to make small decisions about what they want to see or not see, more civil society stakeholders with real influence (operationalized through, say, governance tokens or whatever other means are at hand).
At the infrastructure level we need more nodes, more actors, more storage providers, more mechanisms for choosing what is available to more disparate groups, and more ways to make it all visible. With all that and some luck, it may add up to fewer and fewer decisionmakers able to take content down at scale with a sledgehammer: a vaccine for capital-C Censorship through the radical proliferation of small-m moderators.
Alex Feerst is a legal and tech executive and trust & safety expert, and is currently CEO of Murmuration Labs, whose clients include the Filecoin Foundation.
Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we’ll have many of this series’ authors discussing and debating their pieces in front of a live virtual audience (register to attend here).