The Tech Policy Greenhouse is an online symposium where experts tackle the most difficult policy challenges facing innovation and technology today. These are problems that don't have easy solutions, where every decision involves tradeoffs and unintended consequences, so we've gathered a wide variety of voices to help dissect existing policy proposals and better inform new ones.

Content Moderation Beyond Platforms: A Rubric

from the guiding-questions dept

For decades, EFF and others have been documenting the monumental failures of content moderation at the platform level—inconsistent policies, inconsistently applied, with dangerous consequences for online expression and access to information. Yet despite mounting evidence that those consequences are inevitable, service providers at other levels are increasingly choosing to follow suit.

The full infrastructure of the internet, or the “full stack,” is made up of a range of entities, from consumer-facing platforms like Facebook or Pinterest, to ISPs like Comcast or AT&T. Somewhere in the middle are a wide array of intermediaries, such as upstream hosts like Amazon Web Services (AWS), domain name registrars, certificate authorities (such as Let’s Encrypt), content delivery networks (CDNs), payment processors, and email services.

For most of us, most of the stack is invisible. We send email, tweet, post, upload photos and read blog posts without thinking about all the services that help get content from the original creator onto the internet and in front of users’ eyeballs all over the world. We may think about our ISP when it gets slow or breaks, but day-to-day, most of us don’t think about intermediaries like AWS at all—until AWS decides to deny service to speech it doesn’t like, as it did with the social media site Parler, and that decision gets press attention.

Invisible or not, these intermediaries are potential speech “chokepoints” and their choices can significantly influence the future of online expression. Simply put, platform-level moderation is broken and infrastructure-level moderation is likely to be worse. That said, the pitfalls and risks for free expression and privacy may play out differently depending on what kind of provider is doing the moderating. To help companies, policymakers and users think through the relative dangers of infrastructure moderation at various levels of the stack, here’s a set of guiding questions.

  1. Is meaningful transparency, notice, and appeal possible? Given the inevitability of mistakes, human rights standards demand that service providers notify users that their speech has been, or will be, taken offline, and offer users an opportunity to seek redress. Unfortunately, many services do not have a direct relationship with either the speaker or the audience for the expression at issue, making all of these steps challenging. But without them, users will be held not only to their host’s terms and conditions but also to those of every service in the chain from speaker to audience, even though they may not know what those services are or how to contact them. Given the potential consequences of violations, and the difficulty of navigating the appeals processes of previously invisible services (assuming such a process even exists), many users will simply avoid sharing controversial opinions altogether. Relatedly, where a service provider has no relationship to the speaker or audience, takedowns will be much easier and cheaper than a nuanced analysis of a given user’s speech.
  2. Do viable competitive alternatives exist? One of the reasons net neutrality rules for ISPs are necessary is that users have so few options for high-quality internet access. If your ISP decides to shut down your account based on your expression (or that of someone else using the account), in much of the world, including the U.S., you can’t go to another provider. At other layers of the stack, such as the domain name system, there are multiple providers from which to choose, so a speaker whose domain name is frozen can take their website elsewhere. But the existence of alternatives alone is not enough; answering this question also requires evaluating the costs of switching and whether it demands technical savvy beyond the skill set of most users.
  3. Is it technologically possible for the service to tailor its moderation practices to target only the specific offensive expression? At the infrastructure level, many services cannot target their response with the precision human rights standards demand. Twitter can block specific tweets; Amazon Web Services can only deny service to an entire site, so its actions inevitably affect far more than the objectionable speech that motivated them. We can take a lesson here from the copyright context, where we have seen domain name registrars and hosting providers shut down entire sites in response to infringement notices targeting a single document. It may be possible for some services to communicate directly with customers when they are concerned about a specific piece of content, and request that it be taken down. But if that request is rejected, the service has only the blunt instrument of complete removal at its disposal.
  4. Is moderation an effective remedy? The U.S. experience with online sex trafficking teaches that removing distasteful speech may not have the hoped-for impact. In 2017, Tennessee Bureau of Investigation special agent Russ Winkler explained that online platforms were the most important tool in his arsenal for catching sex traffickers. Today, legislation designed to prevent the use of online platforms for sex trafficking has made it harder for law enforcement to find traffickers. Indeed, several law enforcement agencies report that without these platforms, their work finding and arresting traffickers has hit a wall.
  5. Will collateral damage, such as the stifling of lawful expression, disproportionately affect less powerful groups? Moderation choices may reflect and reinforce bias against marginalized communities. Take, for example, Facebook’s decision, in the midst of the #MeToo movement’s rise, that the statement “men are trash” constitutes hateful speech. Or Twitter’s decision to use harassment provisions to shut down the verified account of a prominent Egyptian anti-torture activist. Or the content moderation decisions that have prevented women of color from sharing the harassment they receive with their friends and followers. Or the decision by Twitter to mark tweets containing the word “queer” as offensive, regardless of context. As with the competition inquiry, this analysis should consider whether the impacted speakers and audiences will have the ability to respond and/or find effective alternative venues.
  6. Is there a user- and speech-friendly alternative to centralized moderation? Could there be? One of the key problems with content moderation at the social media level is that the moderator substitutes its policy preferences for those of its users. When infrastructure providers enter the game, with generally less accountability, users have even less ability to make their own choices about their own internet experience. If there are tools that allow users themselves to express and implement their own preferences, infrastructure providers should return to the business of serving their customers, and policymakers have a weaker argument for imposing new requirements.
  7. Will governments seek to hijack any moderation pathway? We should be wary of moderation practices that will provide state and state-sponsored actors with additional tools for controlling public dialogue. Once processes and tools to take down expression are developed or expanded, companies can expect a flood of demands to apply them to other speech. At the platform level, state and state-sponsored actors have weaponized flagging tools to silence dissent. In the U.S., the First Amendment and the safe harbor of Section 230 largely prevent moderation requirements. But policymakers have started to chip away at Section 230, and we expect to see more efforts along those lines. In other countries, such as Canada, the U.K., Turkey and Germany, policymakers are contemplating or have adopted draconian takedown rules for platforms and would doubtless like to extend them further.

Companies should ask all of these questions when they are considering whether to moderate content (in general or as a specific instance). And policymakers should ask them before they either demand or prohibit content moderation at the infrastructure level. If more than two decades of social media content moderation has taught us anything, it is that we cannot “tech” our way out of political and social problems. Social media companies have tried and failed to do so; infrastructure companies should refuse to replicate those failures—beginning with thinking through the consequences in advance, deciding whether they can mitigate them and, if not, whether they should simply stay out of it.

Corynne McSherry is the Legal Director at EFF, specializing in copyright, intermediary liability, open access, and free expression issues.

Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we’ll have many of this series’ authors discussing and debating their pieces in front of a live virtual audience (register to attend here). On October 7th, we’ll be hosting a smaller workshop focused on coming up with concrete steps we can take to make sure providers, policymakers, and others understand the risks and challenges of infrastructure moderation, and how to respond to those risks.



Comments on “Content Moderation Beyond Platforms: A Rubric”

GHB (profile) says:

The internet is a mixture of good and bad...

The internet is a mixture of good and bad; it is stupid to nuke the whole thing just because one part of it is “bad”.

Also, give praise to Hurricane Electric for standing up against the demands of the record labels. They KNEW it was a f**ked up idea for a huge service with a huge number of customers to be obligated to terminate an entire group of customers (because they are customers of customers, meaning one account can represent hundreds of others).

ANYTHING can be abused, both products and services. Anyone who believes that intermediaries should also police things that are properly the site’s duty (like removing infringing content from the page) is an idiot. You might as well ban chair companies because chairs can be used as weapons to harm others, or sue electric companies for supplying electricity to a criminal’s house while that person attempts to hack other people’s PCs, and go to war against
