The Tech Policy Greenhouse is an online symposium where experts tackle the most difficult policy challenges facing innovation and technology today. These are problems that don't have easy solutions, where every decision involves tradeoffs and unintended consequences, so we've gathered a wide variety of voices to help dissect existing policy proposals and better inform new ones.

Moderate Globally, Impact Locally

from the monumental-balancing-act dept

Every minute, more than 500 hours of video are uploaded to YouTube, 350,000 tweets are sent, and 510,000 comments are posted on Facebook.

Managing and curating this fire hose of content is an enormous task, and one which grants the platforms enormous power over the contours of online speech. This includes not just decisions around whether a particular post should be deleted, but also more minute and subtle interventions that determine its virality. From deciding how far to allow quack ideas about COVID-19 to take root, to the degree of flexibility that is granted to the President of the United States to break the rules, content moderation raises difficult challenges that lie at the core of debates around freedom of expression.

But while plenty of ink has been spilled on the impact of social media on America’s democracy, these decisions can have an even greater impact around the world. This is particularly true in places where access to traditional media is limited, giving the platforms a virtual monopoly in shaping the public discourse. A platform which fails to take action against hate speech might find itself instrumental in triggering a local pogrom, or even genocide. A platform which acts too aggressively to remove suspected “terrorist propaganda” may find itself destroying evidence of war crimes.

Platforms’ power over the public discourse is partly the result of a conscious decision by governments around the world to outsource online moderation functions to these private sector actors. Around the world, governments are making increasingly aggressive demands for platforms to police content which they find objectionable. The targeted material can range from risqué photos of the King of Thailand, to material deemed to insult Turkey’s founding president. In some instances, these requests are grounded in local legal standards, placing platforms in the difficult position of having to decide how to enforce a law from Pakistan, for example, which would be manifestly unconstitutional in the United States.

In most instances, however, moderation decisions are not based on any legal standard at all, but on the platforms’ own privately drafted community guidelines, which are notoriously vague and difficult to understand. All of this leads to a critical lack of accountability in the mechanisms which govern freedom of expression online. And while the perceived opacity, inconsistency and hypocrisy of online content moderation structures may seem frustrating to Americans, for users in the developing world it is vastly worse.

Nearly all of the biggest platforms are based in the United States. This means not only that their decision-makers are more accessible and receptive to their American user-base than they are to frustrated netizens in Myanmar or Uganda, but also that their global policies are still heavily influenced by American cultural norms, particularly the First Amendment.

Even though the biggest platforms have made efforts to globalize their operations, there is still a massive imbalance in the ability of journalists, human rights activists, and other vulnerable communities to get through to the U.S.-based staff who decide what they can and cannot say. When platforms do branch out globally, they tend to recruit staff who are connected to existing power structures, rather than those who depend on the platforms as a lifeline away from repressive restrictions on speech.

For example, the pressure to crack down on “terrorist content” inevitably leads to collateral damage against journalism or legitimate political speech, particularly in the Arab world. In striking this balance, governments and ex-government officials are vastly more likely to have a seat at the table than journalists or human rights activists. Likewise, the Israeli government has an easier time communicating its wants and needs to Facebook than, say, Palestinian journalists and NGOs.

None of this is meant to minimize the scope and scale of the challenge that the platforms face. It is not easy to develop and enforce content policies which account for the wildly different needs of their global user base. Platforms generally aim to provide everyone with an approximately identical experience, including similar expectations with regard to the boundaries of permitted speech. There is a clear tension between this goal and the conflicting legal, cultural and moral standards in force across the many countries where they operate.

But the importance and weight of these decisions demand that platforms get this balancing right, and develop and enforce policies which adequately reflect their role at the heart of political debates from Russia to South Africa. Even as the platforms have grown and spread around the world, the center of gravity of these debates remains in D.C. and San Francisco.

This is the first in a series of articles developed by the Wikimedia/Yale Law School Initiative on Intermediaries and Information, appearing here at Techdirt Policy Greenhouse and elsewhere around the internet, intended to bridge the divide between the ongoing policy debates around content moderation and the people who are most impacted by them, particularly across the global south. The authors are academics, civil society activists and journalists whose work lies on the sharp edge of content decisions. In asking for their contributions, we offered them a relatively free hand to prioritize the issues they saw as the most serious and important with regard to content moderation, and asked them to point to areas where improvement was needed, particularly with regard to the moderation process, community engagement, and transparency.

The issues that they flag include a common frustration with the distant and opaque nature of platforms’ decision-making processes, a desire for platforms to work towards a better understanding of the local socio-cultural dynamics underlying online discourse, and a feeling that platforms’ approach to moderation often does not reflect the importance of their role in facilitating the exercise of core human rights. Although the different voices each offer a unique perspective, they paint a common picture of how platforms’ decision-making impacts their lives, and of the need to do better, in line with the power that platforms have in defining the contours of global speech.

Ultimately, our hope with this project is to shed light on the impacts of platforms’ decisions around the world, and to provide guidance on how social media platforms might do a better job of developing and applying moderation structures which reflect the needs and values of their diverse global users.

Michael Karanicolas is a Resident Fellow at Yale Law School, where he leads the Wikimedia Initiative on Intermediaries and Information as part of the Information Society Project. You can find him on Twitter at @M_Karanicolas.


Comments on “Moderate Globally, Impact Locally”

46 Comments
Anonymous Anonymous Coward (profile) says:

Re: Re:

Where does it say that platforms must be politically unbiased? Please don’t quote anything Trump, or anyone in his political sphere.

Now whether a platform should be honest about its political bent is up to the platform, and up to the users, who can accept that bent or not (and if not, they should stop being users), as is whether the platform actually elucidates its political bent.

No matter which, bent expressed or not, it is none of the government’s business, unless they decide to use the platform for their own political reasons. Then they can only say that they have different views, rather than doing anything about taking down any view expressed on the platform, which tend to come from users.

Expression of a political view via moderation choices is the opinion of the observer, rather than an expression of the platform. One needs to look at the reasons for moderation choices, rather than the fact of moderation. Violations of terms of service are not political decisions, no matter how much someone else maintains they are.

Anonymous Coward says:

Re: Re: Re:

If speech is to be censored, but the goal is to be balanced, how can a platform not be political? Everything political seems to be what causes the imbalance in speech, and platforms censoring speech do so to create the bias. How can a platform remain apolitical without judgement?

Anonymous Anonymous Coward (profile) says:

Re: Re: Re: Re:

Where does it say that platforms must be balanced? Cite actual laws, including the relevant Title, Chapter, Section and Sub Section. Quoting Trump or any of his minions, or for that matter any other politician who feels slighted, does not count. They are the government, and by definition are not allowed to ‘censor’. Or argue censorship, though they often cannot help themselves, as they don’t actually seem to know the current law.

But platforms are allowed to apply their TOSs (whether I agree that TOSs are legally binding contracts that have not been negotiated is a different argument). If any contributor to a platform violates the TOS, then they may be sanctioned; whether that is a temporary, single-post, or permanent ban is up to the platform. Whether or not the poster agrees with the decision is irrelevant (unfortunately, simply because the systems to object are currently unworkable). It is the platform’s decision and the poster does not actually have anything to say about it (again, unfortunately, because the objection systems don’t work as they should).

And to go a bit further, read you some more Techdirt. Those decisions don’t appear to be political, at least to anyone who uses logic and reason rather than ideology as a cornerstone. Look at all the trash political speech that is allowed (see most of Trump’s tweets). Those decisions are based upon the platform’s reading of the TOS, and since they wrote the TOS (without any user input, which may or may not be a problem) they get to interpret it. Users don’t, but courts might. In that case the TOS would simply be changed, again, and without user input (possibly, in the long run, to the platform’s ruin).

Anonymous Coward says:

Re: Re: Re:2 Re:

Where does it say it is legal to throw the first amendment before the bus or sweep it under a rug? There is no law that says a platform must be balanced, but everything that is political is already corrupted. It deserves a great bit of admiration for those sites hosting platforms of discussion to remain open to debate.

Stephen T. Stone (profile) says:

Re: Re: Re:3

Where does it say it is legal to throw the first amendment before the bus or sweep it under a rug?

The First Amendment applies to government attempts at regulating speech because the Founding Fathers didn’t want the government telling people what they could and couldn’t say. It doesn’t apply to privately-owned institutions — open to the public or otherwise — for the same reason.

Anonymous Anonymous Coward (profile) says:

Re: Re: Re:5 Re:

You can say anything you want. You don’t have an absolute right to use other people’s property to do so. Platforms are owned by other people (including this one, though they choose to use a different moderation method than Facebook or Twitter or YouTube).

Start your own blog, say what you want and no one can tell you you can’t, though if what you have to say isn’t very interesting, don’t expect much of an audience. Your right to say what you want does not include forcing people to listen, and could backfire as other people might ridicule your statements. Then again, you could become a new hero, to some.

Anonymous Coward says:

Re: Re: Re:6 Re:

You misconstrue. I am not talking about Techdirt specifically, but the censoring of speech anywhere or everywhere.

Stone, I know you hang out here like a moray eel and bite anyone for anything. You remind me of a dog being kept on the corner of a backyard with just enough chain to bite anyone who ventures into Mike’s world. It’s unfortunately a nasty world out there. It just sucks to see how much you enjoy being a barking fish while pretending to be human. See you in the funnies.

MathFox says:

Re: Newspapers

In the good old days, when news was printed on mashed dead trees, editors decided what to print and what to discard. Those decisions on what to print were influenced by the political slant of the newspaper.

This was the generally accepted situation. Why should an Internet publisher be politically unbiased? I don’t think that "Internet publishing is tree-friendly" counts as an argument here.

Stephen T. Stone (profile) says:

Re: Re: Re:

All journalism has biases. Someone must decide what to publish, what to distill out of the mass of available data, and what facts to check. The best journalists try to keep their biases in check; the worst journalists make their biases clear and peddle in both mis- and disinformation.

A good journalist tells truth to power, consequences be damned. A bad journalist seeks power instead of truth, facts be damned.

Anonymous Coward says:

Re: Re: Re:2 Re:

I give you some points for that. Everything is so messed up today in the news industry. Who has the best unbiased, unslanted scoop today? What news corp actually keeps politics at bay since the centralization of the news in the 90s? It is extremely agitating not being able to know what is truly happening when it all seems so divided.

Anonymous Coward says:

Too bad there’s no way to give the control directly to the user. You know, give us all little applications that filter out what WE, personally do not like – on our own machines – regardless of what is on and not touching anything that is on the website at any time. Like, you know. We could like say No Nazis, No Titties, No, etc., etc. in a personal Censor Assistant File that appends with a click to grab and record the "type", or "flavour" of stuff you want to prevent ever being displayed on your machine again, ever. A text based config file, so you can quickly add URLs you don’t like manually. Or we could add a YN to tell the app to ask you first.

Joe the Nazi gets his daily racist reinforcement propaganda, as desired, and Joanie the housewife gets nothing but romance/sexless – from the same, un-moderated website.

In this way, no moderation is necessary… or did I miss something?

🙂
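The commenter's idea is concrete enough to sketch. Below is a minimal, hypothetical illustration of such a personal client-side filter: a plain-text blocklist (one term or URL per line, `#` for comments) and a function that decides whether to hide a post. The config format and the function names are invented for this sketch, not any existing tool.

```python
# Hypothetical sketch of a personal, client-side content filter.
# Config format (invented): one blocked term or URL per line,
# blank lines and lines starting with '#' are ignored.

def load_config(lines):
    """Parse the blocklist into separate sets of terms and URLs."""
    blocked_terms, blocked_urls = set(), set()
    for line in lines:
        entry = line.strip()
        if not entry or entry.startswith("#"):
            continue
        if entry.startswith(("http://", "https://")):
            blocked_urls.add(entry)
        else:
            blocked_terms.add(entry.lower())
    return blocked_terms, blocked_urls

def should_hide(text, url, blocked_terms, blocked_urls):
    """Hide a post if it comes from a blocked URL or mentions a blocked term."""
    if url in blocked_urls:
        return True
    lowered = text.lower()
    return any(term in lowered for term in blocked_terms)

config = ["# my personal blocklist", "nazis", "https://example.com/feed"]
terms, urls = load_config(config)
print(should_hide("No Nazis allowed here", "https://other.example", terms, urls))  # True
```

The sketch also hints at the catch discussed in the replies below in the thread: a bare substring match cannot tell whether the blocked term appears approvingly, critically, or historically.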

Stephen T. Stone (profile) says:

Re:

No Nazis

Good luck reading about World War II or the Wolfenstein video game series, then. Computer filters lack the ability to discern context; a mention of a Nazi on a Wikipedia article is the same as a mention of a Nazi on a White supremacist forum. Hell, this comment itself would probably cause this page to be filtered under a “no Nazis” rule.

You can’t set up filters like the ones you’re suggesting without doing one of two things: filtering a bunch of content you didn’t mean to filter, or spending more time than you’d like to set up those filters in ways that avoid “collateral damage”.
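The context-blindness point is easy to demonstrate with a toy example (this is an illustration of the failure mode, not any platform's actual filter): a naive substring rule treats an encyclopedic sentence and the content it was actually meant to catch identically.

```python
# A naive "no Nazis" substring filter, applied to two very different texts.
def naive_filter(text, banned=("nazi",)):
    """Return True if any banned substring appears, regardless of context."""
    lowered = text.lower()
    return any(word in lowered for word in banned)

history = "The Allies defeated Nazi Germany in 1945."   # encyclopedic context
propaganda = "Join our Nazi forum today."               # the intended target

print(naive_filter(history))     # True -- blocked, though it's a history sentence
print(naive_filter(propaganda))  # True -- blocked as intended
```

Both inputs are blocked, which is exactly the collateral damage described above: the filter matches the word, not the meaning.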

Anonymous Coward says:

Re: Re:

You know, give us all little applications that filter out what WE, personally do not like – on our own machines

That would work with word filters only, but filtering images requires more computing power and data storage than most people can afford. Besides which, that involves people storing that which they find offensive, so that they can hide it from themselves.
