Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs involved. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Newsletter Platform Substack Lets Users Make Most Of The Moderation Calls (2020)

from the newsletter-moderation dept

Summary: Substack launched in 2018, offering writers a place to engage in independent journalism and commentary. Looking to fill a perceived void in newsletter services, Substack gave writers an easy-to-use platform they could monetize through subscriptions and pageviews.

As Substack began to attract popular writers, concerns over published content began to increase. The perception was that Substack attracted an inordinate number of creators who had either been de-platformed elsewhere or embraced views not welcome on other platforms. High-profile writers who found themselves jobless after crafting controversial content appeared to gravitate to Substack (including big names like Glenn Greenwald of The Intercept and The Atlantic’s Andrew Sullivan), giving the platform the appearance of endorsing those views by providing a home for writers unwelcome pretty much everywhere else.

A few months before the current controversy over Substack’s content reached critical mass, the platform attempted to address questions about content moderation with a blog post that said most content decisions could be made by readers, rather than Substack itself. Its blog post made it clear users were in charge at all times: readers had no obligation to subscribe to content they didn’t like and writers were free to leave at any time if they disagreed with Substack’s decisions.

But even then, the platform’s moderation policies weren’t completely hands off. As its post pointed out, the platform would take its own steps to remove spam, porn, doxxing, and harassment. Of course, the counterargument raised was that Substack’s embrace of controversial contributors provided a home for people who’d engaged in harassment on other platforms (and who were often no longer welcome there).

Decisions to be made by Substack:

  • Does offloading moderation to users increase the amount of potentially-objectionable content hosted by Substack?
  • Does this form of moderation give Substack the appearance it approves of controversial content contributed by others?
  • Is the company prepared to take a more hands-on approach if the amount of objectionable content hosted by Substack increases? 

Questions and policy implications to consider:

  • Does a policy that relies heavily on users and writers for enforcement allow those users and contributors to shape Substack’s “identity”?
  • Does limiting moderation by Substack attract the sort of contributors Substack desires to host and/or believes will make it more profitable?
  • Does the sharing of content off-platform undermine Substack’s belief that readers have complete control over the kind of content they’re seeing?

Resolution: The controversy surrounding Substack’s roster of writers continued to grow, along with calls for the platform to do more to moderate hosted content. Substack’s response was to reiterate its embrace of “free press and free expression,” but it also offered a few additional moderation tweaks that were not present in its policies when the platform first received increased attention late last year.

Most significantly, it announced it would not allow “hate speech” on its platform, although its definition was narrower than the policies found on other social media services. Attacks on people based on race, ethnicity, religion, gender, etc. would not be permitted. However, Substack would continue to host attacks on “ideas, ideologies, organizations, or individuals for other reasons, even if those attacks are cruel and unfair.”

Originally posted to the Trust & Safety Foundation website.

Companies: substack

Comments on “Content Moderation Case Study: Newsletter Platform Substack Lets Users Make Most Of The Moderation Calls (2020)”

bobob says:

Perhaps when something doesn’t scale well, the solution is to not expect to do it at scale. Instead of having facebook or twitter which attempts to cater to everyone for everything, rather than argue over whether or how to moderate it, smaller platforms that cater to specific interests would be a better solution. It’s easier for the people running the platforms to not lose control of their platforms or try do it with buggy algorithms and/or burned out contractors.
