Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they result in. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Twitter Briefly Restricts Account Of Writer Reporting From The West Bank (2021)

from the mistakes-were-made dept

Summary: In early May 2021, writer and researcher Mariam Barghouti was reporting from the West Bank on escalating conflicts between Israeli forces and Palestinian protestors, and making frequent social media posts about her experiences and the events she witnessed. Amidst a series of tweets from the scene of a protest, shortly after one in which she stated “I feel like I’m in a war zone,” Barghouti’s account was temporarily restricted by Twitter. She was unable to post new tweets, and her bio and several of her recent tweets were replaced with a notice stating that the account was “temporarily unavailable because it violates the Twitter Media Policy”.

The incident was highlighted by other writers, some of whom noted that the nature of the restriction seemed unusual, and the incident quickly gained widespread attention. Fellow writer and researcher Joey Ayoub tweeted that Barghouti had told him the restriction would last for 12 hours according to Twitter, and expressed concern for her safety without access to a primary communication channel in a dangerous situation.

The restriction was lifted roughly an hour later. Twitter told Barghouti (and later re-stated to VICE’s Motherboard) that the enforcement action was a “mistake” and that there was “no violation” of the social media platform’s policies. Motherboard also asked Twitter to clarify which specific policies were initially believed to have been violated, but says the company “repeatedly refused”.

Company Considerations:

  • In cases where enforcement actions are taken involving sensitive news reporting content, how can the reasons for enforcement be better communicated to both the public and the reporters themselves?
  • How can the platform identify cases like these and apply additional scrutiny to prevent erroneous enforcement actions?
  • What alternatives to account suspensions and the removal of content could be employed to reduce the impact of errors?
  • How can enforcement actions be applied with consideration for journalists’ safety in situations involving the live reporting of dangerous events?

Issue Considerations:

  • With so much important news content, especially live reporting, flowing through social media platforms, what can be done to prevent policy enforcement (erroneous or otherwise) from unduly impacting the flow of vital information?
  • Since high-profile enforcement and reversal decisions by platforms are often influenced by widespread public attention and pressure, how can less prominent reporters and other content creators protect themselves?

Resolution: Though the account restriction was quickly reversed by Twitter, many observers did not accept the company’s explanation that it was an error, instead saying the incident was part of a broader pattern of social media platforms censoring Palestinians. Barghouti said:

“I think if I was not someone with visibility on social media, that this would not have garnered the attention it did. The issue isn’t the suspension of my account, rather the consideration that Palestinian accounts have been censored generally but especially these past few weeks as we try to document Israeli aggressions on the ground.”

Companies: twitter


Comments on “Content Moderation Case Study: Twitter Briefly Restricts Account Of Writer Reporting From The West Bank (2021)”

That Anonymous Coward (profile) says:

" asked Twitter to clarify which specific policies were initially believed to have been violated, but says the company “repeatedly refused”. "

Because never explaining how the screw-up came into being will make sure the bad guys can’t learn from it…
Or they had no justification beyond well maybe this is bad & it’s better to nuke them from orbit & ‘apologize’ later.

One does wonder how many of these decisions could pass the ‘teddy bear test’.
You make them explain the situation to a teddy bear like it was another person so they can hear what they are saying & thinking. Often they correct their issue on their own without needing to involve others.
Instead wide swaths get screwed till someone makes the right noise that gets attention & suddenly it was an oopsie & we have nothing further to say on the subject.

migi says:

You’d think if it was an algorithm error they’d just blame the algorithm, but instead they say that the enforcement action was a mistake. So by implication the original enforcement was done by a human moderator.

So did the human click the wrong account to suspend, did they make a bad judgement call, or was it a pattern of behaviour targeting certain types of people?
If the community believes there is a pattern of censoring Palestinians, is it a cultural problem among a segment of the moderation team or the whole team?

How can the platform identify cases like these and apply additional scrutiny to prevent erroneous enforcement actions?

This depends on whether it was an automated moderation error or a human enforcement error.
A very simple dividing line would be to have any automated moderation action reviewed by a human if it applies to a verified account.
If the issue is human error or malice, then you could have more than one person review the action. However, that would double the number of reviewers you need, which would be hard to justify if the scale of the problem is small.
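A minimal sketch of the dividing line described above, in Python. All of the names here (EnforcementAction, reviewers_required, and so on) are hypothetical illustrations, not part of any real Twitter or moderation API:

```python
from dataclasses import dataclass

@dataclass
class EnforcementAction:
    account_id: str
    source: str            # "automated" or "human"
    account_verified: bool

def reviewers_required(action: EnforcementAction) -> int:
    """How many human reviewers should confirm this action before it
    takes effect, under the dividing line sketched in the comment above."""
    if action.source == "automated" and action.account_verified:
        # Automated actions against verified accounts always get a human check.
        return 1
    if action.source == "human":
        # Human-initiated actions get a second reviewer, at the cost of
        # roughly doubling reviewer load.
        return 2
    # Everything else proceeds without extra review in this sketch.
    return 0

# Example: an automated restriction on a verified reporter's account
print(reviewers_required(EnforcementAction("example_account", "automated", True)))  # -> 1
```

Under these assumptions, the expensive double-review path is only triggered for human-initiated actions, so the added reviewer load stays proportional to how often humans act directly rather than to the total volume of automated flags.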
