Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Pretty Much Every Platform Overreacts To Content Removal Stimuli (2015)

from the overblocking-is-the-norm dept

Summary: Content moderation at scale often involves significant tradeoffs between diverse interests. It is often difficult for those without experience in the field to recognize these competing interests.

Social media services aren't just beholden to their users. They're also at the relative mercy of dozens of competing interests at all times.

Users expect one thing. A bunch of governments expect another. Internal policies and guidelines result in another layer of moderation. Then there are the relatively straightforward obligations platforms must fulfill to retain their safe harbors under the DMCA.

So what happens when all of these competing interests collide? Well, according to multiple studies, the most common side effect is over-moderation: the deletion of content that's not in violation of anything, just in case.

For the past half-decade, Stanford Law School's Daphne Keller has been tracking platforms' responses to external stimuli: the pressures applied by outside interests that -- for good or evil -- want social media services to expand their moderation efforts.

And for most of that half-decade, Keller has seen "good faith" efforts expand past the immediate demands to encompass preemptive removal of content that has yet to offend any one of the hundreds of stakeholders applying legal pressure to US-based tech companies.

The research shows large companies are just as preemptively compliant as smaller companies, even though smaller companies have much more at risk.

The easiest, cheapest, and most risk-avoidant path for any technical intermediary is simply to process a removal request and not question its validity. A company that takes an “if in doubt, take it down” approach to requests may simply be a rational economic actor. Small companies without the budget to hire lawyers, or those operating in legal systems with unclear protections, may be particularly likely to take this route.

Multiple studies are cited, and they appear to reach the same conclusion, whether it involves a platform with millions of users or a small group catering to a niche audience: when in doubt, take it out.

Decisions to be made by platforms:

  • Should a premium be placed on protecting user content in the face of vague takedown demands?
  • Does protecting users from questionable takedown demands result in anything more quantifiable than "goodwill"?
  • Are efforts being made to fight back against mistargeted or unlawful content removal requests? Is the expense/liability exposure too costly to justify defending users against unlawful demands from outside entities?

Questions and policy implications to consider:

  • Do platforms ultimately serve their users' interests or the more powerful interests applying pressure from the outside?
  • Is staying alive to "fight another day" ultimately of more use to platform users than taking a stand that might result in being permanently shut down? 
  • Is it wise to attempt to satisfy all stakeholders in content moderation issues? Should platforms choose a side (users v. outside complainants) or is it wiser to "play the middle" as much as possible?
  • Are there tangible advantages to deciding users are more important than outside entities who may have the power to dismantle services specializing in third-party content?

Resolution: The war between users and outside interests continues. As pressure mounts to moderate more and more content, users are often those who feel the squeeze first. The larger the platform, the higher the demands. But larger platforms are more capable of absorbing the costs of compliance. Smaller ecosystems need more protection but are often incapable of obtaining the funds needed to fight legal battles on behalf of their users.

True balance is impossible to achieve, as this research shows. Unfortunately, it appears preemptive removal of content remains the most cost-effective way of satisfying competing moderation demands, even if it ultimately results in some loss to platforms' user bases.

Originally posted to the Trust & Safety Foundation website.


Filed Under: content moderation, daphne keller, governments, overblocking
