Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs that result. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: GitHub Attempts To Moderate Banned Words Contained In Hosted Repositories (2015)

from the word-filters dept

Summary: GitHub solidified its position as the world's foremost host of open source software not long after its formation in 2008. Twelve years after its founding, GitHub is host to 190 million repositories and 40 million users.

Even though its third-party content is software code, GitHub still polices that content for violations of its terms of service. Some violations, like possible copyright infringement, are overt. Others are tougher to track down.

A GitHub user found themselves targeted by a GitHub demand to remove certain comments from their code. The user's code contained the word "retard" -- a term that, while offensive in certain contexts, isn't offensive when used as a verb to describe an intentional delay in progress or development. But rather than inform the user of this violation, GitHub removed the entire repository, causing users who had forked the code to lose access to their repositories as well.

It wasn't until the user demanded an explanation that GitHub finally provided one. In an email sent to the user, GitHub said the code contained content the site viewed as "unlawful, offensive, threatening, libelous, defamatory, pornographic, obscene, or otherwise objectionable." More specifically, GitHub told the user to remove the words "retard" and "retarded," restoring the repository for 24 hours to allow this change to be made.

Decisions for GitHub:

  • Is the blanket banning of certain words a wise decision, considering the idiosyncratic language of coding (and coders)?
  • Should GitHub account for downstream repositories that may be negatively affected by removal of the original code when making content moderation decisions, and how?
  • Could banned words inside code comments be moderated by only removing the comments, which would avoid impacting the functionality of the code?
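The last option above, flagging banned words only when they appear in comments while leaving the functional code untouched, can be sketched with a tokenizer. This is a minimal illustration for Python source using the standard library's tokenize module; the wordlist is hypothetical (GitHub has not published one), and a real scanner would need per-language parsers:

```python
import io
import tokenize

# Hypothetical wordlist -- GitHub's actual banned-word list is not public.
BANNED = {"retard", "retarded"}

def flag_banned_comments(source: str):
    """Return (line_number, comment_text) pairs where a banned word
    appears inside a comment token. Identifiers and string literals
    are ignored, so the code's functionality is never implicated."""
    hits = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.COMMENT:
            words = {w.strip(".,!?").lower()
                     for w in tok.string.lstrip("#").split()}
            if words & BANNED:
                hits.append((tok.start[0], tok.string))
    return hits

code = (
    "def retard_signal(s):  # deliberately retard the signal\n"
    "    return s * 0.5\n"
)
# Only the comment is flagged; the identifier 'retard_signal' is untouched.
print(flag_banned_comments(code))
```

A moderator could then ask the user to edit just the flagged comment lines, rather than pulling the whole repository and every fork downstream of it.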

Questions and policy implications to consider:
  • Is context considered when moderating possible terms of service violations?
  • Is it possible to police speech effectively when the content hosted isn't what's normally considered speech?
  • Does proactive moderation of certain terms deter users from deploying code designed to offend?

Resolution: The user's repository was ultimately restored after the offending terms were removed, as were the downstream repositories that relied on the code GitHub had deemed too offensive to host unaltered.

Unfortunately for GitHub, this drew attention to its less-than-consistent approach to terms of service violations. Searches for words considered "offensive" by GitHub turned up dozens of other potential violations -- none of which appeared to have been targeted for removal, despite containing far more offensive terms, code, and comments.

And the original offending code was modified with a tweak that substituted the word "git" for the word "retard" -- "git" being a roughly interchangeable insult in British English. The not-so-subtle dig at GitHub and its inability to detect nuance may have pushed the platform towards reinstating content it had perhaps pulled too hastily.

Originally posted on the Trust & Safety Foundation website.

Filed Under: code, content moderation, repositories
Companies: github

Reader Comments

    Stephen T. Stone (profile), 3 Feb 2021 @ 3:58pm

    A GitHub user found themself targeted by a GitHub demand to remove certain comments from their code. The user's code contained … a term that, while offensive in certain contexts, isn't offensive when used as a verb to describe an intentional delay in progress or development.

    Being familiar with the history of that specific program, I can say this: They were decidedly not using that word in the “inoffensive” way.
