Content Moderation Is Impossible: You Can't Expect Moderators To Understand Satire Or Irony

from the just-doesn't-work-that-way dept

The latest in our never-ending series of posts on why content moderation at scale is impossible to do well involves Twitter claiming that a tweet from the account @TheTweetOfGod somehow violates its policies:

If you're unfamiliar with that particular Twitter account, it's a popular account that tweets pithy statements from "God" in an attempt at ironic, satirical humor. I've found it misses a lot more than it hits, but that's just my personal opinion. Apparently, Twitter's content moderation elves had a problem with the tweet above, and it's not hard to see why. Twitter's rules prohibit mocking certain classes of people -- including making fun of people for their sexual orientation, which falls under its "hateful conduct" policy. It's not difficult to see how a random content moderation employee would skim a tweet like the one flagged above, miss the context and the fact that it's an attempt at satire, and flag it as a problem.

Thankfully, in this case, Twitter did correct it upon appeal. But it's just another reminder of how many things trip up content moderators -- especially when they have to moderate a huge amount of content -- and satire and irony are among the categories that trip them up most often.

Filed Under: content moderation, god, irony, satire, tweet of god
Companies: twitter


Reader Comments



  1. This comment has been flagged by the community.
    Anonymous Coward, 14 Jun 2019 @ 12:55am

    Re: Re: CDA 230?

    It was an implied understanding that tech companies would not abuse 230 protection as they have. They use it as a sword, not a shield. RipoffReport.com is a good example.

    UGC would still exist without 230 as it does in other countries, due to the notice requirement similar to the DMCA.

    Moderation wouldn't be necessary if people would grow up and start filtering stuff themselves -- though this would reveal that it's not that they don't want to read what they can easily avoid; they just don't want OTHERS reading it.

    Sites which spread anti-vaxxer information should be held liable for measles outbreaks but aren't. Sites which allow online mobs to spill over into real violence should also be liable. Sites which allow people to be harassed as well. 230 enables all the horrible things we see online, plus it means you can't trust what you read, including advertising.

