YouTube Warns That, Thanks To Covid-19, It's Handing Over More Content Moderation To The Machines And They Might Suck

from the be-forewarned dept

Content moderation at scale is impossible to do well in the best of times, but the solutions that seem to at least keep it from devolving into a total mess almost always use a combination of humans and technology working together. But what do you do when the humans are sick, self-isolating, quarantined, etc.? While I imagine some may be able to work from home, it's a difficult time to expect anyone to be at full productivity. So YouTube has made it clear that it's turning over more content moderation decisions to the machines, knowing full well that some of those decisions are going to be bad:

Our Community Guidelines enforcement today is based on a combination of people and technology: Machine learning helps detect potentially harmful content and then sends it to human reviewers for assessment. As a result of the new measures we’re taking, we will temporarily start relying more on technology to help with some of the work normally done by reviewers. This means automated systems will start removing some content without human review, so we can continue to act quickly to remove violative content and protect our ecosystem, while we have workplace protections in place.

As we do this, users and creators may see increased video removals, including some videos that may not violate policies. We won’t issue strikes on this content except in cases where we have high confidence that it’s violative. If creators think that their content was removed in error, they can appeal the decision and our teams will take a look. However, note that our workforce precautions will also result in delayed appeal reviews. We’ll also be more cautious about what content gets promoted, including livestreams. In some cases, unreviewed content may not be available via search, on the homepage, or in recommendations.
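The policy YouTube describes boils down to a confidence-based decision rule: flagged content can now be removed without a human in the loop, but a strike is only issued when the system is highly confident the content is violative, and removals remain appealable. As a purely illustrative sketch (the thresholds, field names, and function below are hypothetical, not YouTube's actual system), that logic might look like:

```python
# Illustrative sketch of the reduced-review moderation policy described
# above. All thresholds and names are hypothetical assumptions.
from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.7   # hypothetical: remove without human review
STRIKE_THRESHOLD = 0.95       # hypothetical: "high confidence" => strike

@dataclass
class Decision:
    removed: bool
    strike: bool
    appealable: bool

def moderate(violation_score: float) -> Decision:
    """Map a classifier's violation score to an action."""
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return Decision(
            removed=True,
            # Strikes only when the model is highly confident.
            strike=violation_score >= STRIKE_THRESHOLD,
            # Creators can still appeal, though reviews may be delayed.
            appealable=True,
        )
    return Decision(removed=False, strike=False, appealable=False)

moderate(0.8)   # removed, but no strike
moderate(0.97)  # removed, with a strike
```

The point of the sketch is the asymmetry: the removal threshold drops (more false positives, as the post predicts) while the strike threshold stays high, so erroneous removals sting less.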

And, of course, this is absolutely the right choice to make -- indeed, it's the only choice to make given the circumstances. But it's yet another reminder of how impossible and fragile the system is when people demand that humans review everything being posted to social media. Either way, don't be surprised to hear many more stories of bad content moderation decisions not just on YouTube but elsewhere in the coming weeks. Of course, I still imagine people will scream and yell and take it personally, but at least recognize that some of the issue may be that the humans are all kinda preoccupied with more important things right now.

Filed Under: ai, content moderation, content moderation at scale, humans, youtube
Companies: google, youtube

Reader Comments

    Stephen T. Stone, 18 Mar 2020 @ 10:45am

    The First Amendment stops the government from interfering with your speech. It says nothing about private platforms being obligated to host your speech — or about you being entitled to an audience.

    Besides, we already have platforms that don’t ban/punish bigoted speech in general. We call them “4chan and its various knockoffs”.
