Content Moderation Case Study: Google's Photo App Tags Photos Of Black People As 'Gorillas' (2015)
from the automation-isn't-the-answer dept
Summary: In May 2015, Google rolled out Google Photos, a service that let users store their images in Google’s cloud and share them with other users. Unlike some competing services, it offered unlimited storage for photos under a certain resolution, making it an attractive replacement for paid alternatives.
Unfortunately, it soon became apparent that the rollout had outpaced internal quality control. The built-in auto-tagging system, powered by Google’s image-recognition AI, began tagging photos of Black people as “gorillas,” prompting backlash from users and critics who believed Google’s algorithm was racist.
Google’s immediate response was to apologize. The company directly contacted the Twitter user who first noticed the tagging error and began tackling a problem that had made it out of beta unnoticed. Google’s Yonatan Zunger pointed out the shortcomings of AI auto-tagging, noting the company’s previous problems with mis-tagging people of all races as dogs, as well as its struggles with less-than-ideal lighting and low-resolution images. Google’s misstep mirrored Flickr’s own struggles with auto-tagging, which had similarly resulted in Black people being labeled “ape” or “animal.”
Decisions to be made by Google:
- Would more diversity in product development/testing teams increase the chance issues like this might be caught before services go live?
- Can additional steps be taken to limit human biases from negatively affecting the auto-tag AI?
- Should more rigorous testing be performed in the future, given the known issues with algorithmic photo tagging?
Questions and policy implications to consider:
- Does seemingly inconsequential moderation like this still demand some oversight by human moderators?
- Will AI ever be able to surmount the inherent biases fed into it by those designing and training it?
Resolution: As of 2018, Google had still not completely eliminated the problem. Instead, it removed the problematic tags themselves, so that terms like “gorilla,” “chimp,” “chimpanzee,” and “monkey” were no longer auto-applied. An investigation by Wired showed that searches of Google Photos returned zero results for these terms. Google said it was working on “longer-term fixes” but gave no date for when those fixes would arrive, acknowledging that the terms would remain blocked until the problem was solved.
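The behavior Wired observed is consistent with a simple label blocklist applied after classification, rather than a fix to the underlying model. A minimal illustrative sketch of that approach (all names here are hypothetical, not Google’s actual code or API):

```python
# Hypothetical sketch: suppress a blocklist of sensitive labels from a
# classifier's output before tags are displayed or indexed for search.
# BLOCKED_LABELS and filter_labels are illustrative names only.

BLOCKED_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}

def filter_labels(predicted_labels):
    """Drop any blocked label from the classifier's predictions."""
    return [label for label in predicted_labels
            if label.lower() not in BLOCKED_LABELS]

# Example: the classifier's raw output may still contain a blocked term,
# but it is filtered out before the user ever sees it.
print(filter_labels(["Gorilla", "dog", "tree"]))  # → ['dog', 'tree']
```

The trade-off of this design is visible in the Wired findings: legitimate photos of actual gorillas also become unsearchable, because the filter operates on the label, not on whether the classification was correct.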
Originally published at the Trust & Safety Foundation website.