Content Moderation Case Study: Lyft Blocks Users From Using Their Real Names To Sign Up (2019)
from the scunthorpe-again? dept
Summary: Users attempting to sign up for a ride-sharing service ran into a problem dating back to the earliest days of content moderation. The “Scunthorpe problem” takes its name from a 1996 incident in which AOL refused to let residents of Scunthorpe, England register accounts with the online service. The service’s blocklist of “offensive” words matched four of the first five letters of the town’s name and served up a blanket ban to its residents.
Flash forward twenty-three years and services still aren’t much closer to solving this problem.
Users attempting to sign up for Lyft found themselves booted from the service for “violating community guidelines” simply for attempting to create accounts using their real names. Some of the users affected were Nicole Cumming, Cara Dick, Dick DeBartolo, and Candace Poon.
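The failure mode here can be sketched in a few lines: a naive filter checks whether any blocklisted term appears as a substring of the name, with no awareness of word boundaries or legitimate surnames. The blocklist entries below are illustrative placeholders, not Lyft’s actual list:

```python
# Illustrative sketch of a naive substring blocklist (the classic
# "Scunthorpe problem"). Entries here are placeholders, not Lyft's list.
BLOCKLIST = {"dick", "poon", "cum"}

def is_blocked(name: str) -> bool:
    # Substring matching ignores word boundaries, so ordinary
    # surnames containing a listed term are rejected outright.
    lowered = name.lower()
    return any(term in lowered for term in BLOCKLIST)

print(is_blocked("Dick DeBartolo"))  # True: a real name, falsely flagged
print(is_blocked("Candace Poon"))    # True: a real name, falsely flagged
print(is_blocked("Jane Smith"))      # False
```

Even switching to whole-word matching would not fully solve the problem, since names like “Dick” are themselves complete words; that is why human review is often proposed as a backstop.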
These users were asked to “update their names,” as though that were even possible with a service that ties names to payment systems and to internal efforts to ensure driver and passenger safety.
Decisions to be made by Lyft:
- Should names triggering Community Guidelines violations be reviewed by human moderators, rather than automatically rejected?
- Is the cross-verification process enough to deter pranksters and trolls from activating accounts with actually offensive names?
Questions and policy implications to consider:
- Considering the identification system is backstopped by credit cards and payment services that require real names, does deploying a blocklist actually serve any useful purpose?
- Given that potential users are likely to abandon a service that generates too much friction at sign up, does a blocklist like this do damage to company growth?
- Does global growth create a larger problem by adding other languages and possible names that will trigger rejections of more potential users? Can this be mitigated by backstopping more automatic processes with human moderators?
Resolution: The users affected by Lyft’s blocklist were reinstated. Lyft apologized for the rejections, pointing a finger at automated moderation efforts designed to keep people from creating offensive content using nothing more than the First Name/Last Name fields.
Unfortunately, the problem still hasn’t been solved. Candace Poon — whose first attempt to sign up for Lyft was rejected — recently ran into the same issue while attempting to create an account on the new social media platform Clubhouse.
Originally posted to the Trust & Safety Foundation website.