As Facebook Agrees To Pay $52 Million In PTSD Payments To Moderators, Why Are Some Demanding More Human Moderators?
from the exposing-humans-to-cruelty-to-protect-humans-from-cruelty dept
There’s been plenty of talk these days about content moderation, and how different platforms should moderate their content, but much less attention has been paid to the people who do the actual moderation. Last year, we highlighted an amazing story by Casey Newton at The Verge detailing just what a horrible job it is for many people — in which they are constantly bombarded with the worst of the internet. Indeed, some of his reporting helped spur on a labor dispute that just ended (reported, again, by Newton) with Facebook agreeing to pay out $52 million to content moderators who developed mental health issues on the job.
Each moderator will receive a minimum of $1,000 and will be eligible for additional compensation if they are diagnosed with post-traumatic stress disorder or related conditions. The settlement covers 11,250 moderators, and lawyers in the case believe that as many as half of them may be eligible for extra pay related to mental health issues associated with their time working for Facebook, including depression and addiction.
I’ve seen many people cheer on this decision, and I agree that it’s a good thing that those moderators are getting some compensation for the horrors and anguish they’ve experienced. But I’m somewhat perplexed that some of those cheering on this settlement are the same people who have regularly demanded that Facebook do much more content moderation. It would seem that if they got their wish, that would mean subjecting many more people to these kinds of traumas and stresses — and we should all be concerned about that.
Incredibly, as Newton’s story points out, the content moderator who brought the lawsuit was hired after the 2016 election, when the very same crew of Facebook critics demanded that the company ramp up its content moderation to deal with their belief that misinformation on Facebook helped elect President Trump:
In September 2018, former Facebook moderator Selena Scola sued Facebook, alleging that she developed PTSD after being placed in a role that required her to regularly view photos and videos of rape, murder, and suicide. Scola developed symptoms of PTSD after nine months on the job. The complaint, which was ultimately joined by several other former Facebook moderators working in four states, alleged that Facebook had failed to provide them with a safe workspace.
Scola was part of a wave of moderators hired in the aftermath of the 2016 US presidential election when Facebook was criticized for failing to remove harmful content from the platform.
And yet, now the same people who pushed for Facebook to do more moderation seem to be the ones most vocally cheering on this result and continuing to attack Facebook.
Again, there are many, many reasons to criticize Facebook. And there are lots of reasons to at least be glad that the company is forced to pay something (if probably not nearly enough) for the moderators whose lives and mental health have been turned upside down by what they had to go through. But I’m trying to square that with the fact that it’s the very same people who pushed Facebook to ramp up its moderation who are now happy with this result. It’s because of those “activists” that Facebook was pressured into hiring these people and, often, putting them in harm’s way.
Content moderation creates all sorts of downstream impacts, and many people seem to want to attack companies like Facebook both for not employing enough content moderators and for the mental harms that content moderation creates. And I’m not sure how anyone can square those two views, beyond just thinking that Facebook should do the impossible and moderate perfectly without involving any humans in the process (and, of course, I’ll note that many of these same people laugh off the idea that AI can do this job, because they’re right that it can’t).
I don’t know what the right answer here is — because the reality is that there is no good answer. Facebook doing nothing is simply not an option, because then the platform is filled with garbage, spam, and nonsense. AI simply isn’t qualified to do that good of a job. And human moderators have very real human consequences. It would be nice if people could discuss these issues with the humility and recognition that there are no good answers and every option has significant tradeoffs. But it seems that most just want to blame Facebook, no matter what the choices or the outcomes. And that’s not very productive.