There Is No Magic Bullet For Moderating A Social Media Platform
from the it's-not-so-easy dept
It’s kind of incredible how frequently we see people who seem to think that social media platforms are so bad at moderating content because they just don’t care or don’t try hard enough. While it is true that these platforms can absolutely do a much better job (which we believe often involves giving end users more moderation tools themselves), it’s still amazing how many people think that deciding what content “belongs” and what content doesn’t is somehow easy. Earlier this month, Washington DC hosted the Content Moderation at Scale (“COMO”) conference, a one-day event in which a bunch of companies revealed (sometimes for the first time) how they go about handling questions around content moderation. It was a follow-up to a similar event at Santa Clara University held back in February (for which we published a bunch of the papers that came out of the event).
For the DC event, we teamed up with the Center for Democracy and Technology to produce a live game for everyone at the event to play, turning them all into a trust & safety team tasked with responding to “reported” content on a fictional social media platform. Emma Llanso from CDT and I ran the hour-long session, which included discussions of why people made the choices they did. The video of our session has now been posted, helpfully edited to remove the “think it over/discuss amongst yourselves” portions of the process:
Obviously, many of the examples we chose were designed to be challenging (many were based on real situations). But the process was useful and instructive. For each question, there were four potential actions the “trust & safety” team could take, and on every single example, at least one person chose each of the four options. In other words, even when there was pretty strong agreement on the course of action to take, there was still at least some disagreement.
Now, imagine (1) having to do that at scale, with hundreds, thousands, hundreds of thousands, or even millions of pieces of “flagged” content showing up; (2) having to do it when you’re not someone so interested in content moderation that you spent an entire day at a content moderation summit; and (3) having to do it quickly, where there are trade-offs and consequences to each choice, including possible legal liability, and where, no matter which option you choose, someone (or perhaps lots of someones) is going to get very upset.
Again, this is not to say that internet platforms shouldn’t strive to do better; they should. But one of the great things about attending both of these events is that they demonstrated how each internet platform is experimenting in very, very different ways with how to tackle these problems. Google and Facebook are trying to throw a combination of lots and lots of people plus artificial intelligence at the problem. Wikipedia and Reddit are trying to leverage their own communities to deal with these issues. Smaller platforms are taking different approaches; some are much more proactive, others reactive. And out of all that experimentation, even if mistakes are being made, we’re finally starting to get some ideas about what works for this community or that community (and remember, not all communities work the same way).
As I mentioned at the event, we’re looking to do a lot more with this concept of getting people to understand the deeper questions involved in the trade-offs around moderating content. Setting it up as something of a game made it both fun and educational, and we’d love some feedback as we look to do more with the concept.