Parler's CEO Promises That When It Comes Back… It'll Moderate Content… With An Algorithm
from the are-you-guys-serious? dept
Parler, Parler, Parler, Parler. Back in June of last year when Parler was getting lots of attention for being the new kid on the social media scene with a weird (and legally nonsensical) claim that it would only moderate “based on the 1st Amendment and the FCC” we noted just how absolutely naive this was, and how the company would have to moderate and would also have to face the same kinds of impossible content moderation choices that every other website eventually faces. In fact, we noted that the company (in part due to its influx of users) was seemingly speedrunning the content moderation learning curve.
Lots of idealistic, but incredibly naive, website founders jump into the scene and insist that, in the name of free speech they won’t moderate anything. But every one of them quickly learns that’s impossible. Sometimes that’s because the law requires you to moderate certain content. More often, it’s because you recognize that without any moderation, your website becomes unusable. It fills up with garbage, spam, harassment, abuse and more. And when that happens, it becomes unusable by normal people, drives away many, many users, and certainly drives away any potential advertisers. And, finally, in such an unusable state it may drive away vendors — like your hosting company that doesn’t want to deal with you any more.
And, as we noted, Parler’s claims not to moderate were always a part of the big lie. The company absolutely moderated, and the CEO even bragged to a reporter about banning “leftist trolls.” The whole “we’re the free speech platform” was little more than a marketing ploy to attract trolls and assholes, with a side helping of “we don’t want to invest in content moderation” like every other site has to.
Of course, as the details have come out in the Amazon suit, the company did do some moderation. Just slowly and badly. Last week, the company admitted that it had taken down posts from wacky lawyer L. Lin Wood in which he called for VP Mike Pence to face “firing squads.”
Amazon showed, quite clearly, that it gave Parler time to set up a real content moderation program, but the company blew it off. But now, recognizing it has to do something, Parler continues to completely reinvent all the mistakes of every social media platform that has come before it. Parler’s CEO, John Matze, is now saying it will come back with “algorithmic” content moderation. This was in an interview done on Fox News, of course.
“We’re going to be doing things a bit differently. The platform will be free speech first, and we will abide by and we will be promoting free speech, but we will be taking more algorithmic approaches to content but doing it to respect people’s privacy, too. We want people to have privacy and free speech, so we don’t want to track people. We don’t want to use their history and things of that nature to predict possible violations, but we will be having algorithms look at all the content … to try and predict whether it’s a terms-of-service violation so we can adjust quicker and the most egregious things can get taken down,” Matze said. “So calls for violence, incitements, things of that nature, can be taken down immediately.”
This is… mostly word salad. The moderation issue and the privacy question are separate. So is the free speech issue. Just because people have free speech rights, it doesn’t mean that Parler (or anyone) has to assist them.
Also, Matze is about to learn (as every other company has) that algorithms can help a bit, but really won’t be of much help in the long run. Companies with much more resources, including Google and Facebook, have thrown algorithmic approaches to content moderation at their various platforms, and they are far from perfect. Parler will be starting from a much weaker position, and will almost certainly find that the algorithm doesn’t actually replace a true trust and safety program like most companies have.
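To see why “have algorithms look at all the content” is much harder than it sounds, consider a deliberately naive keyword filter. This is a toy sketch for illustration only, not anything Parler has actually described building; real moderation systems use trained classifiers plus human review, and even those hit the same error modes at scale:

```python
# A deliberately naive keyword-based moderation filter, illustrating the
# kind of "algorithmic" approach Matze describes. Purely hypothetical --
# the blocklist and function below are not from any real system.

BLOCKLIST = ("firing squad", "kill him")

def flags_post(text: str) -> bool:
    """Return True if the post contains any blocklisted phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

# True positive: an explicit call for violence is caught.
assert flags_post("Pence should face a firing squad")

# False positive: commentary *about* violence is also caught,
# even though it is criticism, not incitement.
assert flags_post("It is horrifying that anyone would call for a firing squad")

# False negative: a veiled threat sails right through.
assert not flags_post("Patriots know where he lives and what must be done")
```

Sharpening the rules to cut the false positives widens the false negatives, and vice versa; that trade-off, multiplied across millions of posts, sarcasm, slang, and context, is exactly why no platform has been able to automate its way out of a trust and safety team.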
In that interview, Matze is also stupidly snarky about Amazon’s tools, claiming:
“We even offered to Amazon to have our engineers immediately use Amazon services (Amazon Rekognition and other tools) to find that content and get rid of it quickly and Amazon said, ‘That’s not enough,’ so apparently they don’t believe their own tools can be good enough to meet their own standards,” he said.
That’s incredibly misleading, and makes Matze look silly. Amazon Rekognition is an image and video analysis service, best known for facial recognition. What does that have to do with moderating the text-based harassment, death threats, and abuse on your site? Absolutely nothing.
Instead of filing terrible lawsuits and making snarky comments, it’s stunning that Parler doesn’t just shut up, hire an actual trust and safety expert, and learn from what every other company has done in the past. That’s not to say it needs to handle moderation the same way everyone else does. More variation and different approaches are always worth testing out. The problem is that you should do that from a position of knowledge and experience, not ignorance. Parler has apparently chosen the other path.