Parler Was Allowed Back In The Apple App Store Because It Will Block 'Hate Speech,' But Only When Viewed Through Apple Devices
from the fracturing dept
Last month we noted that Apple told Congress that it was allowing Parler’s iOS app to return to its app store, after the company (apparently) implemented a content moderation system. This was despite Parler’s then-interim CEO (who has since been replaced by another CEO) insisting that Parler would not remove “content that attacks someone based on race, sex, sexual orientation or religion.” According to a deep dive by the Washington Post, the compromise is that such content will be blocked by default only on iOS devices, but will remain available via the web or the sideloaded Android app, where it will be “labeled” as hate by Parler’s new content moderation partner, Hive.
Posts that are labeled “hate” by Parler’s new artificial intelligence moderation system won’t be visible on iPhones or iPads. There’s a different standard for people who look at Parler on other smartphones or on the Web: They will be able to see posts marked as “hate,” which includes racial slurs, by clicking through to see them.
Hive is well known in the content moderation space, as it is used by Chatroulette, Reddit and some others. Hive mixes “AI” with a large team of what it refers to as “registered contributors” (think Mechanical Turk-style crowdsourced gig work). Of course, it was only just last year that the company announced that its “hate model” AI was ready for prime time, and I do wonder how effective it is.
Either way, this is interesting for a variety of reasons. One of the many problems we’ve discussed in the past with regard to content moderation is that different people have different tolerances for different kinds of speech, so offering different moderation setups for different users (and pushing more of the decision-making to the end users themselves) is an idea that deserves a lot more attention. Here, though, we have a third party (Apple) stepping in and making that decision for the users. It is Apple’s platform, so of course it gets to make that call, but it’s a trend worth watching.
I do wonder if we’ll start to see more pressure from such third parties to moderate in different ways, to the point that our mobile app experiences and our browser experiences become entirely different. I can see how we end up in such a world, but a better solution might be simply pushing more of that control to the end users themselves.
The specific setup here for Parler is still interesting:
Parler sets the guidelines on what Hive looks for. For example, all content that the algorithms flag as “incitement,” or illegal content threatening physical violence, is removed for all users, Peikoff and Guo said. That includes threats of violence against immigrants wanting to cross the border or politicians.
But Parler had to compromise on hate speech, Peikoff said. Those using iPhones won’t see anything deemed to be in that category. The default setting on Android devices and the website shows labels warning “trolling content detected,” with the option to “show content anyway.” Users have the option to change the setting and, like iOS users, never be exposed to posts flagged as hate.
Peikoff said the “hate” flag from the AI review will cue two different experiences for users, depending on the platform they use. Parler’s tech team is continuing to run tests on the dual paths to make sure each runs consistently as intended.
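The dual-path logic described above can be sketched in a few lines of code. To be clear, this is a purely hypothetical illustration of the policy as reported, assuming the labels "incitement" and "hate" as flag categories; the function name, platform strings, and opt-out parameter are my own inventions, not Parler's or Hive's actual implementation.

```python
# Hypothetical sketch of the dual-path moderation policy described in
# the Washington Post piece. All names and values here are assumptions.

def visibility(label: str, platform: str, user_opted_out: bool = False) -> str:
    """Decide how a flagged post is presented on a given platform."""
    if label == "incitement":
        return "removed"        # illegal threats: removed for all users
    if label == "hate":
        if platform == "ios":
            return "hidden"     # never shown on iPhones or iPads
        if user_opted_out:
            return "hidden"     # Android/web users may opt in to iOS-style filtering
        return "warning_label"  # Android/web default: click-through warning
    return "visible"            # unflagged content shown normally
```

Under this sketch, `visibility("hate", "ios")` yields `"hidden"` while `visibility("hate", "android")` yields `"warning_label"`, which is the asymmetry the article is describing.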
Of course, AI moderation is famously mistake-prone. And both Parler and Hive execs recognize this:
Peikoff said Hive recently flagged for nudity her favorite art piece, the “To Mennesker” naked figures sculpture by Danish artist Stephan Sinding, when she posted it. The image was immediately covered with a splash screen indicating it was unsafe.
“Even the best AI moderation has some error rate,” Guo said. He said the company’s models show that one to two posts out of every 10,000 viewed by the AI should have been caught on Parler but aren’t.
I do question those percentages, but either way it’s another interesting example of how content moderation continues to evolve — even if Parler’s users are angry that they won’t be able to spew bigotry quite as easily as previously.