Elon Musk Continues Along The Content Moderation Learning Curve, But Doesn’t Seem To Be Learning A Damn Thing
from the doomed-to-groundhog-day-this-shit dept
You know, it was just a few weeks ago that we posted an open letter to Elon Musk laying out just some of the basics of speedrunning the content moderation learning curve. And, as people keep reminding me, he seems to be doing all the levels all at once.
But here’s the incredible bit: unlike most other sites that actually learn something from all of this, Elon still doesn’t seem to be getting it at all. He’s fucking around and finding out… but not actually learning anything as he bumps into each level.
Let’s cover some of the highlights.
First up, Twitter put up a blog post (signed simply as “Twitter,” rather than naming who wrote it, as Twitter used to do) claiming that nothing at all has changed in its content moderation policies. Almost everything about the statement is bonkers or easily disproved (or sometimes both). It starts out by claiming that Twitter wants “to be the town square of the internet,” which is a line people have often used to try to describe Twitter, but has always been false. As we’ve noted, the “town square” is the internet itself. Twitter is one private venue on the town square. Claiming otherwise is counterproductive, because it opens the company up to all sorts of bullshit claims about “censorship” and whatnot.
As for the more specific claims in the post:
First, none of our policies have changed. Our approach to policy enforcement will rely more heavily on de-amplification of violative content: freedom of speech, but not freedom of reach.
This is demonstrably false. Indeed, just days earlier, Twitter announced that its policies had changed, specifically in removing its rules against spreading Covid misinformation. Literally days before insisting that “none of our policies have changed,” Twitter updated its Covid misinformation page to say:
Effective November 23, 2022, Twitter is no longer enforcing the COVID-19 misleading information policy.
So, I guess feel free to tweet out about how SpaceX and Teslas cause Covid?
Separately, Twitter’s content moderation policies have clearly changed, because Musk ran one of his infamous polls and said he was going to “give amnesty” to accounts that had previously violated the policies, and has been in the process of reinstating approximately 62,000 accounts that had previously been banned.
That… is a change in policy.
As for diminishing the “reach” of “violative” posts, that sounds exactly like “shadowbanning,” which was one of the great sins that Musk insisted he was taking over Twitter to cure.
Our Trust & Safety team continues its diligent work to keep the platform safe from hateful conduct, abusive behavior, and any violation of Twitter’s rules. The team remains strong and well-resourced, and automated detection plays an increasingly important role in eliminating abuse.
This is also bullshit. The Trust & Safety team has been gutted by layoffs and the “non-hardcore” resignations. As that article notes, while Elon Musk keeps claiming publicly that his “top priority” is to stop child sexual abuse material (CSAM), he has effectively destroyed the already overworked team that was tackling that problem:
Elon Musk has dramatically reduced the size of the Twitter Inc. team devoted to tackling child sexual exploitation on the platform, cutting the global team of experts in half and leaving behind an overwhelmed skeleton crew, people familiar with the matter said.
The team now has fewer than 10 specialists to review and escalate reports of child sexual exploitation, three people familiar with the matter said, asking not to be identified for fear of retaliation.
And for things like that, you can’t just turn the dial on the “automated detection” and say “we now need fewer people.” Anyone with any experience in this stuff knows that while automation is an important tool in the trust & safety toolbox (especially around CSAM), it only works in conjunction with human experts, the majority of whom are no longer at the company.
Next on Twitter’s list:
When urgent events manifest on the platform, we ensure that all content moderators have the guidance they need to find and address violative content.
That has not been shown in practice. Indeed, we’re seeing the opposite play out. There has been a rash of highly questionable account suspensions, most of which were mass-reported by the Trumpist grifter crew.
And talking about “urgent events,” there has been a lot of discussion on Twitter over the last few days about the powerful protests in China regarding that country’s Covid lockdown policies. And China has flooded Twitter with spam to try to obscure those reports, making it difficult for people doing searches to find legitimate content about the protests:
Numerous Chinese-language accounts, some dormant for months or years, came to life early Sunday and started spamming the service with links to escort services and other adult offerings alongside city names.
The result: For hours, anyone searching for posts from those cities and using the Chinese names for the locations would see pages and pages of useless tweets instead of information about the daring protests as they escalated to include calls for Communist Party leaders to resign.
And it’s not at all clear that Twitter has the resources or “guidance” to deal with that. In fact, what we’ve seen is that accounts that were actually promoting on-the-ground reporting about the protests were shut down instead. Freelance journalist Clarissa Wei called out two examples: a Hong Kong-based journalist and a Taiwanese writer who both had their accounts suspended.
While later reports said that both accounts had been reinstated, it appears that the Taiwanese writer’s account is still currently offline.
Also, reports earlier this week highlighted how videos of the Christchurch shooting from a few years ago were being reuploaded to the platform and no longer being caught by Twitter’s automated filtering system. That’s… noteworthy. Because the rapid effort by all of the tech companies to flag and remove exactly that video is literally the prime example used by governments and social media companies of how those companies need to be able to respond “urgently” to sudden crises.
And Twitter is now failing that.
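For what it’s worth, the core idea behind that kind of automated filtering is simple to sketch: keep a database of fingerprints of known violative videos and check each upload against it. Here’s a minimal, purely illustrative Python sketch using an exact cryptographic hash; the payload bytes and blocklist are invented stand-ins, and real systems (like the industry hash-sharing databases) use *perceptual* hashes instead, precisely because re-encoding a video changes its exact bytes:

```python
import hashlib


def sha256_hex(data: bytes) -> str:
    """Exact cryptographic fingerprint of the raw upload bytes."""
    return hashlib.sha256(data).hexdigest()


# Hypothetical seed: stand-in bytes for a known violative video.
known_bad = b"known violative video bytes"

# The blocklist a platform would share/maintain (here: one entry).
BLOCKLIST = {sha256_hex(known_bad)}


def upload_is_blocked(video_bytes: bytes) -> bool:
    """True if the upload's fingerprint matches a known-bad entry."""
    return sha256_hex(video_bytes) in BLOCKLIST
```

Note the weakness this sketch makes obvious: a re-encoded or slightly edited copy produces entirely different bytes and sails past an exact-match check. That’s why production filters use perceptual hashing that survives re-encoding, and why the whole system still depends on human reviewers to seed, audit, and update the database, which is exactly the kind of staffing that has been gutted.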
Yes, mistakes like this happen all the time. That’s part of what we’ve always highlighted about the impossibility of content moderation at scale. But, understanding that is part of understanding the learning curve.
And it’s somewhat amusing how Musk and his fans were absolutely unwilling to accept the “mistakes happen” explanation for pre-Musk content moderation issues, but now demand that everyone be extra forgiving as Musk “learns the ropes” and “experiments” here.
Indeed, this new Twitter statement includes a line about giving them the benefit of the doubt as mistakes will inevitably be made:
Finally, as we embark on this new journey, we will make mistakes, we will learn, and we will also get things right. Throughout, we’ll communicate openly with our users and customers, to get and share your feedback as we build.
And, again, that’s what happened before, but Musk and crew still insist (without any proof) that when those mistakes happened before they were for malicious reasons, but now they’re somehow righteous experiments.
Then, of course, there are the other aspects of the content moderation learning curve that we’ve discussed: outside pressure from advertisers, governments, and partner companies. And we’re seeing all of them play out in real time as well. Leaked reports say that advertising may be down somewhere around 50%. And Elon went ballistic over Apple apparently pulling ads and potentially threatening to remove Twitter from the app store, though he later met with Tim Cook and claimed to have patched things up. Apple was apparently Twitter’s largest advertiser, so you can see why it was important.
Of course, as Twitter’s former head of Trust & Safety recently noted, the threats to remove the app from the app store are not a new thing at all. It apparently happens all the time.
And then we have the EU. As we explained all the way back in May, Musk did a very stupid thing in “endorsing” the new Digital Services Act which comes into force in January of 2024. Earlier this week, the European Commission’s Thierry Breton (who had met with Musk back in May) had another meeting with Musk and basically warned him that the company did not appear ready at all to meet the requirements of the DSA. And, well, that’s because all of the people who were working on getting the company ready (which is a massive job) have left.
It’s being reported that the EC threatened to ban Twitter, but that’s a bit hyperbolic. The DSA isn’t going to lead to an outright ban. But it will create real issues for Twitter if the company doesn’t change.
This isn’t necessarily a good thing. We’ve been screaming about the risks and dangers of the DSA for years now. And we should be extremely worried about governments telling companies how to moderate speech. But it is a reality. A reality that Musk now needs to deal with.
Every one of these things is an issue that everyone knew was coming. Most of them we’ve written about for years, and highlighted in our “speedrunning” post.
But the somewhat incredible part is that Musk doesn’t seem to be learning anything from any of this, and instead seems focused on repeating every single mistake in the book.
Indeed, he’s even making new and more ridiculous mistakes. Just yesterday he tweeted out that Twitter “interfered in elections,” which seems quite likely to now show up in lawsuits against Twitter (he seems to not realize that in buying the company he is also now liable for things the company may have done under previous ownership). This is leaving aside that the claim of “interfering in elections” is almost certainly bullshit. Even so, he’s created a massive new liability for himself.
There are so many examples of this, but it’s quite clear that whereas most people actually seem to pick up some lessons while moving up the learning curve, Elon Musk seems to just be making the same stupid mistakes over and over and over again.