It Feels Like The Only Reason ExTwitter Still Has A Trust And Safety Team Is So Elon Can Lay Them Off
from the yeah,-sure,-that'll-bring-back-the-advertisers dept
Almost everyone I’ve seen talking about this new Insider article about how Elon had a special layoff just of trust and safety employees has shared it with the line “wait, there are still trust & safety employees?”
Elon Musk recently laid off more Twitter employees working on the platform’s trust and safety efforts, roles typically crucial to keeping a social media platform safe for advertisers.
The cuts happened the first week of September, according to two people familiar with the company, one of the first targeted layoffs since he shrunk operations earlier this year. While this layoff only affected a handful of people, five to 10, it was focused entirely on workers in trust and safety.
The article goes on to note that the trust and safety team, which no longer appears to have an official leader (since Ella Irwin quit), has now gone from what had been about 230 people to somewhere around 20:
Now, the team is a fraction of its original size, according to the people familiar. One of the people said there are currently about 20 full-time employees on the trust and safety team, some of whom were contract workers who were promoted over the summer to full-time roles at the company, not long after Linda Yaccarino joined as Twitter’s CEO. The employees mainly engage in content moderation, the person familiar added, while a few members of the team work in legal or policy.
The article further notes that there used to be trust and safety roles working across the entire company, in product, legal, and policy, to ensure that the company didn’t view trust and safety as an afterthought, but rather was a core element of the business.
Now, it is most clearly an afterthought.
It’s unclear what prompted the latest layoff. After the early rounds of layoffs late last year and early this year, Elon insisted that the layoffs were over. Linda Yaccarino has claimed that the company was hiring (though apparently she just meant a bunch of TV execs with no internet experience).
Meanwhile, all this is happening at a time when Musk is claiming that it’s the Anti-Defamation League that is causing advertisers to leave the site, not the fact that the site is a mess and it’s literally damaging the brands that continue to advertise there.
Of course, it’s not difficult to see how this is playing out. Elon has long made it clear that he thinks everything can be handled by AI, so that trust and safety becomes an engineering role now, where the only people working on any of this are simply trying to improve the AI. But all of the evidence so far suggests it’s not working very well, like pretty much all of Elon’s hunches about how to run a social media site.
Filed Under: brand safety, content moderation, elon musk, trust and safety
Companies: twitter, x


Comments on “It Feels Like The Only Reason ExTwitter Still Has A Trust And Safety Team Is So Elon Can Lay Them Off”
'Stop applying the rules to people I like!'
Of course, it’s not difficult to see how this is playing out. Elon has long made it clear that he thinks everything can be handled by AI, so that trust and safety becomes an engineering role now, where the only people working on any of this are simply trying to improve the AI. But all of the evidence so far suggests it’s not working very well, like pretty much all of Elon’s hunches about how to run a social media site.
That’s one explanation but I can’t help but suspect that an alternative is that he’s firing them because they keep flagging the comments and accounts of people he likes/that like him.
Re:
They did flag an account sharing CSAM that Musk at least had a hand in restoring. An anecdotal occurrence isn’t evidence of a pattern in and of itself, but it’s a good place to start.
Re: Re:
Not to mention if he’s willing to step in over something that heinous it’s not hard to believe that he’s overruled them on less problematic content before.
Re: Re: replaced by a machine
You can automate the child sex abuse material search fairly easily. If the users will include an agreed tag, e.g. #CSAM or similar, then the machines can spot these and bring them to the attention of those interested in such material.
We will hope that it does not get confused with #ISAM (indexed sequential access method), which is likely to have a different set of interested viewers.
20 trust and safety people for a purported 350m active users? I don’t even have a witty comment on that one, this is just staggering.
Re:
I’m guessing that the actual number of active users is smaller than they let on nowadays.
But, yeah, automation can only go so far and I would assume that just dealing with individual territories would need more time and expertise than 20 people could provide.
Re: Re:
Yeah, Quora tells me there are about 83 languages with at least 10m speakers. That makes even cursory verification of what the hypothetical amazing AI is doing virtually impossible with Xitter’s levels of staff.
Re: Re: Re:
Even within a language, the level of offensiveness (or legality) of words and images can vary greatly, and that’s before you take into account things like historical context.
I very much doubt there’s much accuracy here, even if it’s somehow the greatest AI model ever created.
If you’re an AI, of course you can hallucinate that an AI can handle trust and safety.
Re:
We’ve reached the point where an AI would manage this social network (whatever its name) better than it’s actually being managed.
Re:
You imply there’s intelligence in the man-child. You can call it just A.
This comment has been flagged by the community.
Why do you think Twitter should have a chief censor, MM?
Re:
Why do you think anyone wants to hear/read your bullshit?
Re: Re:
The dude is just a salty asshole with the mental fortitude of broken china who insists on making everyone around him miserable.
Re:
Why do you and Musk enjoy CSAM?
Re:
Why do you think that a few aggressive bullshit artists should be allowed to drive users away from a site, and, having destroyed the site, move on to the next popular site and do it all over again?
Re:
Why do you think that a private space should not be able to control who uses their property?
I’m sure you’ll reply with long-debunked nonsense like “public square” or a misunderstanding of both the first amendment and how things operate globally, but I’m always willing to hear why you wish to remove control of private property from people.
Re:
You’re just mad that no one wants to follow you on Twitter, aren’t you.
Re:
Why do you hate the First Amendment, Jhon?
Re:
Every site with user content, especially social media sites, needs censors. Otherwise they don’t work. None of them.
If anyone needs further details from Mike, read his article on the evolution of content moderation.
Re:
Only those whose skulls serve purely decorative purposes call platform moderation censorship, as you do.
Re:
Why do you share how much you hate yourself over and over?
This comment has been flagged by the community.
Re:
Because fuck you, white boy.
Re: Re:
You can’t hide the truth forever, Matthew Bennett.
Re:
I don’t think any website should have a “chief censor,” because a website setting rules, and then enforcing those rules, has nothing to do with censorship, and everything to do with crafting the kind of community the owner of the private property wants to create.
I also think that any website that doesn’t have any sort of trust & safety apparatus becomes unusable fairly quickly as it fills up with spam and other garbage, some of which will create legal problems for the site.
And, finally, if you don’t do any sort of rules enforcement, it allows total fucking assholes to come in and destroy otherwise interesting conversations.
But what would you know about that?
Re: Re:
I looked up “enshittification” in the dictionary, and the answer displayed a picture of Elon Musk!
Re: Re: Re:
Elon is more like Enshittification on Steroids.
What some people may deduce from this:
The website formerly known as Twitter is not to be trusted and you are not safe there.
Re:
That’s why it has a giant X, like it’s poison.
This comment has been flagged by the community.
Re:
This is deranged. How exactly is one “unsafe” while reading a website?
Re: Re:
hmm, pick an answer from the bucket.
Re: Re:
This posting is about how Elon has removed the group of employees at former twitter who were in charge of Trust and Safety. I think that was the name of their department.
So, removing trust and safety might cause some people to think they cannot trust or feel safe … at the website formerly known as twitter. People are funny that way.
Re: Re:
T, FTFY
Always remember, in the mind of a deranged person, such derangement looks normal. That’s how we spotted you so easily.
Re: Re:
By referring to me with a pronoun that I did not consent to, which forces me to relive the horrors of having to share the same breathing space as misguided, conservative, STRAIGHT parents.
When quoted like that, isn’t it obvious? Fewer advertisers means fewer employees to cater to advertisers.
Of course, that doesn’t mean they should fire the entire team. It’s always helpful to keep a few potential scapegoats around, and that’s quite affordable to a billionaire. In that sense, Mike’s headline is apt.
Or, alternatively, that everything can be handled by his own personal intervention, whenever he gets upset or confused about something. Even if the thing he is upset or confused about is something that he decided himself last week.
Re:
I alone can fix it.
Where have I heard that before?
Hiring TV executives
Hiring all those TV executives suggests that Musk wants to change eX-Twitter from a mainly interactive platform, with all the content coming from users, to one that mostly broadcasts content that is centrally produced and published.
Re:
As does the “subscription users only” decision. Then, first thing you know, they’ll be farming content creation out-of-house, and hey presto, indistinguishable from a streaming service!
How’s that for a face-heel turn?
Re: Re:
A streaming service for authcaps and dictators, and their “captive” audiences.
This comment has been flagged by the community.
Since we’re now referring to any form of automation as “AI”, we should remember that most major platforms have used “AI” to perform content moderation for decades. The humans are there to sort through the edge cases caused by AI’s error-prone nature. Legit artists get banned from YouTube while terrorist content makes it through. And that’s with Google’s “AI” and thousands of human content moderators. I’m sure the 20 guys living in an illegal dorm in the Twitter office have got this.
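The split that comment describes — automation handles the clear-cut cases, humans get the ambiguous middle — can be sketched in a few lines. Everything here is hypothetical for illustration (the function names, the thresholds, the use of a plain SHA-256 blocklist rather than the perceptual hashing real platforms use):

```python
import hashlib

# Hypothetical blocklist of known-bad content hashes.
BLOCKLIST = {hashlib.sha256(b"known-bad-sample").hexdigest()}

def classify(content: bytes, model_score: float) -> str:
    """Route content: auto-remove on a hash hit or a confident model
    score, auto-allow on a low score, otherwise queue for a human."""
    if hashlib.sha256(content).hexdigest() in BLOCKLIST:
        return "remove"        # exact match against known-bad material
    if model_score >= 0.95:
        return "remove"        # classifier confident it violates policy
    if model_score <= 0.10:
        return "allow"         # classifier confident it is benign
    return "human_review"      # the ambiguous middle: people needed here

print(classify(b"known-bad-sample", 0.0))  # remove (hash hit)
print(classify(b"cat photo", 0.50))        # human_review
```

The point of the sketch is the last branch: however good the model, some volume of content lands in `human_review`, and that queue scales with the user base — which is why a 20-person team for a platform that size is the staggering part.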
Re:
we?
The all-inclusive terminology is wrong, obviously.
Everyone needs a whipping boy
Although Elon is making it hard to take that title away from himself.
I’m just enjoying watching Major Kong riding the missile to oblivion.