It Feels Like The Only Reason ExTwitter Still Has A Trust And Safety Team Is So Elon Can Lay Them Off

from the yeah,-sure,-that'll-bring-back-the-advertisers dept

Almost everyone I’ve seen talking about this new Insider article about how Elon had a special layoff just of trust and safety employees has shared it with the line “wait, there are still trust & safety employees?”

Elon Musk recently laid off more Twitter employees working on the platform’s trust and safety efforts, roles typically crucial to keeping a social media platform safe for advertisers.

The cuts happened the first week of September, according to two people familiar with the company, one of the first targeted layoffs since he shrunk operations earlier this year. While this layoff only affected a handful of people, five to 10, it was focused entirely on workers in trust and safety.

The article goes on to note that the trust and safety team, which no longer appears to have someone officially running the team (since Ella Irwin quit), has now gone from what had been about 230 people to somewhere around 20:

Now, the team is a fraction of its original size, according to the people familiar. One of the people said there are currently about 20 full-time employees on the trust and safety team, some of whom were contract workers who were promoted over the summer to full-time roles at the company, not long after Linda Yaccarino joined as Twitter’s CEO. The employees mainly engage in content moderation, the person familiar added, while a few members of the team work in legal or policy.

The article further notes that there used to be trust and safety roles working across the entire company, in product, legal, and policy, to ensure that the company didn’t view trust and safety as an afterthought, but rather was a core element of the business.

Now, it is most clearly an afterthought.

It’s unclear what prompted the latest layoff. Since the early rounds of layoffs late last year and early this year, Elon has insisted that the layoffs were over. Linda Yaccarino has claimed that the company was hiring (though apparently she just meant a bunch of TV execs with no internet experience).

Meanwhile, all this is happening at a time when Musk is claiming that it’s the Anti-Defamation League that is causing advertisers to leave the site, not the fact that the site is a mess and it’s literally damaging the brands that continue to advertise there.

Of course, it’s not difficult to see how this is playing out. Elon has long made it clear that he thinks everything can be handled by AI, so that trust and safety becomes an engineering role now, where the only people working on any of this are simply trying to improve the AI. But all of the evidence so far suggests it’s not working very well, like pretty much all of Elon’s hunches about how to run a social media site.

Companies: twitter, x


Comments on “It Feels Like The Only Reason ExTwitter Still Has A Trust And Safety Team Is So Elon Can Lay Them Off”

45 Comments
This comment has been deemed insightful by the community.
That One Guy (profile) says:

'Stop applying the rules to people I like!'

Of course, it’s not difficult to see how this is playing out. Elon has long made it clear that he thinks everything can be handled by AI, so that trust and safety becomes an engineering role now, where the only people working on any of this are simply trying to improve the AI. But all of the evidence so far suggests it’s not working very well, like pretty much all of Elon’s hunches about how to run a social media site.

That’s one explanation but I can’t help but suspect that an alternative is that he’s firing them because they keep flagging the comments and accounts of people he likes/that like him.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

he’s firing them because they keep flagging the comments and accounts of people he likes/that like him

They did flag an account sharing CSAM that Musk at least had a hand in restoring. An anecdotal occurrence isn’t evidence of a pattern in and of itself, but it’s a good place to start.

Tanner Andrews (profile) says:

Re: Re: replaced by a machine

You can automate the child sex abuse material search fairly easily. If the users will include an agreed tag, e.g. #CSAM or similar, then the machines can spot these and bring them to the attention of those interested in such material.

We will hope that it does not get confused with #ISAM (indexed sequential access method), which is likely to have a different set of interested viewers.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re:

I’m guessing that the actual number of active users is smaller than they let on nowadays.

But, yeah, automation can only go so far and I would assume that just dealing with individual territories would need more time and expertise than 20 people could provide.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:

Even within a language, the level of offensiveness (or legality) of words and images can vary greatly, and that’s before you take into account things like historical context.

I very much doubt there’s much accuracy here, even if it’s somehow the greatest AI model ever created.

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re:

Why do you think that a private space should not be able to control who uses their property?

I’m sure you’ll reply with long-debunked nonsense like “public square” or a misunderstanding of both the first amendment and how things operate globally, but I’m always willing to hear why you wish to remove control of private property from people.

This comment has been flagged by the community.

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re:

hmm, pick an answer from the bucket.

  • uptick in general harassment
  • uptick in bigotry
  • talks about removing/limiting vital features (i.e. blocking)
  • ceasing working with the people that help prevent CSAM from staying on the site (and reinstating an account that was banned for it)
  • changing verification in a way that benefited scammers

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re:

This posting is about how Elon has removed the group of employees at former twitter who were in charge of Trust and Safety. I think that was the name of their department.

So, removing trust and safety might cause some people to think they cannot trust, or feel safe at, the website formerly known as twitter. People are funny that way.

Anonymous Coward says:

roles typically crucial to keeping a social media platform safe for advertisers. […] causing advertisers to leave the site […] It’s unclear what prompted the latest layoff.

When quoted like that, isn’t it obvious? Fewer advertisers means fewer employees to cater to advertisers.

Of course, that doesn’t mean they should fire the entire team. It’s always helpful to keep a few potential scapegoats around, and that’s quite affordable to a billionaire. In that sense, Mike’s headline is apt.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Elon has long made it clear that he thinks everything can be handled by AI

Or, alternatively, that everything can be handled by his own personal intervention, whenever he gets upset or confused about something. Even if the thing he is upset or confused about is something that he decided himself last week.

This comment has been flagged by the community.

Sentient Toaster says:

Since we’re now referring to any form of automation as “AI”, we should remember that most major platforms have used “AI” to perform content moderation for decades. The humans are there to sort through the edge cases caused by AI’s error-prone nature. Legit artists get banned from YouTube while terrorist content makes it through. And that’s with Google’s “AI” and thousands of human content moderators. I’m sure the 20 guys living in an illegal dorm in the Twitter office have got this.
