Elon’s ‘Zero Tolerance’ Policy On CSAM Apparently Does Not Apply To Conspiracy Theorist Accounts He Likes

from the not-how-it-works dept

You may recall that early on in Elon’s ownership of Twitter, he insisted that “removing child exploitation is priority #1” while exhorting his supporters to “reply in the comments” if they saw any.

Leaving aside that this is a ridiculously terrible process for having people report potential CSAM (Child Sexual Abuse Material) or, as some people prefer, CSEM (with the E standing for “exploitation”), there was little to no evidence of this actually being put into practice. Most of the people (one person told me everyone) who worked on the CSAM team were let go or left. Ella Irwin, who headed up trust & safety until she resigned two months ago (as far as I can tell, no replacement has been named), made a bunch of statements about how the company was treating CSAM, but there was almost no evidence backing that up.

There were multiple reports of the CSAM mitigation process falling apart. There were reports of CSAM on the platform remaining up for months. Perhaps even worse (and risking serious legal consequences), the company claimed it had suspended 400k accounts, but reported only 8k to law enforcement, as is required by law. Oh, and apparently Twitter’s implementation of PhotoDNA broke at some point, which is again incredibly serious, as PhotoDNA (for all its problems) remains a key tool for large sites in fighting known CSAM.

And yet the company still claims (on a Twitter-branded page, because apparently no one actually planned for the “X” transition) that it has a “zero tolerance” policy for CSAM.

The key parts of that page say both “We have a zero-tolerance child sexual exploitation policy on Twitter” and “Regardless of the intent, viewing, sharing, or linking to child sexual exploitation material contributes to the re-victimization of the depicted children.”

Anyway, that all leads up to the following. One of the small group of vocal and popular utter nonsense peddlers on the site, dom_lucre, had his account suspended. A bunch of other nonsense peddlers started wringing their hands about this and fearing that Musk was going soft and was now going to start banning “conservative” accounts. In response, Elon just came out and said that the account had posted CSAM, that only Twitter “CSE” staff had seen it, and that after removing the tweets in question, it had reinstated that guy’s account.

It’s worth noting that this person was among the hand-picked accounts who received money during Elon’s recent pay-for-stanning rollout.

Almost everything about this statement is problematic, and it’s one that any lawyer would have a heart attack over if Elon were their client. First off, blaming Twitter’s legacy code is getting old and less and less believable each time he does it. He could just say “we fired everyone who understood how stuff worked,” but he can’t quite get there.

Second, posting “the reason” for a suspension is, like in so many cases having to do with trust & safety, trickier and involves more nuances than Elon would ever think through. Just to scratch the surface, sometimes telling users why they were suspended can create more problems, as users try to “litigate” their suspension. It can also alert abusive users to who may have reported them, leading to further abuse. Posting the reason publicly can lead to even more issues, including the potential risk of defamation claims.

But, even more importantly, it’s not a zero tolerance policy if you reinstate the account. It really seems like an “Elon’s inner circle tolerance policy.”

The claim that the only people who saw the images were the CSE team seems… unlikely. Internet sleuths have sniffed out a bunch of replies to his now-deleted post (which was up for four days on an account with hundreds of thousands of followers), suggesting that the content was very much seen.

Also, there are big questions about what process Twitter followed here, since deleting the content, telling the world about who was suspended for what, and then reinstating the account are not what one would consider normal. Did Twitter send the content to NCMEC? Did it report it to any other law enforcement? These seem like pretty big questions.

On top of that, viewing that content on Twitter itself could potentially expose users to criminal liability. This whole thing is a huge mess, with a guy in charge who seems to understand literally none of this.

He’s now making Twitter a massive risk to use. At a time when the company is begging advertisers to put their ads on the site, I can’t see how Elon choosing to reinstate someone who posted CSAM, which was left on the site for days, is going to win them back.

Companies: ncmec, twitter, x


Comments on “Elon’s ‘Zero Tolerance’ Policy On CSAM Apparently Does Not Apply To Conspiracy Theorist Accounts He Likes”


This comment has been flagged by the community.

This comment has been flagged by the community.

Matthew M Bennett says:

Re: Re: Re:2

Oh Strawby, I said evidence. That “paper” (term used very loosely) offers nothing of the kind.

It doesn’t list hate speech searched for (hilariously, after a trigger warning), and admits it used a sentiment detection algo (it’s not an API), which is famously and hilariously inaccurate.

There’s no link to the paper the article talks about, the only link is to conference where this paper is supposedly talked about.

There is a lot of bunk “social science” (almost never repeatable) talking about “hate speech”. How did you find a paraphrase of the worst one without any citations? Did you even read it? That’s amazing.

This comment has been deemed insightful by the community.
Strawb (profile) says:

Re: Re: Re:3

Oh Strawby, I said evidence. That “paper” (term used very loosely) offers nothing of the kind.

It’s more than you’ve ever provided for any of your claims.

It doesn’t list hate speech searched for

The study has a link to a GitHub repository that contains the hate keywords they used.

admits it used a sentiment detection algo (it’s not an API)

Firstly, it’s literally called “Perspective API”. Let’s assume that the people who developed the tool and named it know more about it than you do.

Secondly, nice cherrypicking. Putting it through the API was the last step of their filtering process, but they also had actual people look through the hate keywords to score them.

There’s no link to the paper the article talks about, the only link is to conference where this paper is supposedly talked about.

I know you’re used to having your thoughts and opinions spoonfed to you by crackpots and conspiracy nuts, but when most people are given the title of a study, they know how to use Google to find the actual study.

There is a lot of bunk “social science” (almost never repeatable) talking about “hate speech”

Oh, in that case, here’s some more evidence.

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:5

And again, I actually want more “hate speech”

that’s because you have been a privileged white male your whole life and have never been the recipient of the constant barrage of hate speech that most minorities in this country have to endure for their entire lives, especially when it comes from people like you.

You just want to call people racial slurs and not have any consequences of doing so. Where the rest of the civilized population realize that there is no point in discussing any possible good things about Hitler.

This comment has been deemed insightful by the community.
JMT (profile) says:

Re: Re: Re:

…I think what you call “hate speech” for the most part should be allowed, it’s part of open discussion, so by necessity would go up, if only a little.

“As a straight white man who’s never experienced actual hate speech directed at me I have no issue with other people experiencing hate speech because fuck them.”

Anonymous Coward says:

Speaking of begging advertisers to return, according to WSJ, the company “…warned advertisers that beginning Aug.7, brands’ accounts will lose their verification – a golden check mark that indicates their account truly represent the brand – if they haven’t spent at least $1000 on ads in the previous 30 days, or $6000 on ads in the previous 180 days…”
I’m trying to come up with a word for such an action, but for some reason can’t quite choose one from the many that spring to mind.

This comment has been deemed insightful by the community.
Manabi (profile) says:

Re: Re: Re:

It’s a shakedown. It’s almost identical to the whole “Nice store you got here, shame if anything happened to it,” method of shaking store owners down to make them pay for protection. (Protection from the people they’re paying the protection money to.) This is, “Nice brand name you got there, shame if someone else could pretend to be you and tweet horrible stuff, ruining your brand’s reputation…”

If they lose their gold check-marks, someone could change their name to match the brand, pay for Twitter Blue and a lot of people would think the blue check-mark meant they were really the brand tweeting. Same as what happened during the initial roll-out of Twitter Blue, only worse because the real brand wouldn’t have a check-mark this time.

Anonymous Coward says:

Re: Re:

Some of those words are best not used in decent company; others are full of sarcasm. I mean, those advertisers are getting the same treatment the regular users did but a few months ago. And they were warned by the “chief twit” (he can rebrand all he wants, but he called himself that and it stuck) back when the current money-grabbing, misnamed, idiotic “verification” system was announced. And they saw the first attempt at rolling it out. Should take a hint when given one… or several!
Also, I wonder what the current theoretical CEO, with her ad-world background, thinks of this move. I’m not sure that “extortion” is a good B2B strategy.

This comment has been deemed insightful by the community.
Cliff Jerrison says:

The main defense Elon and Dom’s fans are using right now is that he didn’t post it for enjoyment, but to condemn it.

Which, even taking that at face value, changes absolutely nothing about the legality or ethics here. If there were a loophole that it’s okay to post pictures of children being abused as long as you caption them “what NOT to do,” that would go very badly, very fast.

This comment has been flagged by the community.

Anonymous Coward says:

Another day, another article by Democratic Party operative MM in which he indulges his obsessive hatred of Elon Musk.

In a few hours, we’ll next have another post by TC alleging that law enforcement is evil and that most Americans don’t actually like their police stomping on the heads and alleged-rights of the scum of our society.

After that, probably something about how intellectual property rights are a bad thing and companies and individuals that try to defend them are working against society (unlike radical gender ideologues, but I digress).

Rinse, repeat.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re:

  1. Who knows? Neither of us know him personally, but from the way MM covers him, you’d think Mr. Musk had fired from Twitter all of MM’s degenerate friends and then kicked his puppy.
  2. No. The only people who think law enforcement is evil are communists and non-ideologically aligned criminals who resent the check on their predatory, anti-social behavior.
  3. The only instance of truly egregious abuse of copyright that I can recall was by Prenda Law.
This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:

The only instance of truly egregious abuse of copyright that I can recall was by Prenda Law.

How many tears did you have to fight back just to type that sentence, John Smith?

I assure you, between the likes of Perfect 10, Malibu Media and Richard Liebowitz, copyright law has no shortage of bad actors.

This comment has been deemed insightful by the community.
Toast says:

Re: Re: Re:

“The only people who think law enforcement is evil are communists and non-ideologically aligned criminals who resent the check on their predatory, anti-social behavior.”
So you agree that the January-06-people should be in prison for attacking capitol police, right?
Right?

This comment has been deemed insightful by the community.
David says:

Re: Re:

“Law enforcement is evil” is about as compelling as “raspberries are rotten”. Obviously that isn’t a universal truth or there would be no reason to want either in the first place. When you find raspberries are more prone to rotting than you’d like, you need to work on your culture.

This comment has been flagged by the community.

Matthew M Bennett says:

You're lying, again.

You don’t know what, exactly, was posted, and if it is something that people are claiming is kiddie porn (no idea, I don’t know what it was), it was absolutely done to out and shame pedophiles.

I certainly think intent matters (it matters under the law), and the policy that you’re quoting predates Musk’s takeover. Removing a lot of those policies was literally the point of taking over Twitter.

(Which no, was not profitable prior to takeover, regardless of whatever dumbshit cherrypicking you want to play by saying “it was profitable in June!” So what? It wasn’t profitable overall. And yes they banned the laptop story for 2 weeks and yes they shadowbanned and yes followed government orders to censor)

Furthermore, you’re purposefully omitting the non-zero-tolerance part of the policy:

In the majority of cases, the consequence for violating our child sexual exploitation policy is immediate and permanent suspension

In other words, there’s wiggle room, and you’re making it sound like the Old-Twitter policy meant that dom_lucre must be banned forever, when it says nothing of the kind. You are literally lying about what the policy says. Meanwhile, the original posts were removed, regardless of intent, so Musk is following the old policy quite well, actually.

But you can’t have that because you need to pretend Musk is being hypocritical and capricious.

Bonus, I loved this bit:

sometimes telling users why they were suspended can create more problems, as users try to “litigate” their suspension.

So what that tells me is that you are against any kind of due process, which since you’re an orwellian leftist who loves censorship, even guided by government, yeah, that tracks.

Just in general I think it’s hilarious that you think it’s Musk’s job to run Twitter how you would (which would obviously fail) but you feel the need to lie to pretend that Musk isn’t following a policy he probably doesn’t feel obligated to follow but is still following, anyway.

Perfect. My expectations for you were very low and you still failed them.

bhull242 (profile) says:

Re:

You don’t know what, exactly, was posted, and if it is something that people are claiming is kiddie porn (no idea, I don’t know what it was), it was absolutely done to out and shame pedophiles.

Irrelevant. Intent is not a defense in CSAM law.

I certainly think intent matters (it matters under the law), […]

It actually doesn’t matter under the law. Have you actually read the law?

Removing a lot of those policies was literally the point of taking over Twitter.

Removing policies in place to conform to criminal law by removing CSAM immediately, regardless of context (which is what the law requires, btw) is why Musk took over Twitter? Because that’s the policy we’re talking about here.

(Which no, was not profitable prior to takeover, regardless of whatever dumbshit cherrypicking you want to play by saying “it was profitable in June!” So what? It wasn’t profitable overall. […])

Do you have evidence to support this?

([…] And yes they banned the laptop story for 2 weeks and yes they shadowbanned and yes followed government orders to censor).

No, the post containing the story was banned for one day. The account was locked for two weeks, but that isn’t the same as banning the story.

No, they did not shadowban in the sense it was meant and understood at the time they said they didn’t, and what they were doing was exactly what Musk is supporting now.

No, they did not follow government orders to censor. The evidence points to the exact opposite.

So what that tells me is that you are against any kind of due process, which since you’re an orwellian leftist who loves censorship, even guided by government, yeah, that tracks.

That doesn’t follow. At all.

Just in general I think it’s hilarious that you think it’s Musk’s job to run Twitter how you would […]

He doesn’t. Nobody does. Stop lying.

[…] but you feel the need to lie to pretend that Musk isn’t following a policy he probably doesn’t feel obligated to follow but is still following, anyway.

The policy in question is US criminal law, so he ought to feel obligated to follow it.

This comment has been flagged by the community.

Matthew M Bennett says:

Re: Re:

Irrelevant. Intent is not a defense in CSAM law.

Relevant, actually, because intent is always part of criminal law, even if the lawmakers try to exclude it. You literally cannot remove intent as a prerequisite.

But Mr Bari Weiss, I thought I made clear you were not smart enough to argue with, and that I did not want to hear from you?

Nothing about that has changed. You definitely haven’t gotten smarter.

Anonymous Coward says:

Re: Re: Re:

But Mr Bari Weiss, I thought I made clear you were not smart enough to argue with, and that I did not want to hear from you?

And yet here you are.

If you genuinely hate this place so much you would have fucked off a long time ago, instead of spending the time here you might have spent on your alleged wife.

TMCC says:

So CSEM vs. CSAM. Are people that use the term CSEM low-key disputing that stuff is ‘abuse’, or is ‘exploitation’ just casting a wider net than ‘abuse’?

Haven’t seen that distinction before, but seeing as how Twitter is using it now, I assume there’s some fuck-stupid passive aggressive reason behind it.

bhull242 (profile) says:

Re:

I think it’s to avoid the whole argument over how broad the term “abuse” is. No one except pedophiles is going to argue that it isn’t, at a minimum, exploitation, but there are some who insist that abuse must involve violence and/or negative emotions, not just manipulation. I don’t agree, but it is easier to just avoid the argument altogether.

This comment has been deemed insightful by the community.
rahaeli (user link) says:

Re:

They are related concepts, but CSAM is a subset of CSEM.

CSAM: child sexual abuse material, aka material that depicts or memorializes an actual act of sexual abuse against a child

CSEM: child sexual exploitation material, aka material that is an image of a child and is being used for sexual gratification, but is not inherently and by definition memorializing an actual act of sexual abuse against a child

All CSAM is also CSEM, but not all CSEM is CSAM. The images on Twitter that we’re discussing are absolutely CSAM: they are the recording of a particularly heinous act of child abuse (and trust me, if you don’t know the details, don’t look them up). Examples of something that could be CSEM but not CSAM range all the way from “beauty pageant photos” to “toddlers in diapers” to “kids running through a sprinkler so their clothes are wet and see-through”: there’s a wide range of stuff that doesn’t meet the legal US definition of “child pornography” (which is what the law calls what we now call CSAM) but is definitely weird, creepy, and sexualized, and that’s what the CSEM term is useful for. Non-photorealistic art that is not “indistinguishable from an actual minor,” like drawings, is also CSEM but not CSAM. Photomanipulations that appear indistinguishable from an actual minor are classed with CSAM. AI-generated photorealistic depictions are probably going to eventually be classified under CSAM just like photomanipulations are, but that’s a very fast-moving area and there’s no real precedent on it yet.

CSAM is, inherently, illegal in the US. (It’s inherently illegal in most other countries, too, but I can only reliably speak to US law.) CSEM is sometimes illegal and sometimes not, depending on whether or not it meets the Miller v. California obscenity test. There is a large demand for CSEM-that-is-not-CSAM, in both “drawings that are not photorealistic” and “photorealistic images that don’t meet the legal definition of ‘child pornography’ but are still weird, creepy, and sexualized in the context they’re presented in”, and platforms struggle really hard with identifying and removing it, especially because the same image can be CSEM in one context and not in the next.

(There’s a reason why just about every T&S expert out there will tell you that if you’re a parent, and especially if your kid is pre-pubertal, do not ever post images of them online. Especially do not ever post images of them in diapers or images that show their feet. Just trust me on this one.)

This comment has been flagged by the community.

Anonymous Coward says:

This may include media, text, illustrated, or computer-generated images.

I wholeheartedly support this.

I support freedom of artistic expression as much as the next person, but there are limits to that.

It really is time that Japanese artists started to realize that and adhere to American and British standards.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:3

I literally pay for artists to illustrate these scenes. I have a right to this just as a boring ass male pays for boring ass porn.

You can’t stop the paradigm shift. We will evolve the same way as the anglerfish does: where the male is nothing more than a weakling simp to seek out an alpha female and become her shiny pair of testicles. Because that’s all men are. Spermatozoa on legs.

This comment has been flagged by the community.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:

You cannot censor us forever, Hywoman. Neither can you censor all of us. As Jodi Picoult portrayed in her bestseller Sing You Home, lesbians are tasked with the important duty of saving women from themselves by invalidating their unhappy unions with men. We can start by touching schoolgirls to kiss each other, not filthy, nasty boys.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

'Zero tolerance (for anyone but us posting it)'

Another fine showing of ‘rules for thee but not for me’ that has plagued the site since he took over, where the rules aren’t there to keep order and be applied equally but merely used against those he doesn’t like and ‘forgotten’ when it comes to those he does.

That he’s willing to hand out nothing more than a minor wrist slap for CSAM though… kinda giving away the game there Elon.

Bloof (profile) says:

He values the fact QAnon targets the political left too much to want to take any action against them, no matter how much harm they cause to their families, the victims of actual child exploitation and legitimate child protection charities. Also he knows if he starts banning them, they will turn the shitbag conspiracy theorist Eye of Sauron on him and his connection to Epstein and Maxwell.

This comment has been deemed insightful by the community.
Emma says:

I don’t know how relevant this is, since I understand reasonable people aren’t in the business of ranking CSAM by badness, but based on other reporting I’ve seen on this, it was a piece of CSAM so violent and vile that multiple law enforcement agencies assumed it was a hoax when they were first made aware of it. That’s what Musk’s QAnon buddy posted, and that’s what Elon’s Twitter left up for four days.

This comment has been flagged by the community.
