Content Moderation At Scale Is Impossible To Do Well, Child Safety Edition
from the child-safety-isn't-just-about-flipping-a-switch dept
Last week, as you likely heard, the Senate held a big hearing on “child safety,” where Senators grandstanded in front of a semi-random collection of tech CEOs with zero interest in learning about the actual challenges of child safety online, what the companies had done that worked, or where they might need help. The companies, of course, insisted they were working hard on the problem, and the Senators just kept shouting “not enough” without getting into any of the details.
But, of course, the reality is that this isn’t an easy problem to solve. At all. I’ve talked about Masnick’s Impossibility Theorem over the years: content moderation is impossible to do well at scale. That applies to child safety material as well.
Part of the problem is that much of this is a demand-side problem, not a supply-side problem. If people are demanding certain types of content, they will go to great lengths to get it, and that means doing what they can to hide from the platforms trying to stop them. We’ve talked about this in the context of eating disorder content: multiple studies found that sites’ attempts to crack down on that content didn’t work, because users kept demanding it and kept coming up with new ways to talk about the content that evaded each new block. So there’s always the demand side of the equation to keep in mind.
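To make that dynamic concrete, here’s a minimal sketch of why exact-match blocklists lose the demand-side arms race. The terms and filter logic here are entirely hypothetical, not any platform’s actual system:

```python
BLOCKLIST = {"thinspo", "proana"}  # hypothetical blocked terms

def is_blocked(tag: str) -> bool:
    """Exact-match check against the blocklist."""
    return tag.lower() in BLOCKLIST

# Users who want the content simply mutate the spelling until the
# exact-match filter no longer recognizes it:
for tag in ["thinspo", "th1nspo", "thynspo", "thinsp0"]:
    print(f"{tag!r}: blocked={is_blocked(tag)}")
# Only the original spelling is caught; every variant sails through,
# leaving the platform chasing an ever-growing list of respellings.
```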
But there are also all sorts of false positives, where content is declared to violate child safety policies when it clearly doesn’t. Indeed, the day after the hearing I saw two examples of social media sites blocking content that they claimed was child sexual abuse material, when it was clear that neither one actually was.
The first came from Alex Macgillivray, former General Counsel at Twitter and former deputy CTO for the US government. He was using Meta’s Threads app and wanted to see what people thought of a recent article in the NY Times raising concerns about AI-generated CSAM. But when he searched for the URL of the article, which contains the string “ai-child-sex-abuse,” Meta warned him that he was violating its policies:

In response to his search on the NY Times URL, Threads popped up a message saying:
Child sexual abuse is illegal
We think that your search might be associated with child sexual abuse. Child sexual abuse or viewing sexual imagery of children can lead to imprisonment and other severe personal consequences. This abuse causes extreme harm to children and searching and viewing such material adds to that harm. To get confidential help or learn how to report any content as inappropriate, visit our Help Center.
So, first off, this shows that Meta obviously is trying to prevent people from finding such material (contrary to what various Senators have claimed), but it also shows that false positives are a very real issue.
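It’s easy to see how this kind of false positive happens. We don’t know how Meta’s system actually works, but here’s a minimal sketch of one plausible mechanism: a filter that substring-matches search queries against a term list will trip on a news URL that merely reports on the topic. Everything here (the term list, the function, the elided URL path) is hypothetical:

```python
FLAGGED_TERMS = ["child-sex-abuse"]  # hypothetical pattern list

def search_is_flagged(query: str) -> bool:
    """Flag any query containing a listed term as a substring."""
    q = query.lower()
    return any(term in q for term in FLAGGED_TERMS)

# The NY Times URL contains the string "ai-child-sex-abuse", so searching
# for the article *about* the problem trips the same filter as searching
# for the material itself (URL path elided/hypothetical):
url = "https://www.nytimes.com/.../ai-child-sex-abuse.html"
print(search_is_flagged(url))  # True -- a false positive
```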
The second example comes from Bluesky, a much smaller platform that has been (misleadingly…) accused of not caring about trust and safety issues over the roughly one year since it opened as a private beta. There, journalist Helen Kennedy said she tried to post about the ridiculous situation in which the group Moms For Liberty was apparently scandalized by the classic children’s book “In the Night Kitchen” by Maurice Sendak, which includes some drawings of a naked child in a very non-sexual manner.
Apparently, Moms For Liberty has been drawing underpants on the protagonist of that book. Kennedy tried to post side by side images of the kid with underpants and the original drawing… and got dinged by Bluesky’s content moderators.

Again, the moderation notice falsely claimed that Kennedy was trying to post “underage nudity or sexual content, which is in violation of our Community Guidelines.”
And, immediately, you might spot the issue. The image does include “underage nudity,” but it is clearly not sexual in nature, nor is it sexual abuse material. This is one of those “speed run” lessons that all trust and safety teams learn eventually. Facebook dealt with the same issue when it banned the famous “Terror of War” photo, sometimes known as the “Napalm Girl” photo, taken during the Vietnam War.
Obviously, it’s good that companies are taking this issue seriously and trying to stop the distribution of CSAM. But one of the reasons this is so difficult is that there are false positives like the two above. They happen all the time. And one of the issues with getting “stricter” about blocking content that your systems flag as CSAM is that you get more such false positives, which doesn’t help anyone.
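For anyone who wants the intuition behind that tradeoff, here’s a toy sketch with fabricated classifier scores (no real system’s numbers): lowering the flagging threshold to be “stricter” catches more real violations, but sweeps in more innocent posts along with them.

```python
posts = [
    # (classifier_score, actually_violating) -- all values fabricated
    (0.95, True), (0.80, True),
    (0.60, False),  # e.g. the Sendak drawing
    (0.55, False),  # e.g. the NYT URL search
    (0.30, False), (0.10, False),
]

def flag_counts(threshold: float):
    """Count true and false positives at a given flagging threshold."""
    flagged = [(score, bad) for score, bad in posts if score >= threshold]
    true_pos = sum(1 for _, bad in flagged if bad)
    false_pos = len(flagged) - true_pos
    return true_pos, false_pos

for t in (0.9, 0.7, 0.5):
    tp, fp = flag_counts(t)
    print(f"threshold={t}: caught {tp} violations, {fp} false positives")
# Dropping the threshold from 0.9 to 0.5 catches one more real violation,
# but also sweeps in two innocent posts.
```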
A useful and productive Senate hearing might have explored the actual challenges that the companies face in trying to stop CSAM. But we don’t have a Congress that is even remotely interested in useful and productive.
Filed Under: child safety, csam, false positives, masnick's impossibility theorem
Companies: bluesky, meta


Comments on “Content Moderation At Scale Is Impossible To Do Well, Child Safety Edition”
A useful and productive Senate hearing???
Unfortunately nowadays being a Senator is a performance gig. They’re not trying to get anything useful done, they’re just trying to get reelected and polish their resumes. If anything actually gets accomplished during their term, it will be purely accidental.
So, whatever the tech companies do about child safety will be done despite congressional action, not because of it.
Congress doesn’t appear in a vacuum. The vast majority of Americans are just too vapid and myopic to have meaningful conversations over these things. It’s becoming apparent to me that congress is actually a representation of the American public. Our neighbors actually are just this fucking stupid.
Re:
“The vast majority of Americans are just too vapid and myopic to have meaningful conversations over these things.”
This seems to be an exaggeration. I do not recall reading anything about this; is it a study, or simply an assumption based upon limited input?
Re: Re:
It’s reality in the era of sound bite politics.
You can’t expect all members of Congress to be experts in content moderation, but they should be talking to experts and moderators instead of trying to pass laws that will censor legal content in order to stop any discussions of LGBT issues or issues like abortion and contraception. Yes, a lot of the hearings are some kind of performance to pretend they can solve issues on social media services that are complex, instead of dealing with issues like school shootings and the mediocre state of education in public schools.
Re: reply
They got rid of the Experts YEARS ago.
Re:
“You can’t expect all members of Congress to be experts in”
And many people do not expect them to.
Many people do expect congressional members to use their staff budget to hire expert(s) so they can be informed about the decisions they are tasked with.
Some congressional members actually do this.
Re: 'I didn't want you to educate me, I called you so you can agree with me!'
The problem with talking to experts is that it only helps if they’re actually interested in solving a problem and/or being productive in addressing it. If all they want is to grandstand and lie through their teeth, talking to experts becomes a waste of everyone’s time, as the expert is just going to be ignored and the politician is going to get annoyed at being told that they’re wrong.
Re:
They can’t all be experts, or even have a clue, but they damn sure have opinions, don’t they?
Waiting
to see how these folks give baths to their children, WITH their clothing on.
From the story above, it’s as if people are afraid of children, not that they are going to be hurt.
Where is the responsibility? WHO is taking care of YOUR CHILDREN, besides you?
This comment has been flagged by the community.
It’s very distressing to see the site owner argue that children should not be protected by all means necessary from online and irl abuse and exploitation.
Where there’s a will there’s a way, but all we read here is that it’s too hard to protect children so oh well.
Re:
We get it, you hate the constitution.
This comment has been flagged by the community.
Re: Re:
Why do you think your Constitutional rights should take precedence over the safety and well-being of unsupervised children using the internet?
Re: Re: Re:
Why do you think it is fine for the government to infringe on people’s rights with the excuse “think of the children”?
Do you also think the government is a better parent than a child’s actual parents?
Re: Re: Re:
Why do you think KOSA is going to do anything to protect children?
Re: Re: Re:
hows the revenge porn hyman
Re: Re: Re:
Why do you ask dishonest loaded questions, child abuse supporter?
Re: Re:
Not as much as that AC hates children.
Re:
fuck off lowlife troll
Re:
That protection should come from their parents, monitoring their Internet use and using parental controls on their devices as necessary. It should not come from destroying the Internet through censorship, turning it into a child-safe playground, abolishing cryptography, and eliminating anonymity.
Re:
…hallucinated nobody mentally competent, ever.
From the Mother Jones article: “…any picture […] which depicts nudity […] and which is harmful to minors.”
And of course none of the pictures in the book In the Night Kitchen meet that definition from Florida’s obscenity legislation, unless seeing themselves naked in a mirror is harmful to minors.