Content Moderation At Scale Is Impossible To Do Well, Child Safety Edition

from the child-safety-isn't-just-about-flipping-a-switch dept

Last week, as you likely heard, the Senate held a big hearing on “child safety” where Senators grandstanded in front of a semi-random collection of tech CEOs, with zero interest in learning about the actual challenges of child safety online, what the companies had done that worked, or where they might need help. The companies, of course, insisted they were working hard on the problem, while the Senators just kept shouting “not enough,” without getting into any of the details.

But, of course, the reality is that this isn’t an easy problem to solve. At all. I’ve talked about Masnick’s Impossibility Theorem over the years: content moderation is impossible to do well at scale. That applies to child safety material as well.

Part of the problem is that much of this is a demand-side issue, not a supply-side one. If people are demanding certain types of content, they will go to great lengths to get it, and that means doing what they can to hide from the platforms trying to stop them. We’ve talked about this in the context of eating disorder content: multiple studies found that crackdowns on that content didn’t work, because users demanded it and kept coming up with new ways to talk about it that evaded whatever the sites tried to block. So there’s always the demand-side part of the equation to keep in mind.

But there are also all sorts of false positives, where content is declared to violate child safety policies when it clearly doesn’t. Indeed, the day after the hearing I saw two examples of social media sites blocking content that they claimed was child sexual abuse material, when it was clear that neither one actually was.

The first came from Alex Macgillivray, former General Counsel at Twitter and former deputy CTO for the US government. He was using Meta’s Threads app and wanted to see what people thought of a recent article in the NY Times raising concerns about AI-generated CSAM. But when he searched for the URL of the article, which contains the string “ai-child-sex-abuse,” Meta warned him that he was violating its policies:


In response to his search on the NY Times URL, Threads popped up a message saying:

Child sexual abuse is illegal

We think that your search might be associated with child sexual abuse. Child sexual abuse or viewing sexual imagery of children can lead to imprisonment and other severe personal consequences. This abuse causes extreme harm to children and searching and viewing such material adds to that harm. To get confidential help or learn how to report any content as inappropriate, visit our Help Center.

So, first off, this does show that Meta, obviously, is trying to prevent people from finding such material (contrary to what various Senators have claimed), but it also shows that false positives are a very real issue.
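To see why this kind of false positive is so easy to produce, here is a minimal sketch of a substring-based blocklist check. This is purely an illustration using assumed example terms, not a claim about how Threads actually works: a benign query (like the NY Times URL) gets flagged simply because it contains a blocked phrase.

```python
# Hypothetical illustration of a naive substring blocklist; this is NOT Meta's actual system.
BLOCKED_PHRASES = ["child-sex-abuse", "child sexual abuse"]  # assumed example terms

def is_flagged(query: str) -> bool:
    """Flag any query containing a blocked phrase, with no awareness of context."""
    q = query.lower()
    return any(phrase in q for phrase in BLOCKED_PHRASES)

# A NY Times URL about AI-generated CSAM trips the same filter as an abusive query,
# because substring matching can't tell reporting *about* abuse from the abuse itself.
print(is_flagged("https://www.nytimes.com/.../ai-child-sex-abuse.html"))  # True
```

Real systems are far more sophisticated than this, but the underlying tension is the same: any filter aggressive enough to catch determined bad actors will also catch people merely discussing or reporting on the problem.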

The second example comes from Bluesky, which is a much smaller platform and has been (misleadingly…) accused of not caring about trust and safety issues in the roughly one year since it opened up as a private beta. There, journalist Helen Kennedy said she tried to post about the ridiculous situation in which the group Moms For Liberty was apparently scandalized by the classic children’s book “In the Night Kitchen” by Maurice Sendak, which includes some drawings of a naked child in a very non-sexual manner.

Apparently, Moms For Liberty has been drawing underpants on the protagonist of that book. Kennedy tried to post side-by-side images of the kid with underpants and the original drawing… and got dinged by Bluesky’s content moderators.

[Screenshot: Bluesky’s moderation notice on Kennedy’s post]

Again, the moderation notice falsely claimed that Kennedy was trying to post “underage nudity or sexual content, which is in violation of our Community Guidelines.”

And you might immediately spot the issue. This is posting “underage nudity,” but it is clearly not sexual in nature, nor is it sexual abuse material. This is one of those “speed run” lessons that all trust and safety teams learn eventually. Facebook dealt with the same issue when it banned the famous “Terror of War” photo, sometimes known as the “Napalm Girl” photo, taken during the Vietnam War.

Obviously, it’s good that companies are taking this issue seriously and trying to stop the distribution of CSAM. But one of the reasons this is so difficult is that there are false positives like the two above. They happen all the time. And one of the problems with getting “stricter” about blocking content that your systems flag as CSAM is that you get more such false positives, which doesn’t help anyone.
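One way to picture that trade-off: imagine a flagging system that assigns each post a risk score and blocks everything above a threshold. The toy sketch below uses invented scores and labels purely for illustration; lowering the threshold (i.e., getting “stricter”) catches more genuinely abusive material, but also sweeps in more benign posts like the two examples above.

```python
# Toy illustration of the threshold trade-off. Scores and labels are invented
# for this example; they are not real moderation data.
posts = [
    ("actual abusive material", 0.95, True),
    ("actual abusive material", 0.80, True),
    ("NYT article about AI CSAM", 0.65, False),
    ("Sendak book illustration", 0.55, False),
    ("ordinary family photo", 0.30, False),
]

def count_flags(threshold: float):
    """Count true positives and false positives at a given blocking threshold."""
    tp = sum(1 for _, score, bad in posts if score >= threshold and bad)
    fp = sum(1 for _, score, bad in posts if score >= threshold and not bad)
    return tp, fp

for threshold in (0.9, 0.7, 0.5):
    tp, fp = count_flags(threshold)
    print(f"threshold {threshold}: {tp} true positives, {fp} false positives")
# Stricter (lower) thresholds block more real abuse, but also more legitimate posts.
```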

A useful and productive Senate hearing might have explored the actual challenges that the companies face in trying to stop CSAM. But we don’t have a Congress that is even remotely interested in useful and productive.

Companies: bluesky, meta


Comments on “Content Moderation At Scale Is Impossible To Do Well, Child Safety Edition”

22 Comments
Anonymous Coward says:

A useful and productive Senate hearing???

Unfortunately, nowadays being a Senator is a performance gig. They’re not trying to get anything useful done; they’re just trying to get reelected and polish their resumes. If anything actually gets accomplished during their term, it will be purely accidental.

So, whatever the tech companies do about child safety will be done despite congressional action, not because of it.

Anonymous Coward says:

You can’t expect all members of Congress to be experts in content moderation, but they should be talking to experts and moderators instead of trying to pass laws that will censor legal content in order to stop any discussion of LGBT issues, or of issues like abortion and contraception. Yes, a lot of the hearings are some kind of performance to pretend they can solve complex issues on social media services, instead of dealing with issues like school shootings and the mediocre state of education in public schools.

That One Guy (profile) says:

Re: 'I didn't want you to educate me, I called you so you can agree with me!'

The problem with talking to experts is that it only helps if the politicians are actually interested in solving a problem and/or being productive in addressing it. If all they want is to grandstand and lie through their teeth, talking to experts becomes a waste of everyone’s time: the expert is just going to be ignored, and the politician is going to get annoyed at being told they’re wrong.

