Another Look At The STOP CSAM Act

from the ai-on-the-brain dept

Amidst the current batch of child safety bills in Congress, a familiar name appears: the STOP CSAM Act. It was previously introduced in 2023, when I wrote about the threat the bill posed to the availability of strong encryption and consequently to digital privacy in the post-Roe v. Wade era. Those problems endure in the 2025 version (which has passed out of committee), as explained by the EFF, the Internet Society, and many more civil society orgs. To their points, I’ll just add that following the Salt Typhoon hack, no politician in any party has any business ever again introducing a bill that in any way disincentivizes encryption.

With all that said, the encryption angle is not the only thing worth discussing about the reintroduced bill. In this post, I’d like to focus on some other parts of STOP CSAM – specifically, how the bill addresses online platforms’ removal and reporting of child sex abuse material (CSAM), including new language concerning AI-generated CSAM. The bill would make platforms indicate whether reported content is AI – something my latest research finds platforms are not all consistently doing. However, the language of the requirement is overbroad, going well beyond generative AI. What’s more, forcing platforms to indicate whether content is real or AI overlooks the human toll of making that evaluation, risks punishing platforms for inevitable mistakes, and assumes too much about the existence, reliability, and availability of technological tools for synthetic content provenance detection.

STOP CSAM Would Make Platforms Report Whether Content Is AI-Generated

One of the many things the STOP CSAM bill would do is amend the existing federal statute that requires platforms to report apparent CSAM on their services to the CyberTipline operated by the nonprofit clearinghouse the National Center for Missing and Exploited Children (NCMEC). The 2025 version of the bill dictates several new requirements to platforms for how to fill out CyberTipline reports. One is that, “to the extent the information is within the custody or control of a provider,” every CyberTipline report “shall include, to the extent that it is applicable and reasonably available,” “an indication as to whether” each item of reported content “is created in whole or in part through the use of software, machine learning, artificial intelligence, or any other computer-generated or technological means, including by adapting, modifying, manipulating, or altering an authentic visual depiction” (i.e., real abuse material). If a platform knowingly omits that information when it’s “reasonably available” – or knowingly submits a report that “contains materially false or fraudulent information” – STOP CSAM permits the federal government to impose a civil penalty of $50,000 to $250,000.

This provision is pertinent to the findings of a paper about AI-generated CSAM that my colleague Shelby Grossman and I published at the end of May. Based on our interviews with platforms (including some AI companies), we find that platforms are generally confident in their ability to detect AI CSAM, and they’re reporting AI CSAM to the CyberTipline (as they must), but it appears platforms aren’t all consistently and accurately labeling the content as being AI-generated when submitting the CyberTipline reporting form (which includes a checkbox marked “Generative AI”). When we interviewed NCMEC employees as part of our research, they confirmed to us that they receive CyberTipline reports with AI-generated files that aren’t labeled as AI. Our paper urges platforms to (1) invest resources in assessing whether newly identified CSAM is AI-generated and accurately labeling AI CSAM in CyberTipline reports, and (2) communicate to NCMEC the platform’s policy for assessing whether CSAM is AI-generated and labeling it as such in its reports.

In short, current practice for AI CSAM seems to be to remove it and report it to NCMEC, but our sense is that most platforms are not prioritizing labeling CSAM as AI-generated in CyberTipline reports. Presently, reporting CSAM (irrespective of whether it’s AI or real) is mandatory, but the statute doesn’t give many specifics about what information must be included, meaning most parts of the CyberTipline reporting form are optional. Thus there’s currently no incentive to spend extra time figuring out whether an image is AI and checking another box (all while the never-ending moderation queue keeps piling up). STOP CSAM would change that, and would likely lead platforms to spend more time filling out CyberTipline reports about the content they’d quickly remove.

The $250,000 question is: How accurate does an “indication as to whether” a reported file is partially/wholly AI-generated have to be – and how much effort do platforms have to put into it? Can platforms rely on a facial assessment by a front-line content moderator, or is some more intensive analysis required? At what point is information about a file not “reasonably available” to the platform, even if it’s technically within the platform’s “custody or control”? Also, a lot of CyberTipline reports are submitted automatically without human review at the platform, typically where a platform’s CSAM detection system flags a hash match to known imagery that’s been confirmed as CSAM. How would this AI “indication” requirement interact with automated reporting? 
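To make the automated-reporting question concrete, here is a minimal sketch of what a hash-match reporting path can look like. Every name in it (the hash function, the hash list, the report fields) is a hypothetical stand-in of my own, not any platform’s actual pipeline or NCMEC’s real reporting API. The point is simply that nothing in this path ever examines the image itself, so it’s unclear where an “is this AI?” indication would come from.

```python
# Purely illustrative sketch of an automated hash-match reporting flow.
# All names here are hypothetical stand-ins, not any real platform's pipeline
# or NCMEC's actual CyberTipline API.

import hashlib
from typing import Optional

# Stand-in for an industry-shared list of hashes of confirmed CSAM.
KNOWN_CSAM_HASHES: set = set()


def compute_hash(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash. Real systems use PhotoDNA-style hashing
    so that near-duplicates still match; a cryptographic hash is used here only
    to keep the sketch runnable."""
    return hashlib.sha256(image_bytes).hexdigest()


def file_cybertipline_report(content_hash: str, ai_generated: Optional[bool]) -> None:
    """Stand-in for submitting a CyberTipline report."""
    print(f"reported {content_hash[:12]}..., ai_generated={ai_generated}")


def handle_upload(image_bytes: bytes) -> None:
    image_hash = compute_hash(image_bytes)
    if image_hash in KNOWN_CSAM_HASHES:
        # A match means the content comes down and a report is auto-filed,
        # with no human ever viewing the file. STOP CSAM would require an
        # "indication as to whether" the file is AI-generated, but nothing in
        # this automated path looks at the pixels, so the only honest value
        # available at this step is "unknown."
        file_cybertipline_report(content_hash=image_hash, ai_generated=None)
```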

The Reporting Requirement Goes Beyond “AI”

STOP CSAM’s new reporting provision doesn’t require the reporting only of AI-generated imagery. Read the language again: when submitting a CyberTipline report, platforms must include “an indication as to whether the apparent [CSAM] is created in whole or in part through the use of software, machine learning, artificial intelligence, or any other computer-generated or technological means, including by adapting, modifying, manipulating, or altering an authentic visual depiction.”

That goes well beyond the “Generative AI” checkbox currently included in the reporting form (which can already mean multiple different things if it’s checked, according to our interview with NCMEC). Indeed, this language is so broad that it seems like it would apply even to very minor changes to a real abuse image, like enhancing its brightness and color saturation, or flipping it into a mirror image. I’m not sure why or how a platform could reasonably be expected to know what edits have been made to an image. Plus, it’s strange to equate a fully AI-generated image with a real image that’s merely had the color saturation tweaked in a photo editing app. Yet the bill language treats those two things as the same.

This broad language would turn that “Generative AI” checkbox into a catch-all. Checking the checkbox could equally likely mean (1) “this is a digital image of a child who’s actively being abused which has been converted from color to grayscale,” (2) “this is an image from a years-old known abuse image series that’s been altered with Photoshop,” (3) “this is a morphed image of a real kid that’s been spit out by an AI-powered nudify app,” or (4) “this is a fully virtual image of an imaginary child who does not exist.” How is that useful to anyone? Until NCMEC adds more granularity to the reporting form, how is NCMEC, or law enforcement, supposed to triage all the reports with the “Generative AI” box checked? Is Congress’s expectation that platforms must also include additional details elsewhere (i.e. the free text entry box also included in the CyberTipline form)? Will they be fined if they don’t? 

It’s not a speculative concern that platforms would comply with STOP CSAM by reporting that an image has an AI element even if it merely has minor edits. In both this AI CSAM paper and our previous paper on the CyberTipline, we found that platforms are incentivized to “kick the can down the road” when reporting and let NCMEC and law enforcement sort it out. As one platform employee told us, “All companies are reporting everything to NCMEC for fear of missing something.” The burden then falls to NCMEC and law enforcement to deal with the deluge of reports of highly variable quality. Congress reinforces this incentive to over-report whenever it ups the ante for platforms by threatening to punish them more for not complying with increased reporting requirements – such as by fining them up to a quarter of a million dollars for omitting information that was “reasonably available.” The full Senate should keep that in mind should the bill ever be brought to the floor.

The Human Cost of the “Real or AI?” Determination

Although our report urges platforms to try harder to indicate in CyberTipline reports whether content is AI-generated, there are downsides if Congress forces platforms to do so. In adding that mandate to platforms’ CyberTipline reporting requirements, the STOP CSAM bill does not seem to contemplate the human factors involved in making the call as to whether particular content is AI-generated. 

As our paper discusses, there are valid reasons why platforms might hesitate to make the assessment that a file is AI-generated or convey that in a CyberTipline report. For one, platforms may not want to make moderators spend additional time scrutinizing deeply disturbing images or videos. Doing content moderation for CSAM was already psychologically harmful work even before generative AI, and we heard from respondents that AI-generated CSAM tends to be more violent or extreme than other material. One platform employee memorably called it “nightmarescape” content: “It’s images out of nightmares now, and they’re hyperrealistic.” By requiring an indication of whether reported content is AI, the STOP CSAM Act would incentivize platforms to make moderators spend longer analyzing content that’s particularly traumatic to view. Congress should not ignore the human toll of its child-safety bill.

Platforms may also fear making the wrong call: What if a platform reports an image as AI CSAM when it’s actually of a real child in need of rescue? What if the law enforcement officer who receives that report deprioritizes it for action out of the mistaken belief that it’s “just” AI, thereby letting the harm continue? Besides the weight of that mistake on platform personnel’s conscience, there’s also the specter of potential corporate liability for the error. (Platforms are supposed to be immune from liability for their CyberTipline reports, but that isn’t always the case.)

STOP CSAM would exacerbate the fear of getting the “real or AI?” assessment wrong. Platforms could incur stiff fines if a CyberTipline report knowingly omits required information or knowingly includes “materially false or fraudulent” information. That is, a platform could get fined both for failing to indicate that content is AI-generated when in fact it is, and for wrongly indicating that it is when in fact it isn’t, if the government concludes the conduct was knowing. (Even if the platform ends up getting absolved, the path to reaching that outcome will likely be costly and intrusive.)

Forcing platforms to make this assessment, while threatening to fine them for getting it wrong, could improve the consistency and accuracy of platforms’ CyberTipline reporting for AI-generated content. But it won’t come without a human cost, and it won’t guarantee 100% accuracy. There will inevitably be errors where real abuse imagery is mistakenly indicated to be AI (potentially delaying a child’s rescue), or where, as now, AI imagery is mistakenly indicated to be real (potentially wasting investigators’ time). 

To try to comply while mitigating their potential liability for errors, platforms might submit more CyberTipline reports with that “Generative AI” box checked, but add a disclaimer: that this is the platform’s best guess based on reasonably available information, but the platform is not guaranteeing the assessment’s accuracy and the assessment should not be relied on for legal purposes, etc. If platforms hedge their bets, what’s the point of making them check the box?

What’s the State of the Art for AI CSAM Detection?

Congress seems to believe that platforms know for a fact whether any given image they encounter is AI-generated or not, or at least that they can conclusively determine the ground truth. I’m not sure that’s true yet, based on our interviews for the AI CSAM paper.

A respondent from a company that does include AI labels in its CyberTipline reports told us that they still use a manual process of determining whether CSAM is AI-generated. For now, most of our respondents believe the AI CSAM they’re seeing still has obvious tells that it’s synthetic. But moderators will need new strategies as AI CSAM becomes increasingly photorealistic. Already, one platform employee said that even with significant effort, it remains extremely difficult to determine whether AI-generated CSAM is entirely synthetic or based on the likeness of a real child. 

When it comes to content provenance, Congress should take care not to impose reporting requirements without understanding the current state of the technology for detecting AI content as well as the availability of such tools. True, there are already hash lists for AI CSAM that platforms are implementing, and tools do exist for AI CSAM detection. One respondent said that general AI-detection models are often sufficient to determine whether CSAM is AI-generated; we heard from a couple of respondents that existing machine learning classifiers do decently well at detecting AI CSAM, about as well as they do at detecting traditional CSAM. However, we also heard that the results vary by tool and tend to decline when the AI content is less photorealistic. And even currently performant tools can’t remain static, since the cat-and-mouse game of content generation and detection will continue as long as determined bad actors keep exploiting advances in generative AI. 
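To illustrate why “tools exist” doesn’t fully answer the question, here’s a toy sketch (thresholds and names invented by me, not taken from any vendor’s product) of the gap between what a detection classifier actually produces, which is a probability, and what a checkbox demands, which is a yes or no.

```python
# Toy illustration of collapsing a probabilistic AI-detection score into the
# binary "Generative AI" indication. The 0.9 threshold is invented for
# illustration and does not reflect any real tool's performance.

from dataclasses import dataclass


@dataclass
class ClassifierResult:
    # Hypothetical score: 0.0 = likely camera-original, 1.0 = likely synthetic.
    ai_probability: float


def ai_indication(result: ClassifierResult, confidence: float = 0.9) -> str:
    """Return one of the three honest answers a platform actually has available."""
    if result.ai_probability >= confidence:
        return "indicate AI-generated"
    if result.ai_probability <= 1 - confidence:
        return "indicate not AI-generated"
    # The middle band is where performance degrades (less photorealistic content,
    # tool-to-tool variation) and where a forced yes/no discards the uncertainty.
    return "uncertain: no reliable indication"


# A borderline score yields the one answer the checkbox can't express.
print(ai_indication(ClassifierResult(ai_probability=0.62)))
```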

There’s also the issue of scale. Congress shouldn’t expect every entity that reports CSAM to NCMEC to have the same resources as a massive tech company that submits hundreds of thousands of CyberTipline reports annually. Implementing AI CSAM detection tools might not be appropriate for a small platform that submits only a handful of reports each year and does everything manually. This goes back to the question of how much effort a platform must put into indicating whether reported material is AI, and how accurate that indication is expected to be. Determining conclusively whether highly realistic-looking material is real or AI-generated is a challenge even for big platforms, much less small ones. Congress should not lose sight of that.

Conclusion

The reboot of STOP CSAM is just one of several bills introduced in this Congress that involve AI and child safety, of which the TAKE IT DOWN Act is the most prominent. Having devoted most of my work over the past two years to the topic of AI-generated CSAM, I find it gratifying to see Congress pay it so much attention. That said, it’s dismaying when legislators’ alleged concern about child sex abuse manifests as yet another plan to punish online platforms unless they “do better,” without reckoning with the counterproductive incentives that creates, the resources available for compliance (especially to different-size platforms), or the technological state of the art. In that regard, unfortunately, the new version of STOP CSAM is the same as the old.

Riana Pfefferkorn (writing in her personal capacity) is a Policy Fellow at the Stanford Institute for Human-Centered AI.



Comments on “Another Look At The STOP CSAM Act”

That Anonymous Coward (profile) says:

waves hi

So what underlies all of this is the long running belief, that politicians can’t seem to shake, that tech can just tech harder to solve all of society’s problems.

We just told them to tech harder & we declared mission accomplished! Now we can focus on important things like passing laws that use your skin color to decide how much of a vote you have in trumps ‘merica.

CSAM is horrific but what is more horrific is the willful ignorance about it.

They believe some people are too good to look at it, my long running thread of busted conservative pedos / pulpit pedos blows that out of the water.

A politician recently managed to get a law changed so that a family member of his wouldn’t be charged with molesting a child. No thought about how horrible that change will end up being as now 13 yr olds can be ‘asking for it’.
In that state are graphic images of 13 yr olds now legal to possess?

AI can’t manage to not tell an addict to do a little meth as a treat for trying to stay sober, but somehow it is capable of identifying whether new CSAM is AI or not now?

The people who have to look at these things are not paid enough & definitely not given enough mental health care to deal with it, but politicians don’t care about that. They care about getting a 6 second soundbite for Faux or Newsmax where they can claim they solved the problem… again. Because the public have really horrible memories & they’ve declared they solved this so many times with so many similar ‘we passed a law, it’s up to someone else to make it happen’ efforts that have failed.

Eventually it would be nice if we all lived in 1 reality, where we don’t have AI’s on the verge of going HAL 9000 who can instantly know everything about an image in a nanosecond and decide if it’s CSAM that is real or generated & flag it. Y’all live in a nation where people can’t handle seeing the tit of a statue or a marble penis without 17 trigger warnings… and somehow we can leap to perfect AI CSAM detection backed by humans who will have to look at the absolute dregs of human suffering & cruelty for a paycheck that ignores the dangers.

This battle has been going on for a VERY long time (trust the immortal on this one), there isn’t a solution beyond removing humans from the planet, then there would never be CSAM made ever again.

This law just makes everything shittier, will make sure there are penalties for not being able to do the impossible, and still kids are going to be molested, filmed, pimped out while politicians are patting themselves on the back.

ECA (profile) says:

Kinda funny

Shorting for AI, as if it matters?
This is Just another way to FORCE the internet corps to DO THE WORK that the Gov. Should be doing/assisting with.

The reality? They dont want the reality.
AS IF’ someone wanted to hide something on the net, it WOULD NOT BE HARD. 90% of the Most often Caught, is from Family pictures.

Direct messaging IS NOT SUPPOSED TO BE COVERED.(dic Pics) Which means that sending Any pic directly are NOT Noticed as THAT IS INVADING PRIVACY.

Long ago when the internet was created, there WERE HOLES in it. And they are finding it HARD to Fix those holes. BUT IT CAN BE DONE.
#1, is email, and the tag on email can include every site it goes thru. But does your Email get Scanned for Porn? Isnt that a privacy issue?

How do you get around Privacy while trying to PROTECT the children? You CANT.
So the people they are Catching Arnt trying to hide it.
How many Arrests based on CSAM? Internet reports or MOST ANY other thing about Child Abuse?

Epstein, is a perfect example. How long was he doing this? WHO was around him and Taking advantage of the situation??
Think of the requirements to Cause Harm to Children, That arnt Family based. This is NOT a poor mans game. And its a Very secret game, that has been happening Since God created, What ever god created.

Anonymous Coward says:

“we found that platforms are incentivized to ‘kick the can down the road’ when reporting and let NCMEC and law enforcement sort it out.”

It should be not only incentivized, but the law. A reporting party should report immediately, and not be doing detective work. Supposedly the style of law enforcement in the US involves law enforcement arrogating any and all law enforcement and investigative activity unto itself. This frequently makes the most sense.

Should one wait on reporting a body until one determines whether it fell, jumped, or was pushed off a roof?

ECA (profile) says:

Re: IT NOT THEIR JOB

Is what you are saying.
What are the Rule/laws of Looking up Private info on people?
As a company its a No-No.
As a Private person tracking another Human, Its a No-no and Called Stalking.

So the Gov. wants to let the States handle things, and Now they pass things onto the Citizens to do?? Crossing 1 border limits everything you can Do as a citizen. And anything NOT local will put you in jail.
Locally, IF’ you figure out who it is, AND CAN PROVE IT, Goto the police and Show your Proof.
They wont do anything until the STATE AG tells them to.. But you did your best.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re:

The problem is that we’ve seen the other side of what you’re suggesting in the US, such as the recent example in a Techdirt article of 11 year old school children getting charged with making threats for making a bad joke of a video about getting shot in a school shooting. The schools often defer to law enforcement for issues that don’t rise to criminal acts, but law enforcement seems only too happy to handcuff, arrest, charge, and traumatize children who can’t consent to a legal contract over the loosest interpretation of the available information. Some prosecutors have charged minors with crimes relating to producing child porn because they sexted with their peers. One child predator cop even committed suicide after failing to secure pictures of a minor’s genitalia in order to confirm he sent his own dick pics because the cop got caught pursuing inappropriate relations with young boys. Parents have been accused of creating CSAM by Google and had their accounts shut down during the pandemic when nurses and doctors asked for pictures of their children’s symptoms.

There are too many horror stories of how people in large systems react with brute force against situations that require nuance. Do you want to get accused due to a misunderstanding which might cost you your job or freedom or marriage even if you’re ultimately proven innocent?

That One Guy (profile) says:

'You want me to WHAT as part of my job? Hard pass, absolutely not.'

Good luck keeping the people doing moderation work if they’re now going to be required by law to spend copious amounts of time looking at CSAM so they can determine whether it’s AI-created or not before they report it; unless they start recruiting from the non-trans, non-drag people that keep getting caught with that sort of content, putting that in the job description is likely to cause a mass exodus of moderators, which is only going to benefit those posting it.

Anon says:

Huh?

Identify an AI altered picture of an older real image? Really? I would be very worried if anyone outside of specific law enforcement departments could reliably recognize “older real CSAM”. How do they know?

Unless the picture shows someone who has six fingers or 3 hands, how reliable is any determination that a picture is pure AI or even altered by AI?

Anonymous Coward says:

Here is my concern: Too many people claim “cartoons” include children in sexual situations when that is incorrect, because they assume specific body types that real-life women may have belong to children. Don’t have large breasts? Child. Short? Child.
Guess what, here in Asia, there are many short, small-chested women who are nothing close to children, but because their body types match certain criteria, they are labeled as such.

I understand the need to protect children, to prevent sexual exploitation of children, etc, but one issue I do take, especially around the AI space, is how quickly people are to label something as CSAM that quite obviously is not. Seriously, it is actually very stupid. My own wife would fit into the category of “Oh, she looks like a child” because she is very short, small chested, and even has freckles.

I think we need to be very, very careful about these laws, and I worry about the risks with them, especially to places like Asia.

markatlarge (user link) says:

CSAM False Positives Are Real and Harmful

Excellent post!

I do take issue with this statement: Platforms are “generally confident in their ability to detect AI CSAM.”

That’s simply not true. I built an app called Punge to detect NSFW images on-device. While I was benchmark testing my app on a publicly available dataset cited in academic papers, my Google account was permanently suspended and 136k+ files were erased. I’ve documented my story here, including a verifiable filename:

https://medium.com/@russoatlarge_93541/googles-ai-surveillance-erased-130k-of-my-files-a-stark-reminder-the-cloud-isn-t-yours-it-s-50d7b7ceedab

The detection process itself is deeply flawed and open to manipulation. In this piece, I explain how it works and how researchers have shown it can be weaponized: https://medium.com/@russoatlarge_93541/weaponized-false-positives-how-poisoned-datasets-could-erase-researchers-overnight-188810395602

The current laws and methods don’t reliably catch true offenders. Instead, they encourage over-flagging — with devastating consequences for the falsely accused.
