Researchers Who Built Similar System Explain Why Apple's CSAM Scanning System Is Dangerous
from the it's-not-good dept
Jonathan Mayer, a Princeton University professor and former chief technologist at the FTC, is one of the smartest people I know. Every time I’ve spoken with him I feel like I learn something. He’s now written a quite interesting article for the Washington Post noting how he, and a graduate researcher at Princeton, Anunay Kulshrestha, actually built a CSAM scanning system similar to the one that Apple recently announced, which has security experts up in arms over the risks inherent to the approach.
Mayer and Kulshrestha note that while Apple says that people worried about its system are misunderstanding it, they are not misunderstanding anything. They know exactly what they're talking about, and they still say the system is dangerous.
We wrote the only peer-reviewed publication on how to build a system like Apple's — and we concluded the technology was dangerous. We're not concerned because we misunderstand how Apple's system works. The problem is, we understand exactly how it works.
Mayer and Kulshrestha are certainly not making light of the challenges and problems associated with stopping CSAM. That is why they started their project: to see whether there was an effective way to identify CSAM even in end-to-end encrypted systems. They built a system that did just that, and what they found is that the risks are simply too great.
We were so disturbed that we took a step we hadn't seen before in computer science literature: We warned against our own system design, urging further research on how to mitigate the serious downsides. We'd planned to discuss paths forward at an academic conference this month.
That dialogue never happened. The week before our presentation, Apple announced it would deploy its nearly identical system on iCloud Photos, which exists on more than 1.5 billion devices. Apple's motivation, like ours, was to protect children. And its system was technically more efficient and capable than ours. But we were baffled to see that Apple had few answers for the hard questions we'd surfaced.
The potential dangers that Mayer and Kulshrestha identified are exactly what many had warned about when Apple announced its plans:
After many false starts, we built a working prototype. But we encountered a glaring problem.
Our system could be easily repurposed for surveillance and censorship. The design wasn't restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.
A foreign government could, for example, compel a service to out people sharing disfavored political speech. That's no hypothetical: WeChat, the popular Chinese messaging app, already uses content matching to identify dissident material. India enacted rules this year that could require pre-screening content critical of government policy. Russia recently fined Google, Facebook and Twitter for not removing pro-democracy protest materials.
We spotted other shortcomings. The content-matching process could have false positives, and malicious users could game the system to subject innocent users to scrutiny.
That’s already pretty damning. But there’s some other even more damning information that has come out as well. As we noted in our earlier posts (and as Mayer and Kulshrestha noted in their research), the risk of false positives is extremely high. And late last week, the hypothetical became a lot more real. Someone reverse engineered the NeuralHash algorithm that Apple is using and put it on GitHub.
It did not take long for people to point out that, first of all, there are collisions with totally unrelated images:
Well that didn't take long.
"Can you verify that these two images collide?"
"Yes! I can confirm that both images generate the exact same [NeuralHash] hashes on my iPhone. And they are identical to what you generated here."
— Kenn White (@kennwhite) August 18, 2021
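Why does a collision matter so much? Because a matching system like this only ever sees hash digests, not the images themselves. The sketch below is hypothetical (a stand-in `neural_hash` placeholder rather than Apple's actual model, and a made-up digest), but it shows the core problem: once two different images produce the same digest, the matcher cannot tell an innocent image from a blocklisted one.

```python
# Hypothetical sketch: how a hash-matching pipeline treats a collision.
# `neural_hash` is a stand-in for a perceptual hash function; Apple's
# real NeuralHash is a neural network that outputs a 96-bit digest.

def neural_hash(image_bytes: bytes) -> str:
    # Placeholder: a real perceptual hash derives this from image
    # content, so visually similar images get similar digests.
    ...

def is_flagged(image_hash: str, blocklist: set[str]) -> bool:
    # The matcher compares digests only, never pixels, so a colliding
    # innocent image is indistinguishable from a genuine hit.
    return image_hash in blocklist

blocklist = {"59a34eabe31910abfb06f308"}              # known-bad digest (made up)
innocent_colliding_hash = "59a34eabe31910abfb06f308"  # same digest, different image

print(is_flagged(innocent_colliding_hash, blocklist))  # prints True
```

The digest and function names here are illustrative only; the point is structural, and it is exactly the false-positive risk the tweet above demonstrates.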
Now, you might say that since that second image is not really an image, maybe it’s not that big a deal? Well, about that…
— Anish Athalye (@anishathalye) August 19, 2021
And, of course, one of the issues with any hash-based system is that subtle changes to an image can produce a totally different hash, making it easy for some to simply route around the scanning. Apple had suggested its system could defeat that, but…
Do you see a difference between these two images? Apple's NeuralHash thinks they differ on 52 of 96 bits (ed570844756690de887ceec3 vs 2d574044756690de887cfe43).
No I won't tell you how I did it, but it runs on CPU in about 10 seconds. NeuralHash is trivial to evade. pic.twitter.com/0D2nsjvUVJ
— Rich Harang (@rharang) August 19, 2021
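The "52 of 96 bits" figure in that tweet is a Hamming distance: the number of bit positions at which two 96-bit digests disagree. A matcher generally only accepts identical (or nearly identical) digests, so pushing the distance that high while keeping the image visually unchanged defeats the match. A minimal sketch of the distance computation, using made-up digests rather than the ones from the tweet:

```python
# Sketch: Hamming distance between two hex-encoded hash digests.
# The digests below are made up for illustration.

def hamming_bits(hex_a: str, hex_b: str) -> int:
    """Number of bit positions where the two digests differ."""
    return bin(int(hex_a, 16) ^ int(hex_b, 16)).count("1")

a = "0f0f0f0f0f0f0f0f0f0f0f0f"  # 24 hex chars = 96 bits
b = "0f0f0f0f0f0f0f0f0f0f0f00"  # last nibble changed

print(hamming_bits(a, b))  # prints 4
```

Perturbing an image so that more than half of those 96 bits flip, as the tweet describes, means the evaded copy shares essentially nothing with the blocklisted digest.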
This is part of the reason that we highlighted earlier that security researchers are so up in arms about this. Apple seemingly ignored so much of the research and conversations that were happening about these approaches, and just barged right in announcing that it had a solution without exploring the tradeoffs and difficulties associated with it — leaving that for security experts to point out afterwards.
Apple is trying to downplay these findings, saying that it expected the collisions, at least, and that its system would also do a separate server-side hash comparison that would stop the false collisions. Though, as Bruce Schneier points out, if this was “expected,” then why wasn’t it discussed in the initial details that were released? Similarly, I have yet to see a response to the flip side issue: changing an image in a way that fools NeuralHash while it still looks the same.
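Apple's claimed mitigation amounts to requiring two independent hash functions to agree before an image is treated as a match, which filters out collisions against either function alone. The sketch below is a hypothetical illustration of that structure (toy hash functions, not Apple's actual pipeline), and note what it does not fix: it does nothing for the evasion problem, where an image matches neither stage.

```python
# Hypothetical sketch of a two-stage match: an image only counts as a
# hit if two independent hash functions both place it on a blocklist.
# `client_hash` and `server_hash` are toy stand-ins for illustration.

def two_stage_match(image_id, client_hash, server_hash,
                    client_blocklist, server_blocklist):
    if client_hash(image_id) not in client_blocklist:
        return False  # evaded images never reach the server check
    # A collision against the first hash alone is filtered out here,
    # unless an attacker can collide both functions simultaneously.
    return server_hash(image_id) in server_blocklist

client_hash = lambda x: f"c-{x % 7}"    # toy hash
server_hash = lambda x: f"s-{x % 11}"   # independent toy hash

client_blocklist = {client_hash(42)}
server_blocklist = {server_hash(42)}

print(two_stage_match(42, client_hash, server_hash,
                      client_blocklist, server_blocklist))  # prints True
print(two_stage_match(49, client_hash, server_hash,
                      client_blocklist, server_blocklist))  # prints False
```

In the second call, 49 collides with 42 under the toy client hash (both reduce to the same value mod 7) but not under the server hash, so the false positive is caught, which is the behavior Apple is describing.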
I know Apple keeps insisting that it has thought through all of this, but it doesn’t seem to have thought through how the security community would react, and its after-the-fact scrambling is not exactly reassuring.