Report: Client-Side Scanning Is An Insecure Nightmare Just Waiting To Be Exploited By Governments

from the 14-out-of-14-cybersecurity-experts-agree dept

In August, Apple declared that combating the spread of CSAM (child sexual abuse material) was more important than protecting millions of users who’ve never used their devices to store or share illegal material. While encryption would still protect users’ data and communications (in transit and at rest), Apple had given itself permission to inspect data residing on people’s devices before allowing it to be sent to others.

This is not a backdoor in the traditional sense. But it can be exploited just like an encryption backdoor if government agencies want access to devices’ contents, or if they mandate that companies like Apple do more to halt the spread of other content governments have declared troublesome or illegal.

Apple may have implemented its client-side scanning carefully after weighing the pros and cons of introducing a security flaw, but there’s simply no way to engage in this sort of scanning without creating a very large and slippery slope capable of accommodating plenty of unwanted (and unwarranted) government intercession.

Apple has put this program on hold for the time being, citing concerns raised by pretty much everyone who knows anything about client-side scanning and encryption. The conclusions that prompted Apple to step away from the precipice of this slope (at least momentarily) have been compiled in a report [PDF] on the negative side effects of client-side scanning, written by a large group of cybersecurity and encryption experts (Hal Abelson, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt Blaze, Jon Callas, Whitfield Diffie, Susan Landau, Peter G. Neumann, Ronald L. Rivest, Jeffrey I. Schiller, Bruce Schneier, Vanessa Teague, and Carmela Troncoso). (via The Register)

Here’s how that slippery slope looks. Apple’s client-side scanning may be targeted, utilizing hashes of known CSAM images, but once the process is in place, it can easily be repurposed.

Only policy decisions prevent the scanning expanding from illegal abuse images to other material of interest to governments; and only the lack of a software update prevents the scanning expanding from static images to content stored in other formats, such as voice, text, or video.
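At a technical level, the core of such a scanner is just a set-membership check against a list of hashes supplied by someone else. Here is a minimal, hypothetical sketch (deliberately simplified; Apple’s actual design uses a perceptual “NeuralHash” and a blinded matching protocol, not plain cryptographic hashes) showing why the mechanism itself imposes no limit on what gets targeted: the blocklist is just data.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Simplification: real client-side scanners use perceptual hashes
    # that survive re-encoding and resizing; a cryptographic hash only
    # matches byte-identical files.
    return hashlib.sha256(data).hexdigest()

# The targeting list is opaque data supplied by a third party.
# Nothing in the code cares whether it describes CSAM, terrorist
# imagery, or a banned political cartoon.
blocklist = {fingerprint(b"known-prohibited-image-bytes")}

def allowed_to_send(data: bytes) -> bool:
    """Client-side check run before the item leaves the device."""
    return fingerprint(data) not in blocklist
```

Repurposing such a scanner requires no code change at all, only a different blocklist, which is exactly the report’s point that “only policy decisions” constrain what gets scanned.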

And if people don’t think governments will demand more than Apple’s proactive CSAM efforts, they haven’t been paying attention. CSAM is only the beginning of the list of content governments would like to see tech companies target and control.

While the Five Eyes governments and Apple have been talking about child sex-abuse material (CSAM) —specifically images— in their push for CSS, the European Union has included terrorism and organized crime along with sex abuse. In the EU’s view, targeted content extends from still images through videos to text, as text can be used for both sexual solicitation and terrorist recruitment. We cannot talk merely of “illegal” content, because proposed UK laws would require the blocking online of speech that is legal but that some actors find upsetting.

Once capabilities are built, reasons will be found to make use of them. Once there are mechanisms to perform on-device censorship at scale, court orders may require blocking of nonconsensual intimate imagery, also known as revenge porn. Then copyright owners may bring suit to block allegedly infringing material.

That’s just the policy and law side. And that’s only a very brief overview of clearly foreseeable expansions of CSS to cover other content, which also brings with it concerns about it being used as a tool for government censorship. Apple has already made concessions to notoriously censorial governments like China’s in order to continue to sell products and services there. Additional demands will obviously be made if Apple implements scanning that can be exploited to locate and censor critics of the government.

There’s plenty of bad stuff on the technical side, too. CSS is pretty much malware, the report says:

CSS is at odds with the least-privilege principle. Even if it runs in middleware, its scope depends on multiple parties in the targeting chain, so it cannot be claimed to use least-privilege in terms of the scanning scope. If the CSS system is a component used by many apps, then this also violates the least-privilege principle in terms of scope. If it runs at the OS level, things are worse still, as it can completely compromise any user’s device, accessing all their data, performing live intercept, and even turning the device into a room bug.

CSS has difficulty meeting the open-design principle, particularly when the CSS is for CSAM, which has secrecy requirements for the targeted content. As a result, it is not possible to publicly establish what the system actually does, or to be sure that fixes done in response to attacks are comprehensive. Even a meaningful audit must trust that the targeted content is what it purports to be, and so cannot completely test the system and all its failure modes.

Finally, CSS breaks the psychological-acceptability principle by introducing a spy in the owner’s private digital space. A tool that they thought was theirs alone, an intimate device to guard and curate their private life, is suddenly doing surveillance on behalf of the police. At the very least, this takes the chilling effect of surveillance and brings it directly to the owner’s fingertips and very thoughts.

While the report does offer some suggestions on how to make scanning less exploitable, the downsides are too numerous to conclude this can somehow be done safely. Given how many intrusive surveillance programs have already been justified with concerns about terrorism or the spread of illicit material, CSS, no matter how it is implemented, will become a tempting tool for governments to exploit.

In a world where our personal information lies in bits carried on powerful communication and storage devices in our pockets, both technology and laws must be designed to protect our privacy and security, not intrude upon it. Robust protection requires technology and law to complement each other. Client-side scanning would gravely undermine this, making us all less safe and less secure.

Despite this comprehensive report warning against the implementation of client-side scanning, there’s a chance Apple may still roll its version out. And once it does, the pressure will be on other companies to do at least as much as Apple is doing to combat CSAM. The only upside is that if governments decide scanning should be used for purposes other than those Apple intends, Apple has the power to shut its system down.



Comments on “Report: Client-Side Scanning Is An Insecure Nightmare Just Waiting To Be Exploited By Governments”

Anonymous Coward says:

Every government is doing the same thing! They’re not interested in catching thieves or terrorists; they’re all more interested in surveilling ordinary folks, then, if able, blaming them for doing nothing and making themselves look good! What I really don’t understand is why the hell a world war was fought to prevent Hitler from doing the exact same thing governments are doing today, watching everything we ordinary people do, say, and read, while keeping the few most wealthy and powerful people in the positions they’ve held for decades. We’re slaves, with no rights, because of the way these people have had them removed, eroded, or whatever, backed by those in the most favorable positions to do so! Basically, the planet is completely fucked and the people are right behind it, in just as much total crap!!

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: I have something to tell you, and it will make you sad.

The robot censor doesn’t care about false positives, because it doesn’t care about anything.

The politicians won’t care about false positives, because at worst, the false positives are what they campaign against Next Term.

The copyright holders won’t care about false positives (when the system devolves to letting them in) because there won’t be organized blow-back against them.

The Intelligence Community won’t care about false positives because they justify increased funding.

Uriel-238 (profile) says:

Re: False positives

A lot of porn-screening AI finds sand dunes very sexy. Meanwhile, a research team figured out how to trick scanning software into misidentifying images by adding an overlay of something else.

Both of these specific issues may or may not have been compensated for by later tech, but it’s going to be an ongoing race.

And if the government doesn’t care about false positives, well, rather than using the overlay to make CSAM look to AI like a sports car, one can make a sports car look to AI like CSAM, and then make sure the image finds its way to the phone of political enemies.
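The false-positive problem the commenters describe has a simple structural cause: perceptual hashes deliberately throw away most of an image’s information so that near-duplicates still match, which also means visibly different images can share a hash. A toy sketch illustrates this (a hypothetical average-hash over tiny made-up “images”; real systems like PhotoDNA or NeuralHash are far more sophisticated but share the collision property):

```python
def average_hash(pixels):
    # Toy perceptual hash: one bit per pixel, set if the pixel is
    # brighter than the image's mean brightness.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(p > mean for p in flat)

# Two visibly different 2x2 "images" (grayscale values 0-255) ...
image_a = [[200,  90], [ 80, 210]]
image_b = [[255,  10], [  5, 250]]

# ... that collide: both reduce to the same bright/dark bit pattern.
assert average_hash(image_a) == average_hash(image_b)
```

Adversarial research goes further and deliberately crafts such collisions, which is what makes the “sports car that hashes like CSAM” attack plausible.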

Anonymous Coward says:

A smartphone is now as powerful as a laptop from 2009; it could be used to spy on someone’s browsing, audio, and video data. If client-side spying is allowed, there will be a rush of governments asking for data, and the erosion of user privacy will be disastrous, since for many users a phone is their only connection to the Web. We will be in an Orwellian world where the government has data on every person’s activity and location. Minorities, protestors, and human rights activists will be targeted first; one example: in Russia, being gay or LGBT is almost illegal.

This comment has been deemed insightful by the community.
Keroberos (profile) says:

This is a stupid idea that needs to be taken out to a deserted field in the middle of nowhere, beaten to death with a baseball bat, buried in a shallow grave, and never spoken of again. If implemented it will not end well. In repressive regimes it will be cracking down on anti-government materials, here in the US it will be tracking down those pesky terrorists, and everywhere will have copyright holders needing to protect their precious, precious intellectual property.

Scary Devil Monastery (profile) says:

Re: Re: Re:

"If Apple implements this, the other phone manufacturers will follow like a string of baby chicks."

Fortunately it’s not that easy. Most other OEMs ditched their proprietary OSes in favor of Android, and it’s just a tad more difficult to implement this sort of scheme on an OS which assumes the owner has, or can easily obtain, /root.

Anonymous Coward says:

Re: Re: Re: Re:

an OS which assumes the owner has or easily can obtain /root.

Yeah, sure. If you also want to disable half of your phone’s features (KNOX on Samsung devices, NFC payments, the camera on something else…), or use anything that requires a SafetyNet attestation (most banking apps, the McDonald’s app, random video apps, random games, pretty much any employer-mandated app, etc.), go right ahead: root that device. Just be prepared to exploit your device like a hacker would to be able to do so, and/or buy an expensive and not-well-publicized “developer edition” phone, and/or buy an international phone for the privilege. Its value will permanently drop once you unlock its bootloader.

Why? Because many of the “off the shelf” devices around the US, even the ones you buy from places like Walmart and Best Buy without a carrier subsidy, are not rooted by default. They have no built-in mechanism to allow rooting, and they ship with the bootloader locked at the request of cell carriers, who may or may not allow you to unlock it with a registration form and/or account. You’ll need to unlock the bootloader to install a root binary on a device that ships without one. (Unless you want to exploit a kernel vulnerability on your phone to get root, which will be removed with the next system update, along with the kernel vulnerability you used to install it.)

Before you come at me with "You can use the mobile version of the bank’s website", why should I need to? The SafetyNet attestation detects root and my device is forbidden access via the app because it’s "insecure". That same device is still capable of using the web browser version, hell that "insecure" device will still be used for the SMS text message that contains my 2FA code needed to login via the web browser. (SMS which is insecure for these purposes in all instances regardless, but lets focus on the SafetyNet issue here.)

So which is it? Is my device "insecure" and the web browser should honor SafetyNet’s prohibition of access, or is SafetyNet a bunch of crap and the app should work regardless?

Because you know what will happen if you choose the former: suddenly SafetyNet is required for web browsing everywhere. After all, it verifies things like the device’s IMEI response, which is perfect for trackers, and it’s cryptographically signed by a per-device key, verified by Alphabet/Google, stored on a secure processor at the factory, that cannot be changed by the end user. That means it cannot be easily spoofed for privacy purposes.

Scary Devil Monastery (profile) says:

Re: Re: Re:2 Re:

"Why? Because many of the "off the shelf" devices, even the ones you buy from places like Walmart and Best Buy without a carrier subsidy, around the US are not rooted by default."

Is this another of those “only in America” issues that keep cropping up? Because in the rest of the world, at least, assuming root is such an important part of Android that it’s hard to imagine an OEM or carrier locking it down without getting reviews bad enough to sink sales.

"If you also want to disable half of your phone features…"

Most of those impacted you can really go without. Ordinary core android without bells and whistles will do quite nicely if what you need is credential verification, for instance.

"and / or buy an international phone for the privilege."

It is an “only in America” issue, then. That, to me, points toward the US marketplace being the unhealthy bit, not toward Android somehow being flawed in the way you’re trying to point out.

"So which is it? Is my device "insecure" and the web browser should honor SafetyNet’s prohibition of access, or is SafetyNet a bunch of crap and the app should work regardless?"

You’re trying to argue a completely different issue there.

What I posited from the start is that Android will not be as easy to force this shit unto as the iPhone is, for a multitude of reasons.
One such reason is the assumption that the owner of the device owns that device.
Another being that you’re going to have a tough sell telling Samsung, Xiaomi, ZTE, Huawei, Moto, Lenovo, and the other fifty-odd major Android phone OEMs that they all need to ship phones with hardware hooks giving the OEM the dubious responsibility of going through the private and confidential information of their customer base.

This sort of shit only works because enough of the Apple crowd are cultists so beholden to Steve Jobs’ old view that they should leave everything to Apple that even this step won’t make them cease their ritual of bringing a sleeping bag and taking a day off to be first in line every time a new iDevice is on offer.

realitymonster says:

This system isn't the problem

Ugh, look, this is actually kind of a bad take.

Photos already does client-side scanning of your library. It would be easier to build a back door into that than to repurpose a system built for CSAM scanning, whose sources are theoretically protected. If China is looking for pictures of Winnie the Pooh, it’s a million times easier for Apple to train its photo-scanning system on existing Winnie the Pooh pictures than to find a way to integrate them into its CSAM system and then generate a whole bunch of strikes against the account so the police get called.

Everyone keeps talking about how foreign governments will demand that Apple add pictures that they want to use to trump up charges against people, or to hunt down dissidents. Well, bad news:

  1. Those governments can already compel Apple to do anything they want, irrespective of these image databases. They’ll just tell Apple to hand over the unencrypted backup and they’ll scan it (or modify it) directly.
  2. In the case of China, they already have all the users’ data in servers that are located in China. Why waste time scanning on the phone and having it report back?
  3. If a government is going to gin up some fake crime and throw someone in jail, they don’t need Apple there to do it. They’ll simply confiscate the victim’s phone and CLAIM that they found images on it. Due process doesn’t matter to them, so why spend time pretending that it does?

I’m not saying that Apple’s client-side scanning system is good or without problems; it’s that it makes no sense to use it even if it does exist. For a government that’s a bad actor, data security doesn’t matter. This is like the XKCD about the wrench: governments that don’t care about your digital rights will also beat you with a wrench until you confess anyway.

This discourse around Apple’s system pretends like it’s the most obvious way to scan someone’s photo library and find incriminating data and it absolutely isn’t.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: This system isn't the problem

The Apple system is a simple hash matcher, not an AI system. It has full access to the system, and as such defeats end-to-end encryption, as it could exfiltrate private keys with a trivial modification.

To put it simply, it is Apple showing that if they want to, they have root access to phones, and can do whatever they want with that access.

Anonymous Coward says:

Re: Re: Re: This system isn't the problem

All OS vendors have potential root access to devices against the owners’ wishes. There is an implied trust that they will not exercise that capability; breaking that trust should cost the vendor sales, and Apple has shown itself willing to break that trust. Open-source OSes like Linux rely on the same trust, but can be forked if someone betrays it (just ask Oracle about OpenOffice, forked into LibreOffice, and MySQL, forked into MariaDB, because Oracle lost the trust of a large portion of the user and developer base).

This comment has been deemed funny by the community.
That One Guy (profile) says:

Totally and absolutely unbelievable

I find it difficult to take the article seriously as it seems to be based upon a flawed premise, namely that governments would ever ask for more once they’ve got what they wanted. I mean really, I’m sure once they have one company scanning for a particular kind of content they’ll be perfectly content with that, what kind of greedy, self-serving government would take advantage of the new door Apple just provided them to ask for even more?

That One Guy (profile) says:

Re: Re: Totally and absolutely unbelievable

For the most part I trust TD’s readers to be smart enough to spot the sarcasm without it having to be explicitly pointed out for them with the only expected exceptions being new readers who might not be familiar with my style of commenting or the deranged trolls that infest the site and who only see what they want to.

Anonymous Coward says:

Fundamentally, "intellectual property" is the sister of the peculiar institution of slavery in the constitution. It defines part of everyone’s mind and actions to be someone else’s property. And when it is extended to software, it says that what "your" machine runs is controlled and modifiable only by someone else, working against your interests. It makes you a slave and everything you own is owned by somebody else.

As we see Apple scanning your files for the police and Windows demanding mandatory password escrow, we are seeing a basic truth: the days of pretending that "proprietary software" is usable, for any purpose, are coming to an end.
