Apple's New Scanning Tools Raising More Concerns, Even Inside Apple

from the take-a-step-back dept

Last week, we wrote about our concerns with Apple’s newly announced scanning efforts, which the company claimed were meant to protect children. Lots of security experts raised concerns about how this was being rolled out — and none of the complaints were meant to take away from the very real and legitimate concerns about child sexual abuse. Security guru Alex Stamos wrote one of the most thoughtful threads about the whole thing, noting (as with so many of these issues) that there are no easy answers here. I highly recommend you read the entire thread, but here’s a little snippet:

As with so many debates over nuanced tech policy issues with no easy answers, a big part of the problem here, as Stamos notes in his thread, is that there are already tons of conversations happening about the nuances and tradeoffs. Apple’s approach is not as disastrous and dangerous as it could have been (clearly the team at Apple put a lot of thought into minimizing many, though not all, of the risks here), but it was still rolled out without talking to the many, many people who have been trying to find a reasonable balance. And that messes a lot of stuff up.

Stamos, along with computer science professor/security guy Matt Green, has now published a good piece in the NY Times highlighting their concerns. The article notes there is less concern about the iMessage child safety features (Apple’s initial description of those seemed much more concerning, but the details show why it’s not that bad). But the photo scanning on the phone raises a lot of concerns:

But the other technology, which allows Apple to scan the photos on your phone, is more alarming. While Apple has vowed to use this technology to search only for child sexual abuse material, and only if your photos are uploaded to iCloud Photos, nothing in principle prevents this sort of technology from being used for other purposes and without your consent. It is reasonable to wonder if law enforcement in the United States could compel Apple (or any other company that develops such capacities) to use this technology to detect other kinds of images or documents stored on people’s computers or phones.

While Apple is introducing the child sexual abuse detection feature only in the United States for now, it is not hard to imagine that foreign governments will be eager to use this sort of tool to monitor other aspects of their citizens’ lives, and might pressure Apple to comply. Apple does not have a good record of resisting such pressure in China, for example, having moved Chinese citizens’ data to Chinese government servers. Even some democracies criminalize broad categories of hate speech and blasphemy. Would Apple be able to resist the demands of legitimately elected governments to use this technology to help enforce those laws?

Another worry is that the new technology has not been sufficiently tested. The tool relies on a new algorithm designed to recognize known child sexual abuse images, even if they have been slightly altered. Apple says this algorithm is extremely unlikely to accidentally flag legitimate content, and it has added some safeguards, including having Apple employees review images before forwarding them to the National Center for Missing and Exploited Children. But Apple has allowed few if any independent computer scientists to test its algorithm.

The computer science and policymaking communities have spent years considering the kinds of problems raised by this sort of technology, trying to find a proper balance between public safety and individual privacy. The Apple plan upends all of that deliberation. Apple has more than one billion devices in the world, so its decisions affect the security plans of every government and every other technology company. Apple has now sent a clear message that it is safe to build and use systems that directly scan people’s personal phones for prohibited content.
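For a rough sense of how a detection algorithm can match known images “even if they have been slightly altered,” the underlying idea is perceptual hashing: images are reduced to compact fingerprints that stay close together under small edits, rather than requiring an exact cryptographic hash match. Apple has published only a high-level description of its NeuralHash system, so the sketch below is a generic, hypothetical illustration of the concept (a simple “average hash” compared by bit distance), not Apple’s implementation; the file names and threshold are made up.

```python
# Toy illustration only: a simple "average hash" comparison, NOT Apple's
# NeuralHash (which is a proprietary, machine-learning-based system). The
# point is just that a perceptual hash can still match an image after minor
# alterations such as resizing or recompression, unlike a cryptographic hash.
from PIL import Image

def average_hash(path, size=8):
    # Downscale to a tiny grayscale image and set one bit per pixel,
    # depending on whether that pixel is brighter than the mean.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming_distance(a, b):
    # Number of bits on which two hashes differ.
    return bin(a ^ b).count("1")

# Hypothetical threshold, chosen only for illustration: two lightly edited
# copies of the same photo usually differ in only a few bits, while
# unrelated photos usually differ in many more.
MATCH_THRESHOLD = 5
if hamming_distance(average_hash("original.jpg"),
                    average_hash("rescaled_copy.jpg")) <= MATCH_THRESHOLD:
    print("treated as a match to the known image")
```

The concern Stamos and Green raise is not with this mechanic in isolation, but with where it runs (on the device itself) and who controls the list of fingerprints it gets compared against.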

In a separate thread, Stamos has a suggested path forward for Apple, which involves pumping the brakes quite a bit on some of these features.

Meanwhile, Reuters revealed on Thursday that inside Apple there are widespread concerns as well.

Apple employees have flooded an Apple internal Slack channel with more than 800 messages on the plan announced a week ago, workers who asked not to be identified told Reuters. Many expressed worries that the feature could be exploited by repressive governments looking to find other material for censorship or arrests, according to workers who saw the days-long thread.

Past security changes at Apple have also prompted concern among employees, but the volume and duration of the new debate is surprising, the workers said. Some posters worried that Apple is damaging its leading reputation for protecting privacy.

The article notes that many of the concerns are coming from outside of the security team at Apple — suggesting that the concerns are more about perception than they are technical. But, really, this highlights the same problem that Stamos noted earlier: Apple’s standard operating procedure of doing everything alone, and then also doing “surprise” announcements regarding products. That’s great for a new gadget in your pocket. It’s not so great for dealing with a massively challenging and very legitimate problem with no easy answers, where getting things even a little wrong can have significant negative consequences.

Unlike many companies that rush out offerings that do more harm than good, I do think that Apple did think this through internally with lots of smart and thoughtful people. But these are problems and challenges that go beyond just one company — and Apple’s famously insular approach is exactly the wrong thing for this sort of challenge.



Comments on “Apple's New Scanning Tools Raising More Concerns, Even Inside Apple”

24 Comments
Michael says:

Of course there's an easy answer

there are no easy answers here

Easy answer: No one should be scanning anyone’s device ever, with the exception of law enforcement armed with a warrant.

I have no issue with Apple scanning your iCloud — every cloud storage company’s been doing this for years because it’s their hardware.

Apple scanning your phone is no different than Mazda searching my car. Fuck that.

Anonymous Coward says:

Re: Of course there's an easy answer

Apple scanning your phone is no different than Mazda searching my car. Fuck that.

Well, that’s coming. It’s already impossible to order some cars without cellular-based tracking features (OnStar etc.), and now we’ve got in-car cameras and microphones appearing for various purposes—e.g., cameras watching the driver’s face to tell if they’re not watching the road while using driver assistance features. It’s only going to get worse, and there’s not much you can do other than stock up on old cars.

Anonymous Coward says:

Re: Of course there's an easy answer

The problem is that people are falling for the idea that this is a "tech policy issue". Like, forget about child abuse: that doesn’t involve technology, so we can’t tech our way out of that. But pictures of child abuse (or non-abusive sexual pictures of children) do involve technology, and they make people uncomfortable, so let’s just pretend that’s the problem. Nevermind that we have no real evidence they’re a significant driver of child abuse (and, in fact, there’s some evidence that sexual pictures of adults reduce sexual abuse of adults—so maybe we’re actually harming children by alleviating the discomfort of adults).

You know, we have technology like mass surveillance, centralized storage, machine learning, and Apple can press a button and apply it to all our private shit. They have to do something, right? Doesn’t do anything about people abusing children without taking photos. But, hmm, maybe those people have iPhones in their pocket or on the table while abusing the children, and if Apple could only activate the cameras and microphones—well, activate them differently, because Siri’s already got the microphones always listening—maybe we’d help a few more children… Don’t worry, though—we’d never use it for anything other than child sexual abuse material. And maybe non-sexual child abuse. And murder, it goes without saying. Maybe trade secrets, but only Apple’s. But never for copyright infringement, ’cause we respect your privacy after all.

Greg Glockner says:

A dangerous backdoor

CSAM deserves to be prosecuted to the full extent of the law. Scanning for CSAM on the cloud is fair game. However, on-device scanning can and will lead to scope creep.

What happens when China compels Apple to report pro-democracy content? When Mideast monarchs compel Apple to report homosexual content? When Germany compels Apple to report pro-Nazi content, and legitimate research about fascism gets swept up along with it?

Apple’s response is that we should all trust them to do the right thing. Do I trust Apple? Yes, I’m a loyal customer. Do I trust the US government? Generally. But do I trust all nations to treat their citizens fairly? Absolutely not.

This feature is a despot’s dream, especially thanks to Apple’s infamous secrecy.

Greg Glockner says:

Re: Re: A dangerous backdoor

I can’t see that happening. Remember, Apple said they don’t view images. Instead, Apple is alerted when a file hash (a fingerprint) of an image on the device matches the database. Apple says that, currently, the database only includes hashes of known CSAM images. I believe them and generally trust them in the USA. However, once the technology is available, the Chinese government can, for instance, force Apple to add pro-democracy image hashes to the database. The ultimatum for forcing Apple is simple: “you can’t manufacture or sell your products in China unless you comply”.
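To make the commenter’s point concrete: the matching step itself is agnostic about what the supplied hashes represent. The snippet below is entirely hypothetical (the names, hash values, and structure are invented, not Apple’s code); it only shows that swapping the database changes what gets flagged without changing a line of the matching logic.

```python
# Purely hypothetical sketch: names, hash values, and structure are invented
# for illustration and are not Apple's implementation. The matching step does
# not know or care what the hashes in the supplied database represent.
KNOWN_CSAM_HASHES = {0x1A2B3C, 0x4D5E6F}       # e.g. hashes supplied by NCMEC
DISSIDENT_IMAGE_HASHES = {0x7A8B9C, 0x0DEAD1}  # what a government could demand instead

def scan_photo_library(photo_hashes, database):
    # Return the hashes of on-device photos that appear in the supplied database.
    return [h for h in photo_hashes if h in database]

device_photo_hashes = [0x111111, 0x4D5E6F, 0x7A8B9C]

# Nothing in the matching code constrains which database it is run against.
print(scan_photo_library(device_photo_hashes, KNOWN_CSAM_HASHES))       # flags 0x4D5E6F
print(scan_photo_library(device_photo_hashes, DISSIDENT_IMAGE_HASHES))  # flags 0x7A8B9C
```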

Anonymous Coward says:

Re: Re: Re: A dangerous backdoor

The problem is that Apple is about to let the camel’s nose into the tent, and once it does, it will have a hard time stopping the rest of the herd from following it in. If Apple implements this, it will have given law enforcement a very big lever for demanding other kinds of remote access to Apple systems.

Anonymous Coward says:

Antivirus

This doesn’t seem that different from the underlying concept behind many antivirus products, which use AI and other filters to build a set of virus signatures that can be detected. All that’s different here is that the artefact being defined as a virus is CSAM.

What seems to be the fundamental issue is that a human behavior is being classed as a virus. Yes, I agree that those who enjoy CSAM need to be found and punished, but backdooring this seems like a massive problem.

What happens when whistleblowers who leaked incriminating documents exposing illegal activity find those purloined documents submitted to this "virus" database? And that’ll most likely precede flagging those who sent sexual material to someone of their own gender, because, VIRUS!

As we’ve seen before, where Apple leads, Microsoft follows: how long before such features get built into Defender (or whatever it’s called now)? Assuming it can be disabled, won’t that just lead to people removing basic security from their computers? And those who only slightly understand the issue will just buy other AV software, which will likely have similar functions and signatures.

This won’t end the way Apple has planned.

Uriel-238 (profile) says:

Wow. It sucks to be an Apple user.

Not that Google users or Windows users are much safer. In fact, Microsoft has reserved the right to search Win10 systems and keylog Win10 sessions for all sorts of allegedly benign purposes, with the assertion that they keep all rights to use whatever they find however they like, whether selling it to rival companies or reporting crimes to law enforcement. I really wonder how any business uses Win10 without (regularly) stripping all the spyware out. I only know businesses that do, and they just shrug about it.

But so far we haven’t seen any company go without controversy when it comes to either a) spying on more than what they promised or b) spreading that data around more than they promised, usually selling it to third parties. And we haven’t really seen any company that has suffered dire consequences for having done so, such as getting liquidated and all the executive officers getting shunned from society.

The US and EU don’t care much about broken corporate promises.

And projects like this always creep without notice.

The only way to do something like this is with full, 100% transparency, so that Apple reports to the end user everything it scanned and everything it found of interest, including if it’s sus. And the user gets these reports before the police do.

If they’re not willing to do that, it’s high time to jailbreak your phone and encrypt the shit out of it, or use all your extra data space for hello.jpg type images that would squick any human brain behind any human eyes. That, or choose a new phone OS.

Another option would be to fill any unused iCloud space with sexy sand dunes, extreme baby close-ups, or whatever else might serve to produce AI false positives. There are Def Con hackers far more creative than I am when it comes to adversarial input data that fouls up image detection engines.

Police in the United States are more interested in securing a white ethnostate than actually enforcing law, and alleged crimes are just a tool to them to that end. All this is going to do is provide more reasons for the DEA / FBI / ATF / local police to bust into nonwhite households and murder people where they live.

In China or whatever country is rolling this out, I can only assume the motivations of their law enforcement are even worse.

Anonymous Coward says:

Apple’s internal plan runs thusly:

We promise publicly to block GOVERNMENT requests for data.

We will "sell" data to an NGO we setup. This NGO can sell the data to the government.

Thus publicly we can say we ignore government requests, whilst funneling every document and image direct to the government’s servers.

We will start with "think of the children" for child porn, and slowly add to it: first CPM, then all images. Then text-only documents. Then iMessages, then SMS, and finally every document will be "scanned" and reported on.

The internal plans are SO damning that governments across the EU, UK, etc. are planning to abandon iPhones for work use entirely, regardless of cost, as the security implications are just too high.

Apple will be selling the passwords to government authentication apps and websites stored on an iPhone to whichever government will pay for them.

Anonymous Coward says:

Sarah has already pointed out this system works really poorly on legal drawings. Drawings make up the majority of Child Sexual Abuse Material (CSAM) in the world, particularly with things like anime abuse imagery. Some countries like Japan openly produce it, in spite of international condemnation.

This is a sub-category of NPAI (Non-Photographic Abuse Imagery), an official and widespread term, which is itself a sub-category of CSAM. Remember that this is a country-by-country list, so laws can vary. The material is not required to depict a real person, and it is considered no less abusive for not depicting one, as it is still a representation of abuse, even if a fictional one.

Unfortunately, it’s very likely such a system could mistake a regular anime girl for one engaged in sexual activity, based on factors like the background.
