The European Union Council is once again debating its controversial message scanning proposal, aka “Chat Control,” which would lead to the scanning of the private conversations of billions of people.
Chat Control, which EFF has strongly opposed since it was first introduced in 2022, keeps being mildly tweaked and pushed by one Council presidency after another.
Chat Control is a dangerous legislative proposal that would make it mandatory for service providers, including end-to-end encrypted communication and storage services, to scan all communications and files to detect “abusive material.” This would happen through a method called client-side scanning, which scans for specific content on a device before it’s sent. In practice, Chat Control is chat surveillance: it requires access to everything on a device and monitors all of it indiscriminately. In a memo, the Danish Presidency claimed this does not break end-to-end encryption.
This is absurd.
We have written extensively that client-side scanning fundamentally undermines end-to-end encryption, and obliterates our right to private spaces. If the government has access to one of the “ends” of an end-to-end encrypted communication, that communication is no longer safe and secure. Pursuing this approach is dangerous for everyone, but is especially perilous for journalists, whistleblowers, activists, lawyers, and human rights workers.
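To make the mechanics concrete, here is a minimal sketch of where client-side scanning sits in the send path. Every name here is hypothetical, and real proposals contemplate perceptual hashes or machine-learning classifiers rather than exact hashes, but the structural point holds: the scan runs on the plaintext, before encryption is ever applied.

```python
import hashlib

# Opaque list of target hashes pushed to the device by the provider.
# The device's owner cannot tell what content these hashes correspond to.
BLOCKLIST: set[str] = set()

def send_message(plaintext: bytes, encrypt, transmit, report) -> None:
    # The scan happens on the plaintext, before encryption is applied,
    # which is why "end-to-end encrypted" stops meaning what users expect.
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in BLOCKLIST:
        report(digest)  # the device reports its own owner to a third party
    transmit(encrypt(plaintext))

# Toy invocation; the identity "encrypt" is a placeholder.
send_message(b"hello", encrypt=lambda b: b, transmit=print, report=print)
```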
If passed, Chat Control would undermine the privacy promises of end-to-end encrypted communication tools, like Signal and WhatsApp. The proposal is so dangerous that Signal has stated it would pull its app out of the EU if Chat Control is passed. Proponents even seem to realize how dangerous this is, because state communications are exempt from this scanning in the latest compromise proposal.
This doesn’t just affect people in the EU; it affects everyone around the world, including in the United States. If platforms decide to stay in the EU, they would be forced to scan the conversations of everyone in the EU. If you’re not in the EU, but you chat with someone who is, then your privacy is compromised too. Passing this proposal would pave the way for authoritarian and tyrannical governments around the world to follow suit with their own demands for access to encrypted communication apps.
Even if you take it in good faith that the government would never do anything wrong with this power, events like Salt Typhoon show there’s no such thing as a system that’s only for the “good guys.”
Despite strong opposition, Denmark is pushing forward and taking its current proposal to the Justice and Home Affairs Council meeting on October 14th.
We urge the Danish Presidency to drop its push for scanning our private communication and consider fundamental rights concerns. Any draft that compromises end-to-end encryption and permits scanning of our private communication should be blocked or voted down.
Phones and laptops must work for the users who own them, not act as “bugs in our pockets” in the service of governments, foreign or domestic. The mass scanning of everything on our devices is invasive, untenable, and must be rejected.
The EU Commission is the definition of insanity. It has tried for years to convince all EU members the best way to fight crime is to undermine the security and privacy of millions of EU residents. And, for years, it has failed to make an argument capable of convincing a majority of the 27 European Union countries that this especially drastic, incredibly dangerous proposal is necessary.
Those pushing for encryption backdoors (that they dishonestly won’t call encryption backdoors) have leveraged all the usual hot buttons: terrorism, drug trafficking, national security, child sexual abuse material. But once anyone reads past the introductory hysteria, they tend to see it for what it is: a way to create massive government-mandated security flaws that would negatively affect their constituents and, ironically enough, their own national security.
The Commission keeps pushing, though. And it has no reason to stop. After all, it’s not playing with its own money and it rarely, if ever, seems to actually care what most Europeans think about this proposal. But to get it passed it does need a majority. So far, it hasn’t even managed to talk most members of the EU Parliament into giving broken-by-mandate encryption a thumbs up, much less at least 14 of the 27 governments that make up the EU Council.
The desperation of the would-be encryption banners is evident. If the EU Commission thought it had the upper hand in anti-encryption negotiations, it never would have sent out the EU’s Donald Trump to convince fence-sitters to side with the encryption breakers. This is from activist group EDRi’s (European Digital Rights) report on the latest failure of the EU Commission to secure some much-needed support for its “chat control” (a.k.a. client-side scanning) efforts.
In summer 2024, the government of Hungary became the fifth country to be given the unenviable task of attempting to broker a common position of the Council of the EU on this ill-fated law. The European Commission has long been trying to convince Member State governments that the proposed Regulation is legally sound (it isn’t), would protect encryption (it wouldn’t) and that reliable technologies already exist (they don’t).
[…]
According to Politico and to local reports, notorious Hungarian Prime Minister, Viktor Orbán, pulled out all the stops to try and convince the Netherlands to support the latest text. And in the last few days, he came worryingly close to succeeding.
Orban, last seen at Techdirt manipulating emergency powers rolled out during the pandemic to arrest people who called him things like “dear dictator” and “cruel tyrant” on social media, is one of an unfortunate number of European leaders to hold “conservative” views. (You know which ones.) He’s a nationalist, which is a polite way of calling him a bigot. And, of course, our own would-be “dear dictator” thinks he’s one of the greatest guys in Europe.
Orbán, who has turned into a hero of Trump’s followers and other conservative populists, is known for his restrictions on immigration and LGBTQ+ rights. He’s also cracked down on the press and judiciary in his country while maintaining a close relationship with Russia.
You can’t make human rights violation omelets without breaking a few encryptions, as they say. There are several self-serving reasons why Orban would support the notion of “chat control.” And very few of them have anything to do with fighting crime, combating terrorism, or stopping the spread of CSAM.
And that’s exactly why he should have been the last choice to soft-sell continent-wide undermining of encryption. But, as EDRi notes, it almost worked in the Netherlands. If the near-success of Orban’s sales tactics is surprising, it’s not nearly as surprising as the entities that showed up to push the Dutch government away from agreeing to the Commission’s “chat control” proposal.
When the people who would have the most to gain from pervasive disruption of encrypted services tell you there’s also a downside, that means something. It’s one thing for rights groups to say it. It’s quite another when the spies say the negatives would outweigh the positives.
While one might think that the last-ditch effort that briefly converted an aspiring autocrat into an EU salesperson might signal the end of the line for “chat control”/client-side scanning/encryption bans, hope seems to spring eternal at the Commission. A new Commission will be in place by the end of the year, and we can expect several of the new members will be just as desirous of breaking encryption as their predecessors, no matter how many times (and by how many countries) they’ve been told “no.”
Yet another attempt to mandate broken encryption has been disrupted. The Australian government has long held the belief that broken encryption would be a net win for citizens. Or, at the very least, it’s pretty sure it will be a huge win for law enforcement, which won’t have to deal with encrypted communications or devices.
But, despite declaring only criminals need encryption, proposals to expand the government’s power to include direct regulation of encryption have met with significant pushback. Its efforts began more than a half-decade ago but — after folding in horrible proposals by the UK government and the EU Commission — got a bit worse in recent years.
The new idea was called “client-side scanning.” The aim was to give the government access to illegal content passed around via encrypted services. Since the government wasn’t willing to simply declare encryption illegal, it passed the buck. New regulations would require service providers to undermine the encryption they offered their users, stripping away one end of the end-to-end encryption so communications could be monitored.
In November, the eSafety commissioner announced draft standards that would require the operators of cloud and messaging services to detect and remove known child abuse and pro-terror material “where technically feasible”, as well as disrupt and deter new material of the same nature.
[…]
But in the finalised online safety standards lodged in parliament on Friday, the documents specifically state that companies will not be required to break encryption and will not be required to undertake measures not technically feasible or reasonably practical.
That includes instances where it would require the provider to “implement or build a systemic weakness or systemic vulnerability in to the service” and “in relation to an end-to-end encrypted service – implement or build a new decryption capability into the service, or render methods of encryption used in the service less effective”.
This is great news, as long as the “final” proposal remains “final.” It will, of course, be temporary. The calls for breaking encryption aren’t going away. They’re omnipresent but have yet to take a solid foothold because governments can’t actually explain how any proposal like this is possible, much less feasible. They also can’t logically declare that any security flaw introduced by legislation won’t be exploited by the very people it aims to stop: criminals.
Those advocating the hardest for broken encryption are the most disturbed by this rollback. Australia’s eSafety commissioner, Julie Inman Grant, was given space in The Australian to vent her feelings about the success of those pushing back against anti-encryption mandates:
Grant hit back at the criticism of the proposals, saying tech companies had claimed the standards “represented a step too far, potentially unleashing a dystopian future of widespread government surveillance”.
The real dystopian future, she said, would be one where “adults fail to protect children from vile forms of torture and sexual abuse, then allow their trauma to be freely shared with predators on a global scale”.
Right. That’s a pretty hot take on what’s actually happened here. Tech companies can’t undo the laws of mathematics. Governments can’t guarantee their security holes won’t be exploited by criminals. And most rational people recognize there’s a trade-off being made here — one that gives millions of non-criminals additional security and privacy while only inconveniencing the government in rare cases. If that’s the equation, the government has no business demanding companies deliberately undermine the security of all users just so it can go after a very small percentage of them.
A few weeks back we wrote about a report that the EU Commission, in its push for dangerous client-side scanning mandates, had started buying highly targeted ads to try to influence people to support the policy. The ads, first revealed by Wired, were incredibly misleading. But, as we also noted, the ads’ targeting appeared to violate the EU’s own privacy laws.
The micro-targeting ad campaign categorized recipients based on religious beliefs and political orientation criteria—all considered sensitive information under EU data protection laws—and also appeared to violate X’s terms of service. Mekić found that the ads were meant to be seen by select targets, such as top ministry officials, while they were concealed from people interested in Julian Assange, Brexit, EU corruption, Eurosceptic politicians (Marine Le Pen, Nigel Farage, Viktor Orban, Giorgia Meloni), the German right-wing populist party AfD, and “anti-Christians.”
Mekić then found out that the ads, which have garnered at least 4 million views, were only displayed in seven EU countries: the Netherlands, Sweden, Belgium, Finland, Slovenia, Portugal, and the Czech Republic.
At first, Mekić could not figure out the country selection, he tells WIRED, until he realized that neither the timing nor the purpose of the campaign was accidental. The Commission’s campaign was launched a day after the EU Council met without securing sufficient support for the proposed legislation Mekić had been studying, and the targeted countries were those that did not support the draft.
The complaint is kind of amusing, as it points out that the EU Commission itself has spoken out against targeted advertising.
Of course, this is hardly the first time that the EU Commission has been accused of violating the very data protection laws it insists everyone else follow. It’s not even the second time. No matter what you think of the GDPR, at some point you have to wonder how seriously it can be taken when the body that pushed it so heavily for years, and likes to be condescendingly smug toward the US for not adopting its own version of the GDPR, can’t even abide by its own regulations.
His heart is probably in the right place. That’s the best thing I can say about Berkeley professor Dr. Hany Farid, who has spent the last couple of years being wrong about CSAM (child sexual abuse material) detection.
That he’s been wrong has done little to shut him up. But he appears to deeply feel he’s right. And that’s why I’m convinced his heart is in the right place: right up there in the chest cavity where most hearts are located.
Physical heart location aside, he’s pretty much always wrong. He’s always happy to offer his (non-expert) opinion and deploy presentations that preach to the converted. He’s sure the CSAM problem is the fault of service providers, rather than those who create and upload CSAM.
So, he’s spent a considerable amount of time going after Apple. Apple, at one point, considered client-side scanning to be an acceptable solution to this problem, even if it meant making Apple less secure than its competitors. Shortly thereafter — following plenty of unified criticism — Apple decided it was better off protecting millions of innocent customers and users, rather than sacrificing them on the altar of “for the children” just because it might make it easier for the government to locate and identify the extremely small percentage of users engaged in illicit activity.
Why are there so many images of child abuse stored on iCloud? Because Apple allows it
There’s a difference between “allows” and “this kind of thing happens.” That’s the difference Farid hopes to obscure. No matter what platform is involved, a certain number of users will attempt to use it to share illicit content. That Apple’s cloud service hosts a minimal amount of CSAM says nothing about Apple’s internal attitude towards CSAM, much less about its so-called “allowing” of this content to be hosted and shared via its services.
But Farid insists Apple is complicit in the sharing of CSAM, something he attempts to prove by highlighting recent convictions aided by (wait for it) evidence obtained from Apple itself.
Earlier this year, a man in Washington state was sentenced to 22 years in federal prison for sexually abusing his girlfriend’s 7-year-old stepdaughter. As part of their investigation, authorities also discovered the man had been storing known images and videos of child sexual abuse on his Apple iCloud account for four years.
Why was this man able to maintain a collection of illegal and sexually exploitative content of children for so long? Because Apple wasn’t looking for it.
The first paragraph contains facts. The second paragraph contains conjecture. The third paragraph of this op-ed again mixes both, presenting both conjecture and a secured conviction as evidence of Apple’s unwillingness to police iCloud for CSAM.
What goes ignored is the fact that the evidence used to secure these convictions was derived from iCloud accounts. If Apple indeed has no desire to rid the world of CSAM, it seems it might have put up more of a fight when asked to hand over this content.
What this does show is something that runs contrary to Farid’s narrative: Apple is essential in securing convictions of CSAM producers and distributors. The content stored in these iCloud accounts was essential to the success of these prosecutions. If Apple was truly more interested in aiding and abetting in the spread of CSAM, it would have done more to prevent prosecutors from accessing this evidence.
And that’s the problem with disingenuous arguments like the ones Farid is making. Farid claims Apple isn’t doing enough to stymie CSAM distribution. But then he tries to back his claims by detailing all the times Apple has been instrumental in securing convictions of child abusers.
Not content with ignoring this fatal flaw in his argument, Farid moves on to make arguably worse arguments using his version of known facts.
Back in the summer of 2021, Apple announced a plan to use innovative methods to specifically identify and report known images and videos of the rape and molestation of a child — without compromising the privacy that its billions of users expect.
This is a huge misrepresentation of Apple’s client-side scanning plan. It definitely would “compromise the privacy that its billions of users expect.” Apple’s proposed scanning of all on-device content that might be hosted (however temporarily) by its iCloud service plainly compromised user privacy. Worse, it compromised their security by introducing a new attack vector that could have allowed malicious governments and malicious hackers to access content phone users (incorrectly, in this case) assumed was only accessible to them.
That misrepresentation is followed by another false assertion by Farid: that Apple “quietly” abandoned the plan.
Apple did not “quietly” abandon this plan. It publicly announced this reversal, something that led almost immediately to a number of government figures, talking heads, and special interest groups publicly expressing their displeasure with this move by Apple. It was anything but “quiet.”
Adding to this wealth of misinformation are Farid’s unsupported claims about hash-matching, which has been repeatedly shown to be easily circumvented and, even worse, easily manipulated to create false positives capable of causing irreparable damage to innocent people.
Detecting known images is a tried and true way many companies, including Apple’s competitors, have detected this content for years. Apple could deploy this same technique to find child sexual abuse images and videos on its platforms.
Translation: A parent innocently taking pictures of their infant in the bathtub will not be reported to law enforcement because those images have not previously been determined to be illicit. This critical distinction ensures that innocent users’ privacy remains intact while empowering Apple to identify and report the presence of known child sexual abuse images and videos on iCloud.
While it’s true hash-matching works to a certain extent, pretending innocent people won’t be flagged and/or the system can’t be easily defeated is ridiculous. But Farid has an ax to grind, and he’s obviously not going to be deterred by the reams of evidence that contradict what he obviously considers to be foregone conclusions.
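The evasion half of that claim is easy to demonstrate. In the sketch below, the byte string is a stand-in for a real image, but the behavior is exactly what exact-match hashing gives you: change a single bit and the digest shares nothing with the original.

```python
import hashlib

original = b"...stand-in for the bytes of a known flagged image..."
modified = original[:-1] + bytes([original[-1] ^ 1])  # flip one bit

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(modified).hexdigest())
# The two digests are unrelated, so an exact-hash blocklist no longer
# matches. Perceptual hashes (PhotoDNA, NeuralHash) tolerate such edits,
# but that tolerance is precisely what opens the door to adversarial
# collisions and false positives on innocent images.
```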
The ultimate question is this: is it better to be wrong but loud about stuff? Or is it better to be right, even if it means some of the worst people in the world will escape immediate detection by governments or service providers?
Or, if those aren’t the questions you like, consider this: is it more likely Apple desires to host illicit images, or is it more likely Apple isn’t willing to intrude on the privacy of users because it wishes to earn the trust of non-criminal users — users who make up the largest percentage of Apple customers?
People like Professor Farid aren’t willing to consider the most likely explanation. Instead, they insist — without evidence — big tech companies are willfully ignoring illegal activity so they can increase their profits. That’s just stupid. Companies that ignore illegal activity may enjoy brief bumps in profit margin but the long-term profitability of relying (as Farid insists they are) on illegal activity is something no tech company, no matter how large, would consider to be a solid business model.
Plenty of legislators and law enforcement officials seem to believe there’s only one acceptable solution to the CSAM (child sexual abuse material) problem: breaking encryption.
They may state some support for encryption, but when it comes to this particular problem, many of these officials seem to believe everyone’s security should be compromised just so a small percentage of internet users can be more easily observed and identified. They tend to talk around the encryption issue, focusing on client-side scanning of user content — a rhetorical tactic that willfully ignores the fact that client-side scanning would necessitate the elimination of one end of end-to-end encryption to make this scanning possible.
The issue at the center of these debates often short-circuits the debate itself. Since children are the victims, many people reason no sacrifice (even if it’s a government imposition) is too great. Those who argue against encryption-breaking mandates are treated as though they’d rather aid and abet child exploitation than allow governments to do whatever they want in response to the problem.
Plenty of heat has been directed Meta’s way in recent years, due to its planned implementation of end-to-end encryption for Facebook Messenger users. And that’s where the misrepresentation of the issue begins. Legislators and law enforcement officials claim the millions of CSAM reports from Facebook will dwindle to almost nothing if Messenger is encrypted, preventing Meta from seeing users’ communications.
Yes, the transition of CSAM sharing to online communication services has resulted in a massive increase in reports to NCMEC (National Center for Missing and Exploited Children).
The organization received 29 million reports of online sexual exploitation in 2021, a 10-fold increase over a decade earlier. Meanwhile the number of video files reported to NCMEC increased over 40 percent between 2020 and 2021.
But that doesn’t necessarily mean there are more children being exploited than ever before. Nor does it mean Facebook sees more CSAM than other online services, despite its massive user base.
Understanding the meaning of the NCMEC numbers requires careful examination. Facebook found that over 90 percent of the reports the company filed with NCMEC in October and November 2021 were “the same as or visually similar to previously reported content.” Half of the reports were based on just six videos.
As Landau is careful to point out, that doesn’t mean the situation is acceptable. It just means tossing around phrases like “29 million reports” doesn’t necessarily mean millions of children are being exploited or millions of users are sharing CSAM via these services.
Then there’s the uncomfortable fact that a sizable percentage of the content reported to NCMEC doesn’t actually involve any exploitation of minors by adults. Landau quotes from Laura Draper’s 2022 report on CSAM and the rise of encrypted services. In that report, Draper points out that some of the reported content is generated by minors for other minors: i.e., sexting.
Draper observed that CSAE consists of four types of activities exacerbated by internet access: (a) CSAM, which is the sharing of photos or videos of child sexual abuse imagery; (b) perceived first-person (PFP) material, which is nude imagery taken by children of themselves and then shared, often much more widely than the child intended; (c) internet-enabled child sex trafficking; and (d) live online sexual abuse of children.
While these images are considered “child porn” (to use an antiquated term), they are not actually images taken by sexual abusers, which means they aren’t actually CSAM, even if they’re treated as such by NCMEC and reported as such by communication services. In these cases, Landau suggests more education of minors to inform them of the unintended consequences of these actions, first and foremost being that they can’t control who these images are shared with once they’ve shared them with anyone else.
The rest of the actions on that list are indeed extremely disturbing. But, as Landau (and Draper) suggest, there are better solutions already available that don’t involve undermining user security by removing encryption or undermining their privacy by subjecting them to client-side scanning.
[C]onsider the particularly horrific crime in which there is live streaming of a child being sexually abused according to requests made by a customer. The actual act of abuse often occurs abroad. In such cases, aspects of the case can be investigated even in the presence of E2EE. First, the video stream is high bandwidth from the abuser to the customer but very low bandwidth the other way, with only an occasional verbal or written request. Such traffic stands out from normal communications; it looks neither like a usual video communication nor a showing of a film. And the fact that the trafficker must publicly advertise for customers provides law enforcement another route for investigation.
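Here is a toy version of the asymmetry heuristic Landau describes, with every threshold invented for illustration. Real traffic analysis would also weigh timing and burst patterns, but the point is that this metadata is visible to investigators without touching the encrypted content:

```python
def looks_like_directed_stream(bytes_sent: int, bytes_received: int) -> bool:
    """Flag flows with heavy video one way and a thin request channel back.

    Ordinary video calls are roughly symmetric, and passively watching a
    film produces almost no application-level return traffic at all. The
    ratio and floor below are hypothetical illustration values only.
    """
    if bytes_received < 1_000:  # no meaningful return channel at all
        return False
    return bytes_sent / bytes_received > 1_000

print(looks_like_directed_stream(500_000_000, 40_000))     # True
print(looks_like_directed_stream(60_000_000, 55_000_000))  # False: video call
```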
Unfortunately, government officials tend to portray E2EE as the root of the CSAM problem, rather than just something that exists alongside a preexisting problem. Without a doubt, encryption can pose problems for investigators. But there are a plethora of options available that don’t necessitate making everyone less safe and secure just because abusers use encrypted services in order to avoid immediate detection.
Current processes need work as well. As invaluable as NCMEC is, it’s also contributing to a completely different problem. Hash matching is helpful but it’s not infallible. Hash collisions (where two different images generate identical hashes) are possible. Malicious actors could create false collisions to implicate innocent people or hide their sharing of illicit material. False positives do happen. Unfortunately, at least one law enforcement agency is treating the people on the receiving end of erroneous flagging as criminal suspects.
Responding to an information request from ICCL, the Irish police reported that NCMEC had provided 4,192 referrals in 2020. Of these, 409 of the cases were actionable and 265 cases were completed. Another 471 referrals were “Not Child Abuse Material.” The Irish police nonetheless stored “(1) suspect email address, (2) suspect screen name, [and] (3) suspect IP address.” Now 471 people have police records because a computer program incorrectly flagged them as having CSAM.
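Those figures deserve a moment of arithmetic: more referrals were confirmed not to be abuse material (471) than cases were completed (265), and barely one referral in ten was actionable at all. A quick check:

```python
referrals = 4192          # NCMEC referrals to the Irish police, 2020
actionable = 409
not_abuse_material = 471

print(f"actionable:         {actionable / referrals:.1%}")          # 9.8%
print(f"confirmed not CSAM: {not_abuse_material / referrals:.1%}")  # 11.2%
```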
Stripping encryption and forcing service providers to engage in client-side scanning will only increase the number of false positives. But much of what’s being proposed — both overseas and here in the United States — takes the short-sighted view that encryption must go if children are to be saved. To come up with better solutions, legislators and law enforcement need to be able to see past the barriers that immediately present themselves. Rather than focus on short-term hurdles, they need to recognize that online communication methods will always be in flux. What appears to be the right thing to do now may become utterly worthless in the near future.
Think differently. Think long term. Think about protecting the privacy and security of all members of society—children and adults alike. By failing to consider the big picture, the U.K. Online Safety Act has taken a dangerous, short-term approach to a complex societal problem. The EU and U.S. have the chance to avoid the U.K.’s folly; they should do so. The EU proposal and the U.S. bills are not sensible ways to approach the public policy concerns of online abetting of CSAE. Nor are these reasonable approaches in view of the cyber threats our society faces. The bills should be abandoned, and we should pursue other ways of protecting both children and adults.
The right solution now isn’t to make everyone less safe and secure. Free world governments shouldn’t be in such a hurry to introduce mandates that lend themselves to abuse by government entities and can be used to justify even more abusive surveillance methods deployed by autocrats and serial human rights abusers. Yes, the problem is important and should be of utmost concern. But that doesn’t mean governments should, for all intents and purposes, outlaw encryption just because it seems to be the quickest, easiest solution to a problem that’s often misrepresented and misperceived.
After years of irritating the DOJ with its refusal to compromise encryption, Apple suddenly went the other way after receiving criticism over its perceived inability to stop the distribution of CSAM (child sexual abuse material) via its devices and services.
For a very brief moment, Apple decided it would no longer be a world leader in privacy and security. It declared it would begin engaging in client-side scanning of users’ content in hopes of preventing the spread of CSAM.
Mere moments later, it abandoned this never-implemented plan, citing the security and privacy flaws client-side scanning would create. While it’s always a good idea to do what you can to prevent CSAM distribution, if that effort means subjecting every device user to unpatchable, deliberately created security holes, then it’s not worth doing.
Why? Because it just creates an exploit governments can use to search out other content they don’t care for, like dissenting views, work product created by critical journalists, or anything that might be used to silence content that doesn’t comply with the government’s preferred narrative.
Apple had good reasons for attempting to limit the distribution of CSAM. It also had good reason to shut down this project before it began after realizing the negative, unintended consequences would likely outweigh whatever public good it might create by deliberately compromising its own encryption.
Needless to say, this has produced another set of enemies for Apple. Governments all over the world sincerely hoped voluntary client-side scanning by a major US tech company would allow them to pass laws demanding similar compliance from other tech companies. Groups involved in deterring the sharing of CSAM hoped Apple’s proactive scanning would prompt others to similarly compromise the security and privacy of their customers — something that might make it a bit easier to round up child abusers and deter future victimization.
Apple appears to have moved past the “mothball” stage to a permanent rejection of client-side scanning efforts. That move has generated a new round of criticism. This time it’s not a government demanding Apple do more. It’s child safety group Heat Initiative, which sent Apple an email criticizing its move away from proactive client-side scanning of uploaded content.
Heat Initiative wanted answers. It got them… but not the answers it wanted. Not only that, but Apple has chosen to make its answer public, as Lily Hay Newman reports for Wired:
Today, in a rare move, Apple responded to Heat Initiative, outlining its reasons for abandoning the development of its iCloud CSAM scanning feature and instead focusing on a set of on-device tools and resources for users known collectively as Communication Safety features. The company’s response to Heat Initiative, which Apple shared with WIRED this morning, offers a rare look not just at its rationale for pivoting to Communication Safety, but at its broader views on creating mechanisms to circumvent user privacy protections, such as encryption, to monitor data. This stance is relevant to the encryption debate more broadly, especially as countries like the United Kingdom weigh passing laws that would require tech companies to be able to access user data to comply with law enforcement requests.
Heat Initiative may have preferred that this communication (especially since it ended with a powerful rebuttal of the group’s demands) remain private. Apple has chosen to make this response [PDF] public because it makes points that groups and governments would prefer never be made publicly, like the fact that client-side scanning tends to benefit autocrats, surveillance state participants, and, yes, other criminals far more than it benefits even the most helpless members of our society: the children being exploited by sexual abuse.
Scanning of personal data in the cloud is regularly used by companies to monetize the information of their users. While some companies have justified those practices, we’ve chosen a very different path — one that prioritizes the security and privacy of our users. Scanning every user’s privately stored iCloud content would in our estimation pose serious unintended consequences for our users. Threats to user data are undeniably growing — globally the total number of data breaches more than tripled between 2013 and 2021, exposing 1.1 billion personal records in 2021 alone. As threats become increasingly sophisticated, we are committed to providing our users with the best data security in the world, and we constantly identify and mitigate emerging threats to users’ personal data, on device and in the cloud. Scanning every user’s privately stored iCloud data would create new threat vectors for data thieves to find and exploit.
It would also inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types (such as images, videos, text, or audio) and content categories. How can users be assured that a tool for one type of surveillance has not been reconfigured to surveil for other content such as political activity or religious persecution? Tools of mass surveillance have widespread negative implications for freedom of speech and, by extension, democracy as a whole. Also, designing this technology for one government could require applications for other countries across new data types.
Scanning systems are also not foolproof and there is documented evidence from other platforms that innocent parties have been swept into dystopian dragnets that have made them victims when they have done nothing more than share perfectly normal and appropriate pictures of their babies.
These are all good answers. And they’re definitely not the answers Heat Initiative was hoping to receive. But they’re the most honest answers — ones that don’t pretend what this group wants will somehow be workable and free of negative consequences.
The unsurprising twist is that Heat Initiative already knew Apple would raise legitimate concerns about client-side scanning, rather than simply do what the activist group wanted it to do. Instead of engaging in the issue honestly and directly as Apple has done, Heat Initiative has already moved forward with a plan to (dishonestly) portray Apple as a willing participant in the spread of CSAM:
A child advocacy group, the Heat Initiative, has raised $2 million for a new national advertising campaign calling on Apple to detect, report and remove child sexual abuse materials from iCloud, its cloud storage platform.
Next week, the group will release digital advertisements on websites popular with policymakers in Washington, such as Politico. It will also put up posters across San Francisco and New York that say: “Child sexual abuse material is stored on iCloud. Apple allows it.”
The thing is: Apple doesn’t allow it. Apple simply refuses to undermine every user’s privacy and security to detect what is assuredly a very small amount of illegal content being transmitted via its services. Apple’s argument — stated directly and intelligently to Heat Initiative — is simply this: breaking encryption results in broken encryption. And that can be exploited by governments and criminals just as easily as it can be utilized to detect CSAM.
There is no perfect solution that benefits every stakeholder in CSAM cases. But what should never be considered the most acceptable solution is anything that converts innocent users into fodder for government oppression. That’s what Apple wants to prevent. And, for that, it will continue to be labeled as a participant in child sexual abuse by intellectually dishonest entities like Heat Initiative.
Pretty much everyone who isn’t a UK legislator backing the Online Safety Bill has come out against it. The proposal would give the UK government much more direct control of internet communications. Supposedly aimed at limiting the spread of child sexual abuse material (CSAM), the proposal would do the opposite of what its name promises by making everyone less safe when interacting with others via internet services.
While proponents continue to offer up nonsensical defenses of a bill that would compromise encryption, if not actually outlaw it, people who actually know what they’re talking about have been pointing out the flawed logic of UK regulators, if not promising to exit the UK market entirely if the bill is passed.
As the bill heads for another round of votes, entities that actually want to ensure online safety continue to speak up against it. The group of critics includes Apple, which knows from firsthand experience the negative side effects created by demands for broken encryption and/or client-side scanning.
[I]n a statement Apple said: “End-to-end encryption is a critical capability that protects the privacy of journalists, human rights activists, and diplomats.
“It also helps everyday citizens defend themselves from surveillance, identity theft, fraud, and data breaches. The Online Safety Bill poses a serious threat to this protection, and could put UK citizens at greater risk.
“Apple urges the government to amend the bill to protect strong end-to-end encryption for the benefit of all.”
Also speaking up (again), but probably not being heard (again), are encrypted communication services WhatsApp and Signal — both of which have promised to stop offering their services in the UK if the Online Safety bill becomes law. Here are the statements given to the Evening Standard by WhatsApp, Element, and Signal:
“If the Online Safety Bill does not amend the vague language that currently opens the door for mass surveillance and the nullification of end-to-end encryption, then it will not only create a significant vulnerability that will be exploited by hackers, hostile nation states, and those wishing to do harm, but effectively salt the earth for any tech development in London and the UK at large,” Meredith Whittaker, president of not-for-profit secure messaging app Signal told The Standard.
[…]
“No-one, including WhatsApp, should have the power to read your personal messages,” Will Cathcart, head of WhatsApp at Meta told The Standard.
[…]
Element chief executive and chief of technology Matthew Hodgson told The Standard, “The Online Safety Bill is effectively giving the Government the remit to put a CCTV camera in everybody’s bedrooms, and the way people use their WhatsApp today is pretty personal — people use messaging apps more than they communicate with people in person.”
The Evening Standard also takes time to note some hypocrisy contained in the bill. Whatever burdens are placed on encrypted services won’t affect the legislators pushing this bill. They’ll still be free from snooping, even if none of their constituents are.
The Online Safety Bill concerns only online messages sent by UK citizens and residents, but not anything sent on messaging apps by law enforcement, the public sector, or emergency responders.
This is handy, given that The Standard understands that up to half of Government communications are still being sent over consumer apps like WhatsApp.
The UK government continues to insist — despite all the evidence it has provided to the contrary — that it’s not interested in breaking encryption, installing backdoors, or otherwise undermining users’ privacy and security. But its protestations are inept and absolutely not backed by any of the wording in the bill, which contains mandates that would absolutely do the things the bill’s defenders insist it won’t.
The opposition to the bill has gone from cacophonous to deafening in recent days. As Natasha Lomas reports for TechCrunch, a group of 68 security researchers have offered up their group opposition to the Online Safety Bill in a letter [PDF] that briefly, but incisively, points out the flaws in the legislation.
Here’s that letter’s take on client-side scanning — just one of several problematic mandates:
A popular deus ex machina is the idea to scan content on everybody’s devices before it is encrypted in transit. This would amount to placing a mandatory, always-on automatic wiretap in every device to scan for prohibited content. This idea of a “police officer in your pocket” has the immediate technological problem that it must both be able to accurately detect and reveal the targeted content and not detect and reveal content that is not targeted, even assuming a precise agreement on what ought to be targeted.
[…]
We note that in the event of the Online Safety Bill passing and an Ofcom order being issued, several international communication providers indicated that they will refuse to comply with such an order to compromise the security and privacy of their customers and would leave the UK market. This would leave UK residents in a vulnerable situation, having to adopt compromised and weak solutions for online interactions.
That’s actually the smaller (and shorter) of the two open letters issued in the past few days by security researchers. The second letter [PDF] contains seven pages of signatories from all over the world, as well as a more in-depth critique of the extremely flawed proposal.
The letter notes the issues scanning for CSAM using hashes already poses: namely, that hashes can be altered to avoid detection and that false positives still happen frequently. Now, take these existing problems, scale them to the nth degree, and throw some AI into the mix. This is what’s awaiting UK residents if the bill passes with the client-side scanning/encryption-breaking mandates in place:
At the scale at which private communications are exchanged online, even scanning the messages exchanged in the EU on just one app provider would mean generating millions of errors every day. That means that when scanning billions of images, videos, texts and audio messages per day, the number of false positives will be in the hundreds of millions. It further seems likely that many of these false positives will themselves be deeply private, likely intimate, and entirely legal imagery sent between consenting adults.
This cannot be improved through innovation: ‘false positives’ (content that is wrongly flagged as being unlawful material) are a statistical certainty when it comes to AI. False positives are also an inevitability when it comes to the use of detection technologies — even for known CSAM material.
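The math behind that claim is just the base-rate problem, and it is worth running the numbers once. Every figure below is an assumption, chosen to be generous to the scanner:

```python
messages_per_day = 10_000_000_000  # rough order of magnitude, one big platform
false_positive_rate = 0.001        # a generously low 0.1% error rate
prevalence = 1e-6                  # assume 1 in a million items is illegal
true_positive_rate = 0.99          # assume the detector almost never misses

false_flags = messages_per_day * (1 - prevalence) * false_positive_rate
true_flags = messages_per_day * prevalence * true_positive_rate

print(f"false flags per day: {false_flags:,.0f}")  # ~10,000,000
print(f"true flags per day:  {true_flags:,.0f}")   # ~9,900
print(f"flags that are wrong: {false_flags / (false_flags + true_flags):.1%}")
# Roughly 99.9% of everything flagged, and then reviewed by humans,
# would be innocent, private material.
```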
If anything gets flagged, the government will get to sift through these personal messages, even when the AI is wrong about what it thought it had observed. Narrowly targeted scanning, deployed only where some evidence already exists that CSAM is being distributed, could limit the collateral damage, but nothing in the bill or in supporters’ statements indicates the government is interested in any process that doesn’t give it the opportunity to collect it all.
Then there’s the mission creep, which is always present when a government expands its surveillance powers.
Even if such a CSS system could be conceived, there is an extremely high risk that it will be abused. We expect that there will be substantial pressure on policymakers to extend the scope, first to detect terrorist recruitment, then other criminal activity, then dissident speech. For instance, it would be sufficient for less democratic governments to extend the database of hash values that typically correspond to known CSAM content (as explained above) with hash values of content critical of the regime. As the hash values give no information on the content itself, it would be impossible for outsiders to detect this abuse. The CSS infrastructure could then be used to report all users with this content immediately to these governments.
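The point about hash values being opaque can be shown in a few lines. In this hypothetical sketch, nothing in the matching code can distinguish a hash derived from abuse imagery from a hash of banned political speech; whoever controls the list controls what gets reported:

```python
import hashlib

# Both entries are hypothetical. To the device they are identical in kind:
# hex strings with no recoverable meaning.
blocklist = {
    # Placeholder standing in for a hash supplied as "known CSAM";
    # no one outside the list's maintainer can verify that label.
    "9f2c5a0e...",
    hashlib.sha256(b"text of a banned political leaflet").hexdigest(),
}

def scan(file_bytes: bytes) -> bool:
    return hashlib.sha256(file_bytes).hexdigest() in blocklist

print(scan(b"text of a banned political leaflet"))  # True: flagged anyway
```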
Even if the UK government would never do this (and no one believes it wouldn’t), a Western nation with “liberal” values (as in enshrined human rights, etc.) passing this sort of law would embolden far less liberal nations to expand their domestic surveillance programs under the pretense of making the internet safer and/or detecting CSAM.
Whether or not all of this opposition will make a difference remains to be seen. So far, the steady stream of criticism and promises to exit the market haven’t managed to alter the bill’s mandates in any significant manner. Maybe the EU’s recent abandonment of encryption-breaking mandates in its internet-targeting legislation following months of criticism will force UK lawmakers to rethink their demands. Then again, this is the same government that decided it didn’t want to be part of any club that would accept it and Brexited its way into the wrong side of history.
The UK government desires direct control of the internet. This has been the plan for years: a bill that would criminalize encryption while mandating client-side scanning to control the spread of child sexual abuse material (CSAM) has long been on the front burner.
The bill would also turn hate speech into a crime and punish tech companies directly for content generated by users. It’s a bad idea all over — something UK legislators realized early on, resulting in some rebranding. What used to be called the “Online Harms Act” is now the “Online Safety Act.” The harms to internet users remain the same. The only thing that has changed is the government’s preferred nomenclature.
While we’ve been keeping an eye on similar statutes proposed by the EU — something that would also criminalize encryption if end-to-end encryption prevented client-side scanning for CSAM — the UK’s policy proposal has been embraced by its farm team, the Australian government.
This government has been seeking ways to irreparably damage encryption while increasing its domestic surveillance powers. That it would embrace a proposal that threatens encryption while increasing monitoring demands for service providers is unsurprising.
A barely noticed announcement made this month by Australia’s online safety chief is the strongest signal yet that tech companies like Meta, Google and Microsoft will soon be legally required to scan all user content.
This indication came after the federal government’s eSafety commissioner and Australia’s tech industry couldn’t agree on how companies were going to stamp out child sexual abuse material (CSAM) and pro-terror content.
Now, eSafety commissioner Julie Inman Grant is writing her own binding rules and all signs point towards the introduction of a legal obligation that would force email, messaging and online storage services like Apple iCloud, Signal and ProtonMail to “proactively detect” harmful online material on their platforms — a policy that would be a first in the Western world if implemented today.
This all aligns with the worst aspects of the UK and EU proposals. The thing is: this won’t work. WhatsApp — Facebook’s messaging acquisition — has already made it clear it won’t break encryption to satisfy overreaching legislators. Apple has already been burnt by its own proactive client-side scanning proposal, so it’s unlikely it will be talked into further damaging its own reputation with subservience to governments demanding it do what it has decided it simply won’t do… at least not at the moment. And ProtonMail has extended a firm middle finger to any government demanding it break its encryption.
The end result of this Australian proposal won’t be greater insight into CSAM distribution. All this insistence on client-side scanning (with its obvious effects on E2EE) will do is ensure Australian residents will only have access to subpar communication platforms that have never been concerned enough about user privacy and security to implement end-to-end encryption.
As is par for the course, the ends are undeniably good: stopping the spread of CSAM and identifying those trafficking in this illegal content. It’s the means that are terrible, and not just because the proposed means mandate undermining encryption and/or fining tech companies $657,000/day over content created and distributed by their users.
Any scanning system is vulnerable to incorrect results. The DIS [designated internet services] code notes that “hash lists are not infallible” and points out an error, such as recording a false positive and then erroneously flagging someone for possessing CSAM, can have serious consequences. The use of machine learning or artificial intelligence for scanning adds to the complexity and, as a result, the likelihood that something would be wrongly identified. Similarly, systems may also record false negatives and miss harmful online content.
Even if scanning technology was completely error-proof, the application of this technology can still have problems. The eSafety commissioner expects pro-terror material like footage of mass shootings to be proactively detected and flagged but there are many legitimate reasons why an individual such as journalists and researchers may possess this content. While the national classification scheme has contextual carve-outs for these purposes, scanning technologies don’t have this context and could flag this content like any other user.
There are even examples of how content that appears to be CSAM material in a vacuum has legitimate purposes. For example, a father was automatically flagged, banned and reported to police by Google after it detected medical images taken of his child’s groin under orders of a doctor, immediately locking this user out of their email, phone and home internet.
The government has approached stakeholders (i.e., tech companies and service providers) for comments and suggestions. But it has also decided that it’s free to reject any comments or suggestions it doesn’t like, including comments that logically point out how this won’t work and will make internet users less secure.
The Australian government — at least as personified by Inman Grant — believes the tech world has had its say. Now, all that’s left is to force them to bend to the new rules.
The rejection of these two industry codes now leaves the eSafety commissioner’s office free to come up with its own enforceable regulations. Other than taking part in a mandatory consultation for the eSafety commissioner’s proposed code, Australian tech companies have no further say in what they’ll be legally required to do.
If this keeps moving forward, Australian residents will be expected to use the internet the government feels is acceptable, rather than a wide variety of services that actually seek to protect their users from malicious hackers and/or human rights violators who have no qualms about engaging in extraterritorial spying on journalists, activists, and dissidents.
This won’t end well for Australia. Hopefully, this will be met with the same pushback that has forced the EU and UK to reconsider their demands for broken encryption and privacy-violating client-side scanning.
Well, here’s some welcome news! It appears the EU Commission may have learned something from the less-than-wholehearted support it received following the introduction of its CSA (Child Sexual Abuse) bill.
The proposal hoped to curb the spread of CSAM (child sexual abuse material) by mandating (among other things) client-side scanning of user content. All well and good if the communications aren’t encrypted. But many of them are, thanks to companies offering end-to-end encryption by default to better secure users’ content and communications.
Sure, the bill had its defenders. One in particular (EU Commissioner for Home Affairs Ylva Johansson) has offered multiple incoherent defenses of the proposal, which would, in effect, criminalize encryption (at worst) or make encryption completely useless as a security option (at best).
Most EU member nations were reluctant to embrace these extremes. There were, of course, a few exceptions. Spain, for example, thought the far-reaching, extremely broad proposal didn’t go far enough when it came to increasing the government’s powers and its surveillance options. On the other side, the EU Commission saw flat-out rejections from a couple of countries, both of which pointed out the CSA law would violate other existing EU privacy laws.
A recent leak of EU members’ positions on the bill likely factored into this recent decision by the EU Commission to scrub the anti-encryption wording from the CSA proposal. Joseph Hall of the Internet Society posted the alterations to Twitter, noting that this was a “huge win for encryption, confidentiality, and integrity in the EU.”
The changes can be seen starting on page 5 of the updated CSA proposal [PDF]. Here’s where the EU Commission changes tack and decides it’s time to leave encryption alone:
This Regulation shall not lead to any general obligation to monitor the information which providers of hosting services transmit or store, nor to actively seek facts or circumstances indicating illegal activity.
This Regulation shall not prohibit, make impossible, weaken, circumvent or otherwise undermine cybersecurity measures, in particular encryption, including end-to-end encryption, implemented by the relevant information society services or by the users. This Regulation shall not create any obligation to decrypt data.
Breaking/backdooring/criminalizing encryption is off the table for the time being. This proposal still seems like it’s a long way from adoption, but with just a couple of paragraphs, it has suddenly become a whole lot more palatable.
The PCY (presidency of the council, a rotating office shared by all EU members) has also appended a footnote to the paragraph forbidding the weakening of encryption which, if adopted, would take anti-encryption proposals off the table for far longer.
PCY comment: the following recital could be included: “Cybersecurity measures, in particular encryption technologies, including end-to-end encryption, are critical tools to safeguard the security of information within the Union as well as trust, accountability and transparency in the online environment. Therefore, this Regulation should not adversely affect the use of such measures, notably encryption technologies. Any weakening or circumventing of encryption could potentially be abused by malicious third parties. In particular, any mitigation or detection measures should not prohibit, make impossible, weaken, circumvent or otherwise undermine cybersecurity measures irrespective of whether the data is processed at the device of the user before the encryption is applied or while the data is processed in transit or stored by the service provider.”
This recital adds facts that have been conveniently overlooked by those who support undermining encryption to combat CSAM. The recital would also expand this protection against government interference to cover more than just the end-to-end variety.
This is the direction this legislation needs to go. Fighting CSAM is a noble and important goal. But as noble and important as it is, it still doesn’t justify subjecting everyone in the EU to decreased security and worthless faux encryption options. Encryption protects far more than criminals. And I’m heartened to see the pushback against this draconian proposal is finally paying off.