The DOJ Is Conflating The Content Moderation Debate With The Encryption Debate: Don't Let Them
from the it's-not-the-same dept
As we’ve detailed a lot over the last week, after years of failing to get backdoors mandated by warning about the “terrorism” bogeyman, the DOJ has decided to pick up the FOSTA playbook and instead start focusing on child porn, or what “serious people” now refer to as Child Sexual Abuse Material (CSAM). It did this last week with an assist from the NY Times, which published an article with (legitimately) scary stories, but somehow blamed the internet companies… because they actually report such content when they find it on their networks. I’ve seen more than a few people, even those who have generally been strong voices on the encryption debate and against backdoors, waver a bit on this particular subject and suggest that maybe there shouldn’t be encryption on social media networks, because it might (as the narrative says) help awful people hide their child porn.
Except… that’s confusing a few different things. Mainly, it’s mixing up the content moderation debate with the “lawful access” or “backdoors” debate. Yes, encryption makes it harder for the police to get in and see certain things, but that’s by design. We live in a country with the 4th Amendment, in which we believe that it should be difficult for law enforcement to snoop deeply into our lives, and that’s always meant that some people will do and plot bad stuff out of the sight and hearing of law enforcement. Yet, if you look at law enforcement over the past 100 years, you can bet that it has many times more access to information about people today than it had in the past. The claim of “going dark” is laughable when you compare the information that law enforcement can get today even to what it could get 15 or 30 years ago.
But, importantly, bringing CSAM into the debate muddies the waters by pretending, incorrectly, that in an end-to-end encrypted world you can’t do any content moderation, and that there’s simply no way for platforms to block or report certain kinds of content. Yet, as Princeton professor Jonathan Mayer highlights in a new paper, content moderation is not impossible in an encrypted system. It may look different than it does today, but it’s still very much possible:
Much of the public discussion about content moderation and end-to-end encryption over the past week has appeared to reflect two common technical assumptions:
- Content moderation is fundamentally incompatible with end-to-end encrypted messaging.
- Enabling content moderation for end-to-end encrypted messaging fundamentally poses the same challenges as enabling law enforcement access to message content.
In a new discussion paper, I provide a technical clarification for each of these points.
- Forms of content moderation may be compatible with end-to-end encrypted messaging, without compromising important security principles or undermining policy values.
- Enabling content moderation for end-to-end encrypted messaging is a different problem from enabling law enforcement access to message content. The problems involve different technical properties, different spaces of possible designs, and different information security and public policy implications.
You can read the whole thing, but as the paper notes, user reporting of such content still works in an end-to-end encrypted world, as does hash matching if done at the client end. There’s a lot more in there as well, but what you realize in reading the paper is that while law enforcement has now latched onto the CSAM issue as its hook to break encryption (in part, I’ve been told by someone working with the DOJ, because they found it “polled well”), it’s an entirely different problem. This is yet another “but think of the children” argument, which ignores the technical and societal realities.
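To make the client-side hash matching idea concrete, here is a minimal sketch (not from Mayer's paper; the blocklist, hash choice, and function name are invented for illustration). Real systems use perceptual hashes such as PhotoDNA that survive resizing and re-encoding; a plain SHA-256 is shown here for simplicity, even though it only catches exact byte-for-byte copies. The key point is that the check runs on the sender's device, before encryption, so the platform never needs to decrypt anyone's messages:

```python
import hashlib

# Hypothetical blocklist of known-bad content hashes, distributed to clients.
# (This example entry is simply the SHA-256 of b"foo", for demonstration.)
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def check_before_encrypting(attachment: bytes) -> bool:
    """Return True if the attachment may be encrypted and sent.

    Runs entirely on the sender's device, before end-to-end encryption
    is applied, so no decrypted content ever leaves the client.
    """
    digest = hashlib.sha256(attachment).hexdigest()
    return digest not in KNOWN_HASHES

print(check_before_encrypting(b"foo"))    # False: hash match, block/report
print(check_before_encrypting(b"hello"))  # True: no match, OK to send
```

The design choice worth noticing is that moderation here is a property of the client, not of the ciphertext: the encryption itself stays intact, which is exactly why this is a different problem from "lawful access" to message content.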