The DOJ Is Conflating The Content Moderation Debate With The Encryption Debate: Don't Let Them

from the it's-not-the-same dept

As we've detailed a lot over the last week, the DOJ, after years of failing to get backdoors mandated by warning about the "terrorism" bogeyman, has decided to pick up the FOSTA playbook and focus instead on child porn -- or what "serious people" now refer to as Child Sexual Abuse Material (CSAM). It did this last week with an assist from the NY Times, which published an article with (legitimately) scary stories, but somehow blamed the internet companies... because they actually report it when they find such content on their networks. I've seen more than a few people -- even those who have generally been strong voices in the encryption debate and against backdoors -- waver a bit on this particular subject, and suggest that maybe there shouldn't be encryption on social media networks, because it might (as the narrative says) help awful people hide their child porn.

Except... that's confusing a few different things. Mainly, it's mixing up the content moderation debate with the "lawful access" or "backdoors" debate. Yes, encryption makes it harder for the police to get in and see certain things, but that's by design. We live in a country with the 4th Amendment, in which we believe that it should be difficult for law enforcement to snoop deeply into our lives -- and that's always meant that some people will do and plot bad stuff out of the sight and hearing of law enforcement. Yet if you look at law enforcement over the past 100 years, it has many times more access to information about people today than it ever had in the past. The claim of "going dark" is laughable when you compare the information law enforcement can get today to what it could get even 15 or 30 years ago.

But, importantly, bringing CSAM into the debate muddies the waters by pretending -- incorrectly -- that in an end-to-end encrypted world you can't do any content moderation, and that there's simply no way for platforms to block or report certain kinds of content. Yet, as Princeton professor Jonathan Mayer highlights in a new paper, content moderation is not impossible in an encrypted system. It may look different from how it works today, but it's still very much possible:

Much of the public discussion about content moderation and end-to-end encryption over the past week has appeared to reflect two common technical assumptions:

  1. Content moderation is fundamentally incompatible with end-to-end encrypted messaging.
  2. Enabling content moderation for end-to-end encrypted messaging fundamentally poses the same challenges as enabling law enforcement access to message content.

In a new discussion paper, I provide a technical clarification for each of these points.

  1. Forms of content moderation may be compatible with end-to-end encrypted messaging, without compromising important security principles or undermining policy values.
  2. Enabling content moderation for end-to-end encrypted messaging is a different problem from enabling law enforcement access to message content. The problems involve different technical properties, different spaces of possible designs, and different information security and public policy implications.

You can read the whole thing, but as the paper notes, user reporting of such content still works in an end-to-end encrypted world, as does hash matching if done at the client end. There's a lot more in there as well, but what you realize in reading the paper is that while law enforcement has now latched onto the CSAM issue as its hook to break encryption (in part, I've been told by someone working with the DOJ, because they found it "polled well"), it's an entirely different problem. This is yet another "but think of the children" argument, which ignores the technical and societal realities.
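As a rough illustration of what client-side hash matching means: before a message is end-to-end encrypted, the sending client can hash an attachment and check it against a list of known-bad hashes. This is a deliberately simplified sketch -- real systems use perceptual hashes like PhotoDNA rather than cryptographic ones, so that slightly altered images still match -- and the function name and blocklist here are hypothetical:

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known prohibited files.
# (This digest is simply SHA-256 of b"test", used to make the sketch testable.)
BLOCKED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def screen_before_encrypting(attachment: bytes) -> bool:
    """Return True if the attachment may be sent.

    Runs on the *client*, before the message is end-to-end encrypted,
    so the service never needs plaintext access to message content.
    """
    digest = hashlib.sha256(attachment).hexdigest()
    return digest not in BLOCKED_HASHES

assert screen_before_encrypting(b"test") is False            # matches blocklist
assert screen_before_encrypting(b"harmless photo bytes") is True
```

The key point, as the paper argues, is that the check happens where the plaintext already legitimately exists (the sender's device), not by giving the service or law enforcement a way into the encrypted channel.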



Filed Under: chris wray, content moderation, doj, encryption, going dark, jonathan mayer, william barr
Companies: facebook


Reader Comments

    urza9814 (profile), 10 Oct 2019 @ 10:38am

    Re: Re: Out of sight, out of their minds

    There are different kinds of moderation, and plenty of sites DO use mid-stream moderation that requires access to content. Facebook, for example.

    I like the idea posted by the AC below distinguishing moderation vs. filtering, although those words already have other uses, so they probably aren't the best choice. I'd call it something like "policy moderation" vs. "user moderation".

    Policy moderation is like Facebook: you set a bunch of rules about what is and is not allowed, you let users file reports of specific content, and then hired moderators review that content and determine whether it actually violates the rules. Some sites also use immediate policy moderation, where a post is reviewed by a human for compliance before it is ever visible. Some use a mix, with automated filters that decide whether a comment should be held for human review. But all of those options require administrators at the company to be able to review the posted content. So either the company needs to be able to decrypt everything, or at the very least it needs to insert code that takes the decrypted message from the user and passes it back to the company unencrypted. Either way, the company is getting unencrypted access. And you obviously can't count on automatic filtering at the client end -- if client-side code flags a comment as requiring human review, the client can easily prevent that code from running. You can use that to keep things from being viewed, but not from being posted and distributed.
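The "policy moderation" flow the commenter describes -- an automated filter that either publishes a post or holds it for human review -- can be sketched roughly like this (the class and names are invented for illustration, not any real platform's API):

```python
from dataclasses import dataclass, field

BANNED_TERMS = {"spamword"}  # stand-in for a real automated classifier

@dataclass
class ModerationQueue:
    """Minimal sketch of a policy-moderation pipeline."""
    pending: list = field(default_factory=list)

    def submit(self, author: str, text: str) -> bool:
        """Auto-filter on submission: publish now, or hold for human review."""
        if any(term in text.lower() for term in BANNED_TERMS):
            self.pending.append((author, text))
            return False  # held for a hired moderator
        return True  # published immediately

    def review(self, approve: bool):
        """A human moderator rules on the oldest held item."""
        author, text = self.pending.pop(0)
        return (author, text) if approve else None

q = ModerationQueue()
assert q.submit("alice", "hello world") is True       # published directly
assert q.submit("bob", "buy SPAMWORD now") is False   # held for review
```

Note where the plaintext has to live: both the filter and the reviewer need unencrypted access to `text`, which is exactly the commenter's point about why this style of moderation conflicts with end-to-end encryption on the server side.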

    For "user moderation", you just count downvotes and hide anything with enough downvotes. That could be done without direct access by the company to the decrypted content. But it doesn't let you set any kind of consistent rules, and it can often get abused, especially in larger communities. Things will get flagged because people just don't like the opinion expressed or the person expressing it...and there's not much you can do to prevent that.
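The downvote-threshold scheme above can be sketched in a few lines. Notably, the service only ever needs opaque message IDs and vote tallies, never the decrypted content, which is why this style of moderation remains workable under end-to-end encryption (the names and threshold here are invented for illustration):

```python
from collections import Counter

HIDE_THRESHOLD = 3          # hypothetical: hide after this many downvotes
downvotes: Counter = Counter()

def downvote(message_id: str) -> None:
    """Record a downvote. The service sees only an opaque ID, not content."""
    downvotes[message_id] += 1

def is_hidden(message_id: str) -> bool:
    """Hide any message that has crossed the community threshold."""
    return downvotes[message_id] >= HIDE_THRESHOLD

for _ in range(3):
    downvote("msg-42")

assert is_hidden("msg-42") is True
assert is_hidden("msg-7") is False
```

The trade-off is as the commenter says: since no one at the company sees the content, there is no way to enforce consistent rules, and brigading can hide unpopular-but-legitimate speech.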

