EARN IT's Big Knowledge 1st Amendment Problem

from the why-doesn't-anyone-understand-this dept

We’ve talked about so many problems with the EARN IT Act, but there are more! I touched on this in my post about how EARN IT is worse than FOSTA, and it came up again during the markup last week, which showed that the Senators pushing for this bill do not understand the issues around the knowledge standard at play here, or how various state laws complicate things. Is it somewhat pathetic that the very senators pushing a law that would make major changes impacting a wide variety of things don’t seem to understand the underlying mechanisms at play? Sure is! But rest assured that you can be smarter than a senator.

First, let’s start here: the senators supporting EARN IT seem to think that if you remove Section 230 protections for a type of law-violating content (in this case, child sexual abuse material, or CSAM), that magically means websites will be liable for that content, and that because of that they’ll magically make it disappear. The problem is that this is not how any of this actually works. Section 230 expert and law professor Jeff Kosseff broke down the details in a great thread, but I want to make it even clearer.

As a reminder, Section 230 has never been a “get out of jail free” card, as some of its critics suggest. It’s a procedural benefit that gets cases that would otherwise lose on 1st Amendment grounds tossed out at an earlier stage (when it’s much less costly, and thus, much less likely to destroy a smaller company).

So, here, the senators supporting EARN IT seem to think, falsely, that if they remove Section 230 for CSAM, then (1) websites will automatically be liable for CSAM, and (2) that legal risk will somehow spur them into taking down all CSAM, making it go away. Both of these assumptions are wrong, and wrong in such stupid ways that, again, EARN IT would likely make problems worse, not better. The real problem underlying both is the question of “knowledge.” Legal folks like Jeff Kosseff dress this up as “mens rea,” but the key question is whether or not a website knows about the illegal content.

This impacts everything in multiple ways. As Kosseff points out in his thread, Supreme Court precedent (which you would know if you read just the first chapter of his Section 230 book) says that for a distributor to be held liable for content that is not protected by the 1st Amendment, it needs to have knowledge of the illegal content. Supporters of EARN IT counter with the correct, but meaningless, line that “CSAM is not protected by the 1st Amendment.” And it’s not. But that’s not the question when it comes to distributor liability. In Smith v. California, the Supreme Court overturned the conviction of Eleazar Smith (whose bookstore sold a book the police believed was obscene), holding that even if the book’s content was not protected by the 1st Amendment, the state cannot, consistent with the 1st Amendment, impose liability on a distributor that does not have knowledge of the unprotected nature of the content. Any other result, Justice Brennan correctly noted, would lead distributors to be much more censorial, including of protected speech:

There is no specific constitutional inhibition against making the distributors of food the strictest censors of their merchandise, but the constitutional guarantees of the freedom of speech and of the press stand in the way of imposing a similar requirement on the bookseller. By dispensing with any requirement of knowledge of the contents of the book on the part of the seller, the ordinance tends to impose a severe limitation on the public’s access to constitutionally protected matter. For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected as well as obscene literature. It has been well observed of a statute construed as dispensing with any requirement of scienter that: ‘Every bookseller would be placed under an obligation to make himself aware of the contents of every book in his shop. It would be altogether unreasonable to demand so near an approach to omniscience.’ The King v. Ewart, 25 N.Z.L.R. 709, 729 (C.A.). And the bookseller’s burden would become the public’s burden, for by restricting him the public’s access to reading matter would be restricted. If the contents of bookshops and periodical stands were restricted to material of which their proprietors had made an inspection, they might be depleted indeed. The bookseller’s limitation in the amount of reading material with which he could familiarize himself, and his timidity in the face of his absolute criminal liability, thus would tend to restrict the public’s access to forms of the printed word which the State could not constitutionally suppress directly. The bookseller’s self-censorship, compelled by the State, would be a censorship affecting the whole public, hardly less virulent for being privately administered. Through it, the distribution of all books, both obscene and not obscene, would be impeded.

While there are some other cases in this area, Smith remains precedent, and it’s difficult to see how the courts would (or could) say that a website is strictly liable for content that it does not know about.

This creates a bunch of problems. First and foremost, removing Section 230 in this context gives websites an incentive not to do more to find CSAM, but actually to do less to find CSAM, because the lack of knowledge would most likely protect them from liability. That is the opposite of what everyone should want.

Second, it creates various problems in how EARN IT interacts with state laws. As we’ve pointed out in the past, EARN IT isn’t just about the federal standards for CSAM; it opens up websites to legal claims under state laws as well. And the knowledge standards regarding CSAM in state laws are, literally, all over the map. Many do require actual knowledge (which, again, reverses the incentives here). Others, however, have much more troubling standards like “should have known” or “good reason to know,” or, in some cases, set a standard of “recklessness” for not knowing.

Some of those standards, if challenged, might not stand up to the kind of 1st Amendment scrutiny found in Smith v. California, which suggests actual knowledge should be required. Either way, the law would create a huge mess, mostly incentivizing companies not to look for this content. And considering that the sponsors of the bill keep saying the whole reason for the bill is to get companies to do more looking for CSAM, they’ve literally got the entire law backwards.

What’s most troubling is that when Senator Blumenthal was pushed on this point during the markup, and it was noted that different states have different standards, rather than recognizing one of the many (many) problems with the bill, he literally suggested that he hoped more states would change their standards to a potentially unconstitutional level, in which actual knowledge is not required for liability. And that’s just setting up a really dangerous confrontation with the 1st Amendment.

If Senator Blumenthal and his legislative staffers actually cared about stopping CSAM, they would be willing to engage and talk about this. Instead, they refuse to engage, and mock anyone who brings up these points. Perhaps it’s fun for them to generate false headlines while fundamentally causing massive problems for the internet and for speech, making the CSAM problem worse while pretending the reverse is happening. But some of us find it immensely problematic.



Comments on “EARN IT's Big Knowledge 1st Amendment Problem”

This comment has been deemed funny by the community.
Annonymouse says:

Fallacies

Two inaccuracies in the first two paragraphs.

1 – You too can be smarter than a senator.

A slug in the middle of the freeway is smarter than the average senator.

2 – senators supporting EARN IT seem to think

When has it ever been established that senators have the ability to think?

Now back to your regularly scheduled rants.

Anonymous Coward says:

Re: Re:

Given what we know, it’s likely that they will start heavily restricting what you can post, possibly pre-screening and/or scanning it before allowing it to go through. The bigger players can probably absorb the cost of hiring new moderators and deploying such technologies, but smaller ones? They’re more likely to close up shop.

Reddit and Discord I can see sticking around but heavily restricting what you can and cannot say. Discord already stores everything you send.
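
To make the pre-screening idea above concrete, here is a minimal sketch of hash-based upload scanning, the rough mechanism behind systems like PhotoDNA or NeuralHash. Everything here is hypothetical: the names (KNOWN_BAD_HASHES, screen_upload, handle_upload) are made up, and a real deployment would use perceptual hashes that survive re-encoding rather than the exact-match SHA-256 shown for simplicity:

```python
import hashlib

# Hypothetical hash list; real platforms load vetted perceptual-hash
# lists from clearinghouses such as NCMEC rather than hardcoding them.
KNOWN_BAD_HASHES: set[str] = set()

def screen_upload(file_bytes: bytes) -> bool:
    """Return True if the upload passes, False if it matches a known hash."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest not in KNOWN_BAD_HASHES

def handle_upload(file_bytes: bytes) -> str:
    if not screen_upload(file_bytes):
        # A real platform would quarantine the file and report it,
        # not just silently reject the upload.
        return "blocked"
    return "accepted"
```

Note the catch the article describes: a platform that runs scans like this acquires exactly the “knowledge” that creates liability once Section 230 is stripped, which is why smaller players may decide not to look at all.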

This comment has been flagged by the community.

Koby (profile) says:

Privacy Benefit

but actually to do less to find CSAM, because the lack of knowledge would most likely protect them from liability. That is the opposite of what everyone should want.

There was the Cubby v. CompuServe model prior to Section 230’s existence. It did have the side effect of keeping the busybodies away. This might discourage systems such as the controversial Apple NeuralHash scans. If it’s on iCloud, and they’re scanning, then they might be held liable if it doesn’t work.

This isn’t a good reason to support the legislation. I’m just saying that we complain about companies and governments collecting and scanning data to judge whether they approve of it. Sometimes, others’ lack of knowledge is a good thing for the privacy advocate.

This comment has been flagged by the community.

Herman 'Testy' Coolzip says:

Re: Re: A. Stephen Stone doesn't just look hateful, but IS.

You fanboys STILL seem intent on making the site a cesspit, running off anyone reasonable from even reading. YOU alone have surely run off hundreds.

Keep it up. Opposition now doesn’t even need to show up here. MM is an unconvincing lunatic who allows you kids to make his site disgusting.

This comment has been flagged by the community.

Herman 'Testy' Coolzip says:

Biden / admin advise "Big Tech" often on what's "disinformation"

You’ve made no objection to active collusion / pressure from "Democrats" to have entirely legal 1A speech taken down.

Yourself (and fanboys) advocate removal of "mis/disinformation".

You are way over the line into censorship; you just don’t want CORPORATE CONTROL interfered with.


This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: Re: Re:

Asked to give a specific example of which speech you/Woody are worried about being removed, your first response is to say that if you had the power you would take someone else’s speech down simply out of spiteful hypocrisy.

It takes some real stupid to turn that question into an own-goal but damn if you didn’t just manage it.

This comment has been deemed funny by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:

It’s kind of hilarious in a mundane way: Given the chance to detail the kind of speech you feel is under threat from government censorship, you pivot to directly threatening my First Amendment rights because…reasons.

Did you fix up a room for me in your head before or after I started living there rent-free?

That Anonymous Coward (profile) says:

Anyone have results of the testing for lead in Congress members?
I mean, it’s either that they’ve been poisoned & that caused the brain damage, or they’ve just given up trying to hide their corruption and blind ambition for more power.

This shouldn’t be a thing and the fact they have been told…
If you do this, they will stop looking for CSAM, because you’ve given them a reason to never look for it, you fucking morons.
Yet they still pretend this will fix it.

We really need leadership that doesn’t live in a fantasy bubble detached from reality, where their magic underpants-gnome thinking solves nothing & makes everything much worse.

Sok Puppette says:

There's more than one kind of knowledge

This article and the linked thread seem to be centered on the kind of "knowledge" where you know "this piece of content over here in this file is child porn".

But there’s another kind of knowledge, of the form "I run a social site with 100,000,000 users. It is a practical certainty that there’s child porn going through my system." It’s not just "I’m ignoring a real possibility." It’s "I’m sure there’s some here; I just don’t know exactly where it is." Especially after the first few unrelated cases in which you find some.

That kind of thing really isn’t captured by the normal lay idea of "recklessness". And if it falls within some legal definition of recklessness, then it’s still at least an extremely strong form, way out near the boundary with actual knowledge… which is probably a boundary that can move given the right kind of bad-law-making case.

I think that the "EARN-IT" people are hoping to be able to go after the second kind of knowledge, and I’m afraid that Smith may not be protection enough.

A bookseller in 1958 who happened to have one "obscene" book could reasonably argue that they didn’t know what was in it and also didn’t know, or even have any reason to believe, that there was anything like that in their stock at all.

A large social site in 2022 knows there’s some child porn in the mix somewhere. I suspect that the proponents are hoping that they can use that as enough scienter to get around Smith completely.

It’s true that it’s still just as impractical for a site to find every single bit of child porn as it would be for a bookseller to find every "obscene" book… but they can still push for the idea that the First Amendment allows them to require a site to do "everything reasonably possible". Not just because it’s supposedly a "best practice". Not just because not doing it would risk not finding child porn. Because the site has actual knowledge that there’s a problem on their particular system.

That means they can still try to demand scanning, whether via state law or via some other path. Scanning, of course, means no effective encryption. They will try to get those requirements in through the back door even if they’re not in the bill, and given the subject matter I’d be really worried that they’d win in court.

The right answer, of course, is "Yeah, I’m sure there’s some child porn on every major site. Tough". But nobody seems to have the guts to say that.

That One Guy (profile) says:

Re: There's more than one kind of knowledge

The right answer, of course, is "Yeah, I’m sure there’s some child porn on every major site. Tough". But nobody seems to have the guts to say that.

That would be far too easy to use against the platform owner. Perhaps a better phrasing would be ‘Yeah, that content is on our platform despite our best efforts to keep it off; if you can come up with a better way to handle it other than telling us to try harder, we’re all ears.’

That puts the onus on those claiming that more could be done to actually come up with a better way and allows whatever they come up with to be put under scrutiny for viability, with any blame easier to dump on them rather than the platform.

Sok Puppette says:

Re: Re: There's more than one kind of knowledge

I’m not saying the "platform owners" should say that. I’m saying EVERYBODY should say it.

The problem with asking them to come up with suggestions is that they WILL. And they will claim that their suggestions are workable when they’re actually not. And they’ll claim that their suggestions don’t force disabling security measures when they actually do. And they’ll claim that their suggestions don’t put people at risk when they actually do.

They will never come up with any suggestions that don’t have those problems, because that is not possible. However, every time you manage to argue away one suggestion, they’ll reword things a bit, come up with a slightly modified one, and claim this one is the fix. They can do this forever.

… and their message to people who are not closely engaged with the issue will be that they’ve tried and tried to be reasonable and address the sane people’s concerns, but the sane people are unreasonable and hate compromise and won’t accept anything at all.

It is incredibly bad strategy to adopt any message that suggests there could be an acceptable way to do what those people want, because there is not.

Anonymous Coward says:

Re: Re: There's more than one kind of knowledge

It still crosses the line. It’s punishing a platform because it did not have "knowledge" of CSAM but "should’ve known." (Plus, as many have pointed out, some still slips through despite their best efforts.)

As CDT noted: "Having a “ground to believe” something is not “knowing” it. Opening up a social media platform to liability when the platform merely has a ground to believe that CSAM is carried on the platform may “so strongly encourage” the provider to search for it that the search no longer becomes merely a private initiative."

So it seems reckless to punish a platform for CSAM it did not know about, plus it feeds into the 4A issues as well.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Enough projection to reach the moon...

Senators: ‘We don’t care about CSAM and can’t be bothered to actually do anything about it, so obviously internet platforms don’t care about CSAM and can’t be bothered to do anything about it either. We’ll blame them for not getting rid of all of it, and if anyone tries to point out that we haven’t done anything but blame them, we’ll just call them Big Tech lobbyists/shills. It’s brilliant!’

And as always: a vote for EARN IT is a vote for CSAM.
