Jess Miers's Techdirt Profile


Posted on Techdirt - 17 March 2023 @ 11:59am

Yes, Section 230 Should Protect ChatGPT And Other Generative AI Tools

Question Presented: Does Section 230 Protect Generative AI Products Like ChatGPT?

As the buzz around Section 230 and its application to algorithms intensifies in anticipation of the Supreme Court's response, 'generative AI' has soared in popularity among users and developers, raising the question: does Section 230 protect generative AI products like ChatGPT? Matt Perault, a prominent technology policy scholar, thinks not, as he argues in his recently published Lawfare article: Section 230 Won't Protect ChatGPT.

Perault's main argument runs as follows: because of the nature of generative AI, ChatGPT operates as a co-creator (or material contributor) of its outputs and therefore could be considered the 'information content provider' of problematic results, ineligible for Section 230 protection. The co-authors of Section 230, former Representative Chris Cox and Sen. Ron Wyden, have also suggested that their law doesn't grant immunity to generative AI.

I respectfully disagree with both the co-authors of Section 230 and Perault, and offer the counterargument: Section 230 does (and should) protect products like ChatGPT.

In my opinion, generative AI does not demand exceptional treatment, especially since, as it currently stands, generative AI is not exceptional technology; an understandably provocative take to which we'll soon return.

But first, a refresher on Section 230.

Section 230 Protects Algorithmic Curation and Augmentation of Third-Party Content 

Recall that Section 230 says websites and users are not liable for the content they did not create, in whole or in part. To evaluate whether the immunity applies, the Barnes v. Yahoo! Court provided a widely accepted three-part test:

  1. The defendant is an interactive computer service; 
  2. The plaintiff’s claim treats the defendant as a publisher or speaker; and
  3. The plaintiff’s claim derives from content the defendant did not create. 

The first prong is not typically contested. Indeed, the latter prongs are usually the flashpoint(s) of most Section 230 cases. And in the case of ChatGPT, the third prong seems especially controversial. 

Section 230’s statutory language states that a website becomes an information content provider when it is “responsible, in whole or in part, for the creation or development” of the content at issue. In their recent Supreme Court case challenging Section 230’s boundaries, the Gonzalez Petitioners assert that the use of algorithms to manipulate and display third-party content precludes Section 230 protection because the algorithms, as developed by the defendant website, convert the defendant into an information content provider. But existing precedent suggests otherwise.

For example, the Court in Fair Housing Council of San Fernando Valley v. Roommates.com (aka 'the Roommates case')—a case often invoked to evade Section 230—held that it is not enough for a website to merely augment the content at issue to be considered a co-creator or developer. Rather, the website must have materially contributed to the content's alleged unlawfulness. Or, as the majority put it, "[i]f you don't encourage illegal content, or design your website to require users to input illegal content, you will be immune."

The majority also expressly distinguished Roommates.com from "ordinary search engines," noting that unlike Roommates.com, search engines like Google do not use unlawful criteria to limit the scope of searches conducted (or results delivered), nor are they designed to achieve illegal ends. In other words, the majority suggests that websites retain immunity when they provide neutral tools to facilitate user expression.

While “neutrality” brings about its own slew of legal ambiguities, the Roommates Court offers some clarity suggesting that websites with a more hands-off approach to content facilitation are safer than websites that guide, encourage, coerce, or demand users produce unlawful content. 

For example, while the Court rejected Roommates.com's Section 230 defense for its allegedly discriminatory drop-down options, the Court simultaneously upheld Section 230's application to the "additional comments" option offered to Roommates.com users. The "additional comments" were separately protected because Roommates did not solicit, encourage, or demand that users provide unlawful content via the web form. In other words, a blank web form that simply asks for user input is a neutral tool, eligible for Section 230 protection, regardless of how the user actually uses the tool.

The Barnes Court would later reiterate the neutral tools argument, noting that the provision of neutral tools to carry out what may be unlawful or illicit content does not amount to 'development' for the purposes of Section 230. Hence, while the 'material contribution' test is rather nebulous (especially for emerging technologies), it is relatively clear that a website must do something more than augment, curate, and display content (algorithmically or otherwise) to transform into the creator or developer of third-party content.

The Court in Kimzey v. Yelp offers further clarification: 

"the material contribution test makes a 'crucial distinction between, on the one hand, taking actions (traditional to publishers) that are necessary to the display of unwelcome and actionable content and, on the other hand, responsibility for what makes the displayed content illegal or actionable.'"

So, what does this mean for ChatGPT?

The Case For Extending Section 230 Protection to ChatGPT

In his line of questioning during the Gonzalez oral arguments, Justice Gorsuch called into question Section 230’s application to generative AI technologies. But before we can even address the question, we need to spend some time understanding the technology. 

Products like ChatGPT use large language models (LLMs) to produce reasonable, human-sounding continuations of text. In other words, as discussed here by Stephen Wolfram, renowned computer scientist, mathematician, and creator of WolframAlpha, ChatGPT's core function is to "continue text in a reasonable way, based on what it's seen from the training it's had (which consists in looking at billions of pages of text from the web, etc)."
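For intuition, here is a minimal, purely illustrative sketch of that "continuation" loop. This is not OpenAI's implementation; the `language_model` object and its `next_token_probabilities` method are hypothetical stand-ins for a trained model.

```python
# Minimal sketch (not OpenAI's code) of the "reasonable continuation" loop.
# `language_model` and `next_token_probabilities` are hypothetical stand-ins.

def continue_text(language_model, prompt_tokens, max_new_tokens=50):
    """Greedily append the most probable next token to the user's prompt."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # The model only scores candidate next tokens given everything so far;
        # it supplies no independent plan, goal, or content of its own.
        probabilities = language_model.next_token_probabilities(tokens)
        next_token = max(probabilities, key=probabilities.get)
        if next_token == "<end-of-text>":
            break
        tokens.append(next_token)
    return tokens
```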

While ChatGPT is impressive, the science behind it is not necessarily remarkable. Computing technology reduces complex mathematical computations into step-by-step functions that the computer can then solve at tremendous speeds. As humans, we do this all the time, just much slower than a computer. For example, when we’re asked to do non-trivial calculations in our heads, we start by breaking up the computation into smaller functions on which mental math is easily performed until we arrive at the answer.

Tasks that we assume are fundamentally impossible for computers to solve are said to involve 'irreducible computations' (i.e. computations that cannot simply be broken up into smaller mathematical functions, unaided by human input). Artificial intelligence relies on neural networks to learn and then 'solve' said computations. ChatGPT approaches human queries the same way. Except, as Wolfram notes, it turns out that said queries are not as computationally sophisticated as we may have thought:

“In the past there were plenty of tasks—including writing essays—that we’ve assumed were somehow “fundamentally too hard” for computers. And now that we see them done by the likes of ChatGPT we tend to suddenly think that computers must have become vastly more powerful—in particular surpassing things they were already basically able to do (like progressively computing the behavior of computational systems like cellular automata).

But this isn’t the right conclusion to draw. Computationally irreducible processes are still computationally irreducible, and are still fundamentally hard for computers—even if computers can readily compute their individual steps. And instead what we should conclude is that tasks—like writing essays—that we humans could do, but we didn’t think computers could do, are actually in some sense computationally easier than we thought.

In other words, the reason a neural net can be successful in writing an essay is because writing an essay turns out to be a “computationally shallower” problem than we thought. And in a sense this takes us closer to “having a theory” of how we humans manage to do things like writing essays, or in general deal with language.”

In fact, ChatGPT is even less sophisticated when it comes to its underlying architecture. As Wolfram asserts:

"[In] ChatGPT as it currently is, the situation is actually much more extreme, because the neural net used to generate each token of output is a pure "feed-forward" network, without loops, and therefore has no ability to do any kind of computation with nontrivial "control flow.""

Put simply, ChatGPT uses predictive algorithms and an array of data made up entirely of publicly available information online to respond to user-created inputs. The technology is not sophisticated enough to operate outside of human-aided guidance and control. This means that ChatGPT (and similarly situated generative AI products) is functionally akin to "ordinary search engines" and predictive technology like autocomplete.
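To make the autocomplete comparison concrete, here is a toy sketch in which every suggestion is drawn verbatim from an indexed corpus of third-party text; the tool contributes only retrieval and ranking. The corpus below is a made-up placeholder, not real scraped data.

```python
# Toy autocomplete: suggestions are drawn verbatim from indexed third-party
# text; the tool itself contributes ranking and retrieval, not content.
from collections import Counter

THIRD_PARTY_CORPUS = [  # hypothetical stand-in for scraped third-party text
    "section 230 protects websites from liability for user content",
    "section 230 reform proposals are pending in congress",
    "section 230 protects websites from liability for user content",
    "generative ai tools raise new legal questions",
]

def autocomplete(prefix, corpus=THIRD_PARTY_CORPUS, limit=3):
    """Return the most common corpus entries that start with the user's prefix."""
    matches = Counter(line for line in corpus if line.startswith(prefix.lower()))
    return [line for line, _ in matches.most_common(limit)]

print(autocomplete("section 230"))
# ['section 230 protects websites from liability for user content',
#  'section 230 reform proposals are pending in congress']
```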

Now we apply Section 230. 

For the most part, the courts have consistently applied Section 230 to algorithmically generated outputs. For example, the Sixth Circuit in O’Kroley v. Fastcase Inc. upheld Section 230 for Google’s automatically generated snippets that summarize and accompany each Google result. The Court notes that even though Google’s snippets could be considered a separate creation of content, the snippets derive entirely from third-party information found at each result. Indeed, the Court concludes that contextualization of third-party content is in fact a function of an ordinary search engine. 

Similarly, in Obado v. Magedson, the Court applied Section 230 to search result snippets:

Plaintiff also argues that Defendants displayed through search results certain “defamatory search terms” like “Dennis Obado and criminal” or posted allegedly defamatory images with Plaintiff’s name. As Plaintiff himself has alleged, these images at issue originate from third-party websites on the Internet which are captured by an algorithm used by the search engine, which uses neutral and objective criteria. Significantly, this means that the images and links displayed in the search results simply point to content generated by third parties. Thus, Plaintiff’s allegations that certain search terms or images appear in response to a user-generated search for “Dennis Obado” into a search engine fails to establish any sort of liability for Defendants. These results are simply derived from third-party websites, based on information provided by an “information content provider.” The linking, displaying, or posting of this material by Defendants falls within CDA immunity.

The Court also nods to Roommates:

“None of the relevant Defendants used any sort of unlawful criteria to limit the scope of searches conducted on them; “[t]herefore, such search engines play no part in the ‘development’ of the unlawful searches” and are acting purely as an interactive computer service…

The Court goes further, extending Section 230 to autocomplete (i.e. when the service at issue uses predictive algorithms to suggest and anticipate a user's query):

“suggested search terms auto-generated by a search engine do not remove that search engine from the CDA’s broad protection because such auto-generated terms “indicates only that other websites and users have connected plaintiff’s name” with certain terms.”

Like Google Search, ChatGPT is entirely driven by third-party input. In other words, ChatGPT does not invent, create, or develop outputs absent any prompting from an information content provider (i.e. a user). Further, nothing on the service expressly or impliedly encourages users to submit unlawful queries. In fact, OpenAI continues to implement guardrails that force ChatGPT to ignore requests that would demand problematic and/or unlawful responses. Compare this to Google Search, which may actually still provide a problematic or even unlawful result. Perhaps ChatGPT actually improves the baseline for ordinary search functionality.
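At a very simplified level, a guardrail of the kind described above can be sketched as a pre-screening wrapper that refuses rather than answers. The blocklist and the `model.generate` call are hypothetical placeholders, not OpenAI's actual moderation pipeline, which presumably relies on trained classifiers rather than keyword lists.

```python
# Hypothetical guardrail sketch: screen the user's query before the model
# ever sees it, and refuse instead of answering.

BLOCKED_TOPICS = ("build a weapon", "defame", "dox")  # illustrative only

def looks_unlawful(query: str) -> bool:
    """Return True if the query appears to seek unlawful or harmful output."""
    lowered = query.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def guarded_response(model, query: str) -> str:
    if looks_unlawful(query):
        return "I can't help with that request."
    return model.generate(query)  # `model.generate` is a hypothetical LLM call
```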

Indeed, ChatGPT essentially functions like the “additional comments” web form in Roommates. And while ChatGPT may “transform” user input into a result that responds to the user-driven query, that output is entirely composed of third-party information scraped from the web. Without more, this transformation is simply an algorithmic augmentation of third-party content (much like Google’s snippets). And as discussed, algorithmic compilations or augmentations of third-party content are not enough to transform the service into an information content provider (e.g. Roommates; Batzel v. Smith; Dyroff v. The Ultimate Software Group, Inc.; Force v. Facebook). 

The Limit Does Exist

Of course, Section 230's coverage is not without its limits. There's no doubt that future generative AI defendants, like OpenAI, will face an uphill battle in persuading a court. Not only do defendants face the daunting challenge of explaining generative AI technologies to less technologically savvy judges, but the current judicial swirl around Section 230 and algorithms also does them no favors.

For example, the Supreme Court could very well hand down a convoluted opinion in Gonzalez that introduces ambiguity as to when Section 230 applies to algorithmic curation and augmentation. Such an opinion would only serve to undermine the precedent discussed above. Indeed, future defendants may find themselves embroiled in convoluted debates about AI's capacity for neutrality. In fact, it would be intellectually dishonest to ignore emerging common law developments that preclude Section 230 protection for claims alleging dangerous or defective product designs (e.g. Lemmon v. Snap, A.M. v. Omegle, Oberdorf v. Amazon).

Further, the Fourth Circuit's recent decision in Henderson v. Public Data could also prove problematic for future AI defendants, as it exposes services to liability for publisher activities that go beyond "traditional editorial functions" (a category that could sweep in any and all publisher functions performed via algorithms).

Lastly, as we saw in the Meta/DOJ settlement regarding Meta's discriminatory practices involving algorithmic targeting of housing advertisements, AI companies cannot easily avoid liability when they materially contribute to the unlawfulness of the result. If OpenAI were to hard-code ChatGPT with unlawful responses, Section 230 would likely be unavailable. However, as you might imagine, this is a non-trivial distinction.

Public Policy Demands Section 230 Protections for Generative AI Technologies

Section 230 was initially established with the recognition that the online world would undergo frequent advancements, and that the law must accommodate these changes to promote a thriving digital ecosystem. 

Generative AI is the latest iteration of web technology that has enormous potential to bring about substantial benefits for society and transform the way we use the Internet. And it’s already doing good. Generative AI is currently used in the healthcare industry, for instance, to improve medical imaging and to speed up drug discovery and development. 

As discussed, courts have developed precedent in favor of Section 230 immunity for online services that solicit or encourage users to create and provide content. Courts have also extended the immunity to online services that facilitate the submission of user-created content. From a legal standpoint, generative AI tools are no different from any other online service that encourages user interaction and contextualizes third-party results.

From a public policy perspective, it is crucial that courts uphold Section 230 immunity for generative AI products. Otherwise, we risk foreclosing the technology's true potential. Today, there are countless variations of ChatGPT-like products offered by independent developers and computer scientists who are likely unequipped to deal with an inundation of litigation that Section 230 typically preempts.

In fact, generative AI products are arguably more vulnerable to frivolous lawsuits because they depend entirely upon whatever queries or instructions their users may provide, malicious or otherwise. Without Section 230, developers of generative AI services must anticipate and guard against every type of query that could cause harm.

Indeed, thanks to Section 230, companies like OpenAI are doing just that by providing guardrails that limit ChatGPT's responses to malicious queries. But those guardrails are neither comprehensive nor perfect. And as with all other efforts to moderate awful online content, the elimination of Section 230 could discourage generative AI companies from implementing said guardrails in the first place, a countermove that would enable users to prompt LLMs with malicious queries to bait out unlawful responses subject to litigation. In other words, plaintiffs could transform ChatGPT into their very own personal perpetual litigation machine.

And as Perault rightfully warns: 

“If a company that deploys an LLM can be dragged into lengthy, costly litigation any time a user prompts the tool to generate text that creates legal risk, companies will narrow the scope and scale of deployment dramatically. Without Section 230 protection, the risk is vast: Platforms using LLMs would be subject to a wide array of suits under federal and state law. Section 230 was designed to allow internet companies to offer uniform products throughout the country, rather than needing to offer a different search engine in Texas and New York or a different social media app in California and Florida. In the absence of liability protections, platforms seeking to deploy LLMs would face a compliance minefield, potentially requiring them to alter their products on a state-by-state basis or even pull them out of certain states entirely…

…The result would be to limit expression—platforms seeking to limit legal risk will inevitably censor legitimate speech as well. Historically, limits on expression have frustrated both liberals and conservatives, with those on the left concerned that censorship disproportionately harms marginalized communities, and those on the right concerned that censorship disproportionately restricts conservative viewpoints.

The risk of liability could also impact competition in the LLM market. Because smaller companies lack the resources to bear legal costs like Google and Microsoft may, it is reasonable to assume that this risk would reduce startup activity.”

Hence, regardless of how we feel about Section 230’s applicability to AI, we will be forced to reckon with the latest iteration of Masnick’s Impossibility Theorem: there is no content moderation system that can meet the needs of all users. The lack of limitations on human awfulness mirrors the constant challenge that social media companies encounter with content moderation. The question is whether LLMs can improve what social media cannot.

Posted on Techdirt - 2 November 2020 @ 09:35am

Your Problem Is Not With Section 230, But The 1st Amendment

Everyone wants to do something about Section 230. It's baffling how seldom we talk about what happens next. What if Section 230 is repealed tomorrow? Must Twitter cease fact-checking the President? Must Google display all search results in chronological order? Perhaps PragerU would finally have a tenable claim against YouTube; and Jason Fyk might one day return to showering the Facebook masses with his prized collection of pissing videos.

Suffice to say, that's not how any of this works.

Contrary to what seems to be popular belief, Section 230 isn't what's stopping the government from pulling the plug on Twitter for taking down NY Post tweets or exposing bloviating, lying, elected officials. Indeed, without Section 230, plaintiffs with a big tech axe to grind still have a significant hurdle to overcome: The First Amendment.

As private entities, websites have always enjoyed First Amendment (freedom of speech) protections for the content they choose (and choose not) to carry. What many erroneously (and ironically) declare as "censorship" is really no different from the editorial discretion enjoyed by newspapers, broadcasters, and your local bookstore. When it comes to the online world, we simply call it content moderation. The decision to fact-check, remove, reinstate, or simply leave content up is wholly within the First Amendment's purview. On the flip side, as private, non-government actors, websites do not owe their users the same First Amendment protection for their content.

Or, as TechFreedom's brilliant Ashkhen Kazaryan wisely puts it, the First Amendment protects Twitter from Trump, but not Trump from Twitter.

What then is Section 230's use if the First Amendment already stands in the way? Put simply, Section 230 says websites are not liable for third-party content. In practice, Section 230 merely serves as a free speech fast-lane. Under Section 230, websites can reach the same inevitable conclusions they would reach under the First Amendment, only faster and cheaper. Importantly, Section 230 grants websites and users peace of mind knowing that plaintiffs are less likely to sue them for exercising their editorial discretion, and even if they do, websites and users are almost always guaranteed a fast, cheap, and painless win. That peace of mind is especially crucial for market entrants poised to unseat the big tech incumbents.

With that, it seems that Americans haven't fallen out of love with Section 230; rather, alarmingly, they've fallen out of love with the First Amendment. In case you're wondering if you too have fallen out of love with the freedom of speech, consider the following:

If you’re upset that Twitter and Facebook keep removing content that favors your political viewpoints,

Your problem is with the First Amendment, not Section 230.

If you’re upset that your favorite social media site won’t take down content that offends you,

Your problem is with the First Amendment, not Section 230.

If you’re mad at search engines for indexing websites you don’t agree with,

Your problem is with the First Amendment, not Section 230.

If you're mad at a website for removing your posts – even when it seems unreasonable,

Your problem is with the First Amendment, not Section 230.

If you don’t like the way a website aggregates content on your feed or in your search results,

Your problem is with the First Amendment, not Section 230.

If you wish websites had to carry and remove only specific pre-approved types of content,

Your problem is with the First Amendment, not Section 230.

If you wish social media services had to be politically neutral,

Your problem is with the First Amendment, not Section 230.

If someone wrote a negative online review about you or your business,

Your problem is with the First Amendment, not Section 230.

If you hate pornography,

Your problem is with the First Amendment, not Section 230.

If you hate Trump's Tweets,

Your problem is with the First Amendment, not Section 230.

If you hate fact-checks,

Your problem is with the First Amendment, not Section 230.

If you love fact-checks and wish Facebook had to do more of them,

Your problem is with the First Amendment, not Section 230.

And at the end of the day, if you hate editorial discretion and free speech,

You probably just hate the First Amendment… not Section 230.

Posted on Techdirt - 10 August 2020 @ 10:39am

Section 230 Isn't Why Omegle Has Awful Content, And Getting Rid Of 230 Won't Change That

Last year, I co-authored an article with my law school advisor, Prof. Eric Goldman, titled “Why Can’t Internet Companies Stop Awful Content?” In our article, we concluded that the Internet is just a mirror of our society. Unsurprisingly, anti-social behavior exists online just as it does offline. Perhaps though, the mirror analogy doesn’t go far enough. Rather, the Internet is more like a magnifying glass, constantly refocusing our attention on all the horrible aspects of the human condition.

Omegle, the talk-to-random-strangers precursor to Chatroulette, might be that magnifying glass, intensifying our urge to do something about awful content.

Unfortunately, in our quest for a solution, we often skip a step, jumping to Section 230—the law that shields websites from liability for third-party content—instead of thinking carefully about the scalable, improvable, and measurable strides to be made through effective content moderation efforts.

Smaller companies make for excellent content moderation case studies, especially relatively edgier companies like Omegle. It's no surprise that Omegle is making a massive comeback. After 100+ days of quarantine, anything that recreates at least a semblance of interaction with humans not under the same roof is absolutely enticing. And that's just what Omegle offers. For those who are burnt out on monotonous Zoom "coffee chats," Omegle grants just the right amount of spontaneity and nuanced human connection that we used to enjoy before "social distancing" became a household phrase.

Of course, it also offers a whole lot of dicks.

When I was a teen, Omegle was a sleepover staple. If you’re unfamiliar, Omegle offers two methods of randomly connecting with strangers on the Internet: text or video. Both are self-explanatory. Text mode pairs two anonymous strangers in a chat room whereas video mode pairs two anonymous strangers via their webcams.

Whether you're on text or video, there's really no telling what kinds of terrible content—and people—you'll encounter. It's an inevitable consequence of online anonymity. While the site might satisfy some of our deepest social cravings, it might also expose us to some incredibly unpleasant surprises outside the watered-down and sheltered online experiences provided to us by big tech. Graphic pornography, violent extremism, hate speech, child predators, CSAM, sex trafficking, etc., are all fair game on Omegle; all of which is truly awful content that has always existed in the offline world, now magnified by the unforgiving, unfiltered, use-at-your-own-risk service.

Of course, like with any site that exposes us to the harsh realities of the offline world, critics are quick to blame Section 230. Efforts to curtail bad behavior online usually start with calls to amend Section 230.

At least to Section 230’s critics, the idea is simple: get rid of Section 230 and the awful content will follow. Their reason, as I understand it, is that websites will then “nerd harder” to eliminate all awful content so they won’t be held liable for it. Some have suggested the same approach for Omegle.

Obvious First Amendment constraints aside (because remember, the First Amendment protects a lot of the “lawful but awful content,” like pornography, that exists on Omegle’s service), what would happen to Omegle if Section 230 were repealed? Rather, what exactly is Omegle supposed to do?

For starters, Section 230 excludes protection for websites that violate federal criminal law. So, Omegle would continue to be on the hook if it started to actively facilitate the transmission of illegal content such as child pornography. No change there.

But per decisions like Herrick v. Grindr, Dyroff v. Ultimate Software, and Roommates.com, it is well understood that Section 230 crucially protects sites like Omegle that merely facilitate user-to-user communication without materially contributing to the unlawfulness of the third-party content. Hence, even though there exists an unfortunate reality where nine-year-olds might get paired randomly with sexual predators, Omegle doesn't encourage or materially contribute to that awful reality. So, Omegle is afforded Section 230 protection.

Without Section 230, Omegle doesn't have a lot of options as a site dedicated to connecting strangers on the fly. For example, the site doesn't even have a reporting mechanism like its big tech counterparts. This is probably for two reasons: (1) the content on Omegle is ephemeral, so by the time it's reported, the victim and the perpetrator have likely moved on and the content has disappeared; and (2) it would be virtually impossible for Omegle to issue suspensions because Omegle users don't have dedicated accounts. In fact, the only option Omegle has for repeat offenders is a permanent IP ban. Such an option is usually considered so extreme that it's reserved for only the most heinous offenders.

There are a few things Omegle could do to reduce their liability in a 230-less world. They might consider requiring users to have dedicated handles. It’s unclear though whether account creation would truly curb the dissemination of awful content anyway. Perhaps Omegle could act on the less heinous offenders, but banned, suspended, or muted users could always just generate new handles. Plus, where social media users risk losing their content, subscribers, and followers, Omegle users realistically have nothing to lose. So, generating a new handle is relatively trivial, leaving Omegle with the nuclear IP ban.

Perhaps Omegle could implement some sort of traditional reporting mechanism. But reporting mechanisms are only effective if the service has the resources to properly respond to and track issues. This means hiring more human moderators to analyze the contextually tricky cases. Additionally, it means hiring more engineers to stand up robust internal tooling to manage reporting queues and to perform some sort of tracking for repeat offenders.

For Omegle, implementing a reporting mechanism might just be doing something for the sake of doing something. For traditional social media companies, a reporting mechanism ensures that violating content is removed and the content provider is appropriately reprimanded. Neither of those goals is particularly relevant to Omegle's use case. The only goal a reporting mechanism might accomplish is helping Omegle track pernicious IP addresses. Omegle could set up an internal tracking system that applies strikes to each IP before the address is sanctioned. But if the pernicious user can just stand up a new IP and continue propagating abuse, the entire purpose of the robust reporting mechanism is moot.
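For illustration only, that strike-per-IP idea might look something like the sketch below. The class, threshold, and method names are invented, not Omegle's actual tooling, and its weakness is exactly the one just described: an offender who rotates IP addresses starts over with a clean slate.

```python
# Hypothetical strike tracker: user reports accumulate against an IP address,
# and the address is banned once it crosses a threshold.
from collections import defaultdict

STRIKE_LIMIT = 3  # invented threshold

class ReportTracker:
    def __init__(self):
        self.strikes = defaultdict(int)
        self.banned = set()

    def report(self, ip_address: str) -> str:
        """Record a report against an IP and ban it after repeated strikes."""
        if ip_address in self.banned:
            return "already banned"
        self.strikes[ip_address] += 1
        if self.strikes[ip_address] >= STRIKE_LIMIT:
            self.banned.add(ip_address)
            return "banned"
        return f"strike {self.strikes[ip_address]} recorded"
```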

Further, reporting mechanisms are great for victimized users that might seek an immediate sense of catharsis after encountering abusive content. But if the victim’s interaction with the abusive content and user is ephemeral and fleeting, the incentive to report is also debatable.

All of this is to drive home the point that there is no such thing as a one-size-fits-all approach to content moderation. Even something as simple as giving users an option to report might be completely out of scope depending on the company’s size, resources, bandwidth, and objectives.

Another suggestion is that Omegle simply stop allowing children to be paired with sexual predators. This would require Omegle to (1) perform age verification on all of its users, with the major trade-off being privacy—not to mention the obvious fact that it may not even work; nothing really stops a teen from stealing and uploading their parents' credit card or license; and (2) require all users to prove they aren't sexual predators (???)—an impossible (and invasive) task for a tiny Internet company.

Theoretically, Omegle could pre-screen all content and users. Such an approach would require an immense team of human content moderators, which is incredibly expensive for a website that has an estimated annual revenue of less than $1 million and less than 10 employees. Plus, it would completely destroy the service’s entire point. The reason Omegle hasn’t been swallowed up by tech incumbents is because it offers an interesting online experience completely unique from Google, Facebook, and Twitter. Pre-screening might dilute that experience.

Another extreme solution might be to just strip out anonymity entirely and require all users to register all of their identifying information with the service. The obvious trade-off: most users would probably never return.

Clearly, none of these options is productive or realistic for Omegle, and all of them are consequences of attacking the awful content problem via Section 230.

Without any amendments to Section 230, Omegle has actually taken a few significant steps to effectively improve their service. For example, Omegle now has an 18+ Adult, “unmoderated section” in which users are first warned about sexual content and required to acknowledge that they’re 18 or older before entering. Additionally, Omegle clarifies that the “regular” video section is monitored and moderated to the best of their abilities. Lastly, Omegle recently included a “College student chat” which verifies students via their .edu addresses. Of course, to use any of Omegle’s features, a user must be 18+ or 13+ with parental permission.

The "unmoderated section" is an ingenious example of a "do better" approach for a service that's strapped for content moderation options. Omegle's employees likely know that a primary use case of the service is sex. By partitioning the service, Omegle might drastically cut down on the amount of unsolicited sexual content encountered by both adult and minor users of the regular service, without much interruption to the service's overall value-add. These experiments in mediating the user-to-user experience can only improve from here. Thanks to Section 230, websites like Omegle increasingly pursue such experiments to help their users improve too.

But repealing Section 230 leaves sites like Omegle with one option: exit the market.

I’m not allergic to conversations about how the market can self-correct these types of services and whether they should be supported by the market at all. Maybe sites like Omegle—that rely on their users to not be awful to each other as a primary method of content moderation—are not suitable for our modern day online ecosystem.

There's a valid conversation to be had within technology policy and Trust and Safety circles about websites like Omegle and whether the social good they provide outweighs the harms they might indirectly cater to. Perhaps sites like Omegle should exit the market. However, that's a radically different conversation; one that inquires into whether current innovations in content moderation support sites like Omegle, and whether such sites truly have no redeemable qualities worth preserving in the first place. That's an important conversation; one that shouldn't involve speculating about Section 230's adequacy.

Jess Miers is a third-year law student at Santa Clara University School of Law and a Legal Policy Specialist at Google. Her scholarship primarily focuses on Section 230 and content moderation. Opinions are her own and do not represent Google.

Posted on Techdirt - 25 February 2020 @ 09:24am

Barr's Motives, Encryption and Protecting Children; DOJ 230 Workshop Review, Part III

In Part I of this series on the Department of Justice’s February 19 workshop, “Section 230 — Nurturing Innovation or Fostering Unaccountability?” (archived video and agenda), we covered why Section 230 is important, how it works, and how panelists proposed to amend it. Part II explored Section 230’s intersection with criminal law.

Here, we ask what DOJ’s real objective with this workshop was. The answer to us seems clear: use Section 230 as a backdoor for banning encryption — a “backdoor to a backdoor” — in the name of stamping out child sexual abuse material (CSAM) while, conveniently, distracting attention from DOJ’s appalling failures to enforce existing laws against CSAM. We conclude by explaining how to get tough on CSAM to protect kids without amending Section 230 or banning encryption.

Banning Encryption

In a blistering speech, Trump’s embattled Attorney General, Bill Barr, blamed the 1996 law for a host of ills, especially the spread of child sexual abuse material (CSAM). But he began the speech as follows:

[Our] interest in Section 230 arose in the course of our broader review of market-leading online platforms, which we announced last summer. While our efforts to ensure competitive markets through antitrust enforcement and policy are critical, we recognize that not all the concerns raised about online platforms squarely fall within antitrust. Because the concerns raised about online platforms are often complex and multi-dimensional, we are taking a holistic approach in considering how the department should act in protecting our citizens and society in this sphere.

In other words, the DOJ is under intense political pressure to “do something” about “Big Tech” — most of all from Republicans, who have increasingly fixated on the idea that “Big Tech” is the new “Liberal Media” out to get them. They’ve proposed a flurry of bills to amend Section 230 — either to roll back its protections or to hold companies hostage, forcing them to do things that really have nothing to do with Section 230, like be “politically neutral” (the Hawley bill) or ban encryption (the Graham-Blumenthal bill), because websites and Internet services simply can’t operate without Section 230’s protections.

Multiple news reports have confirmed our hypothesis going into the workshop: that its purpose was to tie Section 230 to encryption. Even more importantly, the closed-door roundtable after the workshop (to which we were, not surprisingly, not invited) reportedly concluded with a heated discussion of encryption, after the DOJ showed participants draft amendments making Section 230 immunity contingent on compromising encryption by offering a backdoor to the U.S. government. Barr’s speech said essentially what we predicted he would say right before the workshop:

Technology has changed in ways that no one, including the drafters of Section 230, could have imagined. These changes have been accompanied by an expansive interpretation of Section 230 by the courts, seemingly stretching beyond the statute’s text and original purpose. For example, defamation is Section 230’s paradigmatic application, but Section 230 immunity has been extended to a host of additional conduct — from selling illegal or faulty products to connecting terrorists to facilitating child exploitation. Online services also have invoked immunity even where they solicited or encouraged unlawful conduct, shared in illegal proceeds, or helped perpetrators hide from law enforcement. …

Finally, and importantly, Section 230 immunity is relevant to our efforts to combat lawless spaces online. We are concerned that internet services, under the guise of Section 230, can not only block access to law enforcement — even when officials have secured a court-authorized warrant — but also prevent victims from civil recovery. This would leave victims of child exploitation, terrorism, human trafficking, and other predatory conduct without any legal recourse. Giving broad immunity to platforms that purposefully blind themselves — and law enforcers — to illegal conduct on their services does not create incentives to make the online world safer for children. In fact, it may do just the opposite.

Barr clearly wants to stop online services from "going dark" through Section 230 — even though Section 230 has little (if any) direct connection to encryption. His argument was clear: Section 230 protections shouldn't apply to services that use strong encryption. That's precisely what the Graham-Blumenthal EARN IT Act would do: greatly lower the bar for enforcement of existing criminal laws governing child sexual abuse material (CSAM), allow state prosecutions and civil lawsuits (under a lower burden of proof), but then allow Internet services to "earn" back their Section 230 protection against this increased liability by doing whatever a commission convened and controlled by the Attorney General tells them to do.

Those two Senators are expected to formally introduce their bill in the coming weeks. Undoubtedly, they’ll refer back to Barr’s speech, claiming that law enforcement needs their bill passed ASAP to “protect the children.”

Barr's speech on encryption last July didn't mention 230 but went much further in condemning strong encryption. If you read it carefully, you can see where Graham and Blumenthal got their idea of lowering the standard of existing federal law on CSAM from "actual knowledge" to "recklessness," which would allow the DOJ to sue websites that offer stronger encryption than the DOJ thinks is really necessary. Specifically, Barr said:

The Department has made clear what we are seeking. We believe that when technology providers deploy encryption in their products, services, and platforms they need to maintain an appropriate mechanism for lawful access. This means a way for government entities, when they have appropriate legal authority, to access data securely, promptly, and in an intelligible format, whether it is stored on a device or in transmission. We do not seek to prescribe any particular solution. …

We are confident that there are technical solutions that will allow lawful access to encrypted data and communications by law enforcement without materially weakening the security provided by encryption. Such encryption regimes already exist. For example, providers design their products to allow access for software updates using centrally managed security keys. We know of no instance where encryption has been defeated by compromise of those provider-maintained keys. Providers have been able to protect them. …

Some object that requiring providers to design their products to allow for lawful access is incompatible with some companies’ “business models.” But what is the business objective of the company? Is it “A” — to sell encryption that provides the best protection against unauthorized intrusion by bad actors? Or is it “B” — to sell encryption that assures that law enforcement will not be able to gain lawful access? I hope we can all agree that if the aim is explicitly “B” — that is, if the purpose is to block lawful access by law enforcement, whether or not this is necessary to achieve the best protection against bad actors — then such a business model, from society’s standpoint, is illegitimate, and so is any demand for that product. The product jeopardizes the public’s safety, with no countervailing utility. …

The real question is whether the residual risk of vulnerability resulting from incorporating a lawful access mechanism is materially greater than those already in the unmodified product. The Department does not believe this can be demonstrated.

In other words, companies choosing to offer encryption should have to justify their decision to do so, given the risks created by denying law enforcement access to user communications. That’s pretty close to a “recklessness” standard.

Again, for more on this, read Berin’s previous Techdirt piece. According to the most recently leaked version of the Graham-Blumenthal bill, the Attorney General would no longer be able to rewrite the “best practices” recommended by the Commission. But he would gain greater ability to steer the commission by continually vetoing its recommendations until it does what he wants. If the commission doesn’t make a recommendation, the safe harbor offered by complying with the “best practices” doesn’t go into effect — but the rest of the law still would. Specifically, website and Internet service operators would still face vague new criminal and civil liability for “reckless” product design. The commission and its recommendations are a red herring; the truly coercive aspects of the bill will happen regardless of what the commission does. If the DOJ signals that failing to offer a backdoor (or retain user data) will lead to legal liability, companies will do it — even absent any formalized “best practices.”

The Real Scandal: DOJ’s Inattention to Child Sexual Abuse

As if trying to compromise the security of all Internet services and the privacy of all users weren’t bad enough, we suspect Barr had an even more devious motive: covering his own ass, politically.

Blaming tech companies generally and encryption in particular for the continued spread of CSAM kills two birds with one stone. Not only does it offer the DOJ a new way to ban encryption, it also deflects attention from the real scandal that should appall us all: the collective failure of Congress, the Trump Administration, and the Department of Justice to prioritize the fight against the sexual exploitation of children.

The Daily, The New York Times podcast, ran part one of a two-part series on this topic on Wednesday. Reporters Michael Keller and Gabriel Dance summarized a lengthy investigative report they published back in September, but which hasn’t received the attention it deserves. Here’s the key part:

The law Congress passed in 2008 foresaw many of today’s problems, but The Times found that the federal government had not fulfilled major aspects of the legislation.

The Justice Department has produced just two of six required reports that are meant to compile data about internet crimes against children and set goals to eliminate them, and there has been a constant churn of short-term appointees leading the department’s efforts. The first person to hold the position, Francey Hakes, said it was clear from the outset that no one “felt like the position was as important as it was written by Congress to be.”

The federal government has also not lived up to the law’s funding goals, severely crippling efforts to stamp out the activity.

Congress has regularly allocated about half of the $60 million in yearly funding for state and local law enforcement efforts. Separately, the Department of Homeland Security this year diverted nearly $6 million from its cybercrimes units to immigration enforcement — depleting 40 percent of the units’ discretionary budget until the final month of the fiscal year.

So, to summarize:

  1. Congress has spent half as much as it promised to;
  2. DOJ hasn’t bothered issuing reports required by law — the best way to get lawmakers to cough up promised funding; and
  3. The Trump Administration has chosen to spend money on the political theatre of immigration enforcement rather than stopping CSAM trafficking.

Let that sink in. In a better, saner world, Congress would be holding hearings to demand explanations from Barr. But they haven’t, and the workshop will allow Barr to claim he’s getting tough on CSAM without actually doing anything about it — while also laying the groundwork for legislation that would essentially allow him to ban encryption.

Even for Bill Barr, that’s pretty low.

Posted on Techdirt - 21 February 2020 @ 01:30pm

Section 230 and Criminal Law; DOJ 230 Workshop Review, Part II

In Part I of this series on the Department of Justice's February 19 workshop, "Section 230 — Nurturing Innovation or Fostering Unaccountability?" (archived video and agenda), we covered why Section 230 is important, how it works, and how panelists proposed to amend it.

Here, Part II covers how Section 230 intersects with criminal law, especially around child sexual abuse material (CSAM). Part III will ask what's really driving DOJ, and explore how to get tough on CSAM without amending Section 230 or banning encryption.

Section 230 Has Never Stopped Enforcement of Most Criminal Laws

The second panel in particular focused on harms that either already are covered by federal criminal law (like CSAM) or that arguably should be (like revenge porn). So it's worth reiterating two things up front:

  • Section 230's protections for websites have always excluded federal criminal law

  • Section 230 has never stopped state or local prosecutors from enforcing state criminal laws against the users responsible for harmful conduct online.

Plaintiff's lawyer Carrie Goldberg repeatedly mentioned Herrick v. Grindr. Her client Matthew Herrick sued Grindr for failing to stop his ex-boyfriend from repeatedly creating fake Grindr profiles of Herrick, each claiming he had a rape fantasy, and using these profiles to send over 1,200 men to attempt to rape him. Both state criminal law and federal harassment law already cover such conduct. In fact, contrary to Goldberg's claims that law enforcement did nothing to help her client, Herrick's ex was arrested in 2017 and charged with stalking, criminal impersonation, making a false police report, and disobeying a court order.

On the same panel, Yiota Souras, Senior Vice President and General Counsel, National Center for Missing and Exploited Children, acknowledged that Section 230 didn't stop federal prosecutors from charging executives of Backpage.com. Indeed, the former CEO pleaded guilty literally one day after President Trump signed FOSTA-SESTA — the first legislation to amend Section 230 since the law was enacted in 1996. Souras claimed that the only reason other sites haven't rushed to fill the gap left by Backpage (in hosting ads for child sex trafficking) was the deterrence effect of the new law.

Correction Notice: This post originally misattributed the above to Prof. Mary Anne Franks, rather than Yiota Souras.

But since FOSTA-SESTA was enacted nearly two years ago, not a single prosecution has been brought under the new law. By contrast, the DOJ managed to actually shut down Backpage.com and its former CEO, Carl Ferrer. Ferrer is now awaiting sentencing and could face up to five years in prison plus a $250,000 fine. (You can read his plea bargain if you're interested.) Meanwhile, the two other arrested Backpage executives are continuing to fight their legal case, in which there is increasing evidence that the Justice Department is trying to railroad them into a guilty plea by misrepresenting their efforts to help stop trafficking as evidence they were helping to promote it. It's a messy case, but with one criminal plea under pre-existing law and zero prosecutions under the new law, it's hard to argue that the new law accounts for all of the deterrence value Souras ascribes to it.

The Role of States and State Criminal Law

Nebraska Attorney General Doug Peterson said state AGs wanted only one tiny tweak to Section 230: adding state criminal law to the list of exceptions to Section 230's protections. (The National Association of Attorneys General has been pushing this idea for nearly a decade.) It may sound moderate: after all, since 230 doesn't bar enforcement of federal criminal law, why stop the application of state criminal law? But, as Prof. Goldman noted, there's a world of difference between the two.

The AGs' proposal would create four distinct problems:

  1. Section 230 has ensured that we have a consistent national approach to using criminal law to police how websites and Internet services operate. But if website operators could be charged under any state or local law, you'd have a crazy-quilt of inconsistent state laws. Every state and locality in America could regulate the entire Internet.

  2. Most scholars agree that federal criminal law has become far too broad, but compared to any one state's body of criminal law, it's narrow and tailored. State criminal law includes an almost endless array of offenses, from panhandling to disturbing the peace, etc. Few people would argue that such laws should be applied on the Internet — yet, if Section 230 were expanded to allow prosecution of all state laws, creative prosecutors could charge just about any website with just about anything.

  3. In particular, half the states in the country still criminalize defamation, so opening the door to the enforcement of state criminal law means making websites liable for defamation committed by users — the thing Section 230 was most specifically intended to prevent. Yes, criminal cases involve a higher burden of proof but also stiffer penalties. And if websites face criminal penalties whenever users can complain about other users' speech, the chilling effects would be enormous. Any potentially sensitive or objectionable speech would be censored before anyone even complains. Politicians would be in a particularly privileged position, able to silence their critics simply by threatening to have criminal charges filed. Think Trump on steroids — for every politician in America (and anyone else who could get prosecutors to file a criminal complaint, or at least threaten to do so).

  4. These laws weren't written for the Internet and don't reflect the difficult balancing that would have to be done to answer the critical questions: exactly when would a website be responsible for each of the potentially billions of pieces of content it hosts? What kind of knowledge is required? The example of Italian prosecutors charging a Google executive with criminal cyberbullying simply because Google was too slow to take down a video of students taunting an autistic classmate illustrates just how high the stakes could be (never mind that the charges were ultimately overturned by the Italian Supreme Court).

There's no need to open this can of worms. If the problem is that we don't have a law for something like revenge porn, we should have that debate — but in Congress, not in every state legislature or town hall. A new federal criminal law could be enforced without amending Section 230.

But if the problem is that federal law enforcement lacks the resources to enforce existing criminal law — again, this is absolutely true for CSAM — the obvious answer would be to enlist state prosecutors in the fight. In fact, the U.S. Attorney General can already designate state prosecutors as "special attorneys" under 28 U.S.C. § 543. Section 230 wouldn't stop them from prosecuting websites because Section 230(e)(1) preserves the enforceability of federal criminal law regardless of who's doing the enforcing. The fact that you've almost certainly never heard of this provision ought to make clear that this has never really been about getting state prosecutors more engaged — and make you question the state AGs' motives. (The same goes for formalizing this process by amending specific federal criminal laws to allow state prosecutors to enforce them.)

We proposed using Section 543 in the SESTA-FOSTA debate back in 2017 but the idea was dismissed out of hand. As a practical matter, it would require state prosecutors to operate in federal court — and thus, in many cases, to learn new practice rules. But that can't possibly be what's stopping them from getting involved in CSAM cases.

In Part III, we'll ask what's really driving DOJ here. Hint: it's not really about "protecting the children."

Posted on Techdirt - 21 February 2020 @ 12:13pm

Why Section 230 Matters And How Not To Break The Internet; DOJ 230 Workshop Review, Part I

Festivus came early this year — or perhaps two months late. The Department of Justice held a workshop Wednesday: Section 230 – Nurturing Innovation or Fostering Unaccountability? (archived video and agenda). This was perhaps the most official “Airing of Grievances” we’ve had yet about Section 230. It signals that the Trump administration has declared war on the law that made the Internet possible.

In a blistering speech, Trump’s embattled Attorney General, Bill Barr, blamed the 1996 law for a host of ills, especially the spread of child sexual abuse material (CSAM). That proved a major topic of discussion among panelists. Writing in Techdirt three weeks ago, TechFreedom’s Berin Szóka analyzed draft legislation that would use Section 230 to force tech companies to build in backdoors for the U.S. government in the name of stopping CSAM — and predicted that Barr would use this workshop to lay the groundwork for that bill. While Barr never said the word “encryption,” he clearly drew the connection — just as Berin predicted in a shorter piece just before Barr’s speech. Berin’s long Twitter thread summarized the CSAM-230 connection the night beforehand and continued throughout the workshop.

This piece ran quite long, so we’ve broken it into three parts:

  1. This post, on why Section 230 is important, how it works, and how panelists proposed to amend it.

  2. Part two, discussing how Section 230 has never applied to federal criminal law, but a host of questions remain about new federal laws, state criminal laws and more.

  3. Part three, which will be posted next week, discussing what's really driving the DOJ. Are they just trying to ban encryption? And can we get tough on CSAM without amending Section 230 or banning encryption?

Why Section 230 Is Vital to the Internet

The workshop’s unifying themes were “responsibility” and “accountability.” Critics claim Section 230 prevents anyone from stopping bad actors online. Actually, Section 230 places responsibility and liability on the correct party: whoever actually created the content, be it defamatory, harassing, or just generally awful. Section 230 has never prevented legal action against individual users — or against tech companies for content they themselves create (or for violations of federal criminal law, as we discuss in Part II). But Section 230 does ensure that websites won’t face a flood of lawsuits for every piece of content they publish. One federal court decision (ultimately finding the website responsible for helping to create user content and thus not protected by Section 230) put this point best:

Websites are complicated enterprises, and there will always be close cases where a clever lawyer could argue that something the website operator did encouraged the illegality. Such close cases, we believe, must be resolved in favor of immunity, lest we cut the heart out of section 230 by forcing websites to face death by ten thousand duck-bites, fighting off claims that they promoted or encouraged — or at least tacitly assented to — the illegality of third parties.

Several workshop panelists talked about “duck-bites” but none really explained the point clearly: One duck-bite can’t kill you, but ten thousand might. Likewise, a single lawsuit may be no big deal, at least for large companies, but the scale of content on today’s social media is so vast that, without Section 230, a large website might face far more than ten thousand suits. Conversely, litigation is so expensive that even one lawsuit could well force a small site to give up on hosting user content altogether.

A single lawsuit can mean death by ten thousand duck-bites: an extended process of appearances, motions, discovery, and, ultimately, either trial or settlement, all of which can be ruinously expensive. The most cumbersome, expensive, and invasive part may be “discovery”: if the plaintiff’s case turns on a question of fact, they can force the defendant to produce evidence on that question. That can mean turning a business inside out — and protracted fights over what evidence you do and don’t have to produce. The process can easily be weaponized, especially by someone with a political ax to grind.

Section 230(c)(1) avoids all of that by allowing courts to dismiss lawsuits without defendants having to go through discovery or argue difficult questions of First Amendment case law or the potentially infinite array of other causes of action. Some have argued that we don’t need Section 230(c)(1) because websites should ultimately prevail on First Amendment grounds, or because the common law might have developed to allow websites to prevail in court. But the burden of litigating such cases at the scale of the Internet — i.e., for each of the billions and billions of pieces of user-created content found online, or even the thousands, hundreds, or perhaps even dozens of comments that a single, humble website might host — would be impossible to manage.

As Profs. Jeff Kosseff and Eric Goldman explained on the first panel, Congress understood that websites wouldn’t host user content if the law imposed on them the risk of even a few duck bites per posting. But Congress also understood that, if websites faced increased liability for attempting to moderate harmful or objectionable user content on their sites, they’d do less content moderation — and maybe none at all. That was the risk created by Stratton Oakmont, Inc. v. Prodigy Services Co. (1995): Whereas CompuServe had, in 1991, been held not responsible for user content because it did not attempt to moderate user content, Prodigy was held responsible because it did.

Section 230 solved both problems. And it was essential that, the year after Congress enacted Section 230, a federal appeals court in Zeran v. America Online, Inc. construed the law broadly. Zeran ensured that Section 230 would protect websites generally against liability for user content — essentially, it doesn’t matter whether plaintiffs call websites “publishers” or “distributors.” Pat Carome, a partner at WilmerHale and lead defense counsel in Zeran, deftly explained the road not taken: If AOL had a legal duty as a “distributor” to take down content anyone complained about, anything anyone complained about would be taken down, and users would lose opportunities to speak at all. Such a notice-and-takedown system just won’t work at the scale of the Internet.

Why Both Parts of Section 230 Are Necessary

Section 230(c)(1) says simply that “No provider or user of an interactive computer service [content host] shall be treated as the publisher or speaker of any information provided by another information content provider [content creator].” Many Section 230 critics, especially Republicans, have seized upon this wording, insisting that Facebook, in particular, really is a “publisher” and so should be held “accountable” as such. This misses the point of Section 230(c)(1), which is to abolish the publisher/distributor distinction as irrelevant.

Miami Law Professor Mary Anne Franks proposed scaling back, or repealing, 230(c)(1) but leaving 230(c)(2)(A), which shields “good faith” moderation practices. She claimed this section is all that tech companies need to continue operations as “Good Samaritans.”

But as Prof. Goldman has explained, you need both parts of Section 230 to protect Good Samaritans: (c)(1) protects decisions to publish or not to publish broadly, while (c)(2) protects only proactive decisions to remove content. Roughly speaking, (c)(1) protects against complaints that content should have been taken down or taken down faster, while (c)(2) protects against complaints that content should not have been taken down or that content was taken down selectively (or in a “biased” manner).

Moreover, (c)(2) turns on an operator’s “good faith,” which it must establish to prevail on a motion to dismiss. That question of fact opens the door to potentially ruinous discovery — many duck-bites. A lawsuit can usually be dismissed via Section 230(c)(1) for relatively trivial legal costs (say, <$10k). But relying on a common law or 230(c)(2)(A) defense — rather than (c)(1)’s straightforward immunity — means having to argue both issues of fact and harder questions of law, which could easily raise that cost ten-fold or more. Having to spend, say, $200k to win even a groundless lawsuit gives such claims enormous “nuisance value” — which, in turn, encourages litigation aimed at shaking companies down to settle out of court.

Class action litigation increases websites’ legal exposure significantly: though fewer in number, class actions are much harder to defeat, because plaintiffs’ lawyers are generally sharp and intimately familiar with how to use the legal system to apply maximum pressure to settle. This is a largely American phenomenon, and it helps to explain why Section 230 is so uniquely necessary in the United States.

Imagining Alternatives

The final panel discussed “alternatives” to Section 230. FTC veteran Neil Chilson (now at the Charles Koch Institute) hammered a point that can’t be made often enough: it’s not enough to complain about Section 230; instead, we have to evaluate specific proposals to amend Section 230 and ask whether they would make users better off. Indeed! That requires considering the benefits of Section 230(c)(1) as a true immunity that allows websites to avoid the duck-bites of the litigation (or state/local criminal prosecution) process. Here are a few proposed alternatives, focused on expanding civil liability. Part II (to be posted later today) will discuss expanding state and local criminal liability.

Imposing Size Caps on 230’s Protections

Critics of Section 230 often try to sidestep startup concerns by suggesting that any 230 amendments preserve the original immunity for smaller companies. For example, Sen. Hawley’s Ending Support For Internet Censorship Act would make 230 protections contingent upon FTC certification of the company’s political neutrality if the company had more than 30 million active monthly U.S. users, more than 300 million active monthly users worldwide, or more than $500 million in global annual revenue.

Julie Samuels, Executive Director of Tech:NYC, warned that such size caps would “create a moat around Big Tech,” discouraging the startups she represents from growing. Instead, a size cap would only further incentivize startups to be acquired by Big Tech before they lose immunity. Prof. Goldman noted two reasons why it’s tricky to distinguish between large and small players on the Internet: (1) several of the top 15 U.S. services are smaller companies (e.g., Craigslist, Wikipedia, and Reddit) with small staffs but large footprints; and (2) some enormous companies (e.g., Cloudflare and IBM) rarely deal with user-generated content, yet would still face all of the obligations that apply to companies with a much bigger user-generated footprint. You don’t have to feel sorry for IBM to see the problem for users: laws like Hawley’s could drive such companies out of the business of hosting user-generated content altogether, deciding that it’s too marginal to be worth the burden.

Holding Internet Services Liable for Violating their Terms of Service

Victims’ rights attorney Carrie Goldberg and other panelists proposed amending Section 230 to hold Internet services liable for violating their terms of service agreements. Usually, when breach of contract or promissory estoppel claims are brought against services, they involve post or account removals. Courts almost always reject such claims on 230(c)(1) grounds as indirect attempts to hold the service liable as a publisher for those decisions. After all, Congress clearly intended to encourage websites to engage in content moderation, and removing posts or accounts is critical to how social media keep their sites usable.

What Goldberg really wants is liability for failing to remove the type of content that sites explicitly disallow in their terms (e.g., harassment). But such liability would simply cause Internet services to make their terms of service less specific — and some might even stop banning harassment altogether. Making sites less willing to remove (or ban) harmful content is precisely the “moderator’s dilemma” that Section 230 was designed to avoid.

Conversely, some complain that websites’ terms of service are too vague — especially Republicans, who argue that, without more specific definitions of objectionable content, websites will wield their discretion in politically biased ways. But it’s impossible for a service to foresee every type of awful content its users might create. If websites had to be more specific, they’d have to constantly update their terms of service; and if they could be sued for failing to remove every piece of content they say they prohibit… that’s a lot of angry ducks. The tension between these two complaints should be clear. Section 230, as written, avoids this problem by simply protecting website operators from having to litigate these questions.

Finally, contract law generally requires a plaintiff to prove both breach and damages. But with online content, damages are murky: how is anyone harmed by a violation of a TOS? It’s unclear exactly what Goldberg wants. If she’s simply saying Section 230 should be interpreted, or amended, not to block contract actions based on supposed TOS violations, most of those claims will fail in court anyway for lack of damages. But if such claims let a plaintiff get a foot in the door (surviving an initial motion to dismiss based on some vague theory of alleged harm), even having to defend against lawsuits that will ultimately fail creates a real danger of death-by-duck-bites.

Compounding the problem — especially if Goldberg is really talking about writing a new statute — is the possibility that plaintiffs’ lawyers could tack on other, even flimsier causes of action. These should be dismissed under Section 230, but, again, more duck-bites. That’s precisely the issue raised by Patel v. Facebook, where the Ninth Circuit allowed a lawsuit under Illinois’ biometric privacy law to proceed based on a purely technical violation of the law (failure to deliver the exact form of notice required for the company’s facial recognition tool). The Ninth Circuit concluded that such a violation, even if it amounted to “intangible damages,” was sufficient to confer standing on plaintiffs to sue as a class, without requiring individual damage showings by each class member. We recently asked the Supreme Court to overrule the Ninth Circuit, but the Court declined to take the case, leaving open the possibility that plaintiffs can get into federal court without alleging any clear damages. The result in Patel, as one might imagine, was a quick $500 million settlement by Facebook shortly after the petition for certiorari was denied, given that the total statutory damages available to the class would have amounted to many billions. Even the biggest companies can be duck-bitten into massive settlements.

Limiting Immunity to Traditional Publication Torts

Several panelists claimed that Section 230(c)(1) was intended to cover only traditional publication torts (defamation, libel, and slander) and that, over time, courts have wrongly broadened the immunity’s coverage. But there’s just no evidence for this revisionist account: Prof. Kosseff found none after exhaustive research on Section 230’s legislative history for his definitive book. Moreover, as Carome noted, if that reading were right, Congress wouldn’t have needed to include the statute’s other, non-defamation-related exceptions, like intellectual property and federal criminal law.

Anti-Conservative Bias

Republicans have increasingly fixated on one overarching complaint: that Section 230 allows social media and other Internet services to discriminate against them, and that the law should require political neutrality. (Given the ambiguity of that term and the difficulty of assessing patterns at the scale of the content available on today’s Internet, in practice this requirement would actually mean giving the administration the power to force websites to favor them.)

The topic wasn’t discussed much during the workshop, but, according to multiple reports from participants, it dominated the ensuing roundtable. That’s not surprising, given that the roundtable featured only guests invited by the Attorney General. The invite list isn’t public and the discussion was held under Chatham House rules, but it’s a safe bet that it was a mix of serious (but generally apolitical) Section 230 experts and the Star Wars cantina freak show of right-wing astroturf activists who have made a cottage industry out of extending the Trumpist persecution complex to the digital realm.

TechFreedom has written extensively on the unconstitutionality of inserting the government into website operators’ exercise of editorial discretion. Just for example, read our statement on Sen. Hawley’s proposed legislation to regulate the Internet and Berin’s 2018 Congressional testimony on the idea (and on Section 230, at that shit-show of a House Judiciary hearing that featured Diamond and Silk). Also read our 2018 letter to Jeff Sessions, Barr’s predecessor, on why attempting to coerce websites in how they exercise their editorial discretion is unconstitutional.

Conclusion

Section 230 works by ensuring that duck-bites can’t kill websites (though federal criminal prosecution can, as Backpage.com discovered the hard way — see Part II). This avoids both the moderator’s dilemma (facing more liability if you try to clean up harmful content) and the risk that websites simply stop hosting user content altogether. Without Section 230(c)(1)’s protection, the costs of compliance, implementation, and litigation could strangle smaller companies before they even emerge. Far from undermining “Big Tech,” rolling back Section 230 could entrench today’s giants.

Several panelists poo-pooed the “duck-bites” problem, insisting that each of those bites involves a real victim on the other side. That’s fair, to a point. But again, Section 230 doesn’t prevent anyone from holding responsible the person who actually created the content. Prof. Kate Klonick (St. John’s Law) reminded the workshop audience of “Balk’s law”: “THE INTERNET IS PEOPLE. The problem is people. Everything can be reduced to this one statement. People are awful. Especially you, especially me. Given how terrible we all are it’s a wonder the Internet isn’t so much worse.” Indeed, as Prof. Goldman noted, however much new technologies might aggravate specific problems, better technologies are essential to facilitating better interaction. We can’t hold back the tide of change; the best we can do is try to steer the Digital Revolution in better directions. And without Section 230, innovation in content moderation technologies would be impossible.

For further reading, we recommend the seven principles that we and a group of leading Section 230 experts drafted last summer. Several panelists referenced them at the workshop, but they didn’t get the attention they deserved. Signed by 27 other civil society organizations across the political spectrum and 53 academics, the principles are, we think, the best starting point yet offered for how to think about Section 230.

Next up, in Part II, how Section 230 intersects with the criminal law. And, in Part III… what’s really driving the DOJ, banning encryption, and how to get tough on CSAM.