The Tech Policy Greenhouse is an online symposium where experts tackle the most difficult policy challenges facing innovation and technology today. These are problems that don't have easy solutions, where every decision involves tradeoffs and unintended consequences, so we've gathered a wide variety of voices to help dissect existing policy proposals and better inform new ones.

Revisiting The Common Law Liability Of Online Intermediaries Before Section 230

from the nuts-and-bolts dept

On February 8, 1996, President Clinton signed into law the Telecommunications Act of 1996. Title V of that act was called the Communications Decency Act, and Section 509 of the CDA was a set of provisions originally introduced by Congressmen Chris Cox and Ron Wyden as the Internet Freedom and Family Empowerment Act. Those provisions were then codified at Section 230 of Title 47 of the United States Code. They are now commonly referred to as simply “Section 230.”

Section 230 prohibits a “provider or user” of an “interactive computer service” from being “treated as the publisher or speaker” of content “provided by another information content provider.” 47 U.S.C. § 230(c)(1). The courts construed Section 230 as providing broad federal statutory immunity to the providers of online services and platforms from any legal liability for unlawful or tortious content posted on their systems by their users.

When it enacted Section 230, Congress specified a few important exceptions to the scope of this statutory immunity. It did not apply to liability for federal crimes or for the infringement of intellectual property rights. And in 2018, President Trump signed into law an additional exception, making Section 230’s liability protections inapplicable to user content related to sex trafficking or the promotion of prostitution.

Nevertheless, critics have voiced concerns that Section 230 prevents the government from providing effective legal remedies for what they claim are abuses by users of online platforms. Earlier this year, legislation to modify Section 230 was introduced in Congress, and President Trump has, at times, suggested the repeal of Section 230 in its entirety.

As critics, politicians, and legal commentators continue to debate the future of Section 230 and its possible repeal, there has arisen a renewed interest in what the potential legal liability of online intermediaries was for the content posted by their users under the common law, before Section 230 was enacted. Thirty years ago, as a relatively young lawyer representing CompuServe, I embarked on a journey to explore that largely uncharted terrain.

In the pre-Section 230 world, every operator of an online service had two fundamental questions for their lawyers: (1) what is my liability for stuff my users post on my system that I don’t know about; and (2) what is my liability for the stuff I know about and decide not to remove (and how much time do I have to make that decision)?

The answer to the first question was not difficult to map. In 1990, CompuServe was sued by Cubby, Inc. for an allegedly defamatory article posted on a CompuServe forum by one of its contributors. The article was online only for a day, and CompuServe became aware of its contents only after it had been removed, when it was served with Cubby’s libel lawsuit. Since there was no dispute that CompuServe was unaware of the contents of the article when it was available online in its forum, we argued to the federal district court in New York that CompuServe was no different from any ordinary library, bookstore, or newsstand, which, under both the law of libel and the First Amendment, are not subject to civil or criminal liability for the materials they disseminate to the public if they have no knowledge of the material’s content at the time they disseminate it. The court agreed and entered summary judgment for CompuServe, finding that CompuServe had not “published” the alleged libel, which a plaintiff must prove in order to impose liability on a defendant under the common law of libel.

Four years later, a state trial court in New York reached a different conclusion in a libel lawsuit brought by Stratton Oakmont against one of CompuServe’s competitors, Prodigy Services Co., based on an allegedly defamatory statement made in one of Prodigy’s online bulletin boards. In that case, the plaintiff argued that Prodigy was different because, unlike CompuServe, Prodigy had marketed itself as using software and real-time monitors to remove material from its service that it felt was inappropriate for a “family-friendly” online service. The trial court agreed and entered a preliminary ruling that, even though there was no evidence that Prodigy was ever actually aware of the alleged libel when it was available on its service, Prodigy should nevertheless be deemed the “publisher” of the statement, because, in the court’s view, “Prodigy has uniquely arrogated to itself the role of determining what is proper for its members to post and read on its bulletin boards.”

In the months after it was issued, the Stratton Oakmont v. Prodigy ruling was as dubious as it was controversial and confusing. CompuServe’s general counsel, Kent Stuckey, asked me to address it in the chapter I was writing on defamation for his new legal treatise, Internet and Online Law. Tasked with this scholarly mission in the midst of one of the digital revolution’s most heated legal controversies, I undertook to collect, organize, and analyze every reported defamation case and law review commentary in this country that I could find that might bear on the two questions every online service faced: when are we liable for user content we don’t know about, and when are we liable for the user content we know about but decide not to remove?

With respect to the first question, the answer dictated by the case law for other types of defendants who disseminate defamatory statements by others was fairly clear. As I wrote in my chapter, “[t]wo common principles can be derived from these cases. First, a person is subject to liability as a ‘publisher’ only if he communicates a defamatory statement to another. Second, a person communicates that statement to another if, but only if, he is aware of its content at the time he disseminates it.” Hamilton, “Defamation,” printed as Chapter 2 in Stuckey, Internet & Online Law (Law Journal-Seminars Press 1996), at 2-31 (footnotes omitted).

I concluded that the trial court had erred in Stratton Oakmont because it failed to address what the term “publish” means in the common law of libel—to “communicate” a statement to a third party. When an intermediary disseminates material with no knowledge of its content, it does not “communicate” the material it distributes, and therefore does not “publish” it, at least as that term is used in the law of libel. Thus, whether the intermediary asserts the right of “editorial control” over the content provided by others, and the degree of such control the intermediary claims to exercise, are immaterial to the precise legal question at issue: did the defendant “communicate” the statement to another? I wrote:

While it is true that a publisher’s “choice of material to go into a newspaper” constitutes “the exercise of editorial control and judgment” by that publisher, his “increased liability” for defamation arises from the knowledge of content that he inherently acquires as a result of exercising that judgment to include the material in the newspaper; it does not arise from the mere fact that he has a right to make that judgment. All distributors, like primary publishers, exercise the very same right to determine what material they will disseminate and what material they will not. . . . Indeed, the liability standard applied to a distributor presumes that he has such a right to refuse distribution and requires him to exercise it whenever he knows or has reason to know that a particular publication contains unlawful or tortious content. His efforts to exercise that right, therefore, cannot create the very same general duty to inspect content that is prohibited by that common law standard (and by the First Amendment).

Id. at 2-62 (footnotes omitted) (quoting Stratton Oakmont, 23 Media L. Rep. (BNA) 1794, 1796 (N.Y. Sup. Ct. May 25, 1995)).

With respect to the second question online intermediaries had for their lawyers—when are we liable for stuff posted by users we decide not to remove—the answer dictated by the common law was anything but firmly established and settled. In the pre-digital world, the economics of communicating with the public made it far more practical for aggrieved plaintiffs to sue only the producers of such content rather than those who merely distributed it. The truth is that in the pre-Internet history of the common law of libel, entities in the business of distributing the printed content of others were rarely sued, and even then only as an afterthought to defeat diversity of citizenship and thereby prevent the defendants in a state court action from removing the lawsuit to the federal courts. And in only two of those rare cases was the distributor defendant alleged to have actual knowledge of the defamatory content it was selling to the public; in both cases, the distributor defendant was eventually dismissed before the case ever went to trial.

Thus, as I noted in my chapter, I did not find a single reported case of defamation liability actually being imposed on an entity in the business of distributing to the public printed content produced by others. That meant that, prior to the enactment of Section 230, when a lawyer advised his intermediary client as to when he might be held liable for deciding not to remove users’ content, the lawyer could refer only to dicta by courts and speculation by commentators as to how courts might apply the law in that circumstance.

And there certainly was no consensus in such speculation. As I noted in my chapter, Professor Keeton observed in 1984 that “[i]t would be rather ridiculous, under most circumstances, to expect a bookseller or a library to withhold distribution of a good book because of a belief that a derogatory statement contained in the book was both false and defamatory of the plaintiff.” Prosser and Keeton on the Law of Torts, § 113, at 811 (5th ed. 1984). Indeed, do we really expect Kroger to make decisions whether to pull an issue of the National Enquirer from the shelves in every one of its grocery stores across the country because the CFO’s spouse told her at breakfast that he read in that week’s issue that a celebrity claimed one of his critics was a “liar”?

As I observed in my chapter, it is also noteworthy that in 1992, the National Conference of Commissioners on Uniform State Laws considered, but did not adopt, a standard that would immunize from republisher liability any “library, archive, or similar information retrieval or transmission service” that provides access “to information originally published by others,” if it is not “reasonably understood to assert in the normal course of its business the truthfulness of” such information or if it “takes reasonable steps to inform users” that it makes no such assertion. See Perritt, “Tort Liability, the First Amendment, and Equal Access to Electronic Networks,” 5 Harv. J.L. & Tech. 65, 108 (1992).

In 1996, Section 230 was enacted into law at the same time my research and analysis of the applicable common law standards was published as Chapter 2, “Defamation,” in Kent Stuckey’s treatise, Internet & Online Law (Law Journal-Seminars Press 1996). I then added a section to the chapter discussing Section 230 and continued to update it for a few years to discuss the initial cases applying it. Eventually, however, it became apparent that, in light of the courts’ construction of Section 230, an extensive discussion of the pre-Section 230 case law with respect to the liability of online intermediaries for user content was no longer needed, and my chapter in the treatise was replaced with one that focuses instead on the cases applying Section 230.

Kent Stuckey’s treatise is still being updated and is available for purchase from Law Journal Press. The chapter I wrote, however, which details all of the reported case law and commentary I found that might bear on the potential liability of online intermediaries for defamation under the common law at that time, before Section 230 was enacted, has not been made available to the public for many years. In light of the renewed interest in this topic as part of the current debates about Section 230’s future, that chapter is being made available online, with permission, here (pdf link, also embedded below).

Robert W. Hamilton is Of Counsel at Jones Day. He has more than 36 years of experience in state, federal, and bankruptcy court litigation and in First Amendment and media cases. Bob represented CompuServe in Cubby v. CompuServe in 1990-1991. The views and opinions set forth herein are the personal views or opinions of the author; they do not necessarily reflect views or opinions of the law firm with which he is associated.

  1. See Spence v. Flynt, 647 F. Supp. 1266, 1273-1274 (D. Wyo. 1986) (convenience store continued to sell magazine for two days after plaintiff’s attorney notified store employee of its defamatory content); Janklow v. Viking Press, 378 N.W.2d 875, 876 (S.D. 1985) (plaintiff sued bookstore owners and operators who “willfully refused to remove the book from the shelves” of their stores “even though he had notified them of its libelous nature”). In the Spence case, the convenience store was dismissed from the litigation prior to trial, at which liability was imposed on the original publisher for describing an attorney as a “vermin-infested turd dispenser.” See Spence v. Flynt, 816 P.2d 771 (Wyo. 1991). In the Janklow case, the claims against the bookstores eventually were dismissed on summary judgment for the same reason that judgment was entered for the producers of the book (speech was “opinion” protected by the First Amendment). See In the Spirit of Crazy Horse.
  2. I found two reported cases in which a court imposed liability on a property owner for defamatory graffiti on an interior wall in his building, and three extremely old cases in which a property owner was held responsible for refusing to remove a defamatory statement displayed on his property. In none of those cases was the defendant in the business of distributing to the public printed speech produced by others.



Comments on “Revisiting The Common Law Liability Of Online Intermediaries Before Section 230”

Anonymous Coward says:

Indeed, do we really expect Kroger to make decisions whether to pull an issue of the National Enquirer from the shelves in every one of its grocery stores across the country because the CFO’s spouse told her at breakfast that he read in that week’s issue that a celebrity claimed one of his critics was a “liar”?

To extend that idea a step further: Do we really expect Kroger, upon being notified of the (allegedly) defamatory nature of the Enquirer article, to embark on an investigation to determine if said article is, in fact, defamatory? Do we really want a private organization or individual to be the "arbiter of truth" in this way?

I can understand the desire of an individual (or company) to want to have unfavorable material about them removed from distribution. But if there is to be any obligation placed on the distributor, it should be in response to something more objective than an aggrieved party crying foul. Attaching liability at the mere claim of defamation has a chilling effect. A company is going to simply remove the content, even if it’s not defamatory at all, rather than risk the lawsuit.

Some possible ideas to address that:

  1. A distributor isn’t liable for defamatory content disseminated through their platform unless they refuse to cease distribution after being notified that a court of competent jurisdiction has actually found the content to be defamatory.
  2. A distributor isn’t liable for defamatory content disseminated through their platform, but can be required to post a notice with the content in question stating that a court of competent jurisdiction has actually found the content to be defamatory, and/or can be ordered to remove the content temporarily while the litigation is pending.
  3. A distributor can have a legal "safe harbor" immunizing them from liability for allegedly defamatory content disseminated through their platform if they post a notification with the content in question stating that they have received a notice claiming that said content is defamatory.

I don’t actively support any of these three positions, but they all seem a lot more reasonable to me than having liability attach at the "alleged" stage.

Clint says:

Re: Re:

The broad scope of Section 230’s immunity is historically unprecedented. It is legally unique, and that uniqueness causes much of its controversy.

While some earlier legal doctrines limited the liability of bookstores and other third-party distributors (supermarkets?), they never granted them the kind of total immunity that online providers now enjoy.

A bookstore could still be legally liable if there was proof it knew it was conveying a defamatory or obscene book.

By contrast, Internet providers are immune even if they know about illegal content on their sites and leave it online.

Fair treatment of all under uniform and just legal principles would seem a critical bedrock of American law.

Stephen T. Stone (profile) says:

Re: Re:

230 does not immunize any interactive web service against the deliberate hosting of illegal content. If Twitter admins knew about an account posting child sexual abuse material and let both the account and the CSAM stay up, those admins would (rightfully) face legal punishment for their actions.

Who- or whatever led you to believe 230 offers complete blanket immunity from legal liability is mistaken. You might want to rectify your mistaken beliefs, lest you make this same mistake again.

Anonymous Coward says:

Re: Re: Re:

The broad scope of Section 230’s immunity is historically unprecedented.

Wrong. It only clarifies existing law as applicable to internet companies and everyone else. Do you want to be able to speak on the internet, or not?

By contrast, Internet providers are immune even if they know about illegal content on their sites and leave it online.

No one is ever immune from knowingly hosting illegal content. Now you are just lying.

Fair treatment of all under uniform and just legal principles would seem a critical bedrock of American law.

Yeah, hence Section 230.

Anonymous Coward says:

I am not a lawyer, but I am gratified to see this legal analysis confirm what I have suspected: Section 230 is not unique in any way, and wouldn’t be needed at all if not for the perverse Prodigy case (where a judge was evidently confused or repelled by new technology).

There’s nothing special about communication enablers, and they ought to have the same protections as anyone else. Section 230 simply codifies that obvious conclusion into law.

Anonymous Coward says:

How is Prodigy distinguished from carrier "Information Services"?

The Prodigy case used editorial oversight as a basis for assigning responsibility. It doesn’t distinguish between transmission media.

If Prodigy is responsible because of its editorial oversight, then isn’t any network filtering essentially the same thing? A "communication" does not distinguish between whether it is on a disk or over a wire.

Or is this another one of those "heads we win, tails you lose" situations?

Anonymous Coward says:

Re: How is Prodigy distinguished from carrier "Information Services"?

Prodigy was such a bad ruling that Congress stepped in with a bipartisan vote to undo it. (Perhaps, given the Supreme Court’s stance on free speech in recent years, it would eventually have been overruled anyway, but at great legal expense and inconvenience for the innocently-bystanding victims of legal thuggery.)
