Why Section 230 Matters And How Not To Break The Internet; DOJ 230 Workshop Review, Part I

from the don't-break-the-internet dept

Festivus came early this year — or perhaps two months late. The Department of Justice held a workshop Wednesday: Section 230 – Nurturing Innovation or Fostering Unaccountability? (archived video and agenda). This was perhaps the most official “Airing of Grievances” we’ve had yet about Section 230. It signals that the Trump administration has declared war on the law that made the Internet possible.

In a blistering speech, Trump’s embattled Attorney General, Bill Barr, blamed the 1996 law for a host of ills, especially the spread of child sexual abuse material (CSAM). That proved a major topic of discussion among panelists. Writing in Techdirt three weeks ago, TechFreedom’s Berin Szóka analyzed draft legislation that would use Section 230 to force tech companies to build in backdoors for the U.S. government in the name of stopping CSAM — and predicted that Barr would use this workshop to lay the groundwork for that bill. While Barr never said the word “encryption,” he clearly drew the connection — just as Berin predicted in a shorter piece just before Barr’s speech. Berin’s long Twitter thread summarized the CSAM-230 connection the night beforehand and continued throughout the workshop.

This piece ran quite long, so we’ve broken it into three parts:

  1. This post, on why Section 230 is important, how it works, and how panelists proposed to amend it.

  2. Part two, discussing how Section 230 has never applied to federal criminal law, but a host of questions remain about new federal laws, state criminal laws and more.

  3. Part three, which will be posted next week, discussing what’s really driving the DOJ. Are they just trying to ban encryption? And can we get tough on CSAM without amending Section 230 or banning encryption?

Why Section 230 Is Vital to the Internet

The workshop’s unifying themes were “responsibility” and “accountability.” Critics claim Section 230 prevents stopping bad actors online. Actually, Section 230 places responsibility and liability on the correct party: whoever actually created the content, be it defamatory, harassing, generally awful, etc. Section 230 has never prevented legal action against individual users — or against tech companies for content they themselves create (or for violations of federal criminal law, as we discuss in Part II). But Section 230 does ensure that websites won’t face a flood of lawsuits for every piece of content they publish. One federal court decision (ultimately finding the website responsible for helping to create user content and thus not protected by Section 230) put this point best:

Websites are complicated enterprises, and there will always be close cases where a clever lawyer could argue that something the website operator did encouraged the illegality. Such close cases, we believe, must be resolved in favor of immunity, lest we cut the heart out of section 230 by forcing websites to face death by ten thousand duck-bites, fighting off claims that they promoted or encouraged — or at least tacitly assented to — the illegality of third parties.

Several workshop panelists talked about “duck-bites” but none really explained the point clearly: One duck-bite can’t kill you, but ten thousand might. Likewise, a single lawsuit may be no big deal, at least for large companies, but the scale of content on today’s social media is so vast that, without Section 230, a large website might face far more than ten thousand suits. Conversely, litigation is so expensive that even one lawsuit could well force a small site to give up on hosting user content altogether.

A single lawsuit can mean death by ten thousand duck-bites: an extended process of appearances, motions, discovery, and, ultimately, either trial or settlement that can be ruinously expensive. The most cumbersome, expensive, and invasive part may be “discovery”: if the plaintiff’s case turns on a question of fact, they can force the defendant to produce that evidence. That can mean turning a business inside out — and protracted fights over what evidence you do and don’t have to produce. The process can easily be weaponized, especially by someone with a political ax to grind.

Section 230(c)(1) avoids all of that by allowing courts to dismiss lawsuits without defendants having to go through discovery or argue difficult questions of First Amendment case law or the potentially infinite array of causes of action. Some have argued that we don’t need Section 230(c)(1) because websites would ultimately prevail on First Amendment grounds, or that the common law might have developed to allow websites to prevail in court. But the burden of litigating such cases at the scale of the Internet — i.e., for each of the billions and billions of pieces of user-created content found online, or even the thousands, hundreds, or perhaps even dozens of comments that a single, humble website might host — would be impossible to manage.

As Profs. Jeff Kosseff and Eric Goldman explained on the first panel, Congress understood that websites wouldn’t host user content if the law imposed on them the risk of even a few duck bites per posting. But Congress also understood that, if websites faced increased liability for attempting to moderate harmful or objectionable user content on their sites, they’d do less content moderation — and maybe none at all. That was the risk created by Stratton Oakmont, Inc. v. Prodigy Services Co. (1995): Whereas CompuServe had, in 1991, been held not responsible for user content because it did not attempt to moderate user content, Prodigy was held responsible because it did.

Section 230 solved both problems. And it was essential that, the year after Congress enacted Section 230, a federal appeals court in Zeran v. America Online, Inc. construed the law broadly. Zeran ensured that Section 230 would protect websites generally against liability for user content — essentially, it doesn’t matter whether plaintiffs call websites “publishers” or “distributors.” Pat Carome, a partner at WilmerHale and lead defense counsel in Zeran, deftly explained the road not taken: If AOL had a legal duty as a “distributor” to take down content anyone complained about, anything anyone complained about would be taken down, and users would lose opportunities to speak at all. Such a notice-and-takedown system just won’t work at the scale of the Internet.

Why Both Parts of Section 230 Are Necessary

Section 230(c)(1) says simply that “No provider or user of an interactive computer service [content host] shall be treated as the publisher or speaker of any information provided by another information content provider [content creator].” Many Section 230 critics, especially Republicans, have seized upon this wording, insisting that Facebook, in particular, really is a “publisher” and so should be held “accountable” as such. This misses the point of Section 230(c)(1), which is to abolish the publisher/distributor distinction as irrelevant.

Miami Law Professor Mary Anne Franks proposed scaling back, or repealing, 230(c)(1) but leaving 230(c)(2)(A), which shields “good faith” moderation practices. She claimed this section is all that tech companies need to continue operations as “Good Samaritans.”

But as Prof. Goldman has explained, you need both parts of Section 230 to protect Good Samaritans: (c)(1) protects decisions to publish or not to publish broadly, while (c)(2) protects only proactive decisions to remove content. Roughly speaking, (c)(1) protects against complaints that content should have been taken down or taken down faster, while (c)(2) protects against complaints that content should not have been taken down or that content was taken down selectively (or in a “biased” manner).

Moreover, (c)(2) turns on an operator’s “good faith,” which they must establish to prevail on a motion to dismiss. That question of fact opens the door to potentially ruinous discovery — many duck-bites. A lawsuit can usually be dismissed via Section 230(c)(1) for relatively trivial legal costs (say, <$10k). But relying on a common law or 230(c)(2)(A) defense — as opposed to a statutory immunity — means having to argue both issues of fact and harder questions of law, and thus could raise that cost to easily ten times or more. Having to spend, say, $200k to win even a groundless lawsuit creates an enormous “nuisance value” to such claims — which, in turn, encourages litigation for the purpose of shaking down companies to settle out of court.

Class action litigation increases legal exposure for websites significantly: though fewer in number, class actions are much harder to defeat, because plaintiffs’ lawyers are generally sharp and intimately familiar with how to use the legal system to apply maximum settlement pressure. This is a largely American phenomenon and helps to explain why Section 230 is so uniquely necessary in the United States.

Imagining Alternatives

The final panel discussed “alternatives” to Section 230. FTC veteran Neil Chilson (now at the Charles Koch Institute) hammered a point that can’t be made often enough: it’s not enough to complain about Section 230; instead, we have to evaluate specific proposals to amend Section 230 and ask whether they would make users better off. Indeed! That requires considering the benefits of Section 230(c)(1) as a true immunity that allows websites to avoid the duck-bites of the litigation (or state/local criminal prosecution) process. Here are a few proposed alternatives, focused on expanding civil liability. Part II (to be posted later today) will discuss expanding state and local criminal liability.

Imposing Size Caps on 230’s Protections

Critics of Section 230 often try to side-step startup concerns by suggesting that any 230 amendments preserve the original immunity for smaller companies. For example, Sen. Hawley’s Ending Support For Internet Censorship Act would make 230 protections contingent upon FTC certification of the company’s political neutrality if the company had more than 30 million active monthly U.S. users, more than 300 million active monthly users worldwide, or more than $500 million in global annual revenue.

Julie Samuels, Executive Director of Tech:NYC, warned that such size caps would “create a moat around Big Tech,” discouraging the startups she represents from growing. Instead, a size cap would only further incentivize startups to sell to Big Tech before they lose immunity. Prof. Goldman noted two reasons why it’s tricky to distinguish between large and small players on the Internet: (1) several smaller companies are among the top 15 U.S. services (e.g., Craigslist, Wikipedia, and Reddit), with small staffs but large footprints; and (2) some enormous companies (e.g., Cloudflare and IBM) rarely deal with user-generated content, yet would still face all of the obligations that apply to companies with a bigger user-generated footprint. You don’t have to feel sorry for IBM to see the problem for users: laws like Hawley’s could drive such companies to get out of the business of hosting user-generated content altogether, deciding that it’s too marginal to be worth the burden.

Holding Internet Services Liable for Violating their Terms of Service

Victims’ rights attorney Carrie Goldberg and other panelists proposed amending Section 230 to hold Internet services liable for violating their terms of service agreements. Usually, when breach of contract or promissory estoppel claims are brought against services, they involve post or account removals. Courts almost always reject such claims on 230(c)(1) grounds as indirect attempts to hold the service liable as a publisher for those decisions. After all, Congress clearly intended to encourage websites to engage in content moderation, and removing posts or accounts is critical to how social media keep their sites usable.

What Goldberg really wants is liability for failing to remove the type of content that sites explicitly disallow in their terms (e.g., harassment). But such liability would simply cause Internet services to make their terms of service less specific — and some might even stop banning harassment altogether. Making sites less willing to remove (or ban) harmful content is precisely the “moderator’s dilemma” that Section 230 was designed to avoid.

Conversely, some complain that websites’ terms of service are too vague — especially Republicans, who argue that, without more specific definitions of objectionable content, websites will wield their discretion in politically biased ways. But no service can foresee every type of awful content its users might create. If websites had to be more specific in their terms of service, they’d have to update those terms constantly; and if they could be sued for failing to remove every piece of content they say they prohibit… that’s a lot of angry ducks. The tension between these two complaints should be clear. Section 230, as written, avoids this problem by simply protecting website operators from having to litigate these questions.

Finally, in general, contract law requires a plaintiff to prove both breach and damages. But with online content, damages are murky: how exactly is one harmed by a violation of a terms of service? It’s unclear exactly what Goldberg wants. If she’s simply saying Section 230 should be interpreted, or amended, not to block contract actions based on supposed TOS violations, most of those claims will fail in court anyway for lack of damages. But if such claims let a plaintiff get a foot in the door, surviving an initial motion to dismiss on some vague theory of alleged harm, then even defending against lawsuits that will ultimately fail creates a real danger of death-by-duck-bites.

Compounding the problem — especially if Goldberg is really talking about writing a new statute — is the possibility that plaintiffs’ lawyers could tack on other, even flimsier causes of action. These should be dismissed under Section 230, but, again, more duck-bites. That’s precisely the issue raised by Patel v. Facebook, where the Ninth Circuit allowed a lawsuit under Illinois’ biometric privacy law to proceed based on a purely technical violation of the law (failure to deliver the exact form of required notice for the company’s facial recognition tool). The Ninth Circuit concluded that such a violation, even if it amounted to “intangible damages,” was sufficient to confer standing on plaintiffs to sue as a class without requiring individual damage showings by each member of the class. We recently asked the Supreme Court to overrule the Ninth Circuit but they declined to take the case, leaving open the possibility that plaintiffs can get into federal court without alleging any clear damages. The result in Patel, as one might imagine, was a quick settlement by Facebook in the amount of $500 million shortly after the petition for certiorari was denied, given that the total statutory damages that would have been available to the class would have amounted to many billions. Even the biggest companies can be duck-bitten into massive settlements.

Limiting Immunity to Traditional Publication Torts

Several panelists claimed Section 230(c)(1) was intended to cover only traditional publication torts (defamation, libel, and slander) and that, over time, courts have wrongly broadened the immunity’s coverage. But there’s just no evidence for this revisionist account: Prof. Kosseff found none in his exhaustive research on Section 230’s legislative history for his definitive book. Otherwise, as Carome noted, Congress wouldn’t have needed to write the statute’s other, non-defamation-related exceptions, like those for intellectual property and federal criminal law.

Anti-Conservative Bias

Republicans have increasingly fixated on one overarching complaint: that Section 230 allows social media and other Internet services to discriminate against them, and that the law should require political neutrality. (Given the ambiguity of that term and the difficulty of assessing patterns at the scale of the content available on today’s Internet, in practice this requirement would actually mean giving the administration the power to force websites to favor them.)

The topic wasn’t discussed much during the workshop, but, according to multiple reports from participants, it dominated the ensuing roundtable. That’s not surprising, given that the roundtable featured only guests invited by the Attorney General. The invite list isn’t public and the discussion was held under the Chatham House Rule, but it’s a safe bet that it was a mix of serious (but generally apolitical) Section 230 experts and the Star Wars cantina freak show of right-wing astroturf activists who have made a cottage industry out of extending the Trumpist persecution complex to the digital realm.

TechFreedom has written extensively on the unconstitutionality of inserting the government into the exercise of editorial discretion by website operators. Just for example, read our statement on Sen. Hawley’s proposed legislation on regulating the Internet and Berin’s 2018 Congressional testimony on the idea (and Section 230, at that shit-show of a House Judiciary hearing that featured Diamond and Silk). Also read our 2018 letter to Jeff Sessions, Barr’s predecessor, on the unconstitutionality of attempting to coerce websites in how they exercise their editorial discretion.

Conclusion

Section 230 works by ensuring that duck-bites can’t kill websites (though federal criminal prosecution can, as Backpage.com discovered the hard way — see Part II). This avoids both the moderator’s dilemma (facing more liability if you try to clean up harmful content) and the risk that websites might simply stop hosting user content altogether. Without Section 230(c)(1)’s protection, the costs of compliance, implementation, and litigation could strangle smaller companies before they even emerge. Far from undermining “Big Tech,” rolling back Section 230 could entrench today’s giants.

Several panelists pooh-poohed the “duck-bites” problem, insisting that each of those bites involves a real victim on the other side. That’s fair, to a point. But again, Section 230 doesn’t prevent anyone from holding responsible the person who actually created the content. Prof. Kate Klonick (St. John’s Law) reminded the workshop audience of “Balk’s law”: “THE INTERNET IS PEOPLE. The problem is people. Everything can be reduced to this one statement. People are awful. Especially you, especially me. Given how terrible we all are it’s a wonder the Internet isn’t so much worse.” Indeed, as Prof. Goldman noted, however new technologies might aggravate specific problems, better technologies are essential to facilitating better interaction. We can’t hold back the tide of change; the best we can do is to try to steer the Digital Revolution in better directions. And without Section 230, innovation in content moderation technologies would be impossible.

For further reading, we recommend the seven principles we worked with a group of leading Section 230 experts to draft last summer. Several panelists referenced them at the workshop, but they didn’t get the attention they deserved. Signed by 27 other civil society organizations across the political spectrum and 53 academics, we think they represent the best starting point for how to think about Section 230 yet offered.

Next up, in Part II, how Section 230 intersects with the criminal law. And, in Part III… what’s really driving the DOJ, banning encryption, and how to get tough on CSAM.



Comments on “Why Section 230 Matters And How Not To Break The Internet; DOJ 230 Workshop Review, Part I”

35 Comments
Anonymous Coward says:

Actually, Section 230 places responsibility and liability on the correct party: whoever actually created the content, be it defamatory, harassing, generally awful, etc. Section 230 has never prevented legal action against individual users — or against tech companies for content they themselves create (or for violations of federal criminal law, as we discuss in Part II). But Section 230 does ensure that websites won’t face a flood of lawsuits for every piece of content they publish.

Theoretically, this is a good thing. In practice, however…

It theoretically doesn’t prevent legal action against individual users, but how are you going to find the individual users to take legal action against them? You can’t get their identity out of the platform they used through the process of discovery, because 230. And even if you get a legitimate court ruling saying some post is defamatory, the platform still doesn’t have to take it down, because 230.

Fix those two points, and you’d eliminate almost all of the legitimate criticisms of Section 230, which would leave you in a much stronger position to defend it against the bogus criticisms. Because right now, when people say that it’s used to protect bad actors, they’re right! And they use the truth of that to make their bogus claims look more valid.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re:

Explain how you can eliminate Section 230 without forcing companies to decide what you can or cannot say on the Internet, or, possibly worse, allowing the trolls to take over the Internet with their vitriol.

Also, if you were a fly on the wall in local pubs and cafes, you might discover that people can be quite awful, and there is little you can do about it.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

“Fix those two points” and you remove the idea that people can remain semi-anonymous on interactive web services. You would remove the idea that someone can say something non-defamatory yet offensive or upsetting to people in power with at least some form of anonymity to protect that someone from retaliation. The First Amendment protects anonymous speech; your “fixes” to 47 U.S.C. § 230 would amount to ending that protection because “it was on the Internet”.

Anonymous Coward says:

Re: Re:

1) Wouldn’t a combination of alternative modes of service and pseudonymous/anonymous proceedings handle getting the legal case against the user going? For instance, instead of trying to get Twitter to divulge who’s behind @DevinCow in order to sue them by name, the lawsuit could be filed against the "operator or operator(s) of @DevinCow", with service of process being sent as a DM to that Twitter account, provided the court approves? (Granted, this wouldn’t work everywhere as some platforms don’t provide a suitable conduit for service of process or suitable ways of referring back to users, but none of this is novel, either, no?)

2) Once you have a court ruling saying some post needs to be nuked from the face of the earth, can’t the court then issue specific performance relief to force the poster to delete that post on pain of contempt, as far as practicable? Granted, some things don’t provide the user with the option to delete their posts, and courts will need to take this into account to avoid ordering impossibilities, but this, combined with an injunction against repeating the ruled-to-be-defamatory or otherwise adjudicated-unlawful statement (an anti-libel permanent injunction of the kind contemplated by Eugene Volokh), would get you where you are asking without dragging the platform into the middle of it all, don’t you think?

Anonymous Coward says:

Re: Re: Re:

1) Wouldn’t a combination of alternative modes of service and pseudonymous/anonymous proceedings handle getting the legal case against the user going?

You would weaponize the legal system against the average person, because they cannot afford to fight civil proceedings. That would enable the elites and every scumbag company out there to remove anything negative that anybody has ever said against them.

Scary Devil Monastery (profile) says:

Re: Re: Re: Re:

"That would enable the elites and every scumbag company out there to remove anything negative that anybody has ever said against them."

…which, I believe, is the whole intent of the attack against 230. Several major companies – and, of course, our own bobmail/Jhon/Blue/Baghdad bob – have a great deal of vested personal interest in being able to shut down anyone writing a negative comment about their latest shady extortion or fraud attempt.

Anonymous Coward says:

Re: Re: Re: Re:

That problem is probably better addressed by things like Anti-SLAPP laws (because the SLAPP problem is not limited to online speech).

Side question: if a pro se defendant wins on an Anti-SLAPP motion, what cost reimbursement are they eligible for? Court costs, I reckon? (Jurisdiction wise, we can assume either California or Texas, since their Anti-SLAPP statutes are held up as examples of "how to Anti-SLAPP".)

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re:

You have no guarantee of being able to find the individual in meatspace either, especially when they don’t think it is worth their time. Even if someone committed an actual criminal offense and vandalized your car calling you a pedophile, there is no guarantee that you would be able to find them. There are no guarantees in life; I don’t know how you haven’t realized that by now.

Anonymous Coward says:

Re: Re: Re:

Yep, there are no guarantees in life, and sometimes you just have to deal with not being able to get due recompense from anybody for harm suffered. If you get partially blinded by a defective dog leash that was sold by a fly-by-night company that Amazon and its burgeoning third-party flea market neglected to vet, TechDirt essentially says "Yeah, we know this is really difficult, but whatever. We don’t give a shit, Section 230 is more important than your eyesight." That sounds perfectly reasonable to me.

Scary Devil Monastery (profile) says:

Re: Re:

"It theoretically doesn’t prevent legal action against individual users, but how are you going to find the individual users to take legal action against them? You can’t get their identity out of the platform they used through the process of discovery, because 230."

In the real world there is also very little you can do to find the "individual user" who shit-talked you in a pub or cafe. No one has suggested we need to force cafe or pub owners to bar anyone refusing to identify himself from the right to speak on their premises.

"Fix those two points, and you’d eliminate almost all of the legitimate criticisms of Section 230…"

You are missing the whole point. Those points should NOT be fixed anymore than we need to rush out and "fix" the idea that every time a person communicates in a public or private space, their right to open their mouth must be conditional.

"Because right now, when people say that it’s used to protect bad actors, they’re right!"

Yes? This is how free speech has always worked, in principle and practice. We have some 50 years worth of debates in legal and political circles around the concept. Free speech is dual-use, and much like a screwdriver you can use it constructively…or to shank someone.

We know, by now, that the only way to reduce the ability to abuse dual-use tools is by effectively removing those tools. At that point, progress stops. Historically this ends with the bad actors becoming the exclusive users of said tools. Witness China, the Soviet Union, Turkey, or Iran, for instance.

Your argument only looks fine until we consider the practical effects of your suggestion – that the concept of free speech in its entirety should become conditional on mandatory identification, which rather eliminates most of its role.

This comment has been flagged by the community.

Anonymous Coward says:

How to retire at age thirty: post a bunch of stuff people don’t like to the internet, under your own name.

Wait. Get defamed, cancelled, whatever.

When no one will hire you, become suicidal.

Get on disability for bipolar or depression.

Collect up to three grand a month (based on your pre-disability earnings) and use the cancel-proof money to go online all day and night saying more of the stuff that pissed off those who wanted to ruin you.

This comment has been flagged by the community.

Anonymous Coward says:

Individual reputations can and have been ruined by those who "Google bomb" them. This is indisputable. Many employers don’t hire people who have been defamed or harassed for whatever reason, and they often don’t bother notifying anyone. Danielle Citron wrote about this in a 2014 Newsweek article.

If you support 230, you are accepting individuals having their reputations destroyed without recourse as collateral damage. That includes female victims of revenge porn.

Rose McGowan sued a lawyer she claimed suggested hiring internet operatives to post defamatory content about her to discredit her #metoo allegations. This is how 230 can silence a whistleblower, since she can’t sue the sites.

It is also disingenuous to claim that only the original poster is inflicting harm. Distributor liability exists in every country BUT the US (it requires the distributor be put on notice first), and existed in the US until 230 was passed. A search engine that is put on notice that it is costing a woman jobs because her ex-boyfriend flooded the internet with revenge porn can certainly disable that type of search, and should be sued into oblivion if they don’t.

Content moderation at scale is certainly possible, it’s just more EXPENSIVE and some companies don’t want to pay for it. Automobile safety is also expensive but no one suggests doing away with it because the price prevents dangerous cars from winding up on the road.

Australia and England do not have 230 immunity, and both have working internets. Many of the media outlets in America who say that 230 preserves user comment sections…don’t allow comments.

It won’t bother you that some "racist" republican (i.e., white male conservative) is living off YOUR tax dollars because people "said mean things" on the internet that made them "unemployable" and qualified them for disability, will it? It won’t hurt the country if we have to spend six figures a year on a single person to undo the harm inflicted because 230 exists, right?

Since money is no object, and since you don’t mind seeing people you can’t stand not having to work and being subsidized by your tax dollars, I guess your support of 230 is congruent with your principles.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

Individual reputations can and have been ruined by those who "Google bomb" them. This is indisputable.

Hi, Rick Santorum!

If you support 230, you are accepting individuals having their reputations destroyed without recourse as collateral damage.

230 doesn’t preclude people from protecting their reputations. It precludes people from doing so by suing the tool instead of the person who used it.

Rose McGowan sued a lawyer she claimed suggested hiring internet operatives to post defamatory content about her to discredit her #metoo allegations. This is how 230 can silence a whistleblower, since she can’t sue the sites.

I…don’t see how that’s silencing a whistleblower? If anything, the way you phrase it there sounds like McGowan is the whistleblower against a scumbag lawyer.

It is also disingenuous to claim that only the original poster is inflicting harm.

It is disingenuous to claim the opposite about interactive web services. To quote another regular commenter: “Last I checked, no pub owner will be considered guilty of slander if any of his patrons calls someone a fucking nut on his premises. Section 230 does nothing but extend that immunity to the online community.”

Distributor liability exists in every country BUT the US (it requires the distributor be put on notice first), and existed in the US until 230 was passed.

Gee, can’t imagine why making Twitter legally liable for every single post made by a third party wouldn’t be a problem~.

A search engine that is put on notice that it is costing a woman jobs because her ex-boyfriend flooded the internet with revenge porn can certainly disable that type of search, and should be sued into oblivion if they don’t.

How can Google know for certain the porn is “revenge porn”? (And boy, do we need a new phrase for that.) How can Google know the porn is costing someone jobs? And if Google should be held liable for that content when the company is informed about that content, should Google also be held liable for that content if the company never receives a notification about it?

Content moderation at scale is certainly possible, it’s just more EXPENSIVE and some companies don’t want to pay for it.

Google already pays millions upon millions for content moderation on YouTube. Any argument that says it doesn’t is disingenuous at best, a flat-out fucking lie at worst.

And yes, content moderation at scale is possible…if you rule out the idea of that moderation being effective. You can’t scale moderation to a platform the size of YouTube and expect it to be perfect all of the time. False positives, reportbombings, and inconsistencies in rule enforcement will always happen. You can’t fix the first two with tech alone. And you can’t fix the third without fundamentally altering human behavior.

Australia and England do not have 230 immunity, and both have working internets.

Irrelevant. Australia and England are not bound by American law.

Many of the media outlets in America who say that 230 preserves user comment sections…don’t allow comments.

Irrelevant. This argument assumes hypocrisy where none is present, then tries to make that hypocrisy somehow look like it weakens arguments in favor of 230.

It won’t bother you that some "racist" republican (i.e., white male conservative) is living off YOUR tax dollars because people "said mean things" on the internet that made them "unemployable" and qualified them for disability, will it?

It would…if you could provide even one example of something 100% exactly like this ever happening in the history of ever.

It won’t hurt the country if we have to spend six figures a year on a single person to undo the harm inflicted because 230 exists, right?

Defamation harms people, not 230. Also, your rhetorical gimmick will not work here.

Since money is no object, and since you don’t mind seeing people you can’t stand not having to work and being subsidized by your tax dollars, I guess your support of 230 is congruent with your principles.

…fucking what

This comment has been flagged by the community.

Anonymous Coward says:

Re: Re: Re:

230 doesn’t preclude people from protecting their reputations. It precludes people from doing so by suing the tool instead of the person who used it.

Setting aside that companies who sell tools which injure people can easily be sued under product-liability law…

The computer is a tool. The ISP is almost a tool too, since it acts as a ‘dumb pipe.’ A search engine, by contrast, is a repository that republishes the defamation to a much larger audience, including people with power over someone’s life.

Every country in the world, even the US, recognizes the separate harm of distributor liability. The US is the only country which immunizes this harm. When an original publisher can be anonymous, and therefore can’t be sued, and everyone else is immune, the target is defenseless, which is why reputation blackmail thrives.

The cost to the taxpayer comes when a target’s career is ruined (large number of people in this category btw) and they wind up on disability at up to 3k a month because obviously they’re going to be depressed. Without 230, the defamation wouldn’t be spread by the search engines, and these people wouldn’t be harmed because no one would find the lies.

230 is also costing people their lives because gang violence often ignites online with the platforms not intervening. Private messages are one thing but this is often done out in the open. Videos of people being bullied or abused are easily found as well. Supporters of 230 think the internet is more important than these individuals.

Scary Devil Monastery (profile) says:

Re: Re: Re: Re:

"Every country in the world, even the US, recognizes the separate harm of distributor liability."

You keep lying about that.

In every other country in the world, mere conduit protection tends to be built into basic telecommunications law, unlike in the US.

Hence much of the rest of the world doesn’t need a separate guarantee of mere conduit the way the US does. Add to that a US tort system that encourages fishing expeditions by unscrupulous lawyers, and you get the reason section 230 is a distinctly American necessity.

And yet every time we tell you this, Baghdad Bob, you ignore it and reformulate the same tired old lie yet again. It’s been well over five years since you started peddling that shit. At some point you’ll just have to realize that no one is buying it.

"The cost to the taxpayer comes when a target’s career is ruined (large number of people in this category btw) and they wind up on disability at up to 3k a month…"

So let me get this straight: your primary argument for why section 230 is bad is that it allows people to talk about other people? I think the founding fathers would like a word with you about why that is an unavoidable necessity in a society where freedom of speech is still a thing at all.

The pub owner should not be held liable for what his patrons discuss amongst themselves on his premises. The same way Facebook should not be held liable as to what their patrons discuss.

"230 is also costing people their lives because gang violence often ignites online with the platforms not intervening."

Oh, classy. I haven’t heard anyone seriously push the "Because of free speech, lives are lost" argument for some time now, Bobmail.

"Private messages are one thing but this is often done out in the open."

The same way people talk and chat in a pub, in the street, in the church, in the local bingo parlor, or while hanging at the water cooler at work, you mean?

Yes, Bobmail, this is called free speech. It’s dual-use, and the fact that you’re afraid your latest fraudulent business model will fail because people will be able to google it is NOT a good reason to abolish that principle.

But I guess for someone like you with your documented defense of Prenda and other confidence fraud companies, section 230 must indeed be a bit of a pain.

Scary Devil Monastery (profile) says:

Re: Re: Re: Re:

Oh, and this little shit nugget…

"Setting aside that companies who sell tools which injure people can easily be sued under product-liability law…"

Not if they work as intended. That analogy would only apply if Black & Decker could be sued because a person chose to use one of their power drills in a crime, rather than the person who used the tool to assault someone else.

Your argument that Ford should be held liable every time someone uses one of their vehicles for criminal purposes is STILL nothing but shit, Baghdad Bob.

Anonymous Coward says:

Re: Re: Re:

If anything, the way you phrase it there sounds like McGowan is the whistleblower against a scumbag lawyer.

You know John Smith has shot himself in the foot when he’s forced to defend one of the major symbols of the #MeToo campaign, which he hates with the passion of a thousand dying suns.

Personally I wish he’d aim that shotgun higher.

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re:

Australia and England do not have 230 immunity, and both have working internets.

How many successful user generated internet platforms were started in either country?

I’ll wait.

On a potentially related note, both Australia and England have a general rule that the loser pays the winner’s legal fees in this kind of tort litigation.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re:

"Individual reputations can and have been ruined by those who ‘Google bomb’ them. This is indisputable."

Is anyone disputing that? If so, where?

The Santorum Google bomb did not come out of nowhere; there were multiple reasons for the masses to take issue with things he said. The resulting poll numbers were not necessarily a result of the Google bomb, as he had already made an ass of himself. Blaming the internet for your own self-destruction is a bit childish, no?

Anonymous Coward says:

Re: Re:

"Australia and England do not have 230 immunity, and both have working internets."

How many internets do they have?

I do not understand the logic attempted here. Why would it matter whether something somewhere is or is not like something else somewhere else? When two things are not even similar, one cannot compare them. If one wants to compare, one should look at the laws governing use of the internet in different countries and understand how those laws affect usage.

Scary Devil Monastery (profile) says:

Re: Re:

"Australia and England do not have 230 immunity, and both have working internets."

…and they both have other protections that apply when it comes to free speech. The US requires section 230 because, unlike Sweden for instance, it does not have mere conduit immunity written into its basic telecommunications act.

You could argue that Australia no longer has the legal basis for a functioning internet, since, thanks to its current administration, writing functional code can be considered illegal there.

But you knew all of this, Baghdad Bob, and you still keep lying through your teeth about it.

By now we all know why, of course – because if 230 goes away it won’t be possible for anyone to post criticism about outright fraud anymore, in the US.

All that shit-posting by you, only to defend the likes of Prenda.
