Appeals Court Issues Strong CDA 230 Ruling, But It Will Be Misleadingly Quoted By Those Misrepresenting CDA 230

from the mostly-good,-but-a-bit-of-bad dept

Last Friday, the DC Circuit appeals court issued a mostly good and mostly straightforward ruling applying Section 230 of the Communications Decency Act (CDA 230) in a perfectly expected way. Still, the case is notable on a few grounds: partly because it clarifies a few key aspects of CDA 230 (which is good), and partly because of some sloppy language that is almost certainly going to be misquoted and misrepresented by those who (incorrectly) keep insisting that CDA 230 requires "neutrality" from a platform in order for it to retain the protections of the law.

Let's just start by highlighting that there is no "neutrality" rule in CDA 230 -- and (importantly) the opposite is actually true. Not only does the law not require neutrality, it explicitly states that its goal is for there to be more content moderation. The law explicitly notes that it is designed:

to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material

In short, the law was designed to encourage moderation, which, by definition, cannot be "neutral."

Now, onto the case. It involves a group of locksmiths who allege that "scam locksmiths" falsely claim to be local in areas where they are not, and that the various search engines (Google, Microsoft, and Yahoo are the defendants here) are putting those fake locksmiths in their search results, meaning that the real locksmiths have to spend extra on advertising to get noticed above the scam locksmiths.

You might think that if these legit locksmiths have been wronged by anyone, it's the scam locksmiths, but because everyone wants to blame platforms, they've sued the platforms -- claiming antitrust violations, false advertising, and a conspiracy in restraint of trade. The lower court, and now the appeals court, easily found that Section 230 protects the search engines from these claims, as the content from the scam locksmiths comes from the scam locksmiths, and not from the platforms.

The attempt to get around CDA 230 is mainly by focusing on one aspect of how the local search services work -- in that most of the services try to take the address of the local business and place a "pin" on a map to show where the business is physically located. The locksmiths argue that creating this map and pin involves content that the search engines create, and therefore it is not immune under CDA 230. The appeals court says... that's not how it works, since the information is clearly "derived" directly from the third parties:

The first question we must address is whether the defendants’ translation of information that comes from the scam locksmiths’ webpages -- in particular, exact street addresses -- into map pinpoints takes the defendants beyond the scope of § 230 immunity. In considering this question, it is helpful to begin with the simple scenario in which a search engine receives GPS data from a user’s device and converts that information into a map pinpoint showing the user’s geographic location. The decision to present this third-party data in a particular format -- a map -- does not constitute the “creation” or “development” of information for purposes of § 230(f)(3). The underlying information is entirely provided by the third party, and the choice of presentation does not itself convert the search engine into an information content provider. Indeed, were the display of this kind of information not immunized, nothing would be: every representation by a search engine of another party’s information requires the translation of a digital transmission into textual or pictorial form. Although the plaintiffs resisted this conclusion in their briefs, see Locksmiths’ Reply Br. 3 (declaring that the “location of the inquiring consumer . . . is determined entirely by the search engines”), they acknowledged at oral argument that a search engine has immunity if all it does is translate a user’s geolocation into map form, see Recording of Oral Arg. at 12:07-12:10.

With this concession, it is difficult to draw any principled distinction between that translation and the translation of exact street addresses from scam-locksmith websites into map pinpoints. At oral argument, the plaintiffs could offer no distinction, and we see none. In both instances, data is collected from a third party and re-presented in a different format. At best, the plaintiffs suggested that a line could be drawn between the placement of “good” and “bad” locksmith information onto the defendants’ maps. See id. at 12:43-12:58 (accepting that, “to the extent that the search engine simply depicts the exact information they obtained from the good locksmith and the consumer on a map, that appears to be covered by the [Act]”). But that line is untenable because, as discussed above, Congress has immunized the re-publication of even false information.

That's a nice, clean ruling on what should be an obvious point, and having such clean language could be useful for citations in future cases. It is notable (and useful) that the court clearly states: "Congress has immunized the re-publication of even false information." Other courts have made this clear, but having it in such a compact, highly quotable form is certainly handy.
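To make the court's point concrete, here is a minimal sketch (in Python, with every name hypothetical -- this is not how any of the defendants' actual systems work) of the kind of "translation" at issue. The engine contributes only the presentation format; every substantive field of the pin is derived from the third party's own submission:

```python
# Hypothetical sketch: "translating" third-party location data into a
# map pinpoint. The engine supplies only the format; the underlying
# data is entirely the third party's.

def pin_from_gps(lat, lon):
    """The court's simple scenario: a user's GPS reading becomes a pin."""
    return {"lat": lat, "lon": lon, "source": "user-gps"}

def pin_from_address(listing, geocode):
    """A business's self-reported street address becomes a pin.
    `geocode` is a stand-in for an address-to-coordinates lookup."""
    lat, lon = geocode(listing["address"])
    return {"lat": lat, "lon": lon, "source": "listed-address"}
```

In both functions the pin is just a re-presentation of third-party input, which is why the court saw no principled distinction between the two scenarios.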

There are a few other attempts to get around CDA 230 that all fail -- including the argument that, because the "false advertising" claim is brought under the Lanham Act (which is often associated with trademark law), it falls within CDA 230's explicit exclusion for "intellectual property" law. But being brought under the Lanham Act doesn't magically make the false advertising claims "intellectual property," nor does it exclude them from CDA 230 protections.

But, as noted up top, there is something in the ruling that could be problematic going forward concerning the still very incorrect argument that CDA 230 requires that platforms be "neutral." The locksmiths' lawyers argued that even if the practice above (putting a pin on a map) didn't make the search engines "content creators," perhaps they were content creators when they effectively made up the location. In short: when these (and some other) local search engines don't know the actual exact location of a business, they might put in what is effectively a guesstimate, usually placing the pin at a central point within the expected range. As the court explains:

The plaintiffs describe a situation in which the defendants create a map pinpoint based on a scam locksmith’s website that says the locksmith “provides service in the Washington, D.C. metropolitan area” and “lists a phone number with a ‘202’ area code.” Locksmiths’ Br. 8; see also Locksmiths’ Reply Br. 4-5. According to the plaintiffs, the defendants’ search engines use this information to “arbitrarily” assign a map location within the geographic scope indicated by the third party.

Legally, that does present a slightly different question -- and (if you squint) you can kinda see how someone could maybe, possibly argue that if the local search engines take that generalized info and create a pin that appears specific to end users, they have somehow "created" that content. But the court (correctly, in my opinion) says "nope": since that pin is still derived from the information provided by a third party, Section 230 protects the search engines. This is good and right.

The problem is that the court went a bit overboard in its use of the word "neutral" to describe this, using the word in a very different way than most people mean when they say "neutral" (and in a different way than previous court rulings -- including those cited in the case -- have used it):

We conclude that these translations are also protected. First, as the plaintiffs do not dispute, the location of the map pinpoint is derived from scam-locksmith information: its location is constrained by the underlying third-party information. In this sense, the defendants are publishing “information provided by another information content provider.” Cf. Kimzey v. Yelp!, Inc., 836 F.3d 1263, 1270 (9th Cir. 2016) (holding that Yelp’s star rating system, which is based on receiving customer service ratings from third parties and “reduc[ing] this information into a single, aggregate metric” of one to five stars could not be “anything other than user-generated data”). It is true that the location algorithm is not completely constrained, but that is merely a consequence of a website design that portrays all search results pictorially, with the maximum precision possible from third-party content of varying precision. Cf. Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1125 (9th Cir. 2003) (“Without standardized, easily encoded answers, [Matchmaker.com] might not be able to offer these services and certainly not to the same degree.”).

Second, and also key, the defendants’ translation of third-party information into map pinpoints does not convert them into “information content providers” because defendants use a neutral algorithm to make that translation. We have previously held that “a website does not create or develop content when it merely provides a neutral means by which third parties can post information of their own independent choosing online.” Klayman, 753 F.3d at 1358; accord Bennett, 882 F.3d at 1167; see Kimzey, 836 F.3d at 1270 (holding that Yelp’s “star-rating system is best characterized as the kind of neutral tool[] operating on voluntary inputs that . . . [does] not amount to content development or creation” (internal quotation marks omitted) (citing Klayman, 753 F.3d at 1358)). And the Sixth Circuit has held that the “automated editorial act[s]” of search engines are generally immunized under the Act. O’Kroley v. Fastcase, Inc., 831 F.3d 352, 355 (6th Cir. 2016).

Here, the defendants use automated algorithms to convert third-party indicia of location into pictorial form. See supra note 4. Those algorithms are “neutral means” that do not distinguish between legitimate and scam locksmiths in the translation process. The plaintiffs’ amended complaint effectively acknowledges that the defendants’ algorithms operate in this fashion: it alleges that the words and numbers the scam locksmiths use to give the appearance of locality have “tricked Google” into placing the pinpoints in the geographic regions that the scam locksmiths desire. Am. Compl. ¶ 61B. To recognize that Google has been “tricked” is to acknowledge that its algorithm neutrally translates both legitimate and scam information in the same manner. Because the defendants employ a “neutral means” and an “automated editorial act” to convert third-party location and area-code information into map pinpoints, those pinpoints come within the protection of § 230.

See all those "neutral means" lines? What the court really means is automated, and not designed to check the truth or falsity of the information. It does not mean "unbiased," because any algorithm that is making decisions is inherently and absolutely "biased" toward choosing what it determines is the "best" solution -- in this case, it is "biased" toward approximating where to put the pin.
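A toy model of what the court calls a "neutral means" might look like the following sketch (all names and numbers are invented for illustration; the point is only that the algorithm never asks whether its input is truthful):

```python
# Hypothetical sketch of the "guesstimate" behavior the ruling
# describes: with only a claimed service area to go on (say, a 202
# area code for the D.C. metro), the pin lands at a central point of
# that area. The centroid table is made up for illustration.

AREA_CENTROIDS = {"202": (38.9072, -77.0369)}  # area code -> (lat, lon)

def place_pin(listing):
    """Translate whatever location indicia a listing provides into a
    pin, at the maximum precision the data allows. Note what is
    missing: any check of whether the listing is telling the truth."""
    if "exact_coords" in listing:
        lat, lon = listing["exact_coords"]
        return {"lat": lat, "lon": lon, "precision": "exact"}
    # Fall back to the centroid of the claimed service area.
    lat, lon = AREA_CENTROIDS[listing["area_code"]]
    return {"lat": lat, "lon": lon, "precision": "approximate"}

# A legitimate listing and a scam listing with the same indicia get
# the same pin -- that sameness is all "neutral" means here.
legit = {"name": "Real Locksmith", "area_code": "202"}
scam = {"name": "Scam Locksmith", "area_code": "202"}
assert place_pin(legit) == place_pin(scam)
```

This is also why the complaint's "tricked Google" language backfired: an algorithm that can be tricked is, by that very fact, one that treats all inputs alike.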

The court is not, in any way, saying that a platform need be "neutral" in how it applies moderation choices, but I would bet a fair bit of money that many of the trolls (and potentially grandstanding politicians) will use this part of the court ruling to pretend 230 does require "neutrality." The only thing I'm not sure about is how quickly this line will be cited in a bogus lawsuit, but I wouldn't expect it to take very long.

For what it's worth, after I finished writing this, I saw that Professor Eric Goldman had also written up his analysis of the case. It is pretty similar, includes a few other key points as well, and also expects the "neutral means" line to be abused:

I can see Sen. Cruz seizing on an opinion like this to say that neutrality indeed is a prerequisite of Section 230. That would be less of a lie than his current claim that Section 230 only protects “neutral public forums,” but only slightly. The “neutrality” required in this case relates only to the balance between legal and illegal content. Still, even when defined narrowly and precisely like in the Daniel v. Armslist case, the term “neutrality” is far more likely to mislead than help judges. In contrast, the opinion cites the O’Kroley case for the proposition that Section 230 protects “automated editorial act[s],” and that phrasing (though still odd) is much better than the term “neutrality.”

The overall ruling is good and clear -- with just this one regrettable bit of language.

Filed Under: cda 230, intermediary liability, local search, locksmiths, neutrality, scam locksmiths, section 230
Companies: google, microsoft, yahoo


Reader Comments



  • This comment has been flagged by the community.
    Anonymous Coward, 11 Jun 2019 @ 12:41pm

    "Congress has immunized the re-publication of even false information."

    Exactly: you can't trust anything you read online, including advertising (or reviews), which is why it's wise to ignore ALL online advertising.

    Newspapers are held to a different standard, as are websites in other countries.

    It's up to the public to demand truth from the internet, and so far, it has not.

    As for there being no neutrality requirement: the moderation language was aimed mostly at pornography, not lies or political bias. Even though there is no explicit "neutrality" requirement in 230 at present, that doesn't mean Congress can't now choose to impose one. Laws are changed all the time.

    Those who claim 230 and neutrality should be linked are making an argument based on principle, not law, and arguing that the principle should become the law.

    • Cdaragorn (profile), 11 Jun 2019 @ 1:48pm

      Re:

      You might be arguing that it should become the law. Those quoted in the article have clearly said on multiple occasions that it is already the law. You can't pretend that they somehow meant "this is what the law should be". There is no way to twist their statements to mean that without breaking the English language.

      As for the idea that it should become the law: aside from the fact that it would be pretty unconstitutional from my point of view, it would also be an impossible law to obey. Expression is an act of opinion. Even stating facts is expressing your opinion of what is factual. Never mind the issue that anyone will think that any expression they disagree with is not neutral, purely because they don't agree with what was said.

      If you honestly think that platforms should be blamed for the acts of their users then I want a law that throws the President in prison for every crime committed in the US. That idea is no dumber than what you're asking for.

    • Stephen T. Stone (profile), 11 Jun 2019 @ 3:26pm

      Even though there is no explicit "neutrality" requirement at present in 230, that doesn't mean Congress can't now choose to impose one.

      I want you to think about this hypothetical. I want you to consider every ramification of it. Then I want you to answer the question I pose at the end.

      A privately-owned political forum bans the promotion of specific ideologies — one of which is, say, White supremacy. One day, the admins of that forum hear how Congress has altered 230 to require “neutrality” in content moderation. This change means a platform cannot moderate any legally protected speech. (That change is the situation for which you appear to advocate. If I am wrong about that, blame your lack of clarity on the matter.)

      Promotion of White supremacy is legal in the United States. So how can those admins ban speech they do not want hosted on their forum if the altered CDA 230 forces them into hosting that speech?

      • Anonymous Coward, 11 Jun 2019 @ 10:51pm

        Re:

        I would apply the law that is currently being applied in some EU Member States' bars. Yeah, those places where you drink alcohol.

        They can't discriminate. That is, they can't deny entry to people:

        • In an arbitrary way. They have to set rules and make them easily visible at their doors.
        • In a discriminatory way. For example, a bar can't deny access to a black person, even if it's owned by a white supremacist group: they have to provide them the same service as any other customer.

        They can set, let's say, a dress code. But they can't deny access to a person to their bar based on their religion, sex, sexual orientation or race.

        Any discrimination done in that regard is a felony.

        If you want to have a white supremacist bar in the EU (at least in some MS), you're in for a bad day.

        Of course, you're perfectly free to be a complete racist in your own home.

        In case of a site/forum:

        • It has to be a business and make money in a commercial way (in short, if it makes barely enough to keep itself alive it isn't a business; if you and others are making a salary from it, it is a business).
        • Rules have to state clearly what you can and can't do.
        • They have to be easily accessible.
        • They have to be easy to read: no legalese and no whole bibles of small print (in fact, I would ban small print, period, even IRL).
        • You can, obviously, set rules about behavior, like no spamming, no trolling, keeping things civil and polite...
        • You can ban speech, as long as it's done in a non-discriminatory way and as long as it's justified in some way. Just because you don't like it is a no-no.

        For example, an MMORPG forum can ban any mention of other games except theirs, because their purpose is to promote their game, not others. They can't choose to ban only one game while allowing others.

        A political forum can ban speech about idk, consoles, because that's not what they do. But it's a political forum, so they have to allow any talk about politics (as long as it isn't illegal).

        Note that this rule set applies to "a business". For example, Facebook and Twitter are businesses. Your average political forum might not be, as long as it doesn't make money and/or what it makes is limited.

        Of course, with those rules I'd add stronger anti-liability rules like:

        • You don't have to ban or censor any speech as long as a judge hasn't ruled that it is illegal. And yes, a JUDGE, not an authority or a "copyright holder".
        • You don't have to monitor or filter. In fact, you can't.
        • You aren't liable for what your users do or say, as long as you haven't ignored a court order.
        • The burden of proof for banned speech falls on the plaintiff, not the defendant: that is, if you ban someone for spamming and he says that it's because you're a racist, it has to be clearly evident that it was racism, or he has to prove that fact.
        • Still, you have to give a reason why you moderated someone. Just "violating our ToS" isn't enough. You have to state clearly what clauses they have violated and how. Failing to do so makes the ban null and void.

        Still, don't want to keep an eye on all this shit? Don't make a business out of a social network or a forum. You can make a non-profit forum and moderate all you want.

        • That One Guy (profile), 12 Jun 2019 @ 12:14am

          Re: Re:

          Ignoring for the moment the idea that a site could be forced to allow use by someone who was an ass in an inventive way, such that the way they were an ass wasn't specifically spelled out in the TOS (and where have I heard that logic before...): if violations of the TOS in general wouldn't be enough, then said TOS would quickly become vague enough to allow them to give the boot to anyone, or it would quickly be nullified by people finding new and inventive ways to skirt around it.

          But it's a political forum, so they have to allow any talk about politics (as long as it isn't illegal).

          This... would be a nightmare. Political forums would become battlegrounds, as the trolls/idiots from the various parties would go around filling up any forum run by another party with massive numbers of posts, not only forcing the owners from one party to host any and all content by opposing parties, no matter what they felt about it, but making any discussions between members all but impossible. If you've ever seen the comment section of a political video on YT and the absolute mess that tends to be, it would be like that except everywhere.

          You've put more thought into this than a lot of people (sadly, a good number of them politicians...), but even so there are some large problems with the idea, such that I still believe it would be a cure worse than the disease.

        • Stephen T. Stone (profile), 12 Jun 2019 @ 4:28am

          You can ban speech, as long as it's done in a non-discriminatory way and as long as it's justified in some way. Just because you don't like it is a no-no.

          I dislike White supremacists. I dislike their ideology. Under this suggested rule of yours, I could not delete White supremacist propaganda from a politics forum I run because “no sir, I don’t like it” isn’t a good enough reason. Similarly…

          A political forum can ban speech about idk, consoles, because that's not what they do. But it's a political forum, so they have to allow any talk about politics (as long as it isn't illegal).

          …I also couldn’t delete it because it is technically political speech.

          Your suggestions would cause far more disorder and chaos than they would rein in. Then again, I suspect that may be the point.

    • Anonymous Coward, 11 Jun 2019 @ 4:24pm

      Re:

      Even though there is no explicit "neutrality" requirement at present in 230, that doesn't mean Congress can't now choose to impose one.

      Congress can attempt to impose lots of things, but the Bill of Rights would get in the way of any such imposition. Who, after all, is to say what is "neutral"?

      CDA 230 is there in the first place because of a horrendously-bad court decision.

      It is not inconceivable that, even in the absence of CDA 230, the courts (led by a Supreme Court supportive of speech) would have eventually recognized that free speech implies that people who are not speaking (but merely selling amplifiers) can't justly be accused of the speech others make.

  • Anonymous Coward, 11 Jun 2019 @ 12:41pm

    "Congress has immunized the re-publication of even false information."

    "But...but...I read it in GOOGLE! Why am I being sued and not the person who put it there?"

  • Anonymous Coward, 11 Jun 2019 @ 12:59pm

    "But...but...I read it in GOOGLE! Why am I being sued and not the person who put it there?"

    Because the person who is choosing to sue you (they are not required to do so) wants money and you're an easy target?

  • Anonymous Coward, 11 Jun 2019 @ 1:39pm

    Not only does the law not require neutrality, it explicitly states that its goal is for there to be more content moderation.

    That's a misreading. It says

    (b) Policy: It is the policy of the United States—
    ...
    (2) to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation;
    (3) to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services;
    (4) to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material;

    That talks about user control, not site-operator control. It doesn't support the view that sites should moderate, but leans toward supporting the opposite: people should use software to remove stuff they don't want to see. At best it's arguing for a "free market" between unmoderated and centrally-moderated sites.

    Saying that people should be free to do something is not the same as saying they should do it.

    • Gary (profile), 11 Jun 2019 @ 2:00pm

      Re:

      That's a misreading. It says

      You are misrepresenting 230. Your opinion isn't supported by the caselaw, the stated intentions of the law, or the actual reading of the statute.

      • Anonymous Coward, 11 Jun 2019 @ 2:09pm

        Re: Re:

        The statement in the article was that the law was designed to encourage moderation. It may have been, but the given quote doesn't support that. Caselaw about 230 has nothing to do with it, because that came after the law was designed. As for "the stated intentions", can you link to a relevant statement of intent?

        • Anonymous Coward, 11 Jun 2019 @ 2:27pm

          Re: Re: Re:

          The statement in the article was that the law was designed to encourage moderation. It may have been, but the given quote doesn't support that.

          The law was intended to allow companies to facilitate moderation without fear of liability. At the time, that meant blocking software that a parent could purchase, but it also referred to curated services like Prodigy vs. an open pipe like modern ISPs. A parent might choose Prodigy for their kids over an open pipe because they like how Prodigy chooses to moderate, and 230 would allow Prodigy to do so without the risk of being liable as they were in Stratton.

          Fast forward to now, and platforms like Facebook can moderate content like Prodigy could, thanks to 230.

          For a statement of intent, check https://www.congress.gov/congressional-record/1995/08/04/house-section/article/H8460-1. The relevant section starts with "amendment offered by mr. cox of California"

          • Anonymous Coward, 11 Jun 2019 @ 2:47pm

            Re: Re: Re: Re:

            Thanks for the link. After reading Cox's argument, I still don't see that he wanted to encourage moderation per se, as a government policy. He wanted to allow it for sure, and that was clear from Mike's quotes; but any encouragement from him would be as a private citizen, a parent acting in a free market (choosing between "family-friendly" areas and others). There's quite a bit of libertarian subtext really.

            • Stephen T. Stone (profile), 11 Jun 2019 @ 3:27pm

              Allowing moderation to even happen in the first place is encouraging it.

            • Anonymous Coward, 11 Jun 2019 @ 4:27pm

              Re: Re: Re: Re: Re:

              I still don't see that he wanted to encourage moderation per se, as a government policy.

              Cox said (emphasis mine):

              We want to encourage people like Prodigy, like CompuServe, like America Online, like the new Microsoft network, to do everything possible for us, the customer, to help us control, at the portals of our computer, at the front door of our house, what comes in and what our children see.

              ...

              Mr. Chairman, our amendment will do two basic things: First, it will protect computer Good Samaritans, online service providers, anyone who provides a front end to the Internet, let us say, who takes steps to screen indecency and offensive material for their customers. It will protect them from taking on liability such as occurred in the Prodigy case in New York that they should not face for helping us and for helping us solve this problem.

              That doesn't sound like "I'll allow it" to me. It sounds like "We want them to do it."

              • Anonymous Coward, 11 Jun 2019 @ 8:02pm

                Re: Re: Re: Re: Re: Re:

                That doesn't sound like "I'll allow it;" to me. It sounds like "We want them to do it."

                I see your point, but don't you think that "We want them to help us do it" is a more appropriate reading? Quoting: "to do everything possible for us, the customer, to help us control, at the portals of our computer, at the front door of our house, what comes in"

                That may seem like hairsplitting but I find the distinction important and feel like he's going out of his way to make it. When Facebook etc. block content for us—applying their policy for the whole world—we lose control. Or rather, our control is limited to selecting the site whose moderation policy aligns most closely with our own. In that sense, perhaps he did intend sites to moderate differently (i.e. he intended them to moderate) to cater to different groups. Still, he hints at a grander vision that hasn't come to pass in any significant way.

                • Stephen T. Stone (profile), 11 Jun 2019 @ 9:22pm

                  A White supremacist forum and a Black Lives Matter forum will always be moderated differently. You can’t make them both moderate the exact same way and expect them to retain their unique identities.

                  • Anonymous Coward, 12 Jun 2019 @ 5:25am

                    Re:

                    You can’t make them both moderate the exact same way and expect them to retain their unique identities.

                    You're still assuming all viewers see the same effect from moderation decisions. If we were to apply Cox's idea of user control to Techdirt, you might configure the site to hide posts that other users have tagged as "racist", while someone else might choose to make those more visible and hide "social justice". It would be less necessary for each group to set up specific forums for themselves.

                    As another example, some people have complained when Techdirt uses certain "naughty" words in headlines. One headline recently quoted "fuck you". Given an option, some might configure the site to hide/bowdlerize those.
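A minimal sketch of the kind of user-side control being described here (purely hypothetical -- nothing Techdirt or any other site actually offers in this form): the site hosts everything, and each reader decides which tags to hide.

```python
# Hypothetical sketch of Cox-style user-side moderation: the site
# stores all posts; filtering happens per reader, not site-wide.

posts = [
    {"id": 1, "tags": ["racist"]},
    {"id": 2, "tags": ["social justice"]},
    {"id": 3, "tags": []},
]

def visible_posts(posts, hidden_tags):
    """Return only the posts carrying no tag the reader has hidden."""
    return [p for p in posts if not hidden_tags & set(p["tags"])]

# Two readers, two different views of the same thread.
reader_a = visible_posts(posts, {"racist"})          # sees posts 2 and 3
reader_b = visible_posts(posts, {"social justice"})  # sees posts 1 and 3
```

The same stored thread yields a different view per reader, which is the "user control" reading of the statute's policy language.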

                    • Thad (profile), 12 Jun 2019 @ 9:05am

                      Re: Re:

                      But that's got nothing to do with Stratton Oakmont v. Prodigy. That decision didn't involve Prodigy providing tools for users to block certain posts*; it involved moderators actively deleting posts that violated forum guidelines.

                      * I'll add that Prodigy did have a rudimentary blocklist -- it filled up too fast to be much use -- but that's it; nothing remotely like the category sorting you're describing. It's worth remembering that the CDA passed in 1996, and Prodigy's proprietary forum software was already pretty long in the tooth even then; if you think people were looking at the sort of advanced tagging/categorization features we see today, you're getting way ahead of yourself.

                      Stephen T. Stone (profile), 12 Jun 2019 @ 12:24pm

                      It would be less necessary for each group to set up specific forums for themselves.

                      That doesn’t address how most sites don’t want to host specific types of speech regardless of whether users can filter it.

                  Anonymous Coward, 12 Jun 2019 @ 10:02am

                  Re: Re: Re: Re: Re: Re: Re:

                  Or rather, our control is limited to selecting the site whose moderation policy aligns most closely with our own. In that sense, perhaps he did intend sites to moderate differently (i.e. he intended them to moderate) to cater to different groups.

                  I'm inclined to agree with you on this point. Consider the examples given: Prodigy, AOL, CompuServe. These are classic examples of the "walled garden" form of internet access and at the national level, they were what people expected. While you could certainly open a browser from within AOL and go where you liked, it wasn't the common form of usage. The idea of an open pipe was far less common and tended to be offered by local providers. (I used both AOL and a local dialup provider at different times back in the 90s)

                  That said, I also think that on the whole, Section 230 has accomplished what Cox and Wyden intended it to do, even if it might not have done it in exactly the way they intended it to happen.

      Anonymous Coward, 14 Jun 2019 @ 8:49am

      Re:

      You missed the part further down though that directly addresses moderation:

      (2) Civil liability
      No provider or user of an interactive computer service shall be held liable on account of—
      (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
      (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

    Anonymous Coward, 11 Jun 2019 @ 1:51pm

    "keep insisting that CDA 230 requires "neutrality" by the platform in order to retain the protections of the law. "

    Those who demand this actually think a neutral position is possible?
    How can anyone claim to have every voice represented on every item in every article/post/ad?

      Anonymous Coward, 11 Jun 2019 @ 1:58pm

      Re:

      Those who demand this actually think a neutral position is possible?

      Not at all, they want their views promoted by third parties, while they shout down opposing views.

      Anonymous Coward, 11 Jun 2019 @ 2:12pm

      Re:

      Those who demand this actually think a neutral position is possible? How can one claim they have every voice represented on every item in every article/post/ad

      Of course it's possible. You can let anyone provide an article/post/ad under the same terms (pricing etc). That produces a neutral site, but probably not a useful site.

  • This comment has been flagged by the community.
    Anonymous Coward, 11 Jun 2019 @ 5:04pm

    Fast forward to now, and platforms like Facebook can moderate content like Prodigy could, thanks to 230.

    When moderation can influence the outcome of elections, Congress can decide to tie 230 immunity to political neutrality.

      Anonymous Coward, 11 Jun 2019 @ 5:21pm

      Re:

      There is no such thing.

      Stephen T. Stone (profile), 11 Jun 2019 @ 6:00pm

      Congress can decide to tie 230 immunity to political neutrality

      And until they have a Supreme Court willing to overlook the First Amendment, such a decision means nothing.

      Gary (profile), 11 Jun 2019 @ 6:54pm

      Re:

      When moderation can influence the outcome of elections, Congress can decide to tie 230 immunity to political neutrality.

      Ya got my "LOL" vote AC!

      Good thing the rest of us have a body of caselaw (Commonlaw) and the bill of rights that say otherwise.

      Anonymous Coward, 11 Jun 2019 @ 7:00pm

      Re:

      Television and print news outlets can also influence elections. Are you suggesting that Congress should pass a law forcing their coverage to be neutral, too? Fox sure as hell wouldn't like that.

        That One Guy (profile), 11 Jun 2019 @ 7:06pm

        Re: Re:

        '... and now to balance out the gushing praise we just heaped on Trump, we will now have a thirty minute segment talking about how bad he is, and/or how great and more qualified his opponents are.'

        Oh yeah, I'm sure that'd go over great, though strangely enough I suspect that when it comes to enforced political neutrality there would be nary a mention of the likes of Fox for some strange reason...

          Gary (profile), 11 Jun 2019 @ 7:54pm

          Re: Re: Re:

          Oh yeah, I'm sure that'd go over great, though strangely enough I suspect that when it comes to enforced political neutrality there would be nary a mention of the likes of Fox for some strange reason...

          Well obviously anyone pushing for this kind of viewpoint neutrality is a snowflake that melts when someone criticizes El Cheetos. Therefore their solution is to let the White House determine what is fair and balanced reporting!

            That One Guy (profile), 11 Jun 2019 @ 8:17pm

            'We have always been against that sort of presidential control!'

            Therefore their solution is to let the White House determine what is fair and balanced reporting!

            ... Right until the other party is in power, at which point even the suggestion that the White House should make that sort of determination would be decried as tyrannical and utterly unconstitutional.

          • This comment has been flagged by the community.
            Algorithmically Disappeared, 11 Jun 2019 @ 8:32pm

            Re: Re: Re: Re: So, "Gary": I see you're being GENEROUS again!

            Unprecedented generosity of making others "First Word". How much do you pay for that? Or are you Timothy Geigner, aka "Dark Helmet" with Admin privileges and it's therefore free to you?

    You first came to my notice for criticizing Techdirt! I predicted you wouldn't last here, remember? But you turned into one of the most prolific commenters: 1328 now. And yet in your first two years you made only a dozen comments? Weird.

    You can't hide your identity, Timmy, when your repeat bombast and Trump Derangement Syndrome keep bringing up "El Cheetos". Sad.

      That One Guy (profile), 11 Jun 2019 @ 7:01pm

      Pandora called, she wants her box back

      Why yes, Congress could pass a blatantly unconstitutional law like that; however, assuming the Supreme Court had any respect for the document, it would quickly be struck down, and on the off-chance that it didn't, I can all but guarantee that it would not go the way you think it would.

      If the choice is between 'no political discussion/content is allowed, including the good stuff', and 'all political discussion/content is required to be allowed, even stuff the company/platform strongly disagrees with', what makes you think they wouldn't block all of it, if only to avoid having their platform filled with deplorable individuals?

      In addition, if the ability to influence election results is grounds for enforcing political neutrality, well, hope you're not a fan of any platform/company that isn't politically neutral (like, oh, say, Trump's cheerleading squad on Fox...), because that can of worms you just opened will swallow them right up, unless you hypocritically only want 'political neutrality' enforced against platforms you don't agree with.

  • This comment has been flagged by the community.
    Algorithmically Disappeared, 11 Jun 2019 @ 8:26pm

    Not handed down by God on mile-high titanium blocks. Changeable.

    First, let's all look for the "Decency" in "Communications Decency Act"! Where the HELL did that go? ... Oh, right: long since ruled UN-Constitutional! Any reference to the intent of legislators already proven FLATLY WRONG on the major point is STUPID! Most of the Act is already THROWN OUT.

    Even if Masnick were right with his Clintonian / Red Queen assertion that "neutral" doesn't mean neutral when he doesn't want it to, that is not the last decision and can be changed at any time by The People's representatives.

    "Moderation" by definition is NOT and CANNOT be partisan. Period. If not objective and for good cause with The Public as beneficiary under common law -- which is what the section actually states with "Good Samaritan" title and "in good faith" requirement -- then it's not allowed.

    By attacking the word "neutral" Masnick is trying to meld "moderation" and "censorship" to empower corporations to control ALL speech.

    Masnick is a partisan for corporations, and though he talks up 1A "free speech", that's just for cover: on this crucial point for corporate power he's against the clear Constitutional rights of "natural" persons. He's going to slant what he writes to play up government-conferred power of corporations to control what YOU write.

    His duplicity on this Section is shown by the fact that, when arguing with me, he simply DELETED the "in good faith" requirement! -- And then blows it off as not important:

    https://www.techdirt.com/articles/20190201/00025041506/us-newspapers-now-salivating-over-bringing-google-snippet-tax-stateside.shtml#c530

    Now, WHERE did Masnick get that exact text other than by himself manually deleting characters? -- Go ahead. Search teh internets with his precious Google to find that exact phrase. I'll wait. ... It appears nowhere else, which means that Masnick deliberately falsified the very law under discussion trying to keep me from pointing out that for Section 230 to be valid defense of hosts, they must act "in good faith" to The Public, NOT as partisans discriminating against those they decide are foes.

    Masnick deliberately falsified when supposedly quoting, re-defines a common word, and holds a corporatist view against YOUR interests! And you clowns still believe he's right? Sheesh.


    -m-a-s-n-i-c-k-s -h-a-t-e -r-u-l-e-s -e-s-p -h-o-r-i-z-o-n-t-a-l-s

    PS: Intentionally late because who cares? This tiny little site has almost no influence, not least because WRONG! There's no one here to convince. All you FEW remaining kids will do is gainsay, and then the site -- "the community" doing it is just another lie -- but an Administrator of the site will decide to censor this, falsely calling it "hiding" -- because a key point of the new censorship is to keep their SNEAKY CHEATS from becoming known.

      Stephen T. Stone (profile), 11 Jun 2019 @ 9:27pm

      If not objective and for good cause with The Public as beneficiary under common law -- which is what the section actually states with "Good Samaritan" title and "in good faith" requirement -- then it's not allowed.

      Question, Blue Balls: If a certain kind of distasteful speech is legal, what law (“common” or otherwise) says a given platform must host it?

      This tiny little site has almost no influence

      The grand irony here is that if this site truly has little-to-no influence, you're wasting more of your time shit-talking it than (you think) the rest of us are wasting on our comments and readership. I mean, if the site is such basic bullshit that no one pays attention to it, who else besides regular commentators is paying attention to your bullshit?

      Mike Masnick (profile), 12 Jun 2019 @ 1:35am

      Re: Not handed down by God on mile-high titanium blocks. Changeable.

      First, let's all look for the "Decency" in "Communications Decency Act"! Where the HELL did that go? ... Oh, right: long since ruled UN-Constitutional! Any reference to the intent of legislators already proven FLATLY WRONG on the major point is STUPID! Most of the Act is already THROWN OUT.

      Perhaps you should stop spewing nonsense and learn about the actual history of the law.

      It was two separate laws mashed together. One part was thrown out. It was written by Senator Exon. One part was not thrown out. It was written by Reps. Cox and Wyden. So, yes, it's fine to ignore the legislative intent of Exon's part. That got thrown out. But that's got nothing to do with 230.

      And you would know this if you weren't so consistently wrong on everything and refusing to even take the first steps to cure your ignorance. It is almost as if you thrive by making shit up. Maybe stop doing that. It's been over a decade. At some point, being a total ignorant asshole on a forum you hate has to have diminishing returns.

      Anonymous Coward, 14 Jun 2019 @ 9:01am

      Re: Not handed down by God on mile-high titanium blocks. Changeable.

      "Moderation" by definition is NOT and CANNOT be partisan. Period. If not objective and for good cause with The Public as beneficiary under common law -- which is what the section actually states with "Good Samaritan" title and "in good faith" requirement -- then it's not allowed.

      Oh really? Want to bet on that? Here's that section you like to throw around so much in all its textual glory:

      (2) Civil liability
      No provider or user of an interactive computer service shall be held liable on account of—
      (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected

      See those parts I bolded? Here, let me make it clearer:

      No provider...of an interactive computer service shall be held liable on account of— any action voluntarily taken in good faith to restrict access to or availability of material that the provider...considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected

      The provider can't be held liable for restricting access to content that the provider deems objectionable. Even if said content is Constitutionally protected.

      Your entire argument is invalid. Now sit down and shut up.

      Anonymous Coward, 14 Jun 2019 @ 9:06am

      You're still lying and misrepresenting everything

      Now, WHERE did Masnick get that exact text other than by himself manually deleting characters? -- Go ahead. Search teh internets with his precious Google to find that exact phrase. I'll wait. ... It appears nowhere else

      If you're going to copy and paste then I'm going to copy and paste:

      He simply DELETED the "in good faith" requirement! -- And then blows it off as not important!

      No, he didn't; he was quoting the paragraph/section down from the good faith clause, you nincompoop.

      Now, WHERE did Masnick get that exact text other than by himself manually deleting characters?

      From the paragraph immediately following the one you are talking about, you dolt.

      Search teh internets with his precious Google to find that exact phrase. I'll wait.

      Wait's over:
      https://www.law.cornell.edu/uscode/text/47/230
      And:
      https://www.google.com/search?rlz=1C1GCEB_enUS852US852&ei=rsUDXbSCK5K2swW92rvACQ&q=No+provider+or+user+of+an+interactive+computer+service+shall+be+held+liable+on+account+of+any+action+taken+to+enable+or+make+available+to+information+content+providers+or+others+the+technical+means+to+restrict+access+to+material+described+in+paragraph&oq=No+provider+or+user+of+an+interactive+computer+service+shall+be+held+liable+on+account+of+any+action+taken+to+enable+or+make+available+to+information+content+providers+or+others+the+technical+means+to+restrict+access+to+material+described+in+paragraph&gs_l=psy-ab.3..0i71l8.72225.74382..74918...0.0..0.0.0.......1....2j1..gws-wiz.Pgx6m4-4SEc&safe=active&ssui=on

      It appears nowhere else

      See above links.

      which means that Masnick deliberately falsified the very law under discussion

      Well, since he ACTUALLY was quoting the paragraph down, he didn't falsify anything.

      trying to keep me from pointing out that for Section 230 to be valid defense of hosts, they must act "in good faith" to The Public, NOT as partisans discriminating against those they decide are foes

      Or maybe you are trying to misrepresent what Mike was saying to keep from being embarrassed that you are wrong. Again.

    Anonymous Coward, 12 Jun 2019 @ 1:41am

    Indexing

    "Congress has immunized the re-publication of even false information."

    This is why search engines can index Breitbart and the NYT.

      Thad (profile), 12 Jun 2019 @ 9:06am

      Re: Indexing

      That level of false equivalence takes some serious chutzpah.

      I'm no fan of the New York Times, but it's nowhere near the equivalent of Breitbart. It's not even the equivalent of the New York Post.

    Anonymous Coward, 12 Jun 2019 @ 2:42am

    You know it's going to be good when you don't even have to post "Where's Poochie" comments in the 230 thread.

    Coyne Tibbets (profile), 12 Jun 2019 @ 5:10am

    They'd misquote anything

    To be fair, the people who will misquote this would misquote, "To be, or not to be," if it served their purposes.


