Wired's Big Cover Story On Facebook Gets Key Legal Point Totally Backwards, Demonstrating Why CDA 230 Is Actually Important

from the bad-reporting dept

If you haven’t read it yet, I highly recommend the latest Wired cover story by Nicholas Thompson and Fred Vogelstein, detailing the past two years at Facebook and how the company has struggled to come to grips with the fact that its platform can be used to do great harm (such as sowing discontent and influencing elections). It’s a good read, deeply reported (by two excellent reporters), with some great anecdotes, including the belief that an investigation into Facebook by then-Connecticut Attorney General Richard Blumenthal a decade ago was really an astroturfing campaign by MySpace:

Back in 2007, Facebook had come under criticism from 49 state attorneys general for failing to protect young Facebook users from sexual predators and inappropriate content. Concerned parents had written to Connecticut attorney general Richard Blumenthal, who opened an investigation, and to The New York Times, which published a story. But according to a former Facebook executive in a position to know, the company believed that many of the Facebook accounts and the predatory behavior the letters referenced were fakes, traceable to News Corp lawyers or others working for Murdoch, who owned Facebook’s biggest competitor, MySpace. “We traced the creation of the Facebook accounts to IP addresses at the Apple store a block away from the MySpace offices in Santa Monica,” the executive says. “Facebook then traced interactions with those accounts to News Corp lawyers. When it comes to Facebook, Murdoch has been playing every angle he can for a long time.”

That’s a pretty amazing story, which certainly could be true. After all, just a few years later there was the famous NY Times article about how companies were courting state Attorneys General to attack their competitors (a strategy that came up again when the MPAA — after reading that NY Times article — decided to use it to go after Google). And Blumenthal had a long history of grandstanding about tech companies during his time as Attorney General.

But, for all the fascinating reporting in the piece, what’s troubling is that Thompson and Vogelstein get some very basic facts wrong — and, unfortunately, one of those basic facts is a core peg used to hold up the story. Specifically, the article incorrectly points to Section 230 of the Communications Decency Act as a major hindrance to Facebook improving its platform. Here’s how the law is incorrectly described, in a longer paragraph explaining why Facebook “ignored” the “problem” of “fake news” (scare quotes on purpose):

And then there was the ever-present issue of Section 230 of the 1996 Communications Decency Act. If the company started taking responsibility for fake news, it might have to take responsibility for a lot more. Facebook had plenty of reasons to keep its head in the sand.

That’s… wrong. I mean, it’s not just wrong by degree, it’s flat out, totally and completely wrong. It’s wrong to the point that you have to wonder if Wired’s fact checkers decided to just skip it, even though it’s a fundamental claim in the story.

Indeed, the whole point of CDA 230 is exactly the opposite of what the article claims. As you can read for yourself, the law specifically encourages platforms to moderate the content they host by saying that the moderation choices they make do not affect their liability. This is the very core of CDA 230:

No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected

This is the “Good Samaritan” clause of CDA 230, and it encourages platforms like Facebook to “take responsibility for fake news” by saying that no matter what moderation choices Facebook makes, those choices won’t make it liable for the content it reviews. Changing CDA 230, as many people are trying to do right now, is what would create incentives for Facebook to put its head in the sand.

And yet, Thompson and Vogelstein repeat this false claim:

But if anyone inside Facebook is unconvinced by religion, there is also Section 230 of the 1996 Communications Decency Act to recommend the idea. This is the section of US law that shelters internet intermediaries from liability for the content their users post. If Facebook were to start creating or editing content on its platform, it would risk losing that immunity—and it’s hard to imagine how Facebook could exist if it were liable for the many billion pieces of content a day that users post on its site.

This one is half right, but half misleading. It’s true — under the Roommates case — that if Facebook itself creates content that breaks the law, it remains liable for that content. But merely editing or moderating content on its platform would not cost it the immunity, as that sentence implies.

Indeed, this is a big part of the problem we have with the ongoing debates around CDA 230. So many people insist that CDA 230 incentivizes platforms to “do nothing” or “look the other way” or, as Wired erroneously reports, to “put their head in the sand.” But that’s not true at all. CDA 230 not only enables, but encourages, platforms to be more active moderators by making it clear that the choices they make about moderating content (outside the context of copyright, which uses a whole different set of rules) don’t create new liability for them. That’s why so many platforms are trying so many different things (as we recently explored in our series of stories on content moderation by internet platforms).

What’s really troubling about this is that people are going to use the Wired cover story as yet another argument for doing away with (or at least punching giant holes in) CDA 230. They’ll argue that we need to make changes to encourage companies like Facebook not to ignore the bad behavior on their platform. But the real lesson of the story — which would have come out if the reporting had been done more carefully — is that CDA 230 is exactly what we need to encourage that behavior. Facebook is able, and willing, to change and experiment in response to increasing public pressure only because CDA 230 gives the company the freedom to do so. Adding liability for wrong decisions is what would make the problem worse, and would encourage platforms like Facebook to do less.

It’s tragic that in such a high-profile, carefully reported story, a key part — indeed, a part on which much of the story hinges — is simply, factually, wrong.

Companies: facebook


Comments on “Wired's Big Cover Story On Facebook Gets Key Legal Point Totally Backwards, Demonstrating Why CDA 230 Is Actually Important”

36 Comments
Anonymous Coward says:

"Good Samaritans" must be GOOD is key point. And must not be arbitrary, nor is a "provider" given full power over "users". -- It's for The Public's good, in any case, NOT the "provider" as such.

By the way, from quoted: a “user” is given same authority as “provider”, so if users considers, say, Facebook’s advertising objectionable, then surely must be provided a way to block it!

Of course, corporatists can only read that section without “good faith” or “user” because want corporations to be given power besides immunities which print publishers don’t get.

In short, Section 230 is an unworkable tangle that must be revised in light of what’s now known of how corporations abuse the power precisely to deny “natural” persons outlet for Constitutionally protected speech.

Mike Masnick (profile) says:

Re: "Good Samaritans" must be GOOD is key point. And must not be arbitrary, nor is a "provider" given full power over "users". -- It's for The Public's good, in any case, NOT the "provider" as such.

By the way, from quoted: a "user" is given same authority as "provider", so if users considers, say, Facebook’s advertising objectionable, then surely must be provided a way to block it!

The law does not, in any way, say that you have to be given tools to block (though such tools can be a good thing if platforms decide to offer them), but I do find it bizarre that you make this claim in the same comment where you insist that the law should not allow any blocking at all. Which one is it? Are you for or against content moderation?

And no one ignores the "good faith" aspect of the law, despite your constant assertions to the contrary.

An Onymous Coward (profile) says:

Re: Re: "Good Samaritans" must be GOOD is key point. And must not be arbitrary, nor is a "provider" given full power over "users". -- It's for The Public's good, in any case, NOT the "provider" as such.

Out of one side of his mouth he wants to force platform providers to allow anything and everything users might want to post that might to any degree be considered free speech. Out of the other side he expects those same providers to block out all objectionable content at any cost, nevermind the technical impossibility of doing so. It’s a massive cognitive dissonance and characteristic of his posts.

cpt kangarooski says:

Re: Re: a bunch of rot

Mike,
On a related note, our resident dipshits have worked out a means to avoid having their comments hidden by overloading the subject lines with their blather.

I suggest that some means be adopted to also hide the parts of subject lines originating from hidden comments. It would take some work on the back end but it seems more in line with this place than the second best solution, limiting the amount of text that can go in the subject line.

Just a thought.
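
To make the idea concrete, here’s a rough sketch of the back-end logic (a hypothetical data model in Python — obviously not Techdirt’s actual code): once a comment is hidden, any reply subject that inherits its subject line via the usual “Re: ” quoting gets that inherited portion masked.

    from dataclasses import dataclass, field

    @dataclass
    class Comment:
        subject: str
        hidden: bool = False
        replies: list["Comment"] = field(default_factory=list)

    def display_subject(reply: Comment, parent: Comment) -> str:
        """Mask the part of a reply's subject inherited from a hidden parent."""
        if parent.hidden and reply.subject.startswith("Re: ") and parent.subject in reply.subject:
            # Keep the "Re:" marker, drop the blather carried over from the hidden subject.
            return reply.subject.replace(parent.subject, "[hidden comment]", 1)
        return reply.subject

    # Example: the overloaded subject stops propagating once its comment is hidden.
    troll = Comment(subject="a very long rant stuffed into the subject line", hidden=True)
    reply = Comment(subject="Re: a very long rant stuffed into the subject line")
    troll.replies.append(reply)
    print(display_subject(reply, troll))  # -> Re: [hidden comment]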

Ninja (profile) says:

Re: "Good Samaritans" must be GOOD is key point. And must not be arbitrary, nor is a "provider" given full power over "users". -- It's for The Public's good, in any case, NOT the "provider" as such.

And what’s Facebook’s definition of what’s good for the public? What’s yours? I’d say it would be good for this site if you were silenced, but clearly Mike disagrees with me since you are still here after all these years.

Facebook is free to do whatever it pleases with its platform. In striving to provide a good environment for most, they moderate the platform. CDA 230 ensures they are not liable for any user action even if they try to moderate. I wonder where you got the delusional idea that the platform has to do everything the user wants.

“Of course, corporatists can only read that section without “good faith” or “user” because want corporations to be given power besides immunities which print publishers don’t get.”

Comparing apples with ostriches are we? If they publish user generated content they get the protections. Most print publishers have their online counterpart and they are protected as well. They may be liable for content they produce as it has been seen in other cases where platforms lost their CDA 230 protections.

“In short, Section 230 is an unworkable tangle that must be revised in light of what’s now known of how corporations abuse the power precisely to deny “natural” persons outlet for Constitutionally protected speech.”

Nobody is obliged to let you say whatever you want except the government. The 1st applies to the government, not to corporations. CDA 230 is what prevents idiots like you from trying to make the platform liable for what their users produce. It’s what has allowed the Internet to flourish.

Anonymous Coward says:

Re: "Good Samaritans" must be GOOD is key point. And must not be arbitrary, nor is a "provider" given full power over "users". -- It's for The Public's good, in any case, NOT the "provider" as such.

Do you realize that if you get Section 230 removed, you will only be able to publish on any Internet site if an editor decides that what you say is worth publishing, and within the law? It also means that most of the time they will not even look at what you say, because they will receive many more submissions than they can review.

Also, if you decide to run your own sites, you will have to vet everything that you allow to be published, and if you accidentally let something through that is against the law, you will be held liable.

In other words, what you keep demanding will most likely silence you completely.

An Onymous Coward (profile) says:

Re: Re: "Good Samaritans" must be GOOD is key point. And must not be arbitrary, nor is a "provider" given full power over "users". -- It's for The Public's good, in any case, NOT the "provider" as such.

Platforms will have to do away with anonymous posting altogether. They’ll need registered usernames so they can block entire accounts that have a tendency to post content that doesn’t pass editorial muster. Those will probably expand to IP bans before long and many sites will simply disallow public posting completely.

Platforms like Facebook and Twitter, even YouTube, will find it next to impossible to continue to operate at all. The costs of moderation would destroy their business model. We’ll go back to the days of media being controlled entirely by media corporations, all the power back in their hands. And this fool (blue) will be partly to blame.

That One Guy (profile) says:

Re: Re: Re: Be careful what you wish for...

We’ll go back to the days of media being controlled entirely by media corporations, all the power back in their hands. And this fool (blue) will be partly to blame.

Which is extra funny when you consider how rabidly anti-corporation they like to present themselves as. Not only would they be forced to either create an account or not post, but they’d have provided a huge boon to the very groups they claim to hate so much.

Truly, Blue is a gift that keeps on giving.

Anonymous Coward says:

Re: "Good Samaritans" must be GOOD is key point. And must not be arbitrary, nor is a "provider" given full power over "users". -- It's for The Public's good, in any case, NOT the "provider" as such.

By the way, from quoted: a "user" is given same authority as "provider", so if users considers, say, Facebook’s advertising objectionable, then surely must be provided a way to block it!

Where does it say the user is given the same authority as the provider? I don’t see where it says that anywhere in CDA 230.

Also, grammar man! Learn it! Good lord I can barely make out what that gibberish is trying to say.

Gwiz (profile) says:

Re: Re:

In short, Section 230 is an unworkable tangle that must be revised in light of what’s now known of how corporations abuse the power precisely to deny "natural" persons outlet for Constitutionally protected speech.

Please show any place in the Constitution, US Code, or caselaw that states that anyone, corporation or otherwise, is required to give you a platform for your speech.

Anonymous Coward says:

not surprising

WIRED became a tabloid after the last buyout. It is now just like any other mega-corp propaganda rag. Some media buyouts are like lacing a consumer’s vanilla pudding with Vicodin. You should get some warning about the change in content.

Hey, there should be an FDA regulation! When something that was good gets fucked up by some media mogul douche, they should have to run a picture of said douche on the front page for at least 4 issues. That way everybody knows who to punch in the face if they see them on the street. /s

Roger Strong (profile) says:

Re: not surprising

WIRED became a tabloid after the last buyout.

That’s a real shame. Following the space industry, I’d occasionally see good (and often surprised) things said about them. I even kept a couple quotes:

In the context of Id Software, I have been written about in a large number of magazines and newspapers, including big ones like Time, Newsweek, Forbes, Fortune, The Economist, the NYT, etc. Many things are often reported incorrectly. I still remember the first time I got a call from someone fact checking an upcoming article. It was from Wired magazine, which has fact checked with me directly every time they have written about me. I have explicitly asked some magazines to do a fact check call, and got a haughty "We don’t do that", as if I had insulted their journalistic integrity.

  • John Carmack, Id Software, Armadillo Aerospace

Some years ago, Wired asked me to write a tiny little piece — I think the limit was 250 words — on one-way Mars missions, just explaining briefly why the idea might make sense. After I sent it in, I was a bit startled when their fact checker called me up and asked for references on a couple of the numerical details I’d quoted. I don’t know whether he actually dug up the references and checked them, but he did want to know that the facts I was citing were at least somewhat verifiable. I can’t say that I’m all that enthralled with Wired in general, but this aspect impressed me.

  • Henry Spencer

Dan (profile) says:

I seem to remember some case law...

Somebody help me out here. I seem to remember some case law that implied that if a service provider was completely ignorant of forum content they were blameless, and that if they knew of the content and didn’t take it down [fast enough], they were liable. The effect being that it was legally better to not moderate.

Anonymous Coward says:

Re: I seem to remember some case law...

That was before the CDA was passed into law — you’re thinking of Stratton Oakmont v. Prodigy (and Cubby v. CompuServe before it). Section 230 was written specifically to overturn that result and protect sites that moderate user content.

Also note that most of those pushing for the removal of Section 230 want to make sites liable for user-published content, and if it is repealed, expect the RIAA/MPAA to go on a rampage.

Anonymous Coward says:

The edit issue

A charitable reading

“the ever-present issue of Section 230”

The issue I see is that 230 is not ever-present. That “safe harbour” provision is under threat and Facebook will become legally vulnerable were it to disappear.

“If Facebook were to start creating or editing content”

The hypothetical here is if Facebook were to publish articles like BuzzFeed. Then they get into legal hot water because there is no clear division between what they “allow” and what they “contributed”. You know, like Wikileaks.

So: Techdirt, my question is did you reach out to the Wired authors for comment on why they have bashed 230? I hope so.

ECA (profile) says:

hold it..

How many people, here and THERE..
Understand all the laws that would apply to ANY FORUM, from 200 Countries.
IF’ you were made responsible for ANYTHING said/posted/sent..to your site..OMFG.

Isnt this a Form of Censorship?? and keeping up with 200 countries..and deleting anything and everything would leave EVERY SITE, BARE AND NAKED..
Some countries you CANT NAME Officials..
Some you cant Criticize Officials..
NO bad words,
No spitting
No pubic hairs
NO WORDS TO EXPRESS ALLAH..
NO FEMALE POSTS..
No opinions against/for A religion..

Anyone?
Could you say/post/send anything??
Free speech is strange, as there are Strange people. Strange reasoning’s…

Anonymous Coward says:

Facebook already moderates…just different ways on different days depending on how the Zuck feels about something.

Of course they would never think of letting fake news thru or mod ( I don’t know if that’s right term ) posts up because it fits their cause du jour.

It’s not like anyone is holding their feet to the fire on anything now a days.

I think that outside of posts that advertise or advocate things that are unlawful or things that bog their site down like spam, nothing should be marked up or down by the site itself.

Sites should not be immune if they want to moderate things up or down to suit their agenda. Users who have a clue would be blocking others who are obnoxious, or maybe they would give up their Crackbook all together.

And yes I despise Facebook and their methods. GF gets all her “news” from her FB feed. No point in arguing with her because everybody on FB “knows” whatever is true because everybody on FB is re posting it or whatever they do.

Sorry for getting off topic a bit in last paragraph.

Mike Masnick (profile) says:

Re: Re:

Phew, I was worried for a minute that the other post about Facebook today wouldn’t be offset by a "we should let Facebook do whatever they want, forever" post by Masnick. Crisis averted.

I recognize you’re trolling, but can you explain where I have ever said that Facebook should be able to do whatever it wants forever? Or can we just agree that you make up stupid strawmen that the imaginary "Mike Masnick in your head" says, rather than responding to anything I actually say?

Because that would make this a lot faster.
