Time Magazine Explains Why Section 230 Is So Vital To Protecting Free Speech

from the free-speech-under-attack dept

For years now, we’ve been highlighting just how badly various mainstream media publications have covered Section 230 of the Communications Decency Act. It’s a pleasant surprise, then, to find that Time Magazine has published an excellent explainer by David French, a lawyer and longtime free speech supporter. At the very least, this new article makes up for an earlier Time article that appeared (like so many) to confuse Section 230 with the 1st Amendment in terms of what enables the posting of disinformation online.

French’s piece is more than just a defense of Section 230: it explains — as we have in the past — how Section 230 enables free speech online, and why that matters, even as that speech is sometimes abused.

It’s difficult to overstate how important this law is for the free speech of ordinary Americans. For 24 years we’ve taken for granted our ability to post our thoughts and arguments about movies, music, restaurants, religions, and politicians. While different sites have different rules and boundaries, the overall breadth of free speech has been extraordinary.

As it always has through human history, free speech has been used for good and ill. Anti-vaccination activists abuse liberty by spreading medical misinformation online. Social media bullies have named and shamed even private citizens for often trivial offenses. But on balance, free speech is a great gift to American culture. As the courageous abolitionist Frederick Douglass declared in 1860, free speech is the “dread of tyrants.” It is the “great moral renovator of society and government.” The freedom to speak has been at the foundation of America’s most potent social movements.

French then points out that many of the people (on both sides of the political aisle) now attacking Section 230 are famous, with the ability to speak out and have the media repeat what they say. That is, they already have their own channels of communication, and thus much less reason to care that Section 230 opens up such channels to everyone else, allowing ordinary people to speak their minds and get their thoughts out there:

But note well the speakers here. Hawley, Biden, and Cohen have immense public platforms. Hawley even enjoys an extraordinary legal immunity that other citizens can’t even dream of — thanks to the Constitution’s Speech and Debate Clause, he can’t be held legally responsible for anything he says in the performance of his official legislative duties. There are no more privileged speakers in America than members of Congress.

Celebrities have their own websites. They’re sought after for speeches, interviews, and op-eds. Politicians have campaigns and ad budgets, and they also have abundant opportunities to speak online and in the real world. If they succeeded in making social media companies liable for users’ speech, they would pay no meaningful price. You would, however. Your ability to say what you believe, to directly participate in the debates and arguments that matter most to you, would change, dramatically.

French is making a very important point here. The effort to kill off Section 230 is, fundamentally, an effort by the rich, powerful, and connected to shut off a key channel of speech for the marginalized, ignored, and shunned. It explains why Section 230 helps those who need it most, and how efforts to cut it off are, at their core, attempts by those in power to silence those without it.

In my opening paragraph, I argued that reforming or repealing Section 230 would represent “one of the most significant acts of censorship in modern American history.” An entire contemporary culture of speech and debate exists thanks to Section 230. A generation of young people has grown up knowing nothing but the freedom to speak online.

Yes, this freedom is often abused, but — truly — whose fault is that? Is it Twitter’s fault if I lie about the news? It is my responsibility to exercise my rights responsibly. And the failure of others to respond well to freedom should not result in the loss of my right to speak. Politicians will sacrifice nothing if you’re silenced. In fact, when they speak of Section 230 reform, understand that they are uttering the ancient argument of powerful censors throughout American history.

This is not a new argument. We have argued over the years that enabling government-backed censorship powers will inevitably create scenarios in which the powerful and connected censor those without power. It will harm the most marginalized, but be done in the name of “protecting” them. It would be a huge mistake and, as French rightly points out, an affront to free speech online.


Comments on “Time Magazine Explains Why Section 230 Is So Vital To Protecting Free Speech”

153 Comments

This comment has been flagged by the community.

Scary Devil Monastery (profile) says:

Re: Re: Re:

"How’s that white elephant mailing list coming along, John Smith? Think you’ll make enough to cover Paul Hansmeier’s defense fund?"

Oh, old Bobmail/Jhon will need a lot more than just that elusive mailing list last seen carried off by pirates flying the tape-cassette-and-bones (according to his own dubious testimony).

He’ll need to use that list to successfully extort or defraud the people on it first, before he’ll have any money at all with which to defend his boyhood idols Hansmeier and Steele.

And that is, of course, why he’s so upset about section 230. As long as people are able to warn one another about con men and scams online he’s not getting a cent whether he’ll manage to wrest his precious mailing list back from The Pirate Bay or not.

This comment has been deemed insightful by the community.
Mike Masnick (profile) says:

Re: Re:

Section 230 is not free speech

It helps enable free speech.

it is immunity from liability from libel lawsuits that doesn’t exist anywhere in the world.

It is about the proper application of liability on the party actually violating the law. Similar provisions, perhaps not as strong and complete, do exist throughout the world. But the same is true of the 1st Amendment. Other countries have similar declarations of support for free expression, but the courts have not interpreted them as broadly as the US’s 1st Amendment. Doesn’t mean we should get rid of the 1st Amendment, nor does it mean we should get rid of 230.

If Bill Barr wants to investigate the more nefarious side of 230, he will be given probable cause.

Uh, what the fuck does probable cause have to do with Section 230? It’s not being tried for criminal violations for fuck’s sake.

This comment has been deemed insightful by the community.
bhull242 (profile) says:

Re: Re:

If Bill Barr wants to investigate the more nefarious side of 230, he will be given probable cause.

Setting aside the fact that a law cannot commit a crime, making the idea of probable cause irrelevant, §230 does not affect culpability for federal crimes at all, and Barr can only prosecute or investigate violations of federal law, primarily criminal laws. He has no jurisdiction over state laws at all, and I can’t think of any federal civil laws that would be relevant to both Barr and §230, so I can’t see what Barr would have to investigate.


This comment has been deemed insightful by the community.
Samuel Abram (profile) says:

Re: Re: Re: Section 230 will die if Facebook and Twitter won't c

They can’t do it at a large scale because there are always going to be false positives and it’s hard to determine context. For instance, how do you differentiate hate speech from satire? Or how can you differentiate something used by hate groups in a non-hate group context? I’ll offer some examples:

The number "88" in hate groups means the letters "HH", or "Heil Hitler". But Chinese speakers also use "88" because "8" is pronounced "ba" in Chinese, so "88" is pronounced "ba ba", which sounds similar to the English valediction "bye-bye".

You may also think a swastika is a dead giveaway for a hate group. However, when I was in Japan and opened Google Maps, a swastika marked each Buddhist temple, the symbol being one Buddhism shares with its Hindu origins.

Now you may say it’s simple: check to see if the source language and writing systems are Chinese or Japanese. However, even beyond these two cases, there is more ambiguity. Take, for instance, the "OK" hand sign. White supremacists have used it to mean "White Power". However, any Mystery Science Theater 3000 fan would recognize it as "It Stinks!" (if you don’t get the reference, search for "MST3K pod people" on YouTube and watch the whole movie-length episode; you’ll thank me later). Even in the same language, there’s ambiguity.

My point is, even when you think you could easily detect what’s hate speech and what’s not, it’s not easy. And computers can mess it up big time.
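The "88" case above can be reduced to a toy filter. This is a minimal sketch, not any platform's real system; the blocklist and sample posts are invented for illustration:

```python
# Toy context-free symbol filter: it matches tokens, not intent, so the
# hate dog-whistle and the Chinese sign-off are flagged identically.
HATE_TOKENS = {"88", "1488"}  # hypothetical blocklist

def naive_flag(post: str) -> bool:
    """Flag a post if any blocklisted token appears as a standalone word."""
    return any(tok in post.split() for tok in HATE_TOKENS)

posts = [
    "see you at the rally 88",  # intended as "Heil Hitler"
    "谢谢你！ 88",               # Chinese sign-off: "bye-bye"
]
flags = [naive_flag(p) for p in posts]
# Both posts get the same verdict: the filter has no way to see context.
```

Real classifiers are far more sophisticated than word matching, but the underlying problem is the same: the signal (the token) is identical in both contexts, so any context-free rule produces either false positives or false negatives.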

This comment has been flagged by the community.

cpt kangarooski says:

Re: Re: Re:5 Re:

Provided it’s hateful speech, I can’t say I care. They’re private services and not at all obligated to avoid chilling it. If Nazis et al want to keep speaking they can, but they’ll have to go elsewhere, and hopefully will be unable to get their message out. (Though it’s still a good idea to monitor them, and it’s always handy when someone outs themselves)

Your position reminds me of that of Flanders’ parents: tried nothing and all out of ideas.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:6

Question: Would you use an online platform if you knew it would (and could) boot you for something you said or did outside of, and wholly unrelated to, that specific platform? If you knew Techdirt would and could ban you for something you said on Twitter — that Techdirt keeps track of what you do on other platforms, that it secretly surveils your social media accounts — would you keep commenting on Techdirt?

cpt kangarooski says:

Re: Re: Re:7 Re:

Sure. You would too, probably, as there is literally nothing preventing Mike from implementing such a policy now, if he liked.

The various social media sites have the tools to deal with offensive and hateful speech. While they have no obligation to use them, and should not become obligated to, using them carefully but effectively would be responsible, socially beneficial, and could prove helpful to the platforms themselves in trying to stave off misguided attempts to regulate them.

If I were Zuckerberg or Dorsey I would be personally mortified at some of the speech crossing the service I ran, and would not want to have the people writing and spreading it as users, would not want to provide them with the least iota of assistance, and would want to do what I could as a private party to stamp it out and to isolate the people responsible. This is perfectly compatible with having a strong commitment to free speech because free speech does not require anyone to give up or put away their own opinions, nor to aid others that they disagree with.

This comment has been deemed insightful by the community.
Samuel Abram (profile) says:

Re: Re: Re:8 Re:

Sure. You would too, probably, as there is literally nothing preventing Mike from implementing such a policy now, if he liked.

The scale at which Techdirt operates is absolutely nothing compared to the scale at which Facebook, Instagram, Twitter, and YouTube operate.

This comment has been deemed insightful by the community.
Samuel Abram (profile) says:

Re: Re: Re:8 Re:

Also, this:

If I were Zuckerberg or Dorsey I would be personally mortified at some of the speech crossing the service I ran, and would not want to have the people writing and spreading it as users, would not want to provide them with the least iota of assistance, and would want to do what I could as a private party to stamp it out and to isolate the people responsible.

That’s akin to saying "If I were a maker of notepads and writing materials, I would do what I can to make sure that Nazis, Klansmen and other white supremacists can’t buy them." Of course, that would be possible, but you would actually ingrain the type of surveillance that is a hallmark of far-right regimes, and as is shown by Donald Trump, Duterte, Narendra Modi, and other far-right figures, it can and will be used against the same people you are trying to protect.

This comment has been deemed insightful by the community.
Wendy Cockcroft (profile) says:

Re: Re: Re:8 Re:

@cpt kangarooski you are at liberty to start your own social media platform and to run it as you please. Assume you do and that it’s instantly popular with tons of subscribers and you are now a billionaire.

Trouble enters paradise when the nazis, anti-vaxxers and other numbnuts come along and start spreading all kinds of crap on your platform. It’s easy enough to delete their crap and ban them, etc., but you run into several problems:

  1. the millions of users uploading and sharing multiple posts per person, as many as twenty a day.
  2. the bannination system you’re using bans IPs; most of the people affected are actually innocent, they were just in the same catchment area as the guilty parties.
  3. the guilty parties are resorting to TOR, etc., to set up new accounts and post anyway, so you ban TOR.
  4. the guilty parties are using VPNs to post anonymously. Since their IP keeps changing, they keep setting up new accounts. Meanwhile, innocent VPN users are being kicked off the platform.
  5. both the innocent and the guilty are freaking out about you on other platforms and the news media has picked this up.

What are you going to do about it?
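Point 2 above (IP bans sweeping up innocent users) can be sketched as a toy model. The usernames and the RFC 5737 documentation addresses are invented, not any real platform's data:

```python
# Toy model of IP-based banning: banning the troll's address also bans
# everyone who happens to share it (NAT, campus network, VPN exit).
banned_ips = set()

users = [
    ("troll_a", "203.0.113.7"),
    ("innocent_b", "203.0.113.7"),  # behind the same shared address
    ("innocent_c", "198.51.100.4"),
]

def ban_ip(ip):
    banned_ips.add(ip)

def can_post(ip):
    return ip not in banned_ips

ban_ip("203.0.113.7")  # aimed only at troll_a
blocked = [name for name, ip in users if not can_post(ip)]
# innocent_b is blocked along with troll_a; innocent_c is untouched
```

The ban key (an IP address) identifies a network location, not a person, which is exactly why the collateral damage in the scenario above is unavoidable with this mechanism.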

Scary Devil Monastery (profile) says:

Re: Re: Re:9 Re:

"What are you going to do about it?"

He won’t be in a position to do a damn thing about it, because by that time his desperate grabs for control over what users do will have made his service difficult to use for most legitimate visitors and completely impossible for those who truly need it (like Chinese dissidents trying to post anonymously).

He’s now no longer a billionaire because his platform has become a loss leader, and no matter what he does his credibility is shot.

Meanwhile, two URLs down the road, a new startup, "Footbook", is launching, promising a "grounds-up" environment where the only moderation in groups is done by the invited members of those groups. Said service gains popularity with everyone, including, yes, the nazis and misogynists.

Kangarooski’s problem is that the dual-use problem has no solution in reality OR theory. Yet he wants to think it does.

bhull242 (profile) says:

Re: Re: Re:8 Re:

Who says that Zuckerberg and Dorsey don’t feel like that? They do their best to moderate content that they find too offensive to host on their respective platforms. It’s just that it’s impossible to do it perfectly at the scale these services operate. Too much content from too many users in too many languages with too many gray areas that have to be judged by simultaneously too many and too few moderators, with too many trolls trying to gum up the works by flagging too many perfectly benign posts on top of that.

Heck, even a smaller platform is likely to struggle given the global scale of the internet and the amount of subjectivity involved in making moderation decisions. It’s just not anywhere close to as easy as you think.

Wendy Cockcroft (profile) says:

Re: Re: Re:9 Re:

It’s not easy at all. With users uploading and sharing content over and over again, you’d need at least one moderator per hundred or so users to keep on top of it all. No one could afford to do that, so they automate as much as possible. Even if they could afford that many moderators, the choices they make are subjective, and there’s a lot of burnout over the horrible crap they have to deal with.

This comment has been deemed insightful by the community.
urza9814 (profile) says:

Re: Re: Re:7 Re:

Depends who they’re kicking out, just like any other private property. I’m not coming over to your house if I hear you’ve been kicking out anyone you think might be homosexual, but if I hear you’ve been kicking out Nazis, I’ll gladly come help.

It’s one thing when we’re talking about government censorship, where they might throw you in prison for the rest of your life if they don’t like what you’ve said. But as long as the worst they can do is kick you off their website, I’m not sure I see the problem.

And frankly, I think this whole idea is a large part of what is wrong with our society today. Everyone acts like you just have to let jerks be jerks and you can’t say anything or do anything about it. Acting like all that matters is money, and if someone is giving you money, you shouldn’t care if you’re serving Adolf Hitler himself. Screw morality, it’s all about the profits. We collectively need to stop acting like we have to help people who are only trying to screw us over. Yes, you should let people be themselves, let them exercise their human rights, and not try to force them to do or not do something…but you also shouldn’t let them force you to do or not do something that you don’t agree with either. You think the KKK is going to let someone else publish an article about the benefits of diversity in their quarterly newsletter? Probably not. So why do we feel like we’re required to let them publish in ours?

What’s the difference, really? Is it because Twitter is a big company? Is it some perverted idea of a "duty to shareholders"? Or just an inability to distinguish between large corporations and national governments? I don’t get it…

Anonymous Coward says:

Re: Re: Re:8 Re:

It’s one thing when we’re talking about government censorship,

You know, if you don’t like what the US Government does, you can just move to another country. What’s the big deal?

What’s the difference, really? Is it because Twitter is a big company? Is it some perverted idea of a "duty to shareholders"? Or just an inability to distinguish between large corporations and national governments? I don’t get it…

Yes, it is absolutely a matter of size, and the power it gives you over others. The whole public-versus-private distinction isn’t as cut and dried as you seem to think, but the important issue is that, public or private, once something gets to a certain size and enjoys a certain amount of power, it needs to be controlled.

That is far, far more important than people’s fantasies of being allowed to control enormous "private" preserves.

This comment has been deemed insightful by the community.
urza9814 (profile) says:

Re: Re: Re:9 Re:

You know, if you don’t like what the US Government does, you can just move to another country. What’s the big deal?

Only if the US Government allows me to. At the moment, they would not.

Yes, it is absolutely a matter of size, and the power it gives you over others. The whole public-versus-private distinction isn’t as cut and dried as you seem to think, but the important issue is that, public or private, once something gets to a certain size and enjoys a certain amount of power, it needs to be controlled.

You know, once upon a time, the belief was that no private company should be allowed to reach that amount of size and power. That’s why we created monopoly and anti-trust legislation. It’s a shame nobody seems willing to use that particular tool anymore…but if these services are really so essential to society then perhaps we ought to exercise our right of eminent domain and turn them into federal agencies. That is the proper way to bind a company to obey the constitution and uphold civil rights.

You aren’t going to fix the problem by thinking up some new regulation when the problem only exists because of people ignoring existing regulation. They’ll just ignore the new ones too.

This comment has been deemed insightful by the community.
Samuel Abram (profile) says:

Re: Re: Re:10 Re:

You know, once upon a time, the belief was that no private company should be allowed to reach that amount of size and power. That’s why we created monopoly and anti-trust legislation. It’s a shame nobody seems willing to use that particular tool anymore…but if these services are really so essential to society then perhaps we ought to exercise our right of eminent domain and turn them into federal agencies. That is the proper way to bind a company to obey the constitution and uphold civil rights.

As I said before, how will Facebook and Twitter be broken up? Their size is in the people who use them, not the assets they control. There’s a far clearer and more sensible argument for breaking up Amazon, but I have yet to see a similar argument for breaking up Facebook, short of forcing people not to use the platform and decoupling it from Instagram and Oculus.

This comment has been deemed insightful by the community.
urza9814 (profile) says:

Re: Re: Re:11 Re:

As I said before,how will Facebook and Twitter be broken up? Their size is in the people who use them, not the assets they control.

Couldn’t a similar thing have been said of Bell back in the day? And the approach to breaking them up would be more or less the same — federate the network. Facebook probably already has the tools to do this mostly in place — it’s not like they’re running on one server or one datacenter, they clearly have systems in place to allow those servers to communicate between each other. You’d need to extend those protocols pretty significantly, but it’s surely possible. Many other services have already successfully created such systems.

Although splitting out Instagram and (to a lesser extent) Oculus isn’t a terrible idea either…

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:12 Re:

federate the network.

How does that solve the moderation problem? You will still have the same number of posts to moderate, plus disagreements about what needs moderation. Oh, and you still need Section 230 to allow those separate companies to operate.

urza9814 (profile) says:

Re: Re: Re:13 Re:

Oh, and you still need section 230 to allow those separate companies to operate.

Right. Read the parent posts. I was arguing against the idea that Facebook et al should not be allowed to moderate because of their size. If their size is such that we must require them to provide a right to free speech within their platform, then they need to be a government entity, because private corporations are not required to obey free speech laws. So they should be free to moderate, as section 230 allows. I never stated any opposition to section 230.

This is an "A vs B, pick C" situation. On one side people say moderation must be strictly regulated, because of how large and powerful the platforms are. On the other side, people say reasonable moderation can’t be done because of how large and powerful the platforms are, and we must allow them to take down whatever they want for whatever reason they want, or be inundated with trolls and spam. How about we go with none of the above? Let’s build hundreds of nodes, each with their own moderation policies. Some will have humans reading every single post; some will have big automated systems with varying levels of dispute resolution; some will just be a firehose of filth. Users can pick whichever level they desire. If the nodes with more extensive moderation are more expensive to run, they can charge a fee, and the end users can determine if that fee is worth paying. Or they might decide that, for all their flaws, the automated systems provide great value.

Moderation is a "problem" because we’ve essentially decided that we need to have a single moderation standard for all of humanity. That seems pretty obviously infeasible, so why do we keep arguing about how to accomplish it? If the goal is impossible, find a new goal.
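The "hundreds of nodes, each with their own moderation policies" idea can be sketched in a few lines. The node names, policies, and posts are all invented for illustration, not a real federation protocol:

```python
# Sketch of federated moderation: every node sees the same post stream
# but applies its own policy, and users pick the node whose rules they want.
from typing import Callable

posts = ["movie review", "SPAM buy now", "heated but legal rant"]

policies: dict[str, Callable[[str], bool]] = {
    "strict":   lambda p: "SPAM" not in p and "rant" not in p,
    "moderate": lambda p: "SPAM" not in p,
    "firehose": lambda p: True,  # no moderation at all
}

def feed(node: str) -> list[str]:
    """What a user on `node` actually sees."""
    return [p for p in posts if policies[node](p)]
# Same stream, different feeds: no single global standard is required.
```

The design choice here is that moderation becomes an edge decision (per node, per user) rather than a property of the network itself, which is roughly how federated systems like email and the Fediverse already handle it.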

Anonymous Coward says:

Re: Re: Re:14 Re:

Let’s build hundreds of nodes, each with their own moderation policies.

Have you not been following the history of 8chan, now 8kun? Platform size is not the problem; rather, a few people who think that they are the guardians of society, making lots of noise until somebody does something to make their problem go away, are the real problem. And once they have killed one target, they move on to the next one.

This comment has been deemed insightful by the community.
urza9814 (profile) says:

Re: Re: Re:15 Re:

Platform size is not the problem; rather, a few people who think that they are the guardians of society, making lots of noise until somebody does something to make their problem go away, are the real problem. And once they have killed one target, they move on to the next one.

That can also be understood as a problem of platform size and design. It’s not resilient. One node gets shut down and the whole service is gone. Try playing whac-a-mole with a thousand different nodes, especially when a good portion of them are offshore, and a good portion are some minor in his parents’ basement that’s gonna be a PR and procedural nightmare to go after. Try nuking the content when there’s a hundred different servers caching it, all operated by different owners or agencies.

How many years and how many millions of dollars have been spent trying to kill torrents? How well is that going? Distributed systems are very hard to attack in that way.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:16 Re:

"How many years and how many millions of dollars have been spent trying to kill torrents?"

But, that’s a completely different issue. With torrents, you have people using their own property resisting against attempts to stop them from using it the way they wish. In the case of 8channers, it’s the people who own the property telling them to GTFO.

Wendy Cockcroft (profile) says:

Re: Re: Re:15 Re:

… a few people who think that they are the guardians of society, making lots of noise until somebody does something to make their problem go away, are the real problem. And once they have killed one target, they move on to the next one.

Such people exist; I usually call them "mad proggies" except where they appear on the right wanting to ban sexual content.

That said, moderation is essential to keep discourse moving, otherwise the public space is ceded to the most obnoxious.

bhull242 (profile) says:

Re: Re: Re:12 Re:

I can agree that Facebook could be broken up into the core Facebook platform, WhatsApp, and such, but I really don’t think that would come close to alleviating the issues that make moderation impossible to do well, or please any of the people demanding that Facebook be broken up.

As for regulating ad tracking, I can tentatively agree that it would help with some of the issues, but it won’t help with moderation at all, which is a major factor in why people want to break up or get rid of Facebook so badly.

This comment has been deemed insightful by the community.
Wendy Cockcroft (profile) says:

Re: Re: Re:13 Re:

the issues that make moderation impossible to do well or please any of the people demanding that Facebook be broken up.

While the issues wouldn’t be massively alleviated it’d be a start.

Moderation is a different thing and, at scale, is hard to do well. Mike’s idea of protocols rather than platforms is a step in that direction.

Sooner or later we’re going to have to address the social issues that drive people to bad behaviour; people are the problem here.

We also need to look harder at owning our own online spaces; I block and mute judiciously to keep crap out of my feed. I don’t mind people disagreeing with me as it makes for a lively debate— just don’t be a jerk about it.

PaulT (profile) says:

Re: Re: Re:14 Re:

"We also need to look harder at owning our own online spaces; I block and mute judiciously to keep crap out of my feed."

That is a double-edged sword, though. Yes, I certainly do wish that people would just block certain types of people, stop complaining and get on with their day. But, this is how echo chambers are created. It’s great to block, say, alt-right idiots from popping up in their feed. But others with less critical mindsets end up being convinced to block the "leftist" and "globalist" (read: "factual") news media, and end up having their entire world views shaped by the sewage that remains.

I’m not sure what the actual fix is, but saying that people would be better off if they curated their exposure to the world isn’t necessarily going to be a positive move every time. We’re already seeing what happens when people are convinced to shield themselves from opposing viewpoints (Brexit, Trump), and it will get worse if people just shut off anything they disagree with on both sides.

Wendy Cockcroft (profile) says:

Re: Re: Re:15 Re:

I’m not sure what the actual fix is, but saying that people would be better off if they curated their exposure to the world isn’t necessarily going to be a positive move every time. We’re already seeing what happens when people are convinced to shield themselves from opposing viewpoints (Brexit, Trump), and it will get worse if people just shut off anything they disagree with on both sides.

I was taught to argue both sides as a child, I’ve continued to do so ever since. It helps in debating as I can understand the other side’s viewpoint even though I disagree with it. As a result of this I follow a wide range of people on Twitter from full-on left wingers to pretty hard right, e.g. David French. This gives me a good balance, IMHO, without including anti-vaxxers, Trump cultists, and assorted freaks. Their crap still comes up in my feed when others retweet it, but it doesn’t drown out all the other stuff I’m interested in.

I understand some work is being done in terms of teaching children how to tell the difference between reliable sources and fake news; we need more of that.

As for the actual fix, we need to deal firmly with the social issues that encourage people to act like jerks.

urza9814 (profile) says:

Re: Re: Re:14 Re:

We also need to look harder at owning our own online spaces; I block and mute judiciously to keep crap out of my feed. I don’t mind people disagreeing with me as it makes for a lively debate— just don’t be a jerk about it.

I REALLY like the way you phrased that, although personally I’d like to take it a bit more literally. I want more distributed platforms, with more nodes running from peoples’ basements and living rooms. Own your online spaces so thoroughly you can pick them up and hold them in your hands.

Not everyone can be a sysadmin, sure…but each "community" can have at least one, whether that community is geographic or ideological.

Wendy Cockcroft (profile) says:

Re: Re: Re:15 Re:

I like your idea, Urza, but it needs work if it’s going to be put into action. I’m thinking of an item you can plug in and switch on as most people — myself included — are not technically minded. Forming a community of people running nodes would be problematic as finding such people might not be easy. I wouldn’t know where to look among my own neighbours, for example.

If there was a way to collaborate online to locate nearby nodes and connect to them via the kind of portable system I’m thinking of, that would work, but it would rely on like-minded people living nearby and on having a regular ISP setup as a backup in case that failed. Could it be done? I don’t know enough about mesh networking to do anything but imagine and hope.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:16 Re:

"I’m thinking of an item you can plug in and switch on as most people — myself included — are not technically minded."

But, then, what’s the point of such a device? If the idea is to get away from centralised management, it will fail: all you’re doing is handing nodes to people who have no idea how to administer them, and who will therefore pass power to the people who do the admin for them – or to bad actors who can easily compromise the unmonitored devices.

"I don’t know enough about mesh networking to do anything but imagine and hope."

But, the issue here isn’t networking, it’s moderation of content. That’s a completely different issue that won’t be fixed by putting devices in the hands of people who don’t have the knowledge or will to properly administer the devices they already have.

This stuff you’re describing wouldn’t go into the hands of people who would use it properly. It would go into the hands of the types of people who put their 3-year-old unattended in front of YouTube for 4 hours, then complain that something unsuitable for a toddler was shown.

Anonymous Coward says:

Re: Re: Re:15 Re:

I want more distributed platforms,

How do you give them the discoverability of the centralised platforms? The big problem with distributed systems is that people form little cliques with people they know, and only rarely, if ever, form links with those outside their cliques, unless the other person is famous and well known. A federated system makes it easier to follow the likes of Trump, while making it difficult to find and link with a local activist who is worth supporting.

Anonymous Coward says:

Re: Re: Re:10 Re:

The US has no controls on emigration that I know of. Maybe you are a special case.

But my whole point is that it’s not practical for most people to just move anyway. Nor is it practical to move off of Facebook or Twitter if you actually want to communicate with people. Network effects.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: Re:10 Like fixing a papercut through amputation

but if these services are really so essential to society then perhaps we ought to exercise our right of eminent domain and turn them into federal agencies. That is the proper way to bind a company to obey the constitution and uphold civil rights.

Good luck managing that when doing so means no company will knowingly get that large, and will as a result put strict limits on use, resulting in massive damage to free speech. Meanwhile, the platforms that you just handed over to the government are overrun in short order by trolls and other people posting repulsive but perfectly legal content, driving everyone else away.

urza9814 (profile) says:

Re: Re: Re:11 Like fixing a papercut through amputation

resulting in massive damage to free speech

No it won’t. Whether or not a private company chooses to publish your speech has no impact on your free speech rights.

and the platforms that you just handed over to the government are overrun in short order by trolls and other people posting repulsive but perfectly legal content, driving everyone else away.

Yeah, just like our public schools and public libraries and public parks…oh wait, no, those are just fine. If it was a problem though, I would imagine the result would be development of user-side solutions. Why do you think users are incapable of determining what they want to see without Facebook’s help? Why do we need Big Brother to tell us what we should read and watch and discuss?

Of course, my ultimate preference would be NONE OF THE ABOVE. I want private services that aren’t allowed to become so large and powerful. Federated social networking, where each node can set and enforce its own standards, where each user can set up their own filtering and access mechanisms, where users can freely move from one service to another. "Laboratories of Democracy" for the modern era.

But if the services are seen as essential (which I strongly disagree with — my Techdirt profile is the closest thing I have to an active social media account) and we decide we REALLY want it to be one big monolithic provider… then yes, it should be the government, and they should allow anything that is legal. Because the alternative is a privately owned and operated quasi-government entity that is kept separate from government solely so it has the legal right to violate civil liberties. And that’s just a stupid idea and an EXTREMELY dangerous precedent.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: Re:12 Like fixing a papercut through amputation

No it won’t. Whether or not a private company chooses to publish your speech has no impact on your free speech rights.

Yes and no. Legally, no, ‘you can’t use this platform’ is not a violation of free speech, but if you put strict limits on who can use what, you are still impacting speech, as people who could currently have posted suddenly find themselves barred, and if the reason for those restrictions is threats from the government for growing too large, then the first amendment likely comes into play.

Yeah, just like our public schools and public libraries and public parks…oh wait, no, those are just fine.

They also aren’t available for hundreds of millions of people to easily come to and talk at, so the comparison doesn’t hold up.

If it was a problem though, I would imagine the result would be development of user-side solutions. Why do you think users are incapable of determining what they want to see without Facebook’s help?

They can and already do do that; now take the issues that already exist with a platform like that and increase them exponentially by dumping all of the decisions on the users.

Just got three hundred spam message? Have fun going through and blocking them all, knowing you’re going to be doing the same thing tomorrow, and the day after that, and the day after that.

Getting harassed by a troll/bigot? Great, you blocked them, now get ready to do it to the next account they sign up with, and the account after that, and after that, knowing that all they have to do is create a new account to keep it up, and you either block them until they get bored or deal with whatever they throw at you.

Why do we need Big Brother to tell us what we should read and watch and discuss?

I certainly hope you don’t use a spam filter for your email if you don’t want other people telling you what you can see, and instead do all the filtering and blocking yourself.

Of course, my ultimate preference would be NONE OF THE ABOVE. I want private services that aren’t allowed to become so large and powerful. Federated social networking, where each node can set and enforce its own standards, where each user can set up their own filtering and access mechanisms, where users can freely move from one service to another. "Laboratories of Democracy" for the modern era.

Alright, running with this idea, what happens when one of those services/platforms gets an overwhelming majority of users on it thanks to the rules/standards it sets? You can’t tell people they can’t use it, as you’ve expressed the desire to allow them to move from service to service as they want, so you’re stuck with a ‘node’ where the majority of people are using it and following the standards it set, much like the current situation.

Because the alternative is a privately owned and operated quasi-government entity that is kept separate from government solely so it has the legal right to violate civil liberties.

Hold on, what civil liberties are they violating again? You yourself pointed out that they can host or not host whatever they want, so what exactly is the problem again with private platforms doing just that?

urza9814 (profile) says:

Re: Re: Re:13 Like fixing a papercut through amputation

They can and already do do that; now take the issues that already exist with a platform like that and increase them exponentially by dumping all of the decisions on the users.

Just got three hundred spam message? Have fun going through and blocking them all, knowing you’re going to be doing the same thing tomorrow, and the day after that, and the day after that.

You do understand that "filters" are not the same thing as "blocking a user", right? Have you ever run your own mail server? Because I do. It runs off an old laptop in my living room. I set up the spam filters myself. And I’ve never had to go through hundreds of messages to pick out the spam from the legitimate content. That’s just not how any of this works. Although I suppose you could do it that way if you’re some kind of masochist…?
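For what it’s worth, the distinction being drawn here — content-based filter rules versus blocking individual senders — can be sketched in a few lines. Everything below (the rules, the addresses, the `classify` helper) is a hypothetical illustration, not anyone’s actual mail setup:

```python
# Minimal sketch of rule-based mail filtering, as distinct from
# blocking individual senders. All rules and addresses here are
# hypothetical examples, not a real configuration.

BLOCKED_SENDERS = {"troll@example.net"}       # explicit per-user blocks
SPAM_KEYWORDS = {"free money", "act now"}     # content-based filter rules

def classify(sender: str, subject: str, body: str) -> str:
    """Return 'blocked', 'spam', or 'inbox' for a message."""
    if sender.lower() in BLOCKED_SENDERS:
        return "blocked"                      # never reaches the user at all
    text = f"{subject} {body}".lower()
    if any(kw in text for kw in SPAM_KEYWORDS):
        return "spam"                         # routed to a spam folder, recoverable
    return "inbox"

assert classify("troll@example.net", "hi", "") == "blocked"
assert classify("anyone@example.com", "ACT NOW", "limited offer") == "spam"
assert classify("friend@example.com", "Lunch?", "Free this week?") == "inbox"
```

The point of the sketch is that the keyword rules and the sender blocks are separate, independently tunable lists, which is why a self-hosted filter doesn’t reduce to manually picking spam out of the inbox.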

I certainly hope you don’t use a spam filter for your email if you don’t want other people telling you what you can see, and instead do all the filtering and blocking yourself.

Yes, as mentioned above, I do actually, and have for about five years now. It’s great. I used to lose a TON of mail to the spam filter back when I was with gmail, and spam was still getting into my inbox too. Now I get less spam, and have never had a single legitimate message filtered out.

Alright, running with this idea, what happens when one of those services/platforms gets an overwhelming majority of users on it thanks to the rules/standards it sets? You can’t tell people they can’t use it, as you’ve expressed the desire to allow them to move from service to service as they want, so you’re stuck with a ‘node’ where the majority of people are using it and following the standards it set, much like the current situation.

The difference is that if people don’t like the way that node is filtering/moderating, they can leave the node without leaving the network. Right now I have to choose between blindly accepting whatever choices Facebook wants to make, with no possible recourse, or not talking to most of my friends again. FWIW, I did take the latter option…

Hold on, what civil liberties are they violating again? You yourself pointed out that they can host or not host whatever they want, so what exactly is the problem again with private platforms doing just that?

I never said they were. Read that paragraph — and probably the thread it’s a part of — again. It’s a hypothetical. IF we deem them to be an essential monopoly, THEN continuing to operate the way they currently do would make them guilty of civil rights violations.

bhull242 (profile) says:

Re: Re: Re:14 Like fixing a papercut through amputation

For the record, not everyone with an email address has both the resources and the technical know-how to set up and run their own email server.

Also, I do believe that Techdirt has mentioned a similar idea to what you propose about Facebook working more similarly to emails, where you use nodes or “protocols” instead of platforms. I’m certainly not against the idea, and I think it would solve a lot of the major issues, but I don’t think it’ll completely solve the moderation issue; merely alleviate it. But then, I don’t think it’s possible to completely solve the issue without fundamentally changing human nature, severely restricting access to a very select few, and/or changing how math and physics work, which aren’t actually options, so that’s probably the best solution one could have.

This comment has been deemed insightful by the community.
urza9814 (profile) says:

Re: Re: Re:15 Like fixing a papercut through amputation

For the record, not everyone with an email address has both the resources and the technical know-how to set up and run their own email server.

This is why I talk about hundreds or thousands of nodes, not hundreds of millions of them. There’s a pretty huge area in between "every person using the same one service" and "every service being used by only one person". Notice that while a TON of people just use Gmail, any of us can still set up our own email server if we want to, we can share it with our friends if we want to (FWIW, I’m not the only person using my email server; it’s small, but it does have a couple other users), and we can still email those Gmail users if we want to. You can’t do any of that with any of the major social networking platforms.

Also, I do believe that Techdirt has mentioned a similar idea to what you propose about Facebook working more similarly to emails, where you use nodes or “protocols” instead of platforms.

Yup, exactly the same idea. Although I wasn’t a huge fan of that article as I think it misrepresents the true work that needs to be done. Yes, we need protocols instead of platforms. But the problem isn’t building those protocols — we have PLENTY already. The problem is getting the general public to start using the things. Which, as you said, is mostly about human nature rather than any technical development. If I had a solution to that, I’d be implementing it already…

Anonymous Coward says:

Re: Re: Re:14 Like fixing a papercut through amputation

Your filter could also block a user. It just depends on your rules.

Also note that you said you get a lot LESS spam. Not no spam. Therefore your filter makes mistakes filtering out the spam from legitimate mail. That’s kind of the point of this discussion. Any automated system is going to fail since it isn’t going to be able to determine nuance, context and/or meaning.

You are making no sense. The current network is the Internet. The node is Facebook. You can leave the Facebook node and go to the MySpace node. How does that differ from your example? Or do you mean MySpace and Facebook should be allowed to post to each other’s sites? How would that work? Let’s say you are a racist. Facebook bans your racist speech, so you switch to MySpace, which allows it. Why on god’s green earth would Facebook allow your racist speech just because you’re broadcasting it from MySpace? Why wouldn’t they block it at their border per their rules, which caused you to leave in the first place?

urza9814 (profile) says:

Re: Re: Re:15 Like fixing a papercut through amputation

Your filter could also block a user. It just depends on your rules.

Yes, it certainly could.

Also note that you said you get a lot LESS spam. Not no spam. Therefore your filter makes mistakes filtering out the spam from legitimate mail. That’s kind of the point of this discussion. Any automated system is going to fail since it isn’t going to be able to determine nuance, context and/or meaning.

You are missing my point entirely. I don’t want everything filtered, and I don’t want all filtering removed. I want the end user to be free to choose the level of filtering they want.

You are making no sense. The current network is the Internet. The node is Facebook. You can leave the Facebook node and go to the MySpace node. How does that differ from your example? Or do you mean MySpace and Facebook should be allowed to post to each other’s sites?

Yes. Exactly like email currently works and has worked for decades. Why is that so difficult to understand? Email used to be a bunch of isolated networks just like social networking is today. Email shifted from closed, isolated protocols to open interconnected ones, so why can’t social networking do the same?

How would that work? Let’s say you are a racist. Facebook bans your racist speech so you switch to MySpace which allows it. Why on god’s green earth would Facebook allow your racist speech just because your broadcasting it from MySpace? Why wouldn’t they block it at their border per their rules which caused you to leave in the first place?

Why doesn’t Google just block any email I send to their users since I found their policies bad enough to leave? Because they have customers who do want those mails, and because they don’t want to fundamentally break the protocol on which they were built.

Now, in a world of diaspora* pods instead of Facebook, would there be DSA pods that refused all content from Stormfront pods and vice-versa? Sure. And if you want to cut those people off, go ahead and join one. But there would also be pods in the middle that allow you to have friends on both sides. That’s the point. ANY system will fail, as you said yourself. It will fail because people are different. They want different policies, and they even interpret the same policies in different ways. How do you think someone can construct one single policy — any policy, even "absolutely zero moderation" — that will please everyone? It can’t be done, that’s why we’re having this discussion, and we’ll have to KEEP having this discussion until we fundamentally change something about the environment in which it is occurring.
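The pod-level policy described above can be sketched simply: each pod keeps its own denylist of other pods and drops content from them at its border, while pods "in the middle" accept from both sides. The pod names and the `Pod` class below are hypothetical, loosely modeled on how diaspora*-style federation is described here:

```python
# Sketch of per-pod federation policy: each pod decides for itself
# which other pods it will accept content from. Pod names and the
# Pod class are hypothetical illustrations.

class Pod:
    def __init__(self, name, denied=()):
        self.name = name
        self.denied = set(denied)   # pods this pod refuses to federate with
        self.timeline = []          # posts this pod's users actually see

    def receive(self, from_pod, post):
        """Accept a post unless it originates from a denied pod."""
        if from_pod in self.denied:
            return False            # dropped at the border, per local policy
        self.timeline.append((from_pod, post))
        return True

middle = Pod("middle.example")                                  # accepts from everyone
strict = Pod("strict.example", denied={"stormfront.example"})   # blocks one pod

assert middle.receive("stormfront.example", "a post") is True
assert strict.receive("stormfront.example", "a post") is False
```

The design choice this illustrates is that moderation policy lives at each node’s border rather than in one central rulebook, so users pick a policy by picking a pod.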

Anonymous Coward says:

Re: Re: Re:16 Like fixing a papercut through amputatio

If you think a more distributed system will resolve the demands for moderation, then you have not been following what happened to 8chan. A lot of the problem is a few loud people who do want the Internet censored, and censored according to their standards. Some of them will go hunting for sites and opinions to complain about, because finding and crusading against what they disagree with is what they do.

urza9814 (profile) says:

Re: Re: Re:17 Like fixing a papercut through amput

If you think a more distributed system will resolve the demands for moderation, then you have not been following what happened to 8chan. A lot of the problem is a few loud people who do want the Internet censored, and censored according to their standards. Some of them will go hunting for sites and opinions to complain about, because finding and crusading against what they disagree with is what they do.

It won’t stop people from whining, but it will solve the actual problems that are caused by it, by removing any legitimacy from those complaints. There are still people going around shouting that the Earth is flat, but that’s not affecting public policy because we all know they’re full of it.

You split up the network, you get an easy go-to response to these complaints: "If you don’t like our pod, go find another." Everyone will know that if these people are still finding things to complain about, it’s because they CHOOSE to see it. It’s not "Well, they hold my social life hostage if I leave…" — that’s a dumb argument now, but it’s a powerful one to a lot of people.

And the network operators can be more resilient to those complaints. There’s no single CEO to get dragged into Congress to testify, no international PR crisis to avoid, no massive group of shareholders to scare off. You might still have massive corporate pods that want to keep everything family-friendly for everyone, and they can do that. You’ll also have pods that tell anyone who complains to go F- themselves. If I’m running a pod from the miniature datacenter in my living room, why on earth would I care what some rando halfway across the country thinks of my moderation policies?

And suppose they somehow manage to interfere with those freedom loving pods too. Maybe they go after web hosts or DNS providers. Well, there’s more than one of those too, plus you’ve still gotta shut down each individual pod. They can brigade from one to another, playing a global game of whac-a-mole, and watch as new pods pop up every time they close an existing one.

Scary Devil Monastery (profile) says:

Re: Re: Re:10 Re:

"…but if these services are really so essential to society then perhaps we ought to exercise our right of eminent domain and turn them into federal agencies. That is the proper way to bind a company to thwart the constitution and undermine civil rights."

FTFY. Substitution and emphasis mine.

I’m only half joking, by the way. Historically, federal agencies tend to end up as hatchet men for <vested interest A>, and when I look at Bill Barr, I can’t deny that having Zuckerberg or Musk in his place would probably make me feel safer.

cpt kangarooski says:

Re: Re: Re:4 Section 230 will die if Facebook and Twi

Fewer than the number of posts. And ad software is designed around being able to track unique individuals online across accounts, and in their public and private lives. It might not be perfect, but it’s a good place to start. After all: 1) blocking obvious unacceptable speech risks false positives from the same terms used by others innocently (or occasionally ironically, or as reclamation, or in accusatory quotes, news, etc.); 2) there probably aren’t that many bad users out there stirring up trouble; 3) it’s hard to hide one’s identity well, especially from the service provider, who can look at entire histories, compare notes with others, etc.; 4) looking at a user’s history is a good way of assessing their intent, so as to avoid false positives and negatives.

Get rid of the rotten apples and hopefully the rest won’t spoil.

cpt kangarooski says:

Re: Re: Re:6 Re:

Meh. Based on my observation of how people behave online over the last 25-odd years I don’t think that there would be much uproar about it, especially not to the extent that people left major social media platforms en masse. I don’t think there are really so many of these assholes, they’re just vociferous.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: Re:7 Re:

Hey, great to hear that a person posting semi-anonymously doesn’t think decimating privacy by tracking what everyone does online would be that big of an issue.

Why, I can’t think of so much as a single way that could backfire or be abused…

cpt kangarooski says:

Re: Re: Re:8 Re:

Do you think you have that much privacy now? At least on major social networking platforms that might use their ability to moderate without liability? Because if you do, that is just adorable.

If we’re going to lack privacy due to advertising we might as well leverage the same capabilities for something socially useful.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: Re:9 An authoritarian/corporate wet-dream

Do you think you have that much privacy now?

Having taken several steps to limit what can be tracked, and moreover compared to the idea you’re pushing? I’ve got tons. Even someone who has taken zero steps to limit how they can be tracked would have vastly more privacy than they would in the situation you propose.

There is a significant difference between a platform tracking what you do on that one platform and being tracked everywhere, and if platforms/companies are trying to track your actions past their platform the argument should be to limit that if not outright shut it down, not accept and normalize it.

If we’re going to lack privacy due to advertising we might as well leverage the same capabilities for something socially useful.

Congrats, you just utterly destroyed any standing you might have had objecting to companies tracking people and what they say/do, and gutted your ability to object to any government doing the same, because if you want to argue that a complete loss of privacy online is acceptable to deal with trolls then tracking people that the government labels as actually dangerous just became even more acceptable.

urza9814 (profile) says:

Re: Re: Re:10 An authoritarian/corporate wet-dream

There is a significant difference between a platform tracking what you do on that one platform and being tracked everywhere, and if platforms/companies are trying to track your actions past their platform the argument should be to limit that if not outright shut it down, not accept and normalize it.

Facebook, Google, and others already do track what you do on a huge number of other websites. Everywhere you see "Login with Facebook" or "Like this on Facebook" or "Share to Facebook"…those Facebook icons come with tracking code. You are being watched, even when you aren’t on their site, even if you don’t have an account.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:4 Section 230 will die if Facebook and Twi

"Focus on the Users" is the equivalent of arguing that we can solve the world’s safe water problem by looking at the ocean and "focusing on the fresh water".

You can filter out some fresh water from a small amount of salt water, but if someone tells you to remove all the impurities from the ocean while leaving all the good water….

Well, what do you do about all the organisms that require those "impurities" to function, for starters? And then, how do you scale up without making a huge mess of things?

cpt kangarooski says:

Re: Re: Re:5 Section 230 will die if Facebook and

Who, precisely requires Nazis, racists, misogynists, etc. to be present in order to "function"?

And if there is anyone who does — who is not themselves in the same boat — do we really care so much that we’re willing to let society go down the crapper in order to cater to them?

You might want to listen to yourself before posting.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: Re:6 Section 230 will die if Facebook

Who, precisely requires gays, blacks, women who think they deserve equal rights, etc. to be present in order to "function"?

And if there is anyone who does — who is not themselves in the same boat — do we really care so much that we’re willing to let society go down the crapper in order to cater to them?

You might want to listen to yourself before posting.

Most people these days would likely agree that the groups you listed are indeed disgusting, the problem is that not too long ago any number of other groups would likely have been seen the same, and depending on who gets to set the rules the same standards that would give the groups you listed the boot could all too easily give other groups the boot as well, as ‘society simply doesn’t need those kinds of people.’

cpt kangarooski says:

Re: Re: Re:7 Section 230 will die if Face

Sure. But I still think that it’s worth a shot. Self-awareness and trepidation should be watchwords for the implementors. And there very well might be benign groups that have bad reputations and who are marginalized who should not be shut out. But just as happened in the real world they’ll have to organize and use platforms more open to them to gain access to the majors and to get better reputations. I’m sorry it should be that way, if it happens, but it’s viable.

Better than to have no standards at all. We’ve been trying that for a while and it clearly isn’t working.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:8

there very well might be benign groups that have bad reputations and who are marginalized who should not be shut out

And how, pray tell, would you be able to tell the difference if the people in power are telling you that “feminists” and “racists” are one and the same in terms of being hateful groups of people? How will you know, beyond the shadow of any reasonable doubt, whether an antifascist group isn’t near as threatening to the safety and stability of society as a fascist group when the government is telling you (and social media sites) that the antifascists are terrorists?

The problem with trying to ban certain groups of people from society (or parts thereof) based on who they are or what they believe in lies with a simple notion: One day, you may be in one of those groups without ever realizing it. Who will speak for you when you are labeled an “undesirable” and you have all but said that “undesirables” should be banned from (at least parts of) society?

PaulT (profile) says:

Re: Re: Re:8 Section 230 will die if

"But just as happened in the real world they’ll have to organize and use platforms more open to them to gain access to the majors and to get better reputations"

So… your solution is to kick out the people you don’t personally like, and tell the innocent people you’ve just marginalised that they’re "separate but equal" and they need to create their own platform to have a voice, then hope that doesn’t get hijacked by the people you don’t like so they can be banned again?

That’s really better than simply accepting that some people whose speech you dislike have the same freedom as the people whose speech you do like, and to then hold them directly responsible for that speech rather than the tools they happen to use to exercise their freedoms?

cpt kangarooski says:

Re: Re: Re:9 Section 230 will die

I think you’ve misunderstood my position.

I support keeping the section 230 safe harbor as it is. It gives sites the freedom to choose whether and how to moderate user-posted material. I think that’s much better than any alternative.

But I am suggesting that the sites actually use their freedom to do so, instead of just allowing everything out of little more than sheer laziness. Facebook and Twitter etc. are protected legally and are fully capable of kicking people off for being Nazis, etc. I think they should do so. That’s all.

I am completely against repealing section 230 or having the government participate in any way in what should be a voluntary clean-up of the platforms.

People kicked out have the freedom to build their own sites to chat with each other just as they do now. And hopefully few or no people who don’t deserve to be kicked out will be. But there are plenty of people who absolutely should be.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:10 Section 230 will

Facebook and Twitter etc are protected legally and are fully capable of kicking people off for being Nazis, etc. I think they should do so. That’s all.

They already do, depending on what you define those terms to mean. It seems like your complaint is that their moderation does not meet your standards.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:10 Section 230 will

"But I am suggesting that the sites actually use their freedom to do so."

They do. But, the sheer level of content they process means there will always be false positives and they won’t catch everything. Combined with the fact that they get attacked no matter what position they take, they use their resources where they’re most effective.

"just allowing everything out of little more than sheer laziness"

What world do you live in where you think this is true?

"Facebook and Twitter etc are protected legally and are fully capable of kicking people off for being Nazis, etc"

…and they do, which is why there’s so many alt-right types whining about being deplatformed and trying to force various levels of legal threats through, up to and including this being used as a reason given by some politicians as to why section 230 should be revoked completely.

I agree with your position, but you seem to be viewing a different reality to the one I see.

Scary Devil Monastery (profile) says:

Re: Re: Re:6 Section 230 will die if Facebook

"Who, precisely requires Nazis, racists, misogynists, etc. to be present in order to "function"?"

"The first amendment" comes to mind.

First of all, who’s going to determine what is offensive enough to merit a ban from the online environment?

Secondly, free speech was never meant to be used so the vocal majority could make use of it. It’s expressly intended to provide the minority view the ability to make itself heard.

Third, offensive speech is how the majority vaccinates itself from toxic memes. As long as bigotry is hidden the only ones to fight it will be the targeted victims. We NEED to have the bigoted messages in the public space before we can even realize that it exists – and that it’s bad.

Shut the bigots up and all you end up with is that twenty years later down the line they’re in parliament, still carrying the same stale old opinion, under newer, more acceptable terminology.

Anonymous Coward says:

Re: Section 230 will die if Facebook and Twitter won't change

I’m sure Twitter, et al., police their content just fine. What they cannot do is police the content of all Twitter users, aka "not their content", in a way that will satisfy you or anyone else. Thankfully, you’re not the decider on this issue because clearly your viewpoint is skewed toward "technologically illiterate".

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

230 exists so Facebook and Twitter can legally moderate content on their respective platforms. Get rid of 230 because those platforms aren’t doing a good job based on someone’s arbitrary standards, and one of two things happens:

  1. Facebook and Twitter shut down to avoid legal liability for third party posts, or…
  2. Facebook and Twitter stop moderating content altogether to avoid legal liability for knowledge of illegal content in third party posts.

Neither outcome bodes well, especially not for smaller services like Mastodon instances based in the U.S. — which will have to make the same decision on what to do because 230 applies equally as much to them as it does to Facebook and Twitter.

Could Facebook and Twitter be doing a better job of moderating content on their respective platforms? Yeah, probably. But should their efforts be the benchmark by which 230 lives or dies? Absolutely not. To believe otherwise is to hinge the fates of all smaller platforms on the fates of Facebook and Twitter. If you want that to happen, well, that’s a hell of a thing to want.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:

The entire, original, on-the-record intent of 230 was to allow legal moderation of speech. Any change to 230 that goes against said intent will invite one of the two outcomes I mentioned. No company wants to face legal liability for third party posts, so it’ll either shut down a platform to avoid that liability altogether or leave that platform unmoderated to avoid the kind of liability laid out in the Prodigy ruling. You can’t take away a platform’s right to legally moderate speech post-Prodigy and still expect the platform to moderate speech.

This comment has been flagged by the community. Click here to show it.

urza9814 (profile) says:

Re: Re: Re:4 Re:

Not if it’s worded as "you are directly liable for anything you publish" rather than "you must moderate anything you publish". But the end result would be the same. No company would dare publish user content without moderating it under such a system.

You are assuming that they will either leave 230 entirely intact, or remove it entirely and not replace it. If they decide they need to go a bit further to "fix" the "problem", then who knows where that would end up…

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:5

Not if it’s worded as "you are directly liable for anything you publish"

I hate to break this to you, but 230 jurisprudence already says that. A platform such as Twitter is legally responsible for any speech the platform itself publishes (or directly helps a third party publish or write, e.g., Backpage). 230 places liability for third party speech where it belongs: on the person who posted it.

We don’t let people sue Craftsman if someone uses one of that company’s tools to kill a person. For what reason should we let people sue Twitter if no Twitter employee wrote/published defamatory speech themselves or directly helped write/publish defamatory speech from a third party?

This comment has been deemed insightful by the community.
Wendy Cockcroft (profile) says:

Re: Re: Re:6 Re:

No, he wants it held for moderation and approved before being posted, i.e., he’s all about censorship and approved speech. This sounds reasonable to people who honestly believe it will only be used to weed out Nazis, but as the experience of the people of Poland has shown, it can and would be abused.

This comment has been deemed insightful by the community.
Wendy Cockcroft (profile) says:

Re: Re: Re:6 Re:

It’d lead to the collapse of the business. Consider this: we have a bunch of resident trolls who whine about having their posts hidden behind grey text. Imagine the cries of "Censorship, by God!" from all the "conservatives," etc., should such a scheme be implemented.

You’re asking for the evisceration of the First Amendment. No.

This comment has been deemed insightful by the community.
BG (profile) says:

Re: Section 230 will die if Facebook and Twitter won't change

You have demonstrated a fundamental failing commonly seen amongst proponents of section 230 needing to be limited or removed entirely.

It is NOT FB’s or Twitter’s content.

It is their USERS content.

The content served up by FB, Twitter, YouTube, etc. is the very definition of user generated content (UGC). That includes the adverts, etc. as that content is generated by users, the only difference being that they are paying users (advertisers, snake-oil salesmen, etc.) unlike the rest of us average Joes.

I genuinely didn’t think the concept was difficult to grasp, but f##king Hell, some people make me facepalm so hard about this that I may need corrective surgery for a broken nose.

Anonymous Coward says:

Even with 230 it is imbalanced

I don’t disagree with the overall message of the article, but even with Section 230 there is still an imbalance between the powerful and the powerless. One need only look at how FB and Twitter treat celebrity and politician accounts compared to an average user when enforcing their moderation policies. It would be interesting if Section 230 were rewritten to state that you have to adhere to your standards consistently, regardless of account owner, to keep the protections in place. I’m not saying that would be a good idea, just interesting. I would imagine there would be a lot of screaming from some segments of the political and celebrity class if their accounts were all of a sudden banned for repeated violations.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

I’m not saying that would be a good idea

Good, because you shouldn’t be saying that. An idea being “interesting” doesn’t make it a “good” idea. And while I agree that platforms have double standards for celebrity/corporate/public figure accounts, you can’t fix that issue by having the government force “neutrality” on platforms. And if you believe such “rules” would be limited only to the largest platforms, congratulations — you played yourself.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:

"platforms have double standards for celebrity/corporate/public figure accounts"

They have double standards for one specific reason – those accounts attract users and advertisers. Like it or not, the reason why some random alt righter gets blocked for saying the same thing that Trump is saying is because Trump attracts users, while the other guy attracts nothing they can profit from.

There’s no nefarious scheme no matter how much some people like to complain – the people making the companies money are allowed to get away with more than people who are costing them money. Enforcing some arbitrary rules won’t change simple economics, it would just create boring platforms where most people are scared to say anything outside of the blandest safe words.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re:

The other thing French points out, if you read the article, is that 230 provides for a level playing field:

Large internet companies that possess billions of dollars in resources would be able to implement and enforce strict controls on user speech. Smaller sites simply lack the resources to implement widespread and comprehensive speech controls. Many of them would have no alternative but to shut down user content beyond minimalist input.

Repealing it would just entrench the big players, as we’ll see that Article 17 of the Copyright Directive will do in Europe.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re: Re:

"Smaller sites simply lack the resources to implement widespread and comprehensive speech controls"

Well, that’s partly true. The other possibility is that Google gets another side business in licencing ContentID to other companies, giving it vastly more control over other sites on the internet than it already has.

There are two likely endgames from removing Section 230, and both of them involve handing the internet over to the very companies the anti-230 crowd pretend they’re fighting against.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Even with 230 it is imbalanced

"even with section 230 there is still imbalance between the powerful and powerless."

Yes, so imagine how bad it would be without it.

"It would be interesting if Section 230 was rewritten to state you have to adhere to your standards consistently regardless of account owner to keep the protections in place"

It would, but not in the way you’re thinking.

"I would imagine there would be a lot of screaming from some segments of the political and celebrity class if"

Yet, the alt right assholes who are currently doing most of the screaming would still be "deplatformed".

Sok Puppette says:

I have great hopes for the repeal of 230...

… because it may finally drive people off of "platforms" and bring something closer to real freedom of Internet speech.

The "free" speech you have now on Facebook, or Twitter, or whatever, is, in practice, freedom to say anything that’s commercially convenient for those platforms to allow. If they don’t like it, they can and do ban it. Maybe they can’t make it disappear entirely, but they can disrupt it, disadvantage it, and hide it to the point where it can’t really take root. And, yes, they can indeed do that "at scale".

Yeah, yeah, private property, whatever yadda yadda. I don’t really care. The reality is that you or I can’t say certain things and actually have much of an audience. If you’d like to join an audience, you won’t necessarily find the speakers who would interest you. Network effects matter.

So we end up with whatever’s not too upsetting to "nice" people… except with vast amounts of extra leeway given to upsetting content if it happens to bring in enough cash.

By the way, censorship isn’t their only form of manipulation. There’s an audience boost for whatever puts eyeballs on ads, regardless of whether it’s true, whether it’s enlightening, whether it’s entertaining, whether it’s good for anybody’s mental health, or whether it’s good for anything other than clicks. It’s not like the platforms are boosting socially beneficial speech. Mostly they boost small, frequent dopamine rushes. Shit like Twitter succeeded because it made it harder to write a complete thought than to drop a stupid sound bite.

… and nothing engages people like a fight. It doesn’t matter whether the platforms want to start fights. They optimize for clicks. If fights create clicks, then they will naturally evolve into fight-starting machines.

We now have the basics of the technology to build mass communication that’s really decentralized, beyond what even Usenet ever was, and really hard to control. We can even make a decent try at showing the content the reader wants to see, and only that… rather than the content that it’s profitable for clickmongers to have the reader see.

But none of that will ever take off if it’s outcompeted by entrenched, sort-of-vaguely-good-enough compromise alternatives. As long as those platforms are out there operating "at scale" and sucking the oxygen out of Internet discourse, there are going to be hard limits on what you can say.

The platforms get a big fat exemption from previously existing law in the form of Section 230. That amounts to a subsidy that has helped them to grow to "scale". I’m not saying it’s their only advantage, because it is not. But it’s a real advantage, nonetheless.

If they lose that exemption, and that gives people the kick in the pants they need to move to truly censorship-resistant, manipulation-resistant ways of communicating, I have trouble feeling bad about it. There’s reason for optimism, especially because governments outside the US are also cracking down on anything that has enough "scale" to be a target.

urza9814 (profile) says:

Re: Re: I have great hopes for the repeal of 230...

It would certainly be an interesting alternative…

I think it would prevent any companies from getting into social networking…or at least any domestic companies. But I’m not sure if I can consider that a bad thing.

There are two possible outcomes in my mind, depending on how the prosecution goes once the major companies are gone. One option is that non-profit distributed systems crop up, where the service provider isn’t actually storing or transmitting any content themselves, so they can’t be prosecuted. It’s possible, though, that the prosecution would then turn to individual users who end up transmitting the data. In that case you end up turning social networking into torrents or darkweb, where you’ve got a bunch of questionable offshore service providers and a web of encrypted tunnels. That’d be pretty cool, but it’d also remove about half the user base and probably result in two thirds of what’s left getting a TON of viruses and malware… but it’d be fun for a couple of us 😉

Sok Puppette says:

Re: Re: Re:2 I have great hopes for the repeal of 230...

In a peer-to-peer system, you bring your own, and you pay for it because you want to participate. Yeah, somebody has to sell it to you, but the equipment and software are general purpose, you can’t tell what any individual is using them for, and anybody can make them.

If necessary, that can be extended to the entire communication infrastructure, but in fact we’re not talking about the IP layer of fiber and routers here. We’re talking about application layer overlays that can clearly be done peer to peer. Facebook and Google are not infrastructure.

bhull242 (profile) says:

Re: Re: Re: I have great hopes for the repeal of 230...

This can’t be a profit making enterprise.

Wait, so people shouldn’t make a profit from providing a product or service that lots of people use and enjoy? I’m not necessarily saying that they’re entitled to a profit just for providing a product or service, but saying that they shouldn’t make a profit seems a bit too far, at least in a capitalist society.

This comment has been deemed insightful by the community.
bhull242 (profile) says:

Re: I have great hopes for the repeal of 230...

Here’s the thing: what would be left? Someone has to host the content for it to be online, and most people can’t afford to host their own. Additionally, hosting only your own content drastically reduces visibility for all but the most famous.

And removing §230 won’t actually fix any of the problems you say it will.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: I have great hopes for the repeal of 230...

Except you miss that it would effectively mean the end of any other company starting a communications platform.

Sure, Google, Facebook, and Twitter have problems; no one is denying that. But what you hope for would effectively be a scorched-earth, salt-the-ground policy.

What you hope for is not only foolhardy but dangerous to the entire internet as it currently exists.

Sok Puppette says:

Re: Re: I have great hopes for the repeal of 230...

Wow, people are dense.

"Platforms" are the problem. Starting more new "platforms" just perpetuates the problem. There are obvious commercial, social, and governmental pressures that would make it absolutely certain that no new platform would be any better in the long term.

It is a FEATURE that it would not be possible to start the next Google, Facebook, or Twitter.

Preserving the "internet as it currently exists" is not a goal, because what currently exists is not good enough and cannot be made good enough.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re: I have great hopes for the repeal of 230...

Preserving the "internet as it currently exists" is not a goal, because what currently exists is not good enough and cannot be made good enough.

Most people find the current Internet good enough, and use it to keep in touch with friends, learn new skills, share ideas, and engage in cooperative endeavours. They also find that there are people and places they wish to avoid, but that applies in the physical world as well, so that is not an Internet problem.

In the main, it is people who wish to force their views and politics onto others who have problems with the Internet: either because views they disagree with are not silenced, or because they express their own views in a way that results in them being moderated by various platforms, or even thrown off the platform entirely. The former think the Internet is broken because views they disagree with are not silenced; the latter, because users of platforms get them removed over their objectionable views and the objectionable way they try to force those views into every conversation.

Which group do you belong to?

This comment has been deemed insightful by the community.
Rocky says:

Re: Re: Re: I have great hopes for the repeal of 230...

Preserving the "internet as it currently exists" is not a goal, because what currently exists is not good enough and cannot be made good enough.

The internet is always changing; it’s quite different from what it was 10 years ago, and it will certainly be very different in the future from what we have today.

Repealing 230 would mean that the rate of change would drastically slow down, since the costs of doing business would sky-rocket. Unless, of course, your goal is to use the internet as a one-way street where the average internet user is relegated to being a pure consumer who can be fleeced by the big companies.

And for some reason you are focusing your ire on Google, Facebook and Twitter while ignoring the consequences for everyone else that depends on 230, like TD for example or any other site that has a discussion-forum or UGC – regardless of their size.

This comment has been deemed insightful by the community.
bhull242 (profile) says:

Re: Re: Re: I have great hopes for the repeal of 230...

Preserving the "internet as it currently exists" is not a goal, because what currently exists is not good enough and cannot be made good enough.

Unless and until there is a viable alternative that is likely to improve upon the current failings without removing the successes, adding completely new failings, or worsening other failings, “preserving the internet as it currently exists” is a perfectly reasonable goal, at least for now.

And for quite a few people, what currently exists is good enough, at least for the time being. Again, without a viable alternative, saying “this isn’t working, so something should be done” isn’t really going to help. A lot of people understand the limitations and failings of humans and current technology, so they are willing to accept the flaws in the current system in lieu of something that improves on those flaws without being worse in some other area(s).

Anonymous Coward says:

Section 230 is very simple: you can sue the user who defamed you or told lies about you, not the platform.
If America loses Section 230, free speech is greatly reduced, and other countries will follow.
Section 230 means everyone has a voice, not just the rich and powerful.
This matters most to the poor, minorities, and the oppressed.
Section 230 means people can talk about corruption, sexist harassment, LGBT rights, police brutality, and many other issues. America has many issues,
but at least it’s free.
The young especially use the web to communicate;
they don’t have the platforms rich politicians have to get their opinions known.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:

Q: Does listening to wrong-minded politicians, celebrities and echo chamber blog sites spewing lies about S230 and then repeating that garbage elsewhere without an ounce of critical thinking or independent thought make someone a "fool"?

A: Yes. Yes it does.

This comment has been deemed insightful by the community.
Shufflepants (profile) says:

I still can’t get over the fact that people are so dumb at computers that we even need a law like Section 230. No one has any problems understanding that you don’t hold Honda responsible for the actions of a bank robber because they used a Honda brand car in their getaway; or the gun manufacturers for that matter.
