The Tech Policy Greenhouse is an online symposium where experts tackle the most difficult policy challenges facing innovation and technology today. These are problems that don't have easy solutions, where every decision involves tradeoffs and unintended consequences, so we've gathered a wide variety of voices to help dissect existing policy proposals and better inform new ones.

Fighting Hate Speech Online Means Keeping Section 230, Not Burying It

from the first-do-no-harm dept

At Free Press, we work in coalition and on campaigns to reduce the proliferation of hate speech, harassment, and disinformation on the internet. It’s certainly not an easy or uncomplicated job. Yet this work is vital if we’re going to protect the democracy we have and also make it real for everyone — remedying the inequity and exclusion caused by systemic racism and other centuries-old harms seamlessly transplanted online today.

Politicians across the political spectrum desperate to “do something” about the unchecked political and economic power of online platforms like Google and Facebook have taken aim at Section 230, passed in 1996 as part of the Communications Decency Act. Changing or even eliminating this landmark provision appeals to many Republicans and Democrats in DC right now, even if they hope for diametrically opposed outcomes.

People on the left typically want internet platforms to bear more responsibility for dangerous third-party content and to take down more of it, while people on the right typically want platforms to take down less. Or at least less of what’s sometimes described as “conservative” viewpoints, which too often in the Trump era has been unvarnished white supremacy and unhinged conspiracy theories.

Free Press certainly aligns with those who demand that platforms do more to combat hate and disinformation. Yet we know that keeping Section 230, rather than radically altering it, is the way to encourage that. That may sound counter-intuitive, but only because of the confused conversation about this law in recent years.

Preserving Section 230 is key to preserving free expression on the internet, and to making it free for all, not just for the privileged. Section 230 lowers barriers for people to post their ideas online, but it also lowers barriers to the content moderation choices that platforms have the right to make.

Changes to Section 230, if any, have to retain this balance and preserve the principle that interactive computer services are legally liable for their own bad acts but not for everything their users do in real time and at scale.

Powerful Platforms Are Still Powering Hate, and Only Slowly Changing Their Ways

Online content platforms like Facebook, Twitter and YouTube are omnipresent. Their global power has resulted in privacy violations, facilitated civil rights abuses, given white supremacists and other violent groups a place to organize, and enabled foreign election interference and the viral spread of disinformation, hate and harassment.

In the last few months some of these platforms have begun to address their role in the proliferation and amplification of racism and bigotry. Twitter recently updated its policies by banning links on Twitter to hateful content that resides offsite. That resulted in the de-platforming of David Duke, who had systematically skirted Twitter’s rules by linking to hateful content across the internet while following some limits for what he said on Twitter itself.

Reddit also updated its policies on hate and removed several subreddits. Facebook restricted “boogaloo” and QAnon groups. YouTube banned several white supremacist accounts. Yet despite these changes and our years of campaigning for these kinds of shifts, hate still thrives on these platforms and others.

Some in Congress and on the campaign trail have proposed legislation to rein in these companies by changing Section 230, which shields platforms and other websites from legal liability for the material their users post online. That’s coming from those who want to see powerful social networks held more accountable for third-party content on their services, but also from those who want social networks to moderate less and be more “neutral.”

Taking away Section 230 protections would alter the business models of not just big platforms but every site with user-generated material. And modifying or even getting rid of these protections would not solve the problems often cited by members of Congress who are rightly focused on racial justice and human rights. In fact, improper changes to the law would make these problems worse.

That doesn’t make Section 230 sacrosanct, but the dance between the First Amendment, a platform’s typical immunity for publishing third-party speech, and that same platform’s full responsibility for its own actions, is a complex one. Any changes proposed to Section 230 should be made deliberately and delicately, recognizing that amendments can have consequences not only unintended by their proponents but harmful to their cause.

Revisionist History on Section 230 Can’t Change the Law’s Origins or Its Vitality

To follow this dance it’s important to know exactly what Section 230 is and what it does.

Written in the early web era in 1996, the first operative provision in Section 230 reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

When a book or a newspaper goes to print, its publisher is legally responsible for all the words printed. If those words are plagiarized, libelous, or unlawful then that publisher may face legal repercussions. In the terms of Section 230, they are the law’s “information content provider[s]”.

Wiping away Section 230 could revert the legal landscape to the pre-1996 status quo. That’s not a good thing. At the time, a pair of legal decisions had put into a bind any “interactive computer service” that merely hosts or transmits content for others. One case, Stratton Oakmont v. Prodigy, held that a web platform that did moderate content could be sued for libel (just as the original speaker or poster could be) if that alleged libel slipped by the platform’s moderators. The other, Cubby v. CompuServe, held that sites that did not moderate were not exposed to such liability.

Before Section 230 became law, this pair of decisions meant websites were incentivized to go in one of two directions: either don’t moderate at all, tolerating not just off-topic comments but all kinds of hate speech, defamation, and harassment on their sites; or vet every single post, leading inexorably to massive takedowns and removal of anything that might plausibly subject them to liability for statements made by their users.

The authors of Section 230 wanted to encourage the owners of websites and other interactive computer services to curate content on their websites as these sites themselves saw fit. But under the precedents of the day, those websites could be just as responsible as newspapers for anything anyone said on their platforms if they moderated at all.

In that state of affairs, someone like Mark Zuckerberg or Jack Dorsey would have the legal responsibility to approve every single post made on their services. Alternatively, they would have needed to take a complete, hands-off approach. The overwhelming likelihood is that under a publisher-liability standard those sites would not exist at all, at least not in anything like their present form.

There’s an awful lot we’re throwing out with the bathwater if we attack not just the abuses of ad-supported and privacy-invasive social-media giants but all sites that allow users to share content on platforms they don’t own. Smaller sites likely couldn’t make a go of it at all, even if a behemoth like Facebook or YouTube could attempt the monumental task of bracing for potential lawsuits over the thousands of posts made every second of the day by their billions of users. Only the most vetted, sanitized, and anodyne discussions could take place in whatever became of social media. Or, at the other extreme, social media would descend into an unfiltered and toxic cesspool of spam, fraudulent solicitations, porn, and hate.

Section 230’s authors struck a balance for interactive computer services that carry other people’s speech: platforms should have very little liability for third-party content, except when it violates federal criminal law and intellectual property law.

As a result, websites of all sizes exist across the internet. A truly countless number of these — like Techdirt itself — have comments or content created by someone other than the owner of the website. The law preserved the ability of those websites, regardless of their size, to tend to their own gardens and set standards for the kinds of discourse they allow on their property without having to vet and vouch for every single comment.

That was the promise of Section 230, and it’s one worth keeping today: an online environment where different platforms would try to attract different audiences with varying content moderation schemes that favored different kinds of discussions.

But we must acknowledge where the bargain has failed too. Section 230 is necessary but not sufficient to make competing sites and viewpoints viable online. We also need open internet protections, privacy laws, antitrust enforcement, new models for funding quality journalism in the online ecosystem, and lots more.

Taking Section 230 off the books isn’t a panacea or a pathway to all of those laudable ends. Just the opposite, in fact.

We Can’t Use Torts or Criminal Law to Curb Conduct That Isn’t Tortious or Criminal

Hate and unlawful activity still flourish online. A platform like Facebook hasn’t done enough yet, in response to activist pressure or advertiser boycotts, to further modify its policies or consistently enforce existing terms of service that ban such hateful content.

There are real harms that lawmakers and advocates see when it comes to these issues. It’s not just an academic question around liability for carrying third-party content. It’s a life and death issue when the information in question incites violence, facilitates oppression, excludes people from opportunities, threatens the integrity of our democracy and elections, or threatens our health in a country dealing so poorly with a pandemic.

Should online platforms be able to plead Section 230 if they host fraudulent advertising or revenge porn? Should they avoid responsibility for facilitating either online or real-world harassment campaigns? Or use 230 to shield themselves from responsibility for their own conduct, products, or speech?

Those are all fair questions, and at Free Press we’re listening to thoughtful proposed remedies. For instance, Professor Spencer Overton has argued forcefully that Section 230 does not exempt social-media platforms from civil rights liability for targeted ads that violate voting rights and perpetuate discrimination.

Sens. John Thune and Brian Schatz have steered away from a takedown regime like the automated one that applies to copyright disputes online, and towards a more deliberative process that could make platforms remove content once they get a court order directing them to do so. This would make platforms more like distributors than publishers, like a bookstore that’s not liable for what it sells until it gets formal notice to remove offending content.

However, not all amendments proposed or passed in recent times have been so thoughtful, in our view. Changes to 230 must take the possibility of unintended consequences and overreach into account, no matter how surgical proponents of the change may think an amendment would be. Recent legislation shows the need for clearly articulated guardrails. In an understandable attempt to cut down on sex trafficking, a law commonly known as FOSTA (the “Fight Online Sex Trafficking Act”) changed Section 230 to make websites liable under state criminal law for the knowing “promotion or facilitation of prostitution.”

FOSTA and the state laws it ties into did not precisely define what those terms meant, nor set the level of culpability for sites that unknowingly or negligently host such content. As a result, sites used by sex workers to share information about clients or even used for discussions about LGBTQIA+ topics having nothing to do with solicitation were shuttered.

So FOSTA chilled lawful speech, but also made sex workers less safe and the industry less accountable, harming some of the people the law’s authors fervently hoped to protect. This was the judgment of advocacy groups like the ACLU that opposed FOSTA all along, but also academics who support changes to Section 230 yet concluded FOSTA’s final product was “confusing” and not “executed artfully.”

That kind of confusion and poor execution is possible even when some of the targeted conduct and content is clearly unlawful. But rewriting Section 230 to facilitate the takedown of hate speech that is not currently unlawful would be even trickier, and fundamentally incoherent. Saying platforms ought to be liable for speech and conduct that would not expose the original speaker to liability would have a chilling impact, and likely still wouldn’t lead to sites making consistent choices about what to take down.

The Section 230 debate ought to be about when it’s appropriate or beneficial to impose legal liability on parties hosting the speech of others. Perhaps this larger debate on the legal limits of speech should be broader. But that has to happen honestly and on its own terms, not get shoehorned into the 230 debate.

Section 230 Lets Platforms Choose To Take Down Hate

Platforms still aren’t doing enough to stop hate, but what they are doing is in large part thanks to having 230 in place.

The second operative provision in the statute is what Donald Trump, several Republicans in Congress, and at least one Republican FCC commissioner are targeting right now. It says “interactive computer services” can “in good faith” take down content not only if it is harassing, obscene or violent, but even if it is merely “otherwise objectionable,” and whether or not it is “constitutionally protected.”

That’s what much hate speech is, at least under current law. And platforms can take it down thanks not only to the platforms’ own constitutionally protected rights to curate, but because Section 230 lets them moderate without exposing themselves to publisher liability as the pre-1996 cases suggested.

That gives platforms a freer hand to moderate their services. It lets Free Press and its partners demand that platforms enforce their own rules against the dissemination of hateful or otherwise objectionable content that isn’t unlawful, but without tempting platforms to block a broader swath of political speech and dissent up front.

Tackling the spread of online hate will require a more flexible multi-pronged approach that includes the policies recommended by Change the Terms, campaigns like Stop Hate for Profit, and other initiatives. Platforms implementing clearer policies, enforcing them equitably, enhancing transparency, and regularly auditing recommendation algorithms are among these much-needed changes.

But changing Section 230 alone won’t answer every question about hate speech, let alone about online business models that suck up personal information to feed algorithms, ads, and attention. We need to change those through privacy legislation. We need to fund new business models too, and we need to facilitate competition between platforms on open broadband networks.

We need to make huge corporations more accountable by limiting their acquisition of new firms, changing stock voting rules so people like Mark Zuckerberg aren’t the sole emperors over these vastly powerful companies, and giving shareholders and workers more rights to ensure that companies are operated not just to maximize revenue but in socially responsible ways as well.

Preserving not just the spirit but the basic structure of Section 230 isn’t an impediment to that effort, it’s a key part of it.

Gaurav Laroia and Carmen Scurato are both Senior Policy Counsel at Free Press.



Comments on “Fighting Hate Speech Online Means Keeping Section 230, Not Burying It”

144 Comments
Anonymous Coward says:

Platforms still aren’t doing enough to stop hate,

And they are not going to be able to do much better when doing so results in posts from politicians being taken down. Perhaps twitter can try an experiment, and offer moderated and unmoderated feeds for politicians, where the moderated feed has heavy moderation, and the unmoderated feed includes tweets that are removed from the view of the general public. Any politician that complains about censorship is only given the unmoderated feed. That should help educate politicians about the outcomes of their meddling.

Koby (profile) says:

3rd Possibility

The article poses a dilemma between the Cubby and Stratton Oakmont decisions, that sites would need to choose between two models without Section 230: either get a lot of moderators to take down a ton of objectionable content, or don’t moderate at all.

But there CAN be a third path without Section 230: build tools to allow users to have control. Allow users to decide which content is objectionable, profane, or harassing. Websites could avoid liability by performing no moderation themselves. Those who believe certain community moderation actions are too extreme can choose to view the content anyhow. People who believe it’s not strict enough could encourage others to build and share stricter settings. Let the people decide, not big tech corporations.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: 3rd Possibility

But there CAN be a third path without Section 230: build tools to allow users to have control.

But who provides the computing power for those tools, along with the filter characteristics? Note that for image, audio and video filters, that can realistically only be the large sites. But even then, you do not avoid the Scunthorpe problem, or the “napalm girl” photo being labelled child porn.

Your suggestion only appears to hand control to users on the surface; in practice it would only allow users to turn a small selection of filters on and off, as too many categories will only confuse users.
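To make the Scunthorpe problem concrete, here is a minimal sketch (hypothetical code, not any platform’s actual filter) of the naive substring matching that causes it, alongside a whole-word variant that avoids this particular false positive but is trivially evaded in other ways:

```python
import re

# Hypothetical blocklist, for illustration only.
BLOCKLIST = {"cunt", "ass"}

def naive_filter(text: str) -> bool:
    """Flag text if any blocklisted string appears anywhere,
    even inside an innocent word: the Scunthorpe problem."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def word_boundary_filter(text: str) -> bool:
    """Flag only whole-word matches. Fewer false positives, but
    trivially evaded by misspellings or spacing ('a s s'),
    so not a real fix either."""
    lowered = text.lower()
    return any(re.search(r"\b" + re.escape(term) + r"\b", lowered)
               for term in BLOCKLIST)

# The town name contains a blocklisted substring, so the naive
# filter wrongly flags it; the word-boundary version does not.
print(naive_filter("I live in Scunthorpe"))          # True (false positive)
print(word_boundary_filter("I live in Scunthorpe"))  # False
```

Even this toy example shows why simply handing users a keyword list doesn’t scale to images, audio, or context-dependent hate speech.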

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: 3rd Possibility

"in practice only allow users to turn and off a small selection of filters, as too many categories will only confuse users."

Not even that. I see people whining all the time about seeing things on Facebook that they could block, but they prefer whining after they see it instead. The problem isn’t the efficiency of the filter; the issue is that when left optional, most people just won’t bother, and will still blame the platform when they see something they object to.


This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: 'It gave our mods PTSD, but you should be fine.'


But there CAN be a third path without Section 230: build tools to allow users to have control. Allow users to decide which content is objectionable, profane, or harassing. Websites could avoid liability by performing no moderation themselves. Those who believe certain community moderation actions are too extreme can choose to view the content anyhow. People who believe it’s not strict enough could encourage others to build and share stricter settings. Let the people decide, not big tech corporations.

People are already deciding, and they’re doing so by picking the platforms with content rules that they agree with and avoiding the ones that they don’t. There’s a reason that Facebook and Twitter are successful and attract large audiences while the likes of Gab and Parler only attract relatively tiny amounts of people.

Dump everything on the users and you require individual people to wade through sewage, sewage that already causes significant mental harm to people whose jobs it is to deal with that, and who (theoretically) at least have some training and resources to handle it, training and resources that individual users are most certainly not equipped with. If presented with the requirement of ‘if you want to use this platform you will have to constantly deal with a parade of bigots and assholes, blocking them one by one’ most people will likely just give a hard pass to the platform, leaving only the assholes and bigots, until the site shuts down because of that.

ECA (profile) says:

Re: Re: 'It gave our mods PTSD, but you should be fine.'

But still, as with TV news, Fox, and opinion news, you have created representatives for your site. It’s similar to this site, which tries not to kill every comment, just covering it up as out of context. But someone will suggest that this has made all of us into the moderators, and as such we have become the editors.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

For what reason should users have to wade through the filth and muck and mire of violent images, child porn, and language such as racial slurs for the sake of “having control over what they see”? The moderator of a service such as Twitter has a role to play: They get knee-deep in the dreck for the sake of protecting everyone else from having to do the same. Do they get it right all the time? No. (I speak from experience on that.) But they’re doing what they do so the community doesn’t have to waste as much time on flagging all the shit they don’t want to see in their community.

Moderators are a frontline defense against agitators, trolls, and sociopaths (i.e., 8chan users) taking over a service. That you would seek to do away with them and force users to play that role says more about you than you might think — and none of what that says reflects well on you.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:

"For what reason should users have to wade through the filth and muck and mire of violent images, child porn, and language such as racial slurs for the sake of “having control over what they see”?"

Because, as we always see, it’s the only way Koby and his friends can get back on to the platforms they’ve been rightfully banned from.

"Moderators are a frontline defense against agitators, trolls, and sociopaths (i.e., 8chan users) taking over a service"

That’s exactly what Koby wants, though.

This comment has been flagged by the community.

RD says:

Re: Re: 3rd Possibility

It seems to have worked pretty good here for 20+ years.

If the tools are provided and people don’t use them, they only have themselves to blame. They can’t whine about being offended when they themselves could do something about it. The site would not be liable, as the moderation tools would be there for all to use.

I doubt this would be a perfect or complete solution, but it might be a preferable place to start. But the question comes down to liability under 230 for any of the methods anyway.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re: 3rd Possibility

This site has a small audience, and everyone can read all the posts and comments if they want to. Other sites have so many posts a minute that no one can possibly read them all. Therefore different sites moderate in different ways, because their needs are different.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:

They can’t whine about being offended when they themselves could do something about it.

They shouldn’t have to all do everything about racial slurs, anti-queer speech, anti-abortion propaganda with pictures of aborted fetuses, depictions of beheadings carried out by terrorists, and other such content. Moderators should handle as much of that shit as possible. And if you’re going to say “they shouldn’t handle that shit”, I preëmptively ask you (again) why you believe a website should/must, by law, host all legally protected speech regardless of whether the site’s admins don’t want to host certain kinds of speech.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re: 3rd Possibility

"It seems to have worked pretty good here for 20+ years"

Yes, this self-selecting group of generally intelligent people with similar interests have acted a certain way. Now, why are you so dumb as to think this can apply to the general population?

"But the question comes down to liability under 230 for any of the methods anyway."

There should be no liability for any decision made under section 230, in any direction, unless the owners of the site do or say something illegal. This isn’t difficult.

This comment has been deemed insightful by the community.
Scary Devil Monastery (profile) says:

Re: Re: Re: 3rd Possibility

"It seems to have worked pretty good here for 20+ years."

And for a blog with a narrow focus on tech and politics that works. That solution scales poorly as we have seen plenty of times. It’s the same as claiming that car traffic doesn’t need light signals because that situation works in a small town somewhere in Podunk town, Bumfuck county, Iowa.

Then you try taking down the traffic signals in NY city or Houston and watch the mayhem.


This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

Deplatforming is not censorship. It is a moderator, having told an agitator “we don’t do that here”, kicking the agitator off a platform said agitator has no right to access because they broke the rules without regard or regret.

Refusing to deplatform someone who disrupts the community of a given service sends a message: “We accept that here.” It welcomes others to come into the community and do the same thing. Reddit proved that deplatforming works. All you need do is ban the troublemakers — the spammers, the trolls, the people who defend copyright maximalism — and their trouble leaves with them. Sure, they’ll likely take their bile elsewhere, at which point that becomes a problem for that “elsewhere”. But a refusal to do something about them at all makes it a problem for the community that didn’t want those bastards there in the first place and might disband as a community because the bastards took over without having to worry about moderation stopping them.

Feel free to offer a defense of why the law should force websites to host all legally protected speech. That is your ultimate position, whether you realize it or not, when you criticize censorship of “hate speech”. If you’d like to prove otherwise? Here’s your shot, champ. Don’t fuck it up.

Anonymous Coward says:

Re: Re:

That’s because “hate speech” is undefinable. Your “accepted” speech today could get you disappeared in China or stoned in Iran. Who gets to decide what speech is acceptable? Telling religious organizations that being gay is okay is “hate speech” to them, whereas telling non-religious folks that gays shouldn’t marry may also be considered “hate speech.” Do you let the state decide what is “hate speech”? (No danger in that, right?) Do you let the majority decide it? (No danger here either, right?) How about letting people have their speech (freedom of conscience) and punishing those who actually hurt others?

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re:

Freedom of conscience is not freedom from consequence. Service admins who decide that your speech doesn’t belong on the service have every legal, moral, and ethical right to kick you off that service — no matter its size or popularity. You can’t force Twitter to host your speech any more than you can force a thirty-person Mastodon instance to host your speech.

This comment has been flagged by the community.

RD says:

Re: Re: Re:

"Who gets to decide what speech is acceptable? "

Ask PaulT. Apparently, he knows what is the "correct" and ONLY speech allowed, and don’t you dare differ in opinion or you too will be "curated" by the "right minded." There is no such thing as hate speech, there is only speech that you disagree with, find abhorrent, whatever. That is a difference of opinion, not an absolute.

Otherwise, who gets to say what is the "Right" kind of speech? And don’t give me "There are consequences for speech": yes there are, but that is not what this is about; this is about silencing or controlling the speech and what is said and how and where it is said. You threaten someone, there will be a consequence for that. But should you be prevented from saying it, or censored from ever saying anything similar again? Some might say yes, but then you would have to ask, how is that any different from China, Iran, or any other repressive regime we criticize for violating and controlling citizens’ rights?

If you consider a certain type of speech injury, then the problem is on you, you aren’t strong enough in character to handle it. There is also the option to NOT LISTEN (or read) it. It’s not like anyone is showing up at your house, following you around, shouting "hate" at you all day. and Don’t like it? Don’t go seek it out. Simple.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re: Re:

Why should you determine what speech somebody else carries on the platform that they own?

All that freedom of speech guarantees is that you can publish your own speech at your own expense, or via somebody else who will voluntarily publish it for you, for free or for a fee. So if the only way you can publish the speech that you want on the Internet is to buy and run your own server, your freedom of speech is still intact. If nobody wants to listen to your speech, that is also O.K., as freedom of speech does not guarantee an audience, or require somebody else to provide you with one.

So stop demanding that you be allowed to speak where you are not welcome, as that infringes on everybody else’s right to freedom of association.

This comment has been flagged by the community.

RD says:

Re: Re: Re:2 Re:

These are better points than most of the others in this thread, especially in regards to where certain rights begin/end or overlap with others. A knotty problem to be sure.

All of which just brings us back to, should all of that you describe (correctly I might add) be allowed section 230 provisions then? Can’t have it both ways, at least not as the law sits now.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:3 Re:

"All of which just brings us back to, should all of that you describe (correctly I might add) be allowed section 230 provisions then?"

The only thing that section 230 describes is that the person committing an act, not the platform they use, is held liable for that act.

What version do you have in your head that had you so confused?

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re: Re:

"Ask PaulT. Apparently, he knows what is the "correct" and ONLY speech allowed, and don’t you dare differ in opinion or you too will be "curated" by the "right minded.""

Ah, back to outright lies again.

"Otherwise, who gets to say what is the "Right" kind of speech?"

The people who own the private property you’re trying to use free of charge to say something. If you don’t like that, use your own property or find someone you’re not pissing off in the process. Or, you know, speak in a way that’s not regularly getting you kicked out of a building for annoying everyone else there.

"There is also the option to NOT LISTEN (or read) it"

…and part of that option is to say you’re not allowed in the same place as the people who disagree with you, which is their right unless you or the government own the property.

This comment has been flagged by the community. Click here to show it.

RD says:

Re: Re: Re:2 Re:

"Ask PaulT. Apparently, he knows what is the "correct" and ONLY speech allowed, and don’t you dare differ in opinion or you too will be "curated" by the "right minded.""

"Ah, back to outright lies again."

Opinions are not lies; they can merely be incorrect. Being wrong != lying, but admitting that wouldn’t give you a convenient platform to slander people and hand-wave away their opinions.

Also, YOU are the one using Orwellian language here, so maybe YOU should take a closer look at yourself when someone calls you out on your bullcarp.

This comment has been flagged by the community. Click here to show it.

RD says:

Re: Re: Re:4 Re:

Oh rly?

PaulT (profile), 31 Aug 2020 @ 11:37pm
Re:
"Censoring hate speech does nothing to stop hate."

But, it does stop right-minded people from having to hear it, which is already an improvement.

This comment has been flagged by the community. Click here to show it.

RD says:

Re: Re: Re:6 Re:

Neither did what I said in a different thread last week. Nonetheless you straight-up accused me of it. So allow me to paraphrase to you, in the same manner, that which you oh so righteously pointed out to me the other day:

You are using the cult-like language and phrases of oppression and fascists, therefore you follow Orwellian ideology in your brainwashed echo chamber of Authoritarianism.

PaulT (profile) says:

Re: Re: Re:7 Re:

So, repeating words you don’t understand then, you little cult baby? Bless your heart.

I merely stated that for someone who studies "facts" "independently", you sure copied all the mannerisms, language and false claims of the cult members I’m familiar with. You did nothing to disprove that, and in fact whined and fled in the manner that cult babies tend to do.

This comment has been deemed insightful by the community.
Scary Devil Monastery (profile) says:

Re: Re: Re: Re:

"…that is not what this is about, this is about silencing or controlling the speech and what is said and how and where it is said."

Bullshit.

On a private platform there’s no such thing, no more than a bar owner kicking out a patron means the guy getting kicked out can’t find somewhere else to drink.

It’s pretty clear that anyone banned from Twitter or Facebook for being a noxious creep to the people who own that platform has no shortage of alternatives to take that speech to.

"…how is that any different from China, Iran, or any other repressive regimes we criticize for violating and controlling citizens’ rights?"

Because Facebook, Twitter and Youtube can’t throw your ass in jail or tell you to shut up forevermore, anywhere?
If you can’t tell the difference between "You’re not welcome here" and "Shut up or we’ll throw you in a deep hole!" then you have a problem with reality – or a disturbing aversion to allowing people to govern their own property.

"If you consider a certain type of speech injury, then the problem is on you, you aren’t strong enough in character to handle it."

I’m not obligated to allow every unpleasant fuckwit a bullhorn and a soap box in my own back yard. Neither is Facebook or Twitter.

"It’s not like anyone is showing up at your house, following you around, shouting "hate" at you all day."

Funny you should mention it, because that is exactly what Twitter and Facebook users have been experiencing from the people now banned. As a result, those platforms toss the deplorable fuckwits out of their personal demesnes.

Did you have ANY argument that doesn’t, in reality, mean people and corporations should be unable to determine who is and who is not allowed on their own property?

Anonymous Coward says:

Re: Re:

That’s because “hate speech” is undefinable. Your “accepted” speech today could get you disappeared in China or stoned in Iran. Who gets to decide what speech is acceptable? Telling religious organizations that being gay is okay is “hate speech” to them, whereas telling non-religious folks that gays shouldn’t marry may also be considered “hate speech.” Do you let the state decide what is “hate speech”? (No danger in that, right?) Do you let the majority decide it? (No danger here either, right?) How about letting people have their speech (freedom of conscience) and punishing those who actually hurt others?

This comment has been flagged by the community. Click here to show it.

RD says:

So much for...

So much for the vaunted maxim "the best way to fight bad speech is with more speech." No longer: now we must provide tools to silence dissent, chill expression, and curate only that which conforms to the "Correct" kind of speech. And no one bothers to note that all of that is relative, not absolute.

This site had it set up best: let people talk, let users/people "flag" a post positively or negatively, and let each decide for themselves what is right or best or proper.

Apparently that kind of freedom is no longer to be allowed, at least not on other sites. Hope it never comes to that here, but I wouldn’t lay money on it.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: So much for...

Should forums about cats, or gardening, or any other topic be allowed to remove off-topic posts? If so, why should that make them responsible for the posts that they allow? If they can’t moderate, how do they stop the forum from becoming another 8chan?

This comment has been deemed insightful by the community.
mattfwood (profile) says:

Re: So much for...

You miss the point. Those other sites have the "freedom" to moderate. Are you saying that they shouldn’t, and that this site or any other one ought to be made to moderate in a certain way?

And yeah, so much for that vaunted maxim indeed. I’d refer you to the shatteringly good piece in this series last week by Brandi Collins-Dexter, at the outset.

But also please note that the amount of hate and disinfo and just pure junk churned out online these days also exceeds our capacity as humans to "more speech" it into a corner.

This comment has been flagged by the community. Click here to show it.

RD says:

Re: Re: So much for...

Yes, and those are all completely fair and valid points. My point is then more along the lines of "shall we have de facto censorship, or no?" It’s either free speech or it’s not, and pretending it’s not by calling it "curating" or "managing" is just focusing on the enabling side only.

I’m not saying it’s not necessary, but let’s at least be honest about it if it is.

Such is where we are in the 21st century, I guess, but it’s a slippery, dangerous slope, and instead of promoting caution, everyone seems to be falling all over themselves to provide justification for it above all other considerations. More "how can we make it work?" as opposed to "should we be careful about this?"

I’m speaking here about the broader issue "out there" than this site or this article specifically, before you all pile-on.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re: So much for...

Censorship is when the government says you shall not publish that anywhere, on the pain of going to jail. Moderation is saying we do not accept that content here, without closing the door on it being published somewhere else.

This comment has been deemed insightful by the community.
mattfwood (profile) says:

Re: Re: Re:2 So much for...

Exactly. It’s not "censorship" when any private party acts, let alone when the New York Times or another publication decides not to print a particular op-ed. The difference here is that online publishers that let 3rd party content come freely onto their platforms aren’t liable as publishers, thanks to 230. That means there are lower (but not no) barriers to getting your ideas out there on an online platform result. But even without 230, of course the platform ALWAYS had the right to moderate. You can’t tell me that curation is the same thing as censorship unless you’re prepared to argue that every publishing house’s decisions and every newspaper’s decisions about what to publish or curate or moderate are also "censorship."

This comment has been flagged by the community. Click here to show it.

Anonymous Coward says:

Re: Re: Re:2 So much for...

The societal benefits and harms of censorship are the same regardless of who actually implements it.

In the last century places like Ireland had real problems with the Catholic Church using its near monopoly of schools, hospitals, care homes, and so on to enforce Catholicism by refusing to do business with people who had been excommunicated. Things haven’t got that bad in relation to publishing today, but there’s already a near monopoly on public halls (because they’ve been bought up to assemble the concert venue monopoly), and there’s a very small number of highly dominant companies in online media, so in another 20-30 years there could be real problems with getting access to platforms if a handful of companies don’t like you (and that’s not likely to be because you’re a Republican).

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:3

there’s a very small number of highly dominant companies in online media, so in another 20-30 years there could be real problems with getting access to platforms if a handful of companies don’t like you

Only if those handful of companies own every platform, every protocol, and every possible way of getting your message out. Until that even remotely looks like it’ll happen, I wouldn’t worry about it. (Also: Don’t mistake the platform for the audience. The difference between having a potential audience of hundreds and having a potential audience of millions is not censorship, no matter how much those Gab motherfuckers want to tell you otherwise.)

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:3 So much for...

"there’s a very small number of highly dominant companies in online media"

There are. But are they the same companies as 20-30 years ago?

"there could be real problems with getting access to platforms if a handful of companies don’t like you"

If by "don’t like you", you mean "you have repeatedly violated the T&Cs of every one of those platforms so have banned you from their premises", then yes. But the problem there is usually not the platform, especially if you are being barred from multiple places. If you get banned from Burger King, McDonalds, KFC, Wendy’s, Taco Bell and Domino’s because you get hungry when you’re drunk and try picking fights with people, the problem is not that the fast food industry has a vendetta against you.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:

Assume, as a hypothetical, that you run a small Mastodon instance. One of the rules of your instance is that people can’t post hateful speech — racial slurs, anti-queer speech, what have you — under most circumstances. (Exceptions can be made for discussing that language in the context of said language being hateful. Such discussions should be put behind content warnings, tho’.) Your community grows to a reasonable size — let’s say several hundred people — and becomes a tight-knit community even within the broader Fediverse. You have spats here and there, but nothing too community-shattering.

One day, the government contacts you and tells you that your rule against hateful speech is illegal under a new law. The government tells you that you must allow your users — ones already on the instance and ones who have not yet joined the instance — to use hateful speech in any context. As soon as word gets out that your platform no longer punishes hateful speech, agitators flock to your instance en masse and flood it with all kinds of hateful speech. Without the ability to moderate their speech, the pre-“flood” community becomes tired of having to deal with all the bullshit and leaves. (They also blame you, not the law, for letting the place go to shit. Funny how that works~.) You’re left to decide whether you should permanently close and ultimately delete your instance because it has been effectively shut down for you — but regardless of your decision, you’ve still lost the community you helped build (and their respect for you), and all because someone made it illegal for you to moderate that community in the way you saw fit thanks to free speech absolutism.

After all that, would you still believe in the absolutist mantra of “it’s either free speech or it’s not” — the same mantra that, in someone else’s hands, would force you to host speech that you explicitly did not want to host?

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Cesspits for all, like it or not

Racism, sexism, photos and graphic descriptions of such delightful scenes as car wrecks and suicides. All of those are legal. Exactly what ‘speech’ would you suggest would be best to combat such content, and how eager would you be to sign up to a platform where any or all of that is allowed to flourish?

‘Fight bad speech with good speech’ might sound good on its own, and work in some instances, but it fails in spectacular fashion when you factor in people who want to wallow in their filth in front of a crowd, and/or who get great enjoyment from shocking, horrifying and causing disgust in everyone around them.

This comment has been flagged by the community. Click here to show it.

RD says:

Re: Re: Cesspits for all, like it or not

Two things:

1) don’t look at it then. No one is forcing anyone to seek out, view, read or go to any place that has speech (images, whatever) that they find offensive.

2) "Asking platforms to be the arbiters of what speech is good and what speech is bad is fraught with serious problems."
Mike Masnick, "Nazis, The Internet, Policing Content And Free Speech," Techdirt (Free Speech), August 25, 2017

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re: Cesspits for all, like it or not

1) don’t look at it then. No one is forcing anyone to seek out, view, read or go to any place that has speech (images, whatever) that they find offensive.

Moderation solves the problem that bigots, racists, and trolls will simply flood any platform that does not moderate with speech that most users find offensive.

Most people who complain about moderation do so because the platforms that allow speech offensive to the majority do not have large numbers of users, not because they cannot get their speech online. That is, they cannot preach their gospel of hatred to a large audience, and they call that being censored.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: Re: Just don't look at the thing right outside your door

don’t look at it then. No one is forcing anyone to seek out, view, read or go to any place that has speech (images, whatever) that they find offensive.

Right up until a platform is stupid enough to run with the idea of ‘meeting bad speech with good speech’, at which point unless someone drops social media altogether then yes, they damn well are, because if they want to use those platforms then they will be faced with that content because the assholes will be posting it left and right for the laughs.

Saying ‘just don’t look at it’ would be rather like someone on the opposite side of the street in front of your house putting up extremely large, well lit posters of graphic violence and/or gore, such that the second you opened your door or looked out the window it would be immediately visible, and then should you object blowing you off by saying that you don’t have to look at it, that’s on you.

"Asking platforms to be the arbiters of what speech is good and what speech is bad is fraught with serious problems."

Asking platforms to moderate content may have serious potential issues, but having them sit back and leave it all to the users, whether because they face legal risks for moderating or because they’re foolish enough to ‘just let the users sort things out’ is pretty much guaranteed to result in worse issues.

This comment has been flagged by the community. Click here to show it.

RD says:

Re: Re: Re:2 Just don't look at the thing right outside your door

"Saying ‘just don’t look at it’ would be rather like someone on the opposite side of the street in front of your house putting up extremely large, well lit posters of graphic violence and/or gore, such that the second you opened your door or looked out the window it would be immediately visible, and then should you object blowing you off by saying that you don’t have to look at it, that’s on you."

This is a 100% specious example. This is not what any of this is about. Unless you are talking, like, someone injecting ads or redirecting your web links TO a sign like that. The internet doesn’t work (yet – give Big Corp time) with you being confronted with something you didn’t ask for as soon as you open your browser. That is a bullshit garbage example and exists in no way like that on the internet. Try again.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: Re:3 Just don't look at the thing right outside your door

Like hell it is. If you think that assholes and bigots wouldn’t be spamming social media with disgusting and horrifying images/arguments left and right should social media dump dealing with all that on the users, I’m not sure what world you’re living in, but it isn’t this one.

While you might be able to avoid that sort of content by only interacting with a known circle of acquaintances any attempt to look out from that would run the risk of running smack into the sort of lovely content that gives the current mods PTSD, which I’m sure would create a lovely experience for your average users.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:3 Just don't look at the thing right outside your door

"This is a 100% specious example. This is not what any of this is about."

Weird, it’s what everyone else is talking about.

"Unless you are talking, like, someone injecting ads or redirecting your web links TO a sign like that."

No, we’re talking about everyone being happy on the site they already visit, until one of you assholes starts causing shit. The correct response to that is to kick the asshole out, not to force everyone else to listen to him.

This comment has been deemed insightful by the community.
Rocky says:

Re: Re: Re: Cesspits for all, like it or not

1) don’t look at it then. No one is forcing anyone to seek out, view, read or go to any place that has speech (images, whatever) that they find offensive.

You have quite the naive view on how things tend to work out on the internet without moderation. You don’t have to seek it out, because it will be in your face regardless of where you go. Or do you think all those trolls and knuckledraggers who post these images and speech will magically avoid the sites you frequent somehow?

2) "Asking platforms to be the arbiters of what speech is good and what speech is bad is fraught with serious problems."

Perhaps we should include the whole paragraph from the article you quoted from:

As many experts in the field have noted, these things are complicated. And while I know many people have been cheering on each and every service kicking off these users [nazis], we should be careful about what that could lead to. Asking platforms to be the arbiters of what speech is good and what speech is bad is fraught with serious problems.

Next time, make sure to add the correct context when you quote something – because not doing it is dishonest.

This comment has been flagged by the community. Click here to show it.

RD says:

Re: Re: Re:5 Cesspits for all, like it or not

Not one word of your accusation is true. The full quote makes the same point as the quoted segment. The meaning doesn’t change, so your charge is specious and slanderous.

The article I pulled it from was this:

https://www.cloudflare.com/en-ca/cloudflare-criticism/

Which is wholly relevant to this topic. I just used the Mike quote from there because it was succinct and convenient.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:6 Cesspits for all, like it or not

"Not one word of your accusation is true."

Is that before or after you edited it to remove parts of it?

"The article I pulled it from was this:"

…and people were meant to know this psychically, rather than assuming you took it from the primary source, which happens to be the very same site you’re commenting on? A very strange way of doing things, and a good lesson as to why it’s a good idea to cite your sources rather than expect everyone else to look for them – you run the risk of people finding a better source than the one you were trying to hide.

This comment has been flagged by the community. Click here to show it.

RD says:

Re: Re: Re:7 Cesspits for all, like it or not

Irrelevant. The context is not changed by that addition of the full paragraph. That would only work if the last sentence, the one from the article I read, upended the point of the quote. It does not, therefore you are tilting at a strawman windmill because you don’t like the point it made.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:8

The context is not changed by that addition of the full paragraph.

Let’s test that theory.

Your snippet of the quote:

Asking platforms to be the arbiters of what speech is good and what speech is bad is fraught with serious problems.

The quote in context of the whole paragraph:

As many experts in the field have noted, these things are complicated. And while I know many people have been cheering on each and every service kicking off these users [nazis], we should be careful about what that could lead to. Asking platforms to be the arbiters of what speech is good and what speech is bad is fraught with serious problems.

Your snippet of the quote implies, without context, that platforms as arbiters of speech on their own platforms is a bad thing. The full quote implies that asking platforms to be arbiters of speech in re: what speech is “good” or “bad” as a broader concept is a questionable notion.

At this point, dude, you can say what you mean: You want the right to force your speech onto a service. I promise that if you say it, we won’t think any less of you than we already do.

This comment has been flagged by the community. Click here to show it.

RD says:

Re: Re: Re:9 Re:

You can run circles around the quote all you want, it doesn’t change anything.
The quote, either the "summary" at the end or the full quote, is the view Mike held as of 2017, maybe still now, maybe not, but it takes nothing away from what the quote SAYS. The point was to show that I AGREED with it, that MIKE believed it was a questionable notion, and the actual implication now is, is that still held to today?

As far as that goes, I would have preferred the full quote, as it’s even more compelling to the point. I was just too lazy to seek it out. Had I done so, believe me, I would have used the whole thing; it’s a fantastic and fully relevant point about the topic and I agree with it 100%.

Typically, you are so panty-twisted-up about "catching" someone in a "gotcha" that you are ignoring the salient point altogether. Use WHICHEVER version of the quote you prefer, and argue (or disagree) from there; I fully support the entire conclusion of it.

Stephen T. Stone (profile) says:

Re: Re: Re:10

The point was to show that I AGREED with it, that MIKE believed it was a questionable notion, and the actual implication now is, is that still held to today?

The implication is that platforms deciding what is “good” or “bad” speech, as a broad concept, is questionable. The implication is not whether an individual platform deciding what is “good” or “bad” speech for that platform is questionable. You’re trying to twist the implication of a cherry-picked quote into a full-throated defense of forcing websites to host speech.

And you’re failing.

And that’s pitiful.

This comment has been flagged by the community. Click here to show it.

RD says:

Re: Re: Re:11 Re:

No, I was pointing out that MIKE DID.

You have a problem with the idea? Take it up with MIKE. It’s his viewpoint and quote.

Or see if he has changed his tune in the last 3 years, in which case that quote is a reminder that he did hold those views not too long ago, and why is it different now? These are valid questions.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:12

In the time I’ve been reading Techdirt (which is far longer than you have, I would think), Mike Masnick has never once advocated for the position “websites should be forced to host speech even if they wouldn’t otherwise host it”. You’re reading a quote that clearly implies the notion of “interactive web services shouldn’t be the global arbiters of what speech is objectively ‘good’ or ‘bad’ ” and twisting it in your mind to mean “a given interactive web service shouldn’t be the arbiter of what speech is subjectively ‘good’ or ‘bad’ on that service”. Mike doesn’t support the forcing of speech upon websites that don’t want to host it — you do. If you’re going to keep trying to use Mike as cover for your holding a deeply unpopular opinion that you aren’t willing to defend on its merits, you may as well stop now. Everyone sees through that pitiful ploy because you’re not as good a bullshitter as you believe you are.

This comment has been flagged by the community. Click here to show it.

RD says:

Re: Re: Re:13 Re:

Heh. Ha! HAHA! Yeah punk, longer than me, ok. I have been reading and posting since before the millennium, so unless you started when the site did, I don’t think so.

And you keep inverting the issue and asking the wrong question, at least from a legal standpoint. There is no law on earth that FORCES any site to carry anything. The law only restricts liability for failure to REMOVE content. It might sound like they are the same, only opposite, but I assure you, and ask any lawyer: as a matter of legality, they are not.

Your premises are wrong, therefore your argument and conclusions are as well. You can’t then run that out as a loaded-question fallacy and demand someone answer for your strawman.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:14

There is no law on earth that FORCES any site to carry anything.

And you implicitly support the idea that there should be such a law — the idea that, as a matter of law, a given interactive web service should be forced to host all legally protected speech. If you don’t support that idea, you’ve done a shit job of proving it.

The law only restricts liability for failure to REMOVE content.

Yes, that law is 47 U.S.C. § 230, which you’ve repeatedly decried as something that needs changing — ostensibly to force websites into hosting speech that they wouldn’t otherwise host out of some insane dedication to free speech absolutism. Again: if you don’t support that idea, you’ve done a shit job of proving it.

Your premises are wrong, therefore your argument and conclusions are as well.

[projects facts not in evidence]

Anonymous Coward says:

Re: Re: Re:15 Re:

"And you implicitly support the idea that there should be such a law — the idea that, as a matter of law, a given interactive web service should be forced to host all legally protected speech. If you don’t support that idea, you’ve done a shit job of proving it."

No, I do not. You are projecting that onto me. I am ASKING if that is what is/should be and what are the legal obligations, and then pointing out that that might be a questionable course to take. Laws have to apply equally or they are unjust laws. What one site is allowed to do, ALL are allowed to do, in this regard, and that could have some major unintended ramifications. I am QUESTIONING that, not advocating for it. This is also why your "why do you think a site be FORCED" stance is specious, as I am not taking that stance, YOU are, and then you are trying to strawman me into defending it.

I am not trying to prove anything; I am pointing out potential problems and asking what is right/legal to do and whether that is a good path. I don’t know how to make that any clearer, yet you hand-wave and ignore it and come back with a "yeah, but when did you stop beating your wife?" style loaded-question fallacy over and over.

Stephen T. Stone (profile) says:

Re: Re: Re:16

I am ASKING if that is what is/should be and what are the legal obligations, and then pointing out that that might be a questionable course to take.

Falling back on the “just asking questions” defense doesn’t do you any favors here. You know what 230 does and the limits it has; you’ve made that clear now. To question whether 230 should be law is to implicitly advocate for — at a bare minimum — the idea that websites shouldn’t have the right to moderate legally protected speech.

Laws have to apply equally or they are unjust laws.

And 230 applies equally to all interactive web services — even the ones that host patently offensive speech, like Stormfront or Breitbart or DeviantArt.

What one site is allowed to do, ALL are allowed to do, in this regard, and that could have some major unintended ramifications.

Except it doesn’t. Every website can choose to moderate speech however it wishes; what Twitter decides to moderate has no bearing on what other websites decide to moderate, and vice versa.

I am QUESTIONING that, not advocating for it.

Again: By questioning the idea that websites shouldn’t have the right to moderate speech how they see fit, you are at least implicitly advocating for the idea that they shouldn’t have that right.

This is also why your "why do you think a site be FORCED" stance is specious, as I am not taking that stance

Not explicitly, anyway. But you’re questioning whether sites should have the legal right to moderate speech. The implication is easy to follow from there.

I am pointing out potential problems

All such problems have been discussed at length well before this article. The only problem you seem to have is with the fact that 230 protects Twitter, Facebook, etc. from being sued or whatever if they remove speech that those sites don’t want to host even if the speech is legally protected under the First Amendment. If I’m wrong, feel free to say so. If I’m not, feel free to say that, too.

This comment has been flagged by the community.

RD says:

Re: Re: Re:17 Re:

"Not explicitly, anyway. But you’re questioning whether sites should have the legal right to moderate speech. The implication is easy to follow from there."

Yes and you finally framed the RIGHT question. Legal right to MODERATE the speech. Not FORCE speech acceptance. They may seem to be the same thing to you, but from a legal standpoint, they are not. Inversions don’t always work that way, even if they seem to be logical.

You can’t keep leaning on "yea, but, implicit!!" because you can use that to justify any interpretation of anything. You can infer and interpret and believe all you want about what I think, but that won’t be valid when you’re asking me for an explanation or defense of YOUR understanding of something.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:18

Legal right to MODERATE the speech. Not FORCE speech acceptance.

230 has never been about, and will never be about, forcing the acceptance of speech — neither as “users must find this speech acceptable or they get booted” nor as “sites must find this speech acceptable or they get shut down”. To insinuate otherwise is to propagate a lie that you know is a lie but continue believing/saying anyway.

You can’t keep leaning on "yea, but, implicit!!" because you can use that to justify any interpretation of anything.

What I think you’re implying is supported by a far better understanding of 230 than you seem to hold. If you properly understood 230, you wouldn’t be asking whether it should be a thing — you would be asking why some people think it shouldn’t.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:16 Re:

Laws have to apply equally or they are unjust laws.

Section 230 applies to all sites and user content, and sites are free to moderate to different standards. Also, somebody else refusing to carry your speech is not a free speech issue; all free speech guarantees is that you can publish at your own expense, buying the equipment necessary to do so if that is what it takes to publish your speech.

This comment has been flagged by the community.

RD says:

Re: Re: Re:2 Cesspits for all, like it or not

It’s not dishonest because I wasn’t quoting from the original article, but from another article on the subject. That is quoted fully here. Take it up with them if you don’t like it.

The context of the full paragraph in no way negates the point of my posting however. If anything it is an even better example of the point being made.

This comment has been deemed insightful by the community.
Rocky says:

Re: Re: Re:3 Cesspits for all, like it or not

Yes, it IS dishonest, because the quote you gave implies that there is NOTHING but problems when big platforms seem to be the arbiters of speech, when the full paragraph and the following paragraphs don’t reflect that.

If you got that quote from somewhere other than TD, it’s your responsibility to check the source. Copy-pasting a quote without doing one iota of due diligence means that you aren’t really interested in being truthful; you are only interested in being "right" regardless of the facts.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re: Cesspits for all, like it or not

"1) don’t look at it then. No one is forcing anyone to seek out, view, read or go to any place that has speech (images, whatever) that they find offensive."

The issue is when people bring that shit to the places you already are, not that you go out to find it. The problem isn’t that this stuff exists online; it’s that people don’t want it appearing between cat videos and family photos on Facebook.

RD says:

Re: Re: Re:2 Cesspits for all, like it or not

Then we get back to the point I made about doing things more like what this site does: have vote buttons and let the community decide. Enough "down" (or abuse) votes and it’s hidden (but not removed, so someone can still see it if THEY CHOOSE), and better comments are promoted.
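A minimal sketch of the threshold-based hiding described here (the class, field names, and the flag threshold are all invented for illustration; this is not how any real site's system is implemented):

```python
from dataclasses import dataclass

# Hypothetical threshold: number of abuse flags before a comment is collapsed.
HIDE_THRESHOLD = 5

@dataclass
class Comment:
    text: str
    insightful_votes: int = 0
    abuse_flags: int = 0

    @property
    def hidden(self) -> bool:
        # Hidden, not removed: the text still exists and can be shown on demand.
        return self.abuse_flags >= HIDE_THRESHOLD

def render(comment: Comment) -> str:
    """Collapse flagged comments behind a notice instead of deleting them."""
    if comment.hidden:
        return "This comment has been flagged by the community."
    return comment.text
```

A reader who chooses to can still read `comment.text` directly, which is the "hidden but not removed" distinction being drawn here; promoting "better" comments would amount to sorting by `insightful_votes`.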

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:3 Cesspits for all, like it or not

"Then we get back to the point I made about doing things more like what this site does: have vote buttons and let the community decide"

The community has decided, you just don’t like the decision they agreed on. You can just go elsewhere if you disagree, rather than trying to force your opinion on others who have already told you that you’re not wanted.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:3

hidden…but not removed

Pray tell, why should a web service be forced to host speech that either its admins, its users, or both admins and users don’t want to see on the service? (If you want to say something that boils down to “free speech” or “neutrality”, take your free speech absolutism literally anywhere else. Or shove it back up your ass, which is where you found the insane idea that all sites should host all legally protected speech no matter what.)

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: So much for...

"So much for the vaunted maxim "the best way to fight bad speech is with more speech." "

That’s a fantasy that you people love to believe in. If I kick you out of my restaurant for being verbally abusive to my staff, that was the right thing to do. Telling the staff to argue and swear back at you would just get the other customers to walk out.

"Apparently that kind of freedom is no longer to be allowed"

Private property owners can operate the property in the way they wish. If you don’t like it, there are many other places for you to go.

This comment has been flagged by the community.

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:3 So much for...

"Maybe you should learn the difference between a lie and an opinion."

I do, and you have been both lying and stating opinions.

"Then you might stop libeling people all the time."

You might want to look up an actual definition of libel.

"The point was your point about people can go other places if they don’t like it."

Yes, they can, and none of what you’re saying changes that fact. But, if the community decides that it is you, rather than the person you offended, that needs to STFU and GTFO, that’s also their right.

This comment has been deemed insightful by the community.
Scary Devil Monastery (profile) says:

Re: So much for...

"This site had it set up best: let people talk, let users/people "flag" a post positively or negatively, and let each decide for themselves what is right or best or proper. "

A) This works on a small scale, much as not putting up traffic signals and letting people drive through the streets by informed consent may work for Podunk Township, Bumfuck County, Hillbillyland state.
It’s a less workable solution at scale, i.e. taking the traffic signals down in NYC or San Fran.

B) Irrespective of what works or not it’s the owner of private property who decides which people are welcome in that property or not. A bar owner can toss a patron out on his ear for no reason other than "I didn’t like his face". As a homeowner you aren’t obligated to open your door to anyone. A platform owner isn’t obligated to welcome anyone.

"Apparently that kind of freedom is no longer to be allowed, at least not on other sites."

The freedom to be an asshat has never existed on other people’s property.

And anyone insisting that it should doesn’t know what freedom of speech is or, for that matter, grasp the concept of property.

ECA (profile) says:

There is a strange history.

There is history here: the story of how the first printing presses came about, and the Pope.
At first the Pope loved the idea that he could make more of the pieces of paper he sold to the rich to give them heavenly forgiveness. Before the press, creating those (and Bibles) took months and years of hand-writing each one.
Then someone had the great idea of publishing the Bible for everyone. Wow! Now the common man could read it and decide for himself what it says. The Pope got upset, then pissed off, and a lot of internal religious wars broke out all over Europe. And if you thought there were problems in translation: there are many versions of the Bible and over 40 different sects of Christianity, with only slight differences in translation and even name changes in the text. And if you want to debate even the Old Testament (the original is Hebrew, which makes most of us a subsect of Judaism), you can start another Christian war.

Onward to now. Even the Jewish faith interprets the Old Testament over and over, and they debate it (hardly a war). And yet we stand in a nation based on choice and democratic ideals and can’t elect persons with enough intelligence to think on their own? Most laws and regulations should be balanced, fair, and not one-sided, and it doesn’t matter whom we vote for; they all act the same way. Not for everyone. Just for themselves.

This comment has been flagged by the community.

This comment has been flagged by the community.

Stephen T. Stone (profile) says:

Re:

Doesn’t matter. The government doesn’t have an official definition for “hate speech” (mostly because it’s impossible to do that without dinging a lot of protected speech). Anyone can define the term however they wish; everyone who runs an interactive web service can moderate the service according to their own definition.

RD says:

Re: Re: Re:

"Anyone can define the term however they wish; everyone who runs an interactive web service can moderate the service according to their own definition."

Yes but doesn’t that then bring us right back to the 230 provisions? Wouldn’t that negate those protections, and isn’t that the main argument here? Isn’t the idea that you can’t simultaneously be a "curated, moderated" site and also have 230 protections because of "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider?"

Anonymous Coward says:

Re: Re: Re: Re:

Yes but doesn’t that then bring us right back to the 230 provisions? Wouldn’t that negate those protections,..

No, because the law was written, as pointed out by its authors, to allow a site to moderate content without becoming liable in law for what they remove, and for what they leave up by intent or mistake.

It’s like somebody who provides a physical notice board not being liable for the content of notices even if they remove some that offend their sensibilities.

Stephen T. Stone (profile) says:

Re: Re: Re:

doesn’t that then bring us right back to the 230 provisions?

No.

Wouldn’t that negate those protections, and isn’t that the main argument here?

No and no.

Isn’t the idea that you can’t simultaneously be a "curated, moderated" site and also have 230 protections because of "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider?"

No.

And now I’m going to give you a simple explanation of what 47 U.S.C. § 230 says and does. I suggest you read it and reëxamine your mistaken beliefs.

230 was written to overrule a specific court case that put liability for third-party speech on the service provider (Stratton Oakmont, Inc. v. Prodigy Services Co.). Under the Prodigy ruling, a service that moderated third-party speech could be held liable for the speech it didn’t moderate. 230 overruled that by putting liability for third-party speech where it belongs: on the person who posted the speech.

Section 230 does not grant blanket immunity to services. An employee of the service who has a hand in creating or publishing defamatory speech puts the service in legal jeopardy, for example. But 230 grants immunity to services for moderation decisions. Twitter admins deciding not to allow racial slurs on Twitter does not violate 230 — nor does it violate the First Amendment, the protections of which 230 extends to interactive web services.
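The liability allocation just described can be summed up in a toy sketch (purely illustrative; the function and its inputs are invented here, and a real §230 analysis has far more moving parts):

```python
def liable_party(author: str, service: str, service_helped_create: bool) -> str:
    """Toy model: who bears liability for a piece of third-party speech?"""
    if service_helped_create:
        # Section 230 does not shield a service that has a hand in creating
        # or developing the unlawful content itself.
        return service
    # Otherwise liability stays with the person who posted the speech,
    # whether or not the service moderates other content. (The pre-230
    # Prodigy rule would have shifted it to any service that moderated.)
    return author
```

Under the Prodigy rule, a "does this service moderate?" flag would have changed the answer; the point of §230 is that moderating does not.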

The admins of Twitter, Facebook, Mastodon instances, 4chan, Soundcloud, YouTube, and any other service like them have every legal, moral, and ethical right to decide what speech is and isn’t acceptable on their given service. Doesn’t matter what you think is acceptable speech; if you don’t own or moderate the service, you don’t get a say in that, no matter how much you might want to force your speech onto that service. Don’t blame 230 when an admin/moderator tells you “we don’t do that here”, you keep doing “that” there, and you get kicked out. You made the decision to press your luck; getting Whammied is on your head.

This comment has been flagged by the community.

RD says:

Re: Re: Re:4 Re:

Good, I’m glad we see the truth of it then. So here is the result of that thinking:

Henceforth, it is open season on any liberal/Dem/progressive posting. Anyone holding these ideologies and views is banned.

Any pro-LGBTQ posts are banned.

Any hate speech directed at whites, men, or conservatives is banned.

Any anti-christian speech is banned.

Any pro-science speech is banned.

Any anti-American (or pick your country here) speech is banned.

(and no, I am not including any child porn examples, because that stuff is de facto illegal and would by law have to be removed anyway. I am limiting the examples above to actual speech, "spoken"/typed, as opposed to the broader idea of speech through images and other media)

This is where that leads. Think carefully about whether you are advocating for a site to do all of the above with full legal support. I am pretty sure most who have posted in this comment thread will be thinking that’s a "bad thing" to cultivate.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:5

Think carefully about whether you are advocating for a site to do all of the above with full legal support.

Sites can already do that with full legal support. Neither the First Amendment nor Section 230 has some kind of “neutrality” component; a service that doesn’t want to host “conservative speech” has just as much right to make that decision as a service that doesn’t want to host “liberal speech”. That you think bringing up all these hypotheticals and shit like this is some sort of “gotcha” is a mistake only you’re making. Everyone else with an ounce of sense around here knows the law can’t force a service to host speech its admin(s) would otherwise refuse to host. Now, if you’d kindly explain why that shouldn’t be the case, maybe we can have an actual discussion. But if you’re not willing to defend that position — a position you implicitly support — don’t bother replying. Even my patience has its limits.

This comment has been flagged by the community.

This comment has been flagged by the community.

RD says:

Re: Re: Re:8 Re:

I can’t answer a strawman creation of your own mind, that’s your problem. Just declaring something implicit doesn’t make it so. You dismiss, ignore and hand-wave any argument away but keep asking "when are you going to answer??" It’s a very effective way to never have to actually debate anything.

You are the same kind of person that will decide on your own judgement that someone is a racist and then use that as justification to commit pre-emptive violence against them, because "well, they are a racist!" and when someone calls you on it, you ask them "why would you defend a racist??"

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:9

Just declaring something implicit doesn’t make it so.

Going over your comments on this article alone shows the implicit belief to which I refer. But if you want me to show you your own belief, I can do that, too!

So much for the vaunted maxim "the best way to fight bad speech is with more speech." No longer, now we must provide tools to silence dissent, chill expression, and curate only that which conforms to the "Correct" kind of speech. And no one bothers to note that all of that is relative, not absolute.

Translation: “It sucks that platforms can delete speech they don’t like.”

This site had it set up best: let people talk, let users/people "flag" a post positively or negatively, and let each decide for themselves what is right or best or proper.

Translation: “Whether a site admin wants to host certain speech shouldn’t matter; users should have to see all speech and decide whether they want to see it.”

Apparently that kind of freedom is no longer to be allowed, at least not on other sites.

Translation: “I wish I could force my speech onto other sites.”

My point is then more along the lines of "shall we have de facto censorship, or no?"

Translation: “Moderation is censorship.”

It’s either free speech or it’s not, and pretending it’s not by calling it "curating" or "managing" is just focusing on the enabling side only.

Translation: “Moderation curtails my right to speak on a platform that otherwise wouldn’t host my speech, and that sucks.”

This is where that leads. Think carefully about whether you are advocating for a site to do all of the above with full legal support.

Translation: “Moderating speech could lead to speech that sites don’t want to host getting banned, so think about whether you want that to happen.”

This thread is literally advocating for being able to remove speech (or user), ANY speech, for ANY reason from any site.

…I don’t even need to translate this one, you’ve pretty much said the quiet part out loud.

Point is, you implicitly support the idea of the law forcing websites to host speech their admins don’t want to host. Hell, under the most favorable interpretation of your comments, you implicitly support the idea that websites shouldn’t be able to moderate speech (which is pretty much the same thing as forcing them to host speech). Now might be a good time to explicitly denounce both positions if you’re not supportive of them, lest you continue to be “mistaken” for someone who does.

RD says:

Re: Re: Re:10 Re:

Nope, and I surely do not have to answer for your strawman "translations" of my opinions. You can hold whatever view you wish (such is the power of freedom), but that right ends when you start demanding things from me based on what you invent in your own head.

Also I already did that in one of the other replies (this is getting long now!) in this thread, so go seek your answers there.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:11

that right ends at demanding anything from me

I’m not demanding anything from you.

I’m just asking questions.

(Also: Bold of you to talk about rights and demands in a comments section where you’ve implicitly supported the idea that websites should concede to demands that they host speech they otherwise wouldn’t host.)

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:5 Re:

"Henceforth, it is open season on any liberal/Dem/progressive posting. Anyone holding these ideologies and views is banned."

…which actually happens on right-wing cesspool sites like Breitbart, Free Republic, Stormfront, etc. I wouldn’t be surprised if you know this first hand.

But, you know what us libs do when faced with that censorship? We don’t whine like tiny children about it, we go to places that aren’t right-wing cesspools.

This comment has been deemed insightful by the community.
Scary Devil Monastery (profile) says:

Re: Re: Re:6 Re:

"But, you know what us libs do when faced with that censorship? We don’t whine like tiny children about it, we go to places that aren’t right-wing cesspools."

Aw, be fair. Our places are well-lit forums where multitudes gather and exchange ideas…well, funny cat memes, mainly…
…his place is a dank hole in the ground under a bridge where the people Stephen King deemed too gross to include in his novels live.

I will at least give old Baghdad Bob the benefit of sympathy when he desperately tries to put adult words to his tear-sodden toddler style temper tantrum of "It’s not fair that your clubhouse is nicer than mine!"

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Re: Re: Re:7 Re:

I will at least give old Baghdad Bob the benefit of sympathy when he desperately tries to put adult words to his tear-sodden toddler style temper tantrum of "It’s not fair that your clubhouse is nicer than mine!"

Sympathy that goes right up in smoke when you notice/point out that the reason their clubhouse is such a shithole is because they keep using it as an outhouse, and they’re trying to get in the better clubhouses with no plans to change because they see nothing wrong with that sort of behavior.

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:6

Fair enough, but is that legal?

Yes. The First Amendment gives Twitter the right of association; 230 protects the decision to kick people off the service.

And if its legal, is it the best way to go about things?

If a service admin deems it the best way for that service? Yes.

So

I don’t typically respond to otherwording, but in your case, I’ll make an exception.

its OK then to openly discriminate speech for any criteria or anyone?

Please explain why a service dedicated to giving queer people a space to be themselves without judgment should/must, by law, host anti-queer propaganda (including advertising for the act of torture known as “conversion ‘therapy’ ”). Then explain why an anti-queer service should/must, by law, host pro-queer propaganda (including advertising for Pride parades). If you want, do the same for any other situation with such opposing dynamics (e.g., Stormfront vs a Black Lives Matter forum). But I think you’ll discover that forcing services to host speech that admins don’t want to host isn’t as easily defensible as you’d want it to be.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:8 Re:

"Good so Facebook does NOT have to take down speech it deems appropriate as well, right?"

Legally? No. If it wants to keep a certain block of active users or paying customers (advertisers) who object to the speech if they don’t block it? Well, that is good business.

"Twitter can take down ONLY conservative speech and accounts, right?"

According to the inane ramblings of the conservatives who got blocked then realised they can’t grift as much on Gab? Yes. According to reality? No.

Also, political leaning has little to do with the bans in most cases. As I often say, if white supremacists are getting banned and you notice that they all identify as "conservative", your problem is not that your "team" is being blocked, it’s that all the assholes are on your "team".

ECA (profile) says:

What is the Hate in hate speech?

It is telling that the hate in hate speech tends to be misinformation, or just finger-pointing at someone to blame, 90% of the time. An excuse for why your life isn’t what you think it could be. With a bit of listening and understanding (hard to get from some), a lot of it could be erased, I would hope.
There are ideals this nation has had to try and prove, and they do work; what doesn’t work is finding jobs for everyone. We have learned that educated persons tend to have fewer problems in life. They can find more jobs. Tech schools can lead to a lot more jobs.
But the controls in this system are a bit stupid. Creating a small business that can grow bigger isn’t easy in this nation. For all the ways we have tried to help start more and more work, something in the background keeps happening: the companies get pushed out, can’t compete with larger, cheaper stores (20 miles away), or, if they start doing well, get bought out (ask MS and Apple about this), closed, and sold off for parts.
This nation and others have had job problems for over a century, with few ways to solve them (love those wars). But the ideal of the corporation tends to be a company that costs nothing to run, with all the money going into upper management’s paychecks.

