
Katie Couric Is Wrong: Repealing Section 230 Won’t Stop Online Misinformation

from the misinformation-about-misinformation dept

Katie Couric recently claimed that repealing Section 230 would help combat online misinformation. The problem is, she couldn’t be more wrong. Worse, as a prominent voice, she’s contributing to the widespread misinformation around Section 230 herself.

A few years ago, for reasons that are unclear to me, Katie Couric chaired a weird Aspen Institute “Commission on Disinformation,” which produced a report on how to tackle disinformation. The report was, well, not good. It was written by people with no real experience tackling issues related to disinformation and it shows. As we noted at the time, it took a “split the baby” approach to trying to deal with disinformation. It described how there were no good answers, that doing anything might make the problem worse, and then still suggested that maybe repealing Section 230 for certain kinds of content (not clearly defined) might help.

The report’s recommendations were a mix of unworkable and nonsensical ideas, betraying the authors’ lack of true expertise on the complex issues and, more importantly, the tradeoffs around online disinformation.

Repealing Section 230 would not magically solve misinformation online. In fact, it would likely make the problem worse. Section 230 is what allows websites to moderate content and experiment with anti-misinformation measures, without fear of lawsuits. Removing that protection would incentivize sites to take a hands-off approach, or shut down user content entirely. The end result would be fewer places for online discourse, dominated by a few tech giants – hardly a recipe for truth.

Still, it appears that Couric is now presenting herself as an expert on disinformation. The NY Times Dealbook has a series of “influential people” supposedly “sharing their insights” on big topics of the day, and they asked Couric about disinformation. Her response was that she was upset Section 230 won’t be repealed.

What is the best tool a person has to combat misinformation today?

There are many remedies for combating misinformation, but sadly getting rid of Section 230 and requiring more transparency by technology companies may not happen.

But again, that only raises serious questions about how little she actually understands the role of Section 230 and how it functions. The idea that repealing Section 230 would be a remedy for combating misinformation is misinformation itself.

Remember, Section 230 is what frees companies to try to respond to and combat misinformation. There are many market forces that push companies to respond to misinformation: the loss of users, the loss of advertisers, the rise of competition. Indeed, we’re seeing all three of those occurring these days as ExTwitter and Facebook have decided to drop any pretense of trying to combat misinformation.

But then you need Section 230 to allow websites that actually are trying to combat misinformation to apply whatever policies they can come up with. It’s what allows them to experiment and to adjust in the face of ever sneakier and ever more malicious users trying to push misinformation.

Without Section 230, each decision and each policy could potentially lead to liability. This means that instead of having moderation teams focused on what will make for the best community overall, you have legal teams focused on what will reduce liability or threats of litigation.

The underlying damning fact here is that the vast majority of misinformation is very much protected speech. And it needs to be if you want to have free speech. Otherwise, you have people like incoming President Trump declaring any news that is critical of him “fake news” and taking legal action over it.

On top of that, the standard under the First Amendment is that if there is violative content hosted by an intermediary (such as a bookseller), there needs to be actual knowledge not just that the content exists, but that it somehow violates the law.

The end result, then, is that if you repeal Section 230, you don’t end up with less misinformation. You almost certainly end up with way more. Websites will be encouraged to avoid making moderation decisions, because everything will need to be reviewed by an expensive legal team that will caution against most of them. It also creates incentives to avoid even reviewing content, out of fear that a court might deem any moderation effort to be “actual knowledge.”

Thus, the websites that continue to host third-party user-generated content are likely to do significantly less trust & safety work, because the law is saying that if they continue to do that work, they may face greater legal threats for it. That won’t lead to less misinformation, it will lead to more.

The main thing that repealing Section 230 would do is probably lead to many fewer places willing to host third-party content at all, because of that kind of legal liability. Many online forums that want to support communities in a safe and thoughtful way will realize that the risk of liability is too great, and will exit the market (or never enter at all).

So the end result is that you have basically wiped the market of upstarts, smaller spaces, and competitors and left the market to Mark Zuckerberg and Elon Musk. I’m curious if Katie Couric thinks that’s a better world.

Indeed, the only spaces that will remain are those that take the path described above, of limiting their moderation decisions to the legally required level. Only a few sites will do this, and they will quickly become garbage sites that users and advertisers won’t be as interested in participating in.

So we have more power given to Zuck and Musk, fewer competitive spaces, and the remaining sites are incentivized to do less content moderation. Plenty of experts have explained this, including those listed as advisors to Couric’s commission.

I can guarantee that she (or whichever staffers actually handled this issue) was told about this impact. But she seems to have internalized just the “repeal 230” part, which is fundamentally backwards.

That said, I actually do think that the rest of her answer is a pretty good summary of what the real response needs to be: better education, better media literacy, and better teaching people how to fend for themselves against attempts to mislead and lie to them.

As a result, it’s mostly up to the individual to be vigilant about identifying misinformation and not sharing it. This will require intensive media literacy, which will help people understand the steps required to consider the source. That means investigating websites that may be disseminating inaccurate information and understanding their agendas, second-sourcing information, and if it’s an individual, learning more about that person’s background and expertise. Of course, this is all time-consuming and a lot to ask of consumers, but for now, I ascribe to the Sy Syms adage: “An educated consumer is our best customer.”

But, of course, the semi-ironic point in all of this is that having Section 230 around makes that more possible. Without Section 230, we have fewer useful resources to help teach media literacy. We have fewer ways of educating people on how to do things right.

For example, Wikipedia has made clear that it cannot exist without Section 230, and it has become a key tool in information literacy these days (which is ironic, given that in its early days it was widely accused of being a vector of misinformation).

Combating online misinformation is a complex challenge with no easy answers. But despite Couric’s claims, repealing Section 230 is the wrong solution. It would lead to less content moderation, more concentrated power in the hands of a few tech giants, and, ultimately, even more misinformation spreading unchecked online. Policymakers and thought leaders need to move beyond simplistic soundbites and engage with the real nuances of these issues.

Katie Couric is a big name with a big platform. Misinforming the public about these issues does everyone a real disservice.

Now, maybe the NY Times can ask actual experts who understand the tradeoffs, rather than the famous talking head who doesn’t, next time they want to ask questions about complex and nuanced subjects? I mean, that would involve not spreading misinformation about Section 230, so probably not.



Comments on “Katie Couric Is Wrong: Repealing Section 230 Won’t Stop Online Misinformation”

117 Comments
This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

maybe the NY Times can ask actual experts who understand the tradeoffs, rather than the famous talking head who doesn’t, next time they want to ask questions about complex and nuanced subjects?

That would involve actually talking to people with expertise. In the age of Trumpism, that is a huge no-no⁠—especially for news outlets that want to stay in the good graces of Dear Leader.

This comment has been deemed insightful by the community.
Anonymous Coward says:

“Katie Couric is a big name with a big platform. Misinforming the public about these issues does a real disservice to the issue.”

It does more than that, it might even give lawmakers more encouragement to repeal it.

…Then again, we’ve had much more politically influential people talk about it before, and yet 230 is still here, so maybe not.

Still, frustrating to see this kind of misinformation from a supposed expert in tackling misinformation.

Anonymous Coward says:

I’m pretty sure the SC & other courts would not allow that to happen.

Regulating 230 is complicated, & takes a LONG time, about 2 years, for repeal or reform to be implemented.

Besides, look at the bills they’ve tried to pass into law to limit 230. They failed, & it’s likely they will fail again in the 2025-26 Session.

This comment has been flagged by the community.

Koby (profile) says:

Re: Re:

The original court case that started this all, Stratton Oakmont v. Prodigy, wasn’t based on the First Amendment. The court found that a publisher was liable for the postings of its users, as long as it engaged in editorial control (i.e., moderating the boards). There is no First Amendment right to fob off liability.

So the only thing that gives social media companies this right is Section 230, not the Bill of Rights. Challenging a repeal of this portion of the 1996 CDA cannot realistically happen, considering that it was never a constitutional right in the first place.

Stephen T. Stone (profile) says:

Re: Re: Re:6

So section 230 COULD be defended on first amendment grounds, then?

It already is. The First Amendment is what gives services like Twitter the right to moderate speech however the powers-that-be for those services want; 230 is what makes sure those services (and their powers-that-be) don’t face legal liability for those decisions. Anyone who wants a repeal of 230 wants one of three things: the ability to file service-destroying Steve Dallas lawsuits that don’t get short-circuited before they start, the ability to force services into hosting speech they would otherwise refuse to host, or both of the above.

This comment has been flagged by the community.

Koby (profile) says:

Re: Re: Re:2

No, not screwed. Things would revert back to the system that existed prior to the 1996 CDA. Which was the Cubby v. Compuserve model. This 1991 court case found that hosting companies did not have liability, as long as they did not moderate. I suppose some people who are averse to considering opposing viewpoints would freak out. But the internet existed prior to 1996, and it could function as such once again.

Many of us Section 230 reformers actually like most of it, particularly (c)(1), and we simply want to remove the “otherwise objectionable” language of (c)(2)(A), thereby allowing moderation for the original intent purpose of removing obscenity and pornography. Hopefully if that was the extent of the reform, then it would alleviate concerns even more.

This comment has been flagged by the community.

Koby (profile) says:

Re: Re: Re:4

It would likely depend on how much moderation is occurring. Specifically for Discord, I’m not sure because I’m not familiar with what their moderation efforts are. But basically, if Discord doesn’t patrol servers, read messages, listen in on voice chat, etc., then probably nothing would change.

One benefit might be if the platform does not moderate, and only the Discord server folks with admin could moderate, then that would further insulate the platform, thereby removing liability and continuing everything as it is right now.

Stephen T. Stone (profile) says:

Re: Re: Re:3

Yes or no: If a Twitter user posts a racist statement that doesn’t use any racial slurs, foul language, or otherwise “obscene” content, should Twitter have the right to remove the statement and/or ban the user?

  • If “yes”: Repealing 230 would make that impossible without putting liability for all other such statements that Twitter doesn’t remove onto Twitter instead of the end users.
  • If “no”: Congratulations, you believe in compelled involuntary association between private entities under threat of government intervention and punishment.
Stephen T. Stone (profile) says:

Re: Re: Re:

Yes or no: Should Twitter be held liable if a user posts a defamatory statement without anyone at Twitter (including Elon Musk) having prior knowledge of the statement or the user’s intent to post that statement?

  • If “yes”: Should a hardware store be held liable if a customer who buys a hammer from that store uses it to bludgeon someone to death?
  • If “no”: Section 230 is what protects Twitter from being sued for third-party speech that it didn’t directly help create or publish, so what’s the big fucking deal that some people can’t break Twitter with a Steve Dallas lawsuit?

This comment has been flagged by the community.

This comment has been flagged by the community.

Stephen T. Stone (profile) says:

Re: Re: Re:5

And therein lies the problem: If Prodigy could be held legally liable for content it didn’t moderate, all other services could, too. The Internet as we know it today wouldn’t exist without 230 giving services the protection they needed to moderate third-party speech as they saw fit. I get that you might be more than happy with every interactive web service being a 4chan clone, but I assure you that your position isn’t as popular (or smart) as you believe it is.

Stephen T. Stone (profile) says:

Re: Re: Re:7

Just to be clear, are you arguing that interactive web services should be held legally liable for their moderation decisions and therefore should never moderate any speech⁠—including legally protected yet widely reviled speech such as spam, pornography, and racial slurs⁠—in any way whatsoever?

Arianity says:

Re: Re: Re:8

Just to be clear, are you arguing that interactive web services should be held legally liable for their moderation decisions

They’re just pointing out that Prodigy and Cubby don’t conflict, and you’re misunderstanding which case said what. (Prodigy actually states this: Let it be clear that this Court is in full agreement with Cubby and Auvil)

It would’ve led to a bad precedent, but Prodigy didn’t upend anything about Cubby. There was a difference in the underlying facts of the case (Cubby didn’t moderate at all, Prodigy did). Under Prodigy (and first established in Cubby), if you did no moderation at all, you’re still not liable.

What establishes the liability in Prodigy is the editorial control. If you don’t moderate there isn’t editorial control. To quote: PRODIGY’s conscious choice, to gain the benefits of editorial control, has opened it up to a greater liability than CompuServe and other computer networks that make no such choice.

(Cubby also mentions this, but in less detail: CompuServe has no more editorial control over such a publication than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so.)

Stephen T. Stone (profile) says:

Re: Re: Re:9

What establishes the liability in Prodigy is the editorial control. If you don’t moderate there isn’t editorial control.

Techdirt deleted a bunch of spam comments today. Yes or no: Should Techdirt be held legally liable for all the spam comments that are still on the site? Because the problem with the Prodigy ruling is that, if Prodigy had become national precedent, the answer would unequivocally be “yes”. 230 makes sure that answer is “no”. Any change to 230 would push the law in the direction of Prodigy and make moderating third-party speech on any interactive web service—including Techdirt’s comments sections—enough of a burden that the smaller services out there—including Techdirt’s comments sections—would likely shut down to avoid a Steve Dallas lawsuit that would shut things down anyway.

Your mythical “magic change” to 230 that somehow keeps the current status quo intact doesn’t exist. Any change to 230 will destroy the status quo of the Internet as we know it. But feel free to keep suggesting otherwise. Maybe you’ll fool some dumbass who doesn’t already know your anti-230 arguments are complete horseshit.

Arianity says:

Re: Re: Re:10

Yes or no: Should Techdirt be held legally liable for all the spam comments that are still on the site? Because the problem with the Prodigy ruling is that, if Prodigy had become national precedent, the answer would unequivocally be “yes”

No. I know, and Prodigy is bad because of this. So is Cubby. I do not like Prodigy. The definition it uses to classify a publisher is insane and unworkable.

Your mythical “magic change” to 230 that somehow keeps the current status quo intact doesn’t exist.

The comment you’re responding to isn’t pushing for any changes, so I’m not sure why you’re bringing it up? If you’re going to clown on Koby (and you absolutely should), I want you to do it while correctly knowing Cubby/Prodigy, that’s all I was getting at. Making a minor historical mistake comparing Cubby to Prodigy makes him look better, and he doesn’t deserve that. That said:

Any change to 230 will destroy the status quo of the Internet as we know it.

I don’t really see how you can unequivocally say that for any change. Any change that doesn’t have guardrails, or is too major, sure, but “any at all” is really broad. You shit on changes for being mythical/magic, but that’s just as handwave-y. Especially when asking for (and being given) a specific list of proposed changes.

This comment has been deemed insightful by the community.
Strawb (profile) says:

Re: Re: Re:

As usual, koward, you’re wrong. For starters, the creation of section 230 isn’t just based on Stratton Oakmont v. Prodigy, but also on Cubby, Inc v. CompuServe, Inc four years prior.

So the only thing that gives social media companies this right is Section 230, not the Bill of Rights.

Wrong again, koward. The First Amendment covers the right to free association, and that association includes speech. So privately owned social media companies are as free to dictate what can or can’t be said on their platforms as you are in dictating the same thing in your own home.

Here’s Mike Masnick to explain:
Internet companies are sued all the time. Section 230 merely protects them from a narrow set of frivolous lawsuits, in which the websites are sued […] for the moderation choices they make, which are mostly protected by the 1st Amendment anyway (but Section 230 helps get those frivolous lawsuits kicked out faster).

Challenging a repeal of this portion of the 1996 CDA cannot realistically happen, considering that it was never a constitutional right in the first place.

QED, koward.

Arianity says:

Re: Re: Re:2

The First Amendment covers the right to free association, and that association includes speech. So privately owned social media companies are as free to dictate what can or can’t be said on their platforms as you are in dictating the same thing in your own home.

A privately owned social media company can dictate who can post, but that doesn’t make it immune to liability. Under just the 1st Amendment, it can still be liable for publisher liability (or weaker forms of liability like distributor liability).

A paper newspaper can control who can or cannot publish in the newspaper. It still takes on publisher liability for what is published by writers. (ditto for book stores/libraries and distributor liability)

Stephen T. Stone (profile) says:

Re: Re: Re:3

You’re missing the key difference between newspapers and social media services: Newspapers choose exactly what speech (including third-party speech) goes into a given paper before it’s published, whereas social media services don’t choose what third-party speech is posted on a given service before it’s posted. 230 exists to protect those services from being held legally liable for speech that a given service neither created, published, nor had a direct hand in creating/publishing. If you can think of a good reason why that shouldn’t be the case, by all means: Share it with the rest of us.

Arianity says:

Re: Re: Re:4

You’re missing the key difference between newspapers and social media services: Newspapers choose exactly what speech (including third-party speech) goes into a given paper before it’s published, whereas social media services don’t choose what third-party speech is posted on a given service before it’s posted

Yep? It just wasn’t relevant to the specific point they were making, w.r.t. freedom of association vs liability. 1A freedom of association doesn’t preclude liability.

230 exists to protect those services from being held legally liable for speech that a given service neither created, published, nor had a direct hand in creating/publishing. If you can think of a good reason why that shouldn’t be the case, by all means: Share it with the rest of us.

No, I generally like that part of 230. To the extent that I complain about 230, it’s when it applies to content where the service is acting like a traditional publisher (or distributor), but still gets full protection. It’s not a coincidence that all the examples I reference are like this one.

230 exists to protect those services from being held legally liable for speech that a given service neither created, published, nor had a direct hand in creating/publishing

230 also protects services who publish third party content, including if they had a direct hand (that can be reviewing, editing, deciding whether to publish or not, etc). It only doesn’t protect first party content.

230 protects websites for their publishing activity of third-party content. It clearly debunks the completely backwards notion that you are “either a platform or a publisher” and only “platforms” get 230 protections. In Barnes, the court is quite clear that what Yahoo is doing is publishing activity, but since it is an interactive computer service and the underlying content is from a third party, it cannot be held liable as the publisher for that publishing activity under Section 230. link

Arianity says:

Re: Re: Re:

Stratton Oakmont v. Prodigy, wasn’t based on the First Amendment.

That’s not quite right. While Prodigy doesn’t explicitly mention the First Amendment, whether Prodigy was a publisher or distributor (and the appropriate liability) was based on the First Amendment. (And as mentioned in other comments, Cubby came first. And it does actually explicitly talk about 1A: CompuServe has no more editorial control over such a publication than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so. “First Amendment guarantees have long been recognized as protecting distributors of publications…. Obviously, the national distributor of hundreds of periodicals has no duty to monitor each issue of every periodical it distributes. Such a rule would be an impermissible burden on the First Amendment.” Lerman v. Flynt Distributing Co…)

Ultimately, Prodigy was held to be a publisher (which comes with more liability than a distributor) because it exerted more editorial control than Compuserve did. But the underlying analysis in the case of whether someone is a publisher vs distributor, and what liability is allowed for each, is 1A based.

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
BernardoVerda (profile) says:

Re:

Yes. It is indeed far from the first time the New York Times has published articles misinforming the public about how Section 230 is a bad thing. It’s become a clear and consistent pattern.

Yet it seems {ahem} distinctly unlikely that the people in charge at the New York Times, one of the nation’s largest, oldest and most prestigious news outlets and media institutions, somehow simply don’t actually understand basic laws directly relevant to the media industry.

At some point, one is forced to consider the possibility that this “misunderstanding” of basic media law is a deliberate ploy — and ask what purpose that ploy may serve.

Anonymous Coward says:

Considering the whole idea of ‘stopping’ misinformation in the context of repealing Section 230 just means ‘wanting to be able to sue people for lying’, I don’t know if articles like Couric’s are misinformed so much as driven by a very specific idea of what the ‘solution’ is rather than mere ignorance. There’s often this very specific idea of wanting Consequences for Bad Posters when you start scratching at this kind of argument.

This comment has been flagged by the community.

This comment has been flagged by the community.

ECA (profile) says:

repeal?

What is A’ truth, compared to REAL truth?
The Problems come in groups. The First is religion and dealing with what SOME think is the Only truth, without understanding HOW we got to where we are today.
Raise your hands if you wish to remove all LAWS that aren’t from the Bible. Do you know how many Corps would Jump on that band wagon? How about Food Processors?
If we do it ‘THEIR WAY’, we might as well Just goto being Muslim.
Are there any women here, that Understand the Old Bible and the standing of Women? Your rights would be GONE.
Back to the days you needed/Need 6 men to declare you were Raped, Before you are Punished for BEING raped??

Look back at the 1960’s Newspapers and how they WERE controlled with News about 2 wars. They Never contested or they would have been Ruined.
And even After we demanded an OPEN gov. and freedom of Info in this country, Where is it?

This comment has been flagged by the community.

ECA (profile) says:

Re: Which part?

Every group has its own truth, they think.
With no reason/way to have the right to remove what is Probably Wrong. What choices do you have.
Post whats Stomped all over your site, even when its not Quite true to facts?

Then there is the religious groups that would Love to have Basic fundamental laws, but dont realize the reality of the complexity of Life and times. And the Corps would Love for Many laws to disappear along with going back to basics Bible laws.
So you have Lots of backing for these Fundamentalists.
Then as an example, is how the Gov. in the 1960’s Covered up what they were doing in Vietnam and Korea, with all the News services, And Lying to the People of the USA.(not the Whole story was told) And even returning Military were astounded by What they were hearing, and Passed info around. And the Protests started, but you didnt see 1/2 of the news showing the protests. It was the Hippies vs the National guard.
How bad do we want freedom of the press? Check out 60 minutes and the law suits they have had. It took them years to get enough info and data to have a program to tell the truth, and IF’ taken to court, they had Some protection.

This comment has been flagged by the community.

William Null says:

The important bit.

“Removing that protection would incentivize sites to take a hands-off approach”

Yeah, that’s kinda the point. Katie is certainly misguided here, but it works for the purpose – to return to the glory days of the early Internet before CDA was even considered. When anyone could say anything on any Usenet (early forums) group without the fear of reprimand of any kind. And if some sites will shut down UGC (user generated content) parts? Good for them, and also some sites can’t really do that without making their entire existence pointless, sites like YouTube, X, Facebook, Instagram – those WILL HAVE TO take the hands off approach to moderation and the Internet (and society) will be all the better for it.

This comment has been flagged by the community.

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
Teka says:

Re: Re: Re:

“forums are completely unmoderated with people running them not even looking at what’s being posted. Similarly to stuff like Compuserve forums, for example.”

You’re just describing an open sewer, boiling over with filth and decay.

Why would anyone use or operate any kind of forum when it means giving free rein to the lowest scum?

“The topic is neopets? oops, all nazis and hardcore snuff pics! Better not do any moderating ever because the trolls doing the posting are waiting to sue you”

unworkable.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Oh, you poor fool.

those WILL HAVE TO take the hands off approach to moderation and the Internet (and society) will be all the better for it

If you truly believe this, I have a challenge for you: Go to 4chan’s /b/ board and stay there for one uninterrupted hour. Here are the ground rules:

  • You must read the entire text of every post you see.
  • You must keep all images visible at all times and view as many of them as possible at full size.
  • You must view every video file that you come across in its entirety.
  • You must not use any adblockers, userscripts, or browser extensions to hide offensive content.
  • The only exception to these rules can, will, and should be made for any CSAM you may run across as you browse.

Also: You don’t have to view individual threads; you need only browse the front page of that board and refresh it for updates once you’re done reading the page in its current state.

If you can last the full hour without wondering why anyone would want to host that speech, to post that speech, to experience that speech every day in every corner of the Internet? Only then will you have earned the right to argue in favor of turning all other sites into 4chan. And if you can’t? Well…

…don’t say I didn’t warn you.

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
Rocky (profile) says:

Re: Re: Re:3

In the end it all comes down to “I want to be able to be an asshole in public without consequence”.

He talked about the “glory days of the internet,” but even back then this shit didn’t really fly, and the scumbags who got squished back then usually set up their own little cozy corner for the like-minded instead of being entitled little snowflakes having meltdowns and demanding a place at the adults’ table.

This comment has been deemed insightful by the community.
Rocky (profile) says:

Re:

Yeah, that’s kinda the point. Katie is certainly misguided here, but it works for the purpose – to return to the glory days of the early Internet before CDA was even considered.

Every time I hear someone speak of the “glory days of the internet,” I hear someone who thinks running a forum with a couple of hundred users is the same as running social media platforms that have tens or hundreds of millions of users. Someone who has forgotten that the admins of the early services and BBSes booted people regularly, ruling their fiefdoms like tin-pot dictators. Someone who has forgotten that the “good old days” had barriers to entry: you usually needed some interest in computers, which limited what kind of people were on the internet back then.

When anyone could say anything on any Usenet (early forums) group without the fear of reprimand of any kind.

That’s not how it worked. You could post whatever you wanted, but there was no guarantee it would be seen, because you couldn’t know if some server admin somewhere simply refused to carry that group or filtered your messages out of the feed.

And if some sites will shut down UGC (user generated content) parts? Good for them

Not much for free speech, are you? Seems to me that you are arguing that sites you don’t like should be disappeared from the internet.

sites like YouTube, X, Facebook, Instagram – those WILL HAVE TO take the hands off approach to moderation and the Internet (and society) will be all the better for it.

Another one who thinks the heckler’s veto is the greatest thing since sliced bread. Tell me, do you even understand what happens when certain people are free to do as they wish without consequences? Taking that stance can only be explained by your either not being smart enough to understand human nature and the worst-people problem, or being one of the people I mentioned.

Society will never be better for it, because if those with antisocial behavior are allowed to force themselves on others without consequence society will suffer for it. Go read some history, or psychology for that matter, and you might actually learn something.

This comment has been flagged by the community.

William Null says:

Re: Re:

The only way out of Masnick’s Impossibility Theorem that says that moderation at scale is impossible to do well, is to do no moderation in the first place.

As for free speech, I support individual right to free speech. Everyone should be allowed to say anything anywhere. Companies aren’t individuals, individuals work for them, but there need to be separation of personal beliefs from workplace stuff.

Rocky (profile) says:

Re: Re: Re:

The only way out of Masnick’s Impossibility Theorem that says that moderation at scale is impossible to do well, is to do no moderation in the first place.

Ah, so if we can’t do a thing well, we shouldn’t do it at all? Interesting position. I do hope you live by that rule and apply it to everything you intend to do, which would mostly mean doing nothing.

As for free speech, I support individual right to free speech. Everyone should be allowed to say anything anywhere.

My property, my rules, and I reserve the right to kick you out for any reason at all, just like any other property owner would.

Companies aren’t individuals, individuals work for them, but there need to be separation of personal beliefs from workplace stuff.

Companies consist of individuals, and they do have rights, like freedom of association, just like any other group of people who decided to associate. Don’t like it? Tough shit, but that is how reality actually works, because normal people don’t force themselves on others.

All I hear from you is entitlement, what you want is more important than other people’s rights.

Stephen T. Stone (profile) says:

Re: Re: Re:

Companies aren’t individuals, individuals work for them, but there need to be separation of personal beliefs from workplace stuff.

Yes or no: If an individual posts a racial slur on social media and creates a negative association between the individual and their place of work, should that company be forced by law to keep employing that individual anyway?


This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:3

A resounding YES

Just to be clear: Do you believe the government should force a company to remain associated with someone whose words/deeds create a negative association between that person and the company, such that average people associate the asshole with the company and stop associating themselves (and their money) with the company?¹

just because someone is hired by a certain company doesn’t make every single opinion of that individual the opinion of the company as a whole

Doesn’t matter if the company distances itself from the employee by issuing a “this is their opinion, not ours” statement; as long as the employee remains on the payroll, people will link that employee (and their bullshit) to the company. That will only ever generate negative emotions about the company until that employee is fired.

¹ — If you don’t get the point, you may want to learn about the right of association in the United States. You may also want to learn about at-will/“right-to-work” laws.

This comment has been deemed insightful by the community.
Anonymous Coward says:

[…] the standard under the First Amendment is that if there is violative content hosted by an intermediary (such as a bookseller), there needs to be actual knowledge not just that the content exists, but that it somehow violates the law.

It seems that many lawmakers think that platforms like social networks are not intermediaries but authors, believing that algorithms and rankings are the main way users access content, forgetting that most people are actually looking for something specific.

It’s much like 20 years ago, when people started using search engines to access the web (instead of just following links): it was difficult for some to understand that many users were really searching for something, rather than opening the browser and expecting content to magically fill their screens.

They need to clearly understand that a social network is not just a feed or a homepage with curated content, but a gazillion posts published every day.

This comment has been flagged by the community.

Arianity says:

Remember, Section 230 is what frees companies to try to respond to and combat misinformation.

It also frees them to not respond to or combat it. 230 forecloses that. She may have been wrong in past statements, but what she said in this quote is correct; 230 makes it a nonstarter.

I get that you may not like her based on past statements, but what she said in this quote is anodyne. Unless there’s a longer quote somewhere else.

(which is ironic, given that in its early days it was widely accused of being a vector of misinformation).

It is (and can be) both. A freedom can be both used and misused.

sidenote: You forgot the link: https://www.nytimes.com/2024/12/11/business/dealbook/leaders-advice-insights.html

This comment has been deemed insightful by the community.
Strawb (profile) says:

Re:

I get that you may not like her based on past statements, but what she said in this quote is anodyne.

No. If anything, repealing it would make it harder to combat misinformation, since platforms would be more likely to take a hands-off approach out of fear of liability. Especially smaller platforms, which can’t afford lengthy court cases.

That One Guy (profile) says:

As reliably as gravity

And the streak remains unbroken: it is impossible to argue against 230 honestly and with fact/evidence/reality-based arguments, because either no one has found such arguments or they don’t exist.

That to date no one has come up with a fact-based argument for why scrapping 230 would improve things really is the greatest indicator of just how insanely good the law was and is.


Stephen T. Stone (profile) says:

Re: Re: Re:

I’ve never argued for scrapping 230 entirely

Yes, you’ve argued that someone should change 230 in a way that can somehow magically keep the status quo of the Internet intact while…well, to be fair, it doesn’t really matter why you want the change, given that Republicans and Democrats both have different reasons for changing 230 but also think it can be done without wrecking the open Internet.

This comment has been flagged by the community.

Matthew M Bennett says:

You have no experience "tackling misinformation"

I basically just show up here to see what dipshit opinion you’re voicing today. I see you’re posting less, that’s good.

The new administration and the subsequent dismantling of the censorship industrial complex must really be hurting your revenue stream, huh?

All your ideas are bad and most people have realized it by now.

This comment has been flagged by the community.

William Null says:

Just FYI

Once 230 is ripped out and the information is flowing freely, you won’t be able to remove comments anymore, or even hide them as you do now. Make no mistake, 230 WILL be ripped out; it has already been decided. And if you decide to remove comments entirely? Nobody will visit the site anymore, because people don’t really read websites for the articles; they’re there for the entertaining side show of freaks in the comment section.

Rocky (profile) says:

Re:

You don’t really understand what 230 is, do you?

A site can tell you to fuck off, moderate you, or shadow ban you all it wants without 230 existing – it’s the site’s 1A rights that allow it to do that, not 230.

The only thing 230 does is place the liability on the speaker, sparing sites an endless litany of SLAPP suits. So your dream of “information flowing freely” is just the byproduct of drug use. Instead, one of the things likely to happen is that you’ll have to identify yourself everywhere before being allowed to comment, so the site can go after you for monetary damages if it gets sued for something you did. Other sites will just stop accepting UGC due to the risk of being sued for speech some troglodyte vomited out.

Try to educate yourself on how the world actually functions instead of making stuff up as you go, because it makes you look stupid.
