Katie Couric Is Wrong: Repealing Section 230 Won’t Stop Online Misinformation
from the misinformation-about-misinformation dept
Katie Couric recently claimed that repealing Section 230 would help combat online misinformation. The problem is, she couldn’t be more wrong. Worse, as a prominent voice, she’s contributing to the widespread misinformation around Section 230 herself.
A few years ago, for reasons that are unclear to me, Katie Couric chaired a weird Aspen Institute “Commission on Disinformation,” which produced a report on how to tackle disinformation. The report was, well, not good. It was written by people with no real experience tackling issues related to disinformation and it shows. As we noted at the time, it took a “split the baby” approach to trying to deal with disinformation. It described how there were no good answers, that doing anything might make the problem worse, and then still suggested that maybe repealing Section 230 for certain kinds of content (not clearly defined) might help.
The report’s recommendations were a mix of unworkable and nonsensical ideas, betraying the authors’ lack of true expertise on the complex issues and, more importantly, the tradeoffs around online disinformation.
Repealing Section 230 would not magically solve misinformation online. In fact, it would likely make the problem worse. Section 230 is what allows websites to moderate content and experiment with anti-misinformation measures, without fear of lawsuits. Removing that protection would incentivize sites to take a hands-off approach, or shut down user content entirely. The end result would be fewer places for online discourse, dominated by a few tech giants – hardly a recipe for truth.
Still, it appears that Couric is now presenting herself as an expert on disinformation. The NY Times Dealbook has a series of “influential people” supposedly “sharing their insights” on big topics of the day, and they asked Couric about disinformation. Her response was that she was upset Section 230 won’t be repealed.
What is the best tool a person has to combat misinformation today?
There are many remedies for combating misinformation, but sadly getting rid of Section 230 and requiring more transparency by technology companies may not happen.
But again, that only raises serious questions about how little she actually understands the role of Section 230 and how it functions. The idea that repealing Section 230 would be a remedy for combating misinformation is misinformation itself.
Remember, Section 230 is what frees companies to try to respond to and combat misinformation. There are many market forces that push companies to respond to misinformation: the loss of users, the loss of advertisers, the rise of competition. Indeed, we’re seeing all three of those occurring these days as ExTwitter and Facebook have decided to drop any pretense of trying to combat misinformation.
But then you need Section 230 to allow websites that actually are trying to combat misinformation to apply whatever policies they can come up with. It’s what allows them to experiment and to adjust in the face of ever sneakier and ever more malicious users trying to push misinformation.
Without Section 230, each decision and each policy could potentially lead to liability. This means that instead of having moderation teams focused on what will make for the best community overall, you have legal teams focused on what will reduce liability or threats of litigation.
The underlying damning fact here is that the vast majority of misinformation is very much protected speech. And it needs to be if you want to have free speech. Otherwise, you end up with people like incoming President Trump declaring any news that is critical of him to be “fake news” and taking legal action over it.
On top of that, the standard under the First Amendment is that if there is violative content hosted by an intermediary (such as a bookseller), there needs to be actual knowledge not just that the content exists, but that it somehow violates the law.
The end result, then, is that if you repeal Section 230, you don’t end up with less misinformation. You almost certainly end up with way more. Websites will be encouraged to avoid making moderation decisions, because every decision will need to be reviewed by an expensive legal team that will caution against most of them. It also creates an incentive to stop reviewing content at all, out of fear that a court might deem any moderation effort to be “actual knowledge.”
Thus, the websites that continue to host third-party user-generated content are likely to do significantly less trust & safety work, because the law is saying that if they continue to do that work, they may face greater legal threats for it. That won’t lead to less misinformation, it will lead to more.
The main thing that repealing Section 230 would do is probably lead to many fewer places willing to host third-party content at all, because of that kind of legal liability. Many online forums that want to support communities in a safe and thoughtful way will realize that the risk of liability is too great, and will exit the market (or never enter at all).
So the end result is that you have basically wiped the market of upstarts, smaller spaces, and competitors and left the market to Mark Zuckerberg and Elon Musk. I’m curious if Katie Couric thinks that’s a better world.
Indeed, the only spaces that will remain are those that take the path described above, of limiting their moderation decisions to the legally required level. Only a few sites will do this, and they will quickly become garbage sites that users and advertisers won’t be as interested in participating in.
So we have more power given to Zuck and Musk, fewer competitive spaces, and the remaining sites are incentivized to do less content moderation. Plenty of experts have explained this, including those listed as advisors to Couric’s commission.
I can guarantee that she (or whichever staffers actually handled this issue) was told about this impact. But she seems to have internalized just the “repeal 230” part, which is just fundamentally backwards.
That said, I actually do think that the rest of her answer is a pretty good summary of what the real response needs to be: better education, better media literacy, and better teaching people how to fend for themselves against attempts to mislead and lie to them.
As a result, it’s mostly up to the individual to be vigilant about identifying misinformation and not sharing it. This will require intensive media literacy, which will help people understand the steps required to consider the source. That means investigating websites that may be disseminating inaccurate information and understanding their agendas, second-sourcing information, and if it’s an individual, learning more about that person’s background and expertise. Of course, this is all time-consuming and a lot to ask of consumers, but for now, I ascribe to the Sy Syms adage: “An educated consumer is our best customer.”
But, of course, the semi-ironic point in all of this is that having Section 230 around makes that more possible. Without Section 230, we have fewer useful resources to help teach media literacy. We have fewer ways of educating people on how to do things right.
For example, Wikipedia has made clear that it cannot exist without Section 230, and it has become a key tool in information literacy these days (which is ironic, given that in its early days it was widely accused of being a vector of misinformation).
Combating online misinformation is a complex challenge with no easy answers. But despite Couric’s claims, repealing Section 230 is the wrong solution. It would lead to less content moderation, more concentrated power in the hands of a few tech giants, and, ultimately, even more misinformation spreading unchecked online. Policymakers and thought leaders need to move beyond simplistic soundbites and engage with the real nuances of these issues.
Katie Couric is a big name with a big platform. Misinforming the public about these issues does a real disservice to the issue.
Now, maybe the NY Times can ask actual experts who understand the tradeoffs, rather than the famous talking head who doesn’t, next time they want to ask questions about complex and nuanced subjects? I mean, that would involve not spreading misinformation about Section 230, so probably not.
Filed Under: content moderation, free speech, katie couric, misinformation, section 230


Comments on “Katie Couric Is Wrong: Repealing Section 230 Won’t Stop Online Misinformation”
That would involve actually talking to people with expertise. In the age of Trumpism, that is a huge no-no—especially for news outlets that want to stay in the good graces of Dear Leader.
Re:
Well, talking to Couric isn’t exactly going to do them favors in Fairyland, either.
Oh look, another person yapping about how section 230 (and thus the internet as a side-effect) needs to die.
They’re getting stale at this point, can we have some “experts” explain to the news media that section 230 ISN’T the legal devil, before lawmakers start getting too many ideas?
Re:
They’ve tried that over and over and over again.
The lawmakers still had (stupid) ideas.
Re: Re:
True, perhaps I overestimate the influence these people have on the actual lawmakers.
Re:
What do you think you just read? Now go ahead and forward it to news media reps of your choice.
“Katie Couric is a big name with a big platform. Misinforming the public about these issues does a real disservice to the issue.”
It does more than that, it might even give lawmakers more encouragement to repeal it.
…Then again, we’ve had much more politically influential people talk about it before, and yet 230 is still here, so maybe not.
Still, it’s frustrating to see this kind of misinformation from a supposed expert in tackling misinformation.
I’m pretty sure the SC & other courts would not allow that to happen.
Regulating 230 is complicated, & takes a LONG time, about 2 years, for repeal or reform to be implemented.
Besides, look at the bills they’ve tried to pass into law to limit 230. They failed, & it’s likely they will fail again in the 2025-26 Session
Re:
That does make me wonder if a repeal law can be challenged in court.
Given that you would be effectively limiting people’s First Amendment rights online.
This comment has been flagged by the community.
Re: Re:
The original court case that started this all, Stratton Oakmont v. Prodigy, wasn’t based on the First Amendment. The court found that a publisher was liable for the postings of its users, as long as it engaged in editorial control (i.e., moderating the boards). There is no First Amendment right to fob off liability.
So the only thing that gives social media companies this right is Section 230, not the Bill of Rights. Challenging a repeal of this portion of the 1996 CDA cannot realistically happen, considering that it was never a constitutional right in the first place.
Re: Re: Re:
So if section 230 gets repealed, we’re just kind of..Screwed, then?
At least until they realize the immediate economic consequences, that is.
Re: Re: Re:2
do be warned, you’re talking to koby, which is a known troll account here
Re: Re: Re:3
Doesn’t necessarily mean he’s wrong in this case.
Re: Re: Re:4
But he is wrong in this case, anyhow.
(And Koby is wrong so consistently, that it’s actually kind of impressive, in a very sad sort of way.)
Re: Re: Re:5
So section 230 COULD be defended on first amendment grounds, then?
That would make sense to me, but I suppose it would still be a bit of a stretch.
Re: Re: Re:6
It already is. The First Amendment is what gives services like Twitter the right to moderate speech however the powers-that-be for those services want; 230 is what makes sure those services (and their powers-that-be) don’t face legal liability for those decisions. Anyone who wants a repeal of 230 wants one of three things: the ability to file service-destroying Steve Dallas lawsuits that don’t get short-circuited before they start, the ability to force services into hosting speech they would otherwise refuse to host, or both of the above.
Re: Re: Re:7
Ohh, true.
I had kind of lost the fact in all the noise, but I remember now what makes 230 so important: it prevents people from being able to tank companies with BS lawsuits over their moderation.
This comment has been flagged by the community.
Re: Re: Re:2
No, not screwed. Things would revert back to the system that existed prior to the 1996 CDA. Which was the Cubby v. Compuserve model. This 1991 court case found that hosting companies did not have liability, as long as they did not moderate. I suppose some people who are averse to considering opposing viewpoints would freak out. But the internet existed prior to 1996, and it could function as such once again.
Many of us Section 230 reformers actually like most of it, particularly (c)(1), and we simply want to remove the “otherwise objectionable” language of (c)(2)(A), thereby allowing moderation for the original intent purpose of removing obscenity and pornography. Hopefully if that was the extent of the reform, then it would alleviate concerns even more.
Re: Re: Re:3
How would messenger platforms like Discord be affected by a repeal, if I may ask? Would they just, cease to exist? Would all kinds of online messaging/communication cease to exist?
Once again, I imagine not, but I found it worth bringing up.
This comment has been flagged by the community.
Re: Re: Re:4
It would likely depend on how much moderation is occurring. Specifically for Discord, I’m not sure because I’m not familiar with what their moderation efforts are. But basically, if Discord doesn’t patrol servers, read messages, listen in on voice chat, etc., then probably nothing would change.
One benefit might be if the platform does not moderate, and only the Discord server folks with admin could moderate, then that would further insulate the platform, thereby removing liability and continuing everything as it is right now.
Re: Re: Re:5
They do moderate (though not exactly as “proactively” or efficiently as they claim), but I suppose in this event, they could effectively just take their hands off of moderation and let the individual servers fend for themselves in that regard.
Re: Re: Re:6
“They do moderate (though not exactly as “proactively” or efficiently as they claim),” most of the time they just remove the reported messages and don’t do much afterwards
Re: Re: Re:4
why are you listening to koby?
Re: Re: Re:3
Yes or no: If a Twitter user posts a racist statement that doesn’t use any racial slurs, foul language, or otherwise “obscene” content, should Twitter have the right to remove the statement and/or ban the user?
Re: Re: Re:
Yes or no: Should Twitter be held liable if a user posts a defamatory statement without anyone at Twitter (including Elon Musk) having prior knowledge of the statement or the user’s intent to post that statement?
Re: Re: Re:2
Those do sound like pretty solid arguments if a repeal is attempted.
As it often happens, when you actually go into the details on how section 230 works, arguments against it tend to fall apart.
This comment has been flagged by the community.
Re: Re: Re:2
No, but you’re wrong about the remainder of your statement. The 1991 court case of Cubby v. Compuserve established that tech platforms with no knowledge of the defamation could not be held liable, which was 5 years prior to the 1996 CDA that established section 230. Please read up on your internet history.
Re: Re: Re:3
Stratton-Oakmont would’ve upended that state of affairs had it become national precedent, which is why 230 exists. You’re just mad that 230 prevents the government from forcing sites like Twitter to host speech they don’t want to host—like, say, yours.
This comment has been flagged by the community.
Re: Re: Re:4
Nope, because Prodigy actually moderated the content. Which means that they weren’t shielded from liability the way Compuserve was (which didn’t moderate).
Re: Re: Re:5
And therein lies the problem: If Prodigy could be held legally liable for content it didn’t moderate, all other services could, too. The Internet as we know it today wouldn’t exist without 230 giving services the protection they needed to moderate third-party speech as they saw fit. I get that you might be more than happy with every interactive web service being a 4chan clone, but I assure you that your position isn’t as popular (or smart) as you believe it is.
Re: Re: Re:6 You forgot something
If Prodigy hadn’t moderated anything at all whatsoever, the case would have ended up just like Compuserve. They opened themselves to liability the moment they removed the first message or banned the first person.
Re: Re: Re:7
Just to be clear, are you arguing that interactive web services should be held legally liable for their moderation decisions and therefore should never moderate any speech—including legally protected yet widely reviled speech such as spam, pornography, and racial slurs—in any way whatsoever?
Re: Re: Re:8
They’re just pointing out that Prodigy and Cubby don’t conflict, and you’re misunderstanding which case said what. (Prodigy actually states this: Let it be clear that this Court is in full agreement with Cubby and Auvil)
It would’ve led to a bad precedent, but Prodigy didn’t upend anything about Cubby. There was a difference in the underlying facts of the case (Cubby didn’t moderate at all, Prodigy did). Under Prodigy (and first established in Cubby), if you did no moderation at all, you’re still not liable.
What establishes the liability in Prodigy is the editorial control. If you don’t moderate there isn’t editorial control. To quote: PRODIGY’s conscious choice, to gain the benefits of editorial control, has opened it up to a greater liability than CompuServe and other computer networks that make no such choice.
(Cubby also mentions this, but in less detail: CompuServe has no more editorial control over such a publication than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so.)
Re: Re: Re:9
Techdirt deleted a bunch of spam comments today. Yes or no: Should Techdirt be held legally liable for all the spam comments that are still on the site? Because the problem with the Prodigy ruling is that, if Prodigy had become national precedent, the answer would unequivocally be “yes”. 230 makes sure that answer is “no”. Any change to 230 would push the law in the direction of Prodigy and make moderating third-party speech on any interactive web service—including Techdirt’s comments sections—enough of a burden that the smaller services out there—including Techdirt’s comments sections—would likely shut down to avoid a Steve Dallas lawsuit that would shut things down anyway.
Your mythical “magic change” to 230 that somehow keeps the current status quo intact doesn’t exist. Any change to 230 will destroy the status quo of the Internet as we know it. But feel free to keep suggesting otherwise. Maybe you’ll fool some dumbass who doesn’t already know your anti-230 arguments are complete horseshit.
Re: Re: Re:10
No. I know, and Prodigy is bad because of this. So is Cubby. I do not like Prodigy. The definition it uses to classify a publisher is insane and unworkable.
The comment you’re responding to isn’t pushing for any changes, so I’m not sure why you’re bringing it up? If you’re going to clown on Koby (and you absolutely should), I want you to do it while correctly knowing Cubby/Prodigy, that’s all I was getting at. Making a minor historical mistake comparing Cubby to Prodigy makes him look better, and he doesn’t deserve that. That said:
I don’t really see how you can unequivocally say that for any change. Any changes that don’t have guardrails, or are too major, sure, but “any at all” is really broad. You shit on changes for being mythical/magic, but that’s just as handwave-y. Especially when asking for (and being given) a specific list of proposed changes.
Re: Re: Re:
As usual, koward, you’re wrong. For starters, the creation of section 230 isn’t just based on Stratton Oakmont v. Prodigy, but also on Cubby, Inc v. CompuServe, Inc four years prior.
Wrong again, koward. The First Amendment covers the right to free association, and that association includes speech. So privately owned social media companies are as free to dictate what can or can’t be said on their platforms as you are in dictating the same thing in your own home.
Here’s Mike Masnick to explain:
“Internet companies are sued all the time. Section 230 merely protects them from a narrow set of frivolous lawsuits, in which the websites are sued […] for the moderation choices they make, which are mostly protected by the 1st Amendment anyway (but Section 230 helps get those frivolous lawsuits kicked out faster).”
QED, koward.
Re: Re: Re:2
A privately owned social media company can dictate who can post, but that doesn’t make it immune to liability. Under just the 1st Amendment, it can still be liable for publisher liability (or weaker forms of liability like distributor liability).
A paper newspaper can control who can or cannot publish in the newspaper. It still takes on publisher liability for what is published by writers. (ditto for book stores/libraries and distributor liability)
Re: Re: Re:3
You’re missing the key difference between newspapers and social media services: Newspapers choose exactly what speech (including third-party speech) goes into a given paper before it’s published, whereas social media services don’t choose what third-party speech is posted on a given service before it’s posted. 230 exists to protect those services from being held legally liable for speech that a given service neither created, published, nor had a direct hand in creating/publishing. If you can think of a good reason why that shouldn’t be the case, by all means: Share it with the rest of us.
Re: Re: Re:4
Yep? It just wasn’t relevant to the specific point they were making, w.r.t. freedom of association vs liability. 1A freedom of association doesn’t preclude liability.
No, I generally like that part of 230. To the extent that I complain about 230, it’s when it applies to content where the service is acting like a traditional publisher (or distributor), but still gets full protection. It’s not a coincidence that all the examples I reference are like this one.
230 also protects services who publish third party content, including if they had a direct hand (that can be reviewing, editing, deciding whether to publish or not, etc). It only doesn’t protect first party content.
230 protects websites for their publishing activity of third-party content. It clearly debunks the completely backwards notion that you are “either a platform or a publisher” and only “platforms” get 230 protections. In Barnes, the court is quite clear that what Yahoo is doing is publishing activity, but since it is an interactive computer service and the underlying content is from a third party, it cannot be held liable as the publisher for that publishing activity under Section 230. link
Re: Re: Re:
That’s not quite right. While Prodigy doesn’t explicitly mention the First Amendment, whether Prodigy was a publisher or distributor (and the appropriate liability) was based on the First Amendment. (And as mentioned in other comments, Cubby came first. And it does actually explicitly talk about 1A: CompuServe has no more editorial control over such a publication than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so. “First Amendment guarantees have long been recognized as protecting distributors of publications…. Obviously, the national distributor of hundreds of periodicals has no duty to monitor each issue of every periodical it distributes. Such a rule would be an impermissible burden on the First Amendment.” Lerman v. Flynt Distributing Co…)
Ultimately, Prodigy was held to be a publisher (which comes with more liability than a distributor) because it exerted more editorial control than Compuserve did. But the underlying analysis in the case of whether someone is a publisher vs distributor, and what liability is allowed for each, is 1A based.
Re: Re: Re:
Correct, it was based on a court case that effectively violated the First Amendment rights of a website. You were saying?
Re:
Probably. For now. Take a look at some of the recent rulings from the 5th Circuit of Appeals and imagine what will happen when judges like those are appointed to the Supreme Court.
Haven’t you heard? Facts are no longer allowed. The scientific method has been debunked, probability theory is based on invented mathematics, and the Washington Post finally realizes that darkness isn’t what kills democracy; money kills democracy.
This comment has been flagged by the community.
Re:
and before you go to your defeatism rant i will tell you to go somewhere else bro
Oh wait, I just realized this isn’t the first time the NYtimes have published a false article about section 230 being a bad thing.
Kiiinda starting to suspect the issue lies with the company itself and their choice in who to interview..
Re:
Yes. It is indeed far from the first time the New York Times has published articles misinforming the public about how section 230 is a bad thing. It’s become a clear and consistent pattern.
Yet it seems {ahem} distinctly unlikely that the people in charge at the New York Times, one of the nation’s largest, oldest and most prestigious news outlets and media institutions, somehow simply don’t actually understand basic laws directly relevant to the media industry.
At some point, one is forced to consider the possibility that this “misunderstanding” of basic media law is a deliberate ploy — and ask what purpose that ploy may serve.
Re: Re:
Maybe a misjudged attempt to claw back some relevance by reducing the amount of third-party news sources? (With some classic blissful unawareness that this would also affect THEM as well.)
Re: Re:
Once can be a coincidence.
Twice is suspicious.
Three times is enemy action.
They know exactly what they’re doing hosting so many anti-230 articles, the only question is, as you noted, why they’re trying to kill it.
Considering the whole idea of ‘stopping’ misinformation in the context of repealing Section 230 just means ‘wanting to be able to sue people for lying’, I don’t know if articles like Couric’s are misinformed so much as driven by a very specific idea of what the ‘solution’ is rather than mere ignorance. There’s often this very specific idea of wanting Consequences for Bad Posters when you start scratching at this kind of argument.
It doesn’t actually matter whether it’s true. All that matters is whether they can get enough people to believe it and enough corrupt politicians (including the robed variety) to see it through.
Re:
Biden already called for its repeal once and SCOTUS has had at least one, if not more, chances to overturn it.
Still here.
This comment has been flagged by the community.
Re: Re:
you might wanna flag them, i can tell they’re going the defeatist route
If you feel you have a disinformation problem, you don’t have a Section 230 problem, you have a First Amendment problem.
This comment has been flagged by the community.
repeal?
What is A’ truth, compared to REAL truth?
The Problems come in groups. The First is religion and dealing with what SOME think is the Only truth, without understanding HOW we got to where we are today.
Raise your hands if you wish to remove all LAWS that arnt from the Bible. Do you know how many Corps would Jump on that band wagon? How about Food Processors?
If we do it ‘THEIR WAY’, we might as well Just goto being Muslim.
Are there any women here, that Understand the Old Bible and the standing of Women? Your rights would be GONE.
Back to the days you needed/Need 6 men to declare you were Raped, Before you are Punished for BEING raped??
Look back at the 1960’s Newspapers and how they WERE controlled with News about 2 wars. They Never contested or they would have been Ruined.
And even After we demanded an OPEN gov. and freedom of Info in this country, Where is it?
Re:
what.
What’re you on about with this?
Re:
what the actual fuck
Re: Re:
You guys still read their posts? LOL
This comment has been flagged by the community.
Re: Which part?
Every group has its own truth, they think.
With no reason/way to have the right to remove what is Probably Wrong. What choices do you have.
Post whats Stomped all over your site, even when its not Quite true to facts?
Then there is the religious groups that would Love to have Basic fundamental laws, but dont realize the reality of the complexity of Life and times. And the Corps would Love for Many laws to disappear along with going back to basics Bible laws.
So you have Lots of backing for these Fundamentalists.
Then as an example, is how the Gov. in the 1960’s Covered up what they were doing in Vietnam and Korea, with all the News services, And Lying to the People of the USA.(not the Whole story was told) And even returning Military were astounded by What they were hearing, nad Passed info around. And the Protests started, but you didnt see 1/2 of the news showing the protests. It was the Hippies vs the National guard.
How bad do we want freedom of the press? Check out 60 minutes and the law suits they have had. It took them years to get enough info and data to have a program to tell the truth, and IF’ taken to court, they had Some protection.
Re: Re:
Honestly I have a really hard time comprehending what you say, with your wording and all.
You’re jumping all over the place man.
Re:
Another MAGAt…
This comment has been flagged by the community.
The important bit.
“Removing that protection would incentivize sites to take a hands-off approach”
Yeah, that’s kinda the point. Katie is certainly misguided here, but it works for the purpose – to return to the glory days of the early Internet before CDA was even considered. When anyone could say anything on any Usenet (early forums) group without the fear of reprimand of any kind. And if some sites will shut down UGC (user generated content) parts? Good for them, and also some sites can’t really do that without making their entire existence pointless, sites like YouTube, X, Facebook, Instagram – those WILL HAVE TO take the hands off approach to moderation and the Internet (and society) will be all the better for it.
Re:
In the context of the modern internet though, wouldn’t this outright kill the chance of any future forums, archives, etc instead?
This comment has been flagged by the community.
Re: Re:
Nope, as long as:
forums are completely unmoderated with people running them not even looking at what’s being posted. Similarly to stuff like Compuserve forums, for example.
Re: Re: Re:
I’ve heard the new online safety regulations in the UK have forced some forums to shut down over the threat of legal repercussions plus the expense required to comply. Wouldn’t forums in the US risk something similar here?
This comment has been flagged by the community.
Re: Re: Re:2
No, because we’re not ADDING new regulations, we’re REMOVING those that are already there. Also most of EU is a hellhole (especially UK) when it comes to freedom of speech; if you have the means, emigrate and take any businesses you may own with you.
Re: Re: Re:3
TIL: The UK didn’t vote to leave the EU in 2016.
Re: Re: Re:4
It started to become one long before it left EU and you know it.
Re: Re: Re:5
At least you finally admitted the UK is no longer a part of the EU.
Re: Re: Re:6
Never claimed it was.
Re: Re: Re:7
AC:
William Null:
William Null’s original comment in this thread:
Get it yet, or are you gonna continue to lie badly?
Re: Re: Re:
“forums are completely unmoderated with people running them not even looking at what’s being posted. Similarly to stuff like Compuserve forums, for example.”
You’re just describing an open sewer, boiling over with filth and decay.
Why would anyone use or operate any kind of forum when it means giving free rein to the lowest scum?
“The topic is neopets? oops, all nazis and hardcore snuff pics! Better not do any moderating ever because the trolls doing the posting are waiting to sue you”
Unworkable.
Re: Re: Re:2
Because you don’t have a choice when every other site looks exactly like that. And the fact that you bring up Neopets is especially funny, since that site is already a Scientology hellhole (look who’s funding it).
Re: Oh, you poor fool.
If you truly believe this, I have a challenge for you: Go to 4chan’s /b/ board and stay there for one uninterrupted hour. Here are the ground rules:
Also: You don’t have to view individual threads; you need only browse the front page of that board and refresh it for updates once you’re done reading the page in its current state.
If you can last the full hour without wondering why anyone would want to host that speech, to post that speech, to experience that speech every day in every corner of the Internet? Only then will you have earned the right to argue in favor of turning all other sites into 4chan. And if you can’t? Well…
…don’t say I didn’t warn you.
This comment has been flagged by the community.
Re: Re: 4chan is not unmoderated.
Not even 4chan is fully unmoderated. But yeah, I am a frequent visitor there under a variety of aliases. Protip: Speech you’re offended by doesn’t make it offensive. It only shows how emotionally weak you are.
Re: Re: Re:
So…Worse than 4chan?
Dear god.
Re: Re: Re:
Doesn’t make it worth hosting, either. If someone wants to host it, all power to them—but for what reason should the government force the hosting of that speech onto every service on the Internet?
Re: Re: Re:2
Besides, the Moody ruling literally made it clear the government can’t force companies to host speech they don’t want.
Re: Re: Re:2
I think you missed the important giveaway in that they seem to want every site to turn into 4chan because they like what the user experience is there.
Re: Re: Re:
Would you show everything you post on the internet to all your family, friends, co-workers and employer?
If the answer is no, then you are emotionally weak per your own definition.
Re: Re: Re:2
In their comment they admit to using multiple aliases to post so they’ve already answered no to that question even in that limited scope, never mind a wider one.
Re: Re: Re:3
In the end it all comes down to “I want to be able to be an asshole in public without consequence”.
He talked about the “glory days of the internet”, but even back then this shit didn’t really fly, and the scumbags who got squished back then usually set up their own cozy little corner for the like-minded instead of being entitled little snowflakes having meltdowns and demanding a place at the adults’ table.
Re: Re: Re:
TIL: I’m emotionally weak for being offended by the word “n****r”. Fuck you, you racist asshole.
Re:
Right, because, as we all know, websites filling up with bile, idiocy, trolling and harassment is a surefire way of keeping users engaged.
Platforms taking a hands-off approach would turn every site with a comments section into 4chan. If you think that’s a societal gain, you must be a 4channer yourself.
Re:
Every time I hear someone speak of the “glory days of the internet”, I hear someone who thinks running a forum with a couple of hundred users is the same as running social media platforms that have tens or hundreds of millions of users. Someone who has forgotten that the admins of the early services and BBSes of the time booted people regularly, ruling their fiefdoms like tin-pot dictators. Someone who has forgotten that the “good old days” had barriers to entry: you usually needed to have some interest in computers, which limited what kind of people were on the internet back then.
That’s not how it worked, you could post whatever you wanted but there was no guarantee it would be seen because you couldn’t know if some server-admin somewhere just refused to carry that group or if they filtered out messages from the feed.
Not much for free speech, are you? Seems to me that you are arguing that sites you don’t like should be disappeared from the internet.
Another one who thinks the heckler’s veto is the greatest thing since sliced bread. Tell me, do you even understand what happens when certain people are free to do as they wish without consequences? Taking the stance you are taking can only be explained by either you not being smart enough to understand human nature and the worst-people problem, or you being one of the people I mentioned.
Society will never be better for it, because if those with antisocial behavior are allowed to force themselves on others without consequence society will suffer for it. Go read some history, or psychology for that matter, and you might actually learn something.
This comment has been flagged by the community.
Re: Re:
The only way out of Masnick’s Impossibility Theorem, which says that moderation at scale is impossible to do well, is to do no moderation in the first place.
As for free speech, I support the individual right to free speech. Everyone should be allowed to say anything anywhere. Companies aren’t individuals; individuals work for them, but there needs to be a separation of personal beliefs from workplace stuff.
Re: Re: Re:
Just to be clear, this is not only wrong, but ridiculously ignorant. The Impossibility is inclusive of doing no moderation, which is not just worse than doing some, but is (in most cases) the dumbest possible result.
If you do no moderation, you will get shut down for hosting CSAM, copyright-infringing material, and more. You are legally required to remove that content.
Second, if you do no moderation at all, your board will be overrun by spam in no time. At that point it becomes useless. If you don’t want it overrun by spam, you need to do moderation.
Third, if you stop the illegal stuff and the spam, then you have to deal with people literally plotting crimes, which is going to get you in deep shit as well. Also, abuse and harassment.
Only people who have no clue what they’re talking about and have never moderated anything, even a local book group, think that “no moderation” is workable, let alone a solution to anything.
Re: Re: Re:2
Corollary to the Impossibility Theorem: Anyone proposing no moderation is likely one who wants CSAM, illegal content, spam, and/or abuse & harassment.
Re: Re: Re:
Ah, so if we can’t do a thing well we shouldn’t do it at all? Interesting position. I do hope you live by that rule and apply it to everything you intend to do, which would mostly be nothing.
My property, my rules and I reserve the right to kick you out for any reason at all just like what any other property owner would.
Companies consist of individuals, and they do have rights, like the freedom of association, just like any other group of people who decided to associate. Don’t like it? Tough shit, but that is how reality actually works, because normal people don’t force themselves on others.
All I hear from you is entitlement, what you want is more important than other people’s rights.
Re: Re: Re:
Yes or no: If an individual posts a racial slur on social media and creates a negative association between the individual and their place of work, should that company be forced by law to keep employing that individual anyway?
This comment has been flagged by the community.
Re: Re: Re:2 A resounding YES
And also, it’s high time we recognized that someone being hired by a certain company doesn’t make every single opinion of that individual the opinion of the company as a whole, unless it is posted on the company’s official channels. Even if the person in question is the CEO.
Re: Re: Re:3
Just to be clear: Do you believe the government should force a company to remain associated with someone whose words/deeds create a negative association between that person and the company, such that average people associate the asshole with the company and stop associating themselves (and their money) with the company?¹
Doesn’t matter if the company distances itself from the employee by issuing a “this is their opinion, not ours” statement—as long as the employee remains on the payroll, people will link that employee (and their bullshit) to the company. That will only ever generate negative emotions about that company until that employee is fired.
¹ — If you don’t get the point, you may want to learn about the right of association in the United States. You may also want to learn about at-will/“right-to-work” laws.
Re: Re: Re:2
According to the Employment Appeal Tribunal in the UK, yes.
Re: Re: Re:3
No, that case is specifically about CGD taking action against Forstater for something she did on her own time. If she had voiced her bigotry at CGD, she wouldn’t have had a leg to stand on legally.
Re: Re: Re:4
FYI, the point still stands.
Re: Re: Re:5
No, it doesn’t. That you can’t understand the difference between a property owner’s rights and an employer’s legal obligations is entirely on you.
Re: Re: Re:6
Actually, AC is right. Forstater did post her transphobic bullshit on her private accounts, but because it was under her real name, it got associated with the organization she worked for.
Re: Re: Re:
If that were remotely true then why didn’t you post that under your real name Billy?
Narrator voice: It was because Pseudonymous Billy wanted the speech without the consequences.
Re: Re: Re:
FYI, “moderation at scale is impossible to do well” =/= “no moderation should be done”, MAGAt troll.
It seems that many lawmakers think that platforms like social networks are not intermediaries but authors, believing that algorithms and rankings are the main way for users to access content, forgetting that most people are actually looking for something specific.
It’s much like about 20 years ago, when people started using search engines to access the web (instead of just following web links): it was difficult for some people to understand that many were really searching for something, instead of opening the browser and expecting content to magically fill their screen.
They need to clearly understand that a social network is not just a feed or a homepage with curated content, but a gazillion posts published every day.
This comment has been flagged by the community.
It also frees them to not respond or combat it. 230 forecloses that. She may have been wrong in past statements, but what she said in this quote is correct. 230 makes it a nonstarter.
I get that you may not like her based on past statements, but what she said in this quote is anodyne. Unless there’s a longer quote somewhere else.
It is (and can be) both. A freedom can be both used and misused.
sidenote: You forgot the link: https://www.nytimes.com/2024/12/11/business/dealbook/leaders-advice-insights.html
Re:
Well, what’s the solution then? And do you really think Congress could come up with something that won’t screw over smaller sites?
Re:
No. If anything, repealing it would make it harder to combat misinformation, since platforms would be more likely to take a hands-off approach out of fear of liability. Especially so for smaller platforms who can’t afford lengthy court cases.
Re:
What?
As reliably as gravity
And the streak remains unbroken: it is impossible to argue against 230 honestly and with fact/evidence/reality-based arguments, because either no one has found them or they don’t exist.
That to date no one has come up with a fact-based argument for why scrapping 230 would improve things really is the greatest indicator of just how insanely good the law was and is.
Re:
It also gives it a good deal of protection in legal cases.
It’s gonna be hard for lawmakers to argue for a repeal when you get into the technical facts of how the law works.
Re:
Examples that prove the rule: Koby and Arianity in this thread.
This comment has been flagged by the community.
Re: Re:
I’ve never argued for scrapping 230 entirely, and I’ve been quite clear about that.
Re: Re: Re:
Yes, you’ve argued that someone should change 230 in a way that can somehow magically keep the status quo of the Internet intact while…well, to be fair, it doesn’t really matter why you want the change, given that Republicans and Democrats both have different reasons for changing 230 but also think it can be done without wrecking the open Internet.
Re: Re:
Actually, Koby and Arianity do not prove the rule at all, they provide glaring examples of it.
This comment has been flagged by the community.
You have no experience "tackling misinformation"
I basically just show up here to see what dipshit opinion you’re voicing today. I see you’re posting less, that’s good.
The new administration and the subsequent dismantling of the censorship industrial complex must really be hurting your revenue stream, huh?
All your ideas are bad and most people have realized it by now.
Re:
If that were true, you wouldn’t have to come here and say it. By saying it, you’re proving it’s not true.
Re:
“ censorship industrial complex”
Who did you steal that from, bro? Because we all know (you included) that you are far, far too stupid to have come up with it yourself.
Katie Couric is a “misinformation expert” in the same way that Ted Bundy is a murder expert. Can’t believe people are still listening to her.
This comment has been flagged by the community.
Just FYI
Once 230 is ripped out and the information is flowing freely, you won’t be able to remove comments anymore or even hide them as you do now. Make no mistake, 230 WILL be ripped out; it has already been decided. And if you decide to remove comments entirely? Nobody will visit the site anymore, because people don’t really read websites for the articles – they’re there for the entertaining sideshow of freaks in the comment section.
Re:
You don’t really understand what 230 is, do you?
A site can tell you to fuck off, moderate you, or shadowban you all they want without 230 existing – it’s their 1A rights that allow them to do that, not 230.
The only thing 230 does is place liability on the speaker to avoid an endless litany of SLAPP suits. So your dream of “information flowing freely” is just a byproduct of drug use. Instead, one of the things likely to happen is that you will have to identify yourself everywhere before being allowed to comment, because the site can then go after you for monetary damages if they get sued for something you did. Other sites will just stop accepting UGC due to the risk of being sued for speech some troglodyte vomited out.
Try to educate yourself on how the world actually functions instead of making stuff up as you go, because it makes you look stupid.
Re: Re:
If you read all of this guy’s comments on this post, it’s pretty clear he’s a know-nothing troll.
Re:
You’re just trying to rile up the doomerism.