Democratic Senator Mark Kelly and Republican Senator John Curtis want to gut Section 230 to combat “political radicalization”—in honor of Charlie Kirk, whose entire career was built on political radicalization.
Kirk styled himself as a “free speech warrior” because he would show up on college campuses to “debate” people, but as we’ve covered, the “debate me bro” shtick was just trolling designed to generate polarizing content for social media. He made his living pushing exactly the kind of inflammatory political content that these senators now claim is so dangerous it requires dismantling core legal protections for speech. Their solution to political violence inspired by online rhetoric is to create a legal framework that will massively increase censorship of political speech.
Which they claim they’re doing… in support of free speech.
Almost everything about what they’re saying is backwards.
The two Senators spoke at an event at Utah Valley University, where Charlie Kirk was shot, to talk about how they were hoping to stop political violence. That’s a worthwhile goal, but their proposed solution reveals they don’t understand how Section 230 actually works.
The senators also used their bipartisan panel on Wednesday to announce plans to hold social media companies accountable for the type of harmful content promoted around the assassination of Kirk, which they say leads to political violence.
During their televised discussion, Curtis and Kelly previewed a bill they intend to introduce shortly that would remove liability protection for social media companies that boost content that contributes to political radicalization and violence.
The “Algorithm Accountability Act” would transform one of the pillars of internet governance by reforming a 30-year-old regulation known as Section 230 that gives online platforms legal immunity for content posted by their users.
“What we’re saying is this is creating an environment that is causing all sorts of harm in our society and particularly with our youth, and it needs to be addressed,” Curtis told the Deseret News.
The bill would strip Section 230 protections from companies if it can be proven in court that they used an algorithm to amplify content that caused harm. This change means tech giants would “own” the harmful content they promote, creating a private cause of action for individuals to sue.
Like so many politicians who want to gut Section 230, Kelly and Curtis clearly don’t understand how it actually works. Their “Algorithm Accountability Act” would create exactly the kind of censorship regime they claim to oppose.
It’s kind of incredible how many times I’ve had to say this to US Senators, but repealing 230 doesn’t make companies automatically responsible for speech. That’s literally not how it works. They’re still protected by the First Amendment.
It just makes it much more expensive to defend hosting speech, which means they will take one of two approaches: (1) host way less speech and become much, much more restricted in what people can say or (2) do little to no moderation, because under the First Amendment, they can only be held liable if they have knowledge of legally violative content.
And most of the content that would be covered by this bill (“speech that contributes to political radicalization”) is, um, kinda quintessentially protected by the First Amendment.
Kelly’s comments reveal the stunning cognitive dissonance at the heart of this proposal:
“I did not agree with him on much. But I’ll tell you what, I will go to war to fight for his right to say what he believes,” said Kelly, who is a former Navy pilot. “Even if you disagree with somebody, doesn’t mean you put a wall up between you and them.”
This is breathtaking doublethink. Kelly claims he’ll “go to war” to protect Kirk’s right to speak while literally authoring legislation that will silence the platforms where that speech happens. It’s like saying “I’ll defend your right to assembly” while bulldozing every meeting hall in town.
Curtis manages to be even more confused:
What this bill would do, Curtis explained, is open up these trillion-dollar companies to the same kind of liability that tobacco companies and other industries face.
“If they’re responsible for something going out that caused harm, they are responsible. So think twice before you magnify. Why do these things need to be magnified at all?” Curtis said.
This comparison is absurdly stupid. Tobacco is a physical product that literally destroys your lungs and causes cancer. Speech is expression protected by the First Amendment. Curtis is essentially arguing that if political speech influences someone’s behavior in a way he doesn’t like, the platform should be liable—as if words and ideas are chemically addictive carcinogens.
The entire point of the First Amendment is that we don’t let the government decide which speech is too harmful to allow.
What Curtis is proposing is holding companies liable whenever speech “causes harm,” which is fucking terrifying when Trump and his FCC are already threatening platforms for hosting criticism of the administration.
The political implications here are staggering. Kelly, a Democrat, is signing onto a bill that will let Trump and MAGA supporters (the bill has a private right of action that will let anyone sue!) basically sue every internet platform for “promoting” content they deem politically polarizing, which they will say is anything that criticizes Trump or promotes “woke” views.
And why is he pushing such a bill in supposed support of Charlie Kirk, a person whose only job was pushing political polarization, and whose entire “debate me bro” shtick was entirely designed to push political polarization online?
What are we even doing here?
This entire proposal is a monument to confused thinking. Kelly and Curtis claim they want to honor Charlie Kirk by passing legislation that would have silenced the very platforms where he built his career. They claim to support free speech while authoring a bill designed to chill political expression. They worry about political polarization while creating a legal weapon that will be used almost exclusively by the most polarizing political actors to silence their critics.
Rolling back Section 230 will lead to much greater censorship, not less. Claiming it’s necessary to diminish political polarization is disconnected from reality. But at least it will come in handy for whoever challenges this law as unconstitutional—the backers are out there openly admitting they’re introducing legislation designed to violate the First Amendment.
Brian Reed’s “Question Everything” podcast built its reputation on careful journalism that explores moral complexity within the journalism field. It’s one of my favorite podcasts. Which makes his latest pivot so infuriating: Reed has announced he’s now advocating to repeal Section 230—while demonstrating he fundamentally misunderstands what the law does, how it works, and what repealing it would accomplish.
If you’ve read Techdirt for basically any length of time, you’ll know that I feel the exact opposite on this topic. Repealing Section 230, or really almost any proposal to reform it, would be a complete disaster for free speech on the internet, including for journalists.
The problem isn’t advocacy journalism—I’ve been doing that myself for years. The problem is Reed’s approach: decide on a solution, then cherry-pick emotional anecdotes and misleading sources to support it, while ignoring the legal experts who could explain why he’s wrong. It’s the exact opposite of how to do good journalism, which is unfortunate for someone who holds out his (otherwise excellent!) podcast as a place to explore how to do journalism well.
Last week, he published the first episode of his “get rid of 230” series, and it has so many problems, mistakes, and bits of nonsense that I felt I had to write about it now, in the hopes that Brian might be more careful in future pieces. (Reed has said he plans to interview critics of his position, including me, but only after the series gets going—which seems backwards for someone advocating major legal changes.)
The framing of this piece is around the conspiracy theories regarding the Sandy Hook school shooting, and someone who used to believe them. First off, this feels like a cheap journalistic device, basing a larger argument on an emotional anecdote that clouds the issues and the trade-offs. The Sandy Hook shooting was horrible! The fact that some jackasses pushed conspiracy theories about it is also horrific! But it primes you, in classic “something must be done, this is something, we must do this” fashion, to accept Reed’s preferred solution: repeal 230.
But he doesn’t talk to any actual experts on 230, misrepresents Section 230 itself, misleads people about how repealing it would affect that specific (highly emotional) story, and then closes on an emotionally manipulative note: convincing the person he spoke to, who used to believe the Sandy Hook conspiracy theories, that getting rid of 230 would work, despite her having little understanding of what would actually happen.
In listening to the piece, it struck me that Reed is doing part of what he (somewhat misleadingly) claims social media companies are doing: hooking you with manipulative lies and misrepresentations to keep you engaged and to convince you that something false is true. It’s a shame, but it’s certainly not journalism.
Let’s dig into some of the many problems with the piece.
The Framing is Manipulative
I already mentioned that the decision to frame the entire piece around one extraordinary but horrific story is manipulative, but it goes beyond that. Reed compares the fact that some of the Sandy Hook victims’ families successfully sued Alex Jones for defamation over the lies and conspiracy theories he spread regarding that event, to the fact that they can’t sue YouTube.
But in 2022, family members of 10 of the Sandy Hook victims did win a defamation case against Alex Jones’s company, and the verdict was huge. Jones was ordered to pay the family members over a billion dollars in damages.
Just this week, the Supreme Court declined to hear an appeal from Jones over it. A semblance of justice for the victims, though infuriatingly, Alex Jones filed for bankruptcy and has avoided paying them so far. But also, and this is what I want to focus on, the lawsuits are a real deterrent to Alex Jones and others who will likely think twice before lying like this again.
So now I want you to think about this. Alex Jones did not spread this lie on his own. He relied on social media companies, especially YouTube, which hosts his show, to send his conspiracy theory out to the masses. One YouTube video spouting this lie shortly after the shooting got nearly 11 million views in less than 2 weeks. And by 2018, when the family sued him, Alex Jones had 1.6 billion views on his YouTube channel. The Sandy Hook lie was laced throughout that content, burrowing its way into the psyche of millions of people, including Kate and her dad.
Alex Jones made money off of each of those views. But so did YouTube. Yet, the Sandy Hook families, they cannot sue YouTube for defaming them because of section 230.
There are a ton of important details left out of this that, if actually presented, might change the understanding here. First, while the families did win that huge verdict, much of that was because Jones defaulted. He didn’t really fight the defamation case, basically ignoring court orders to turn over discovery. It was only after the default that he really tried to fight things at the remedy stage. Indeed, part of the Supreme Court cert petition that was just rejected was his claim that he didn’t get a fair trial due to the default.
You simply can’t assume that because the families won that very bizarre case in which Jones treated the entire affair with contempt, that means that the families would have a case against YouTube as well. That’s not how this works.
This is Not How Defamation Law Works
Reed correctly notes that the bar for defamation is high, including that the publisher has to have knowledge that the statement is false, but then immediately seems to forget that. Without a prior judicial determination that specific content is defamatory, no platform—with or without Section 230—is likely to meet the knowledge standard required for liability. That’s kind of important!
Now this is really important to keep in mind. Freedom of speech means we have the freedom to lie. We have the freedom to spew absolute utter bullshit. We have the freedom to concoct conspiracy theories and even use them to make money by selling ads or subscriptions or what have you.
Most lies are protected by the First Amendment and they should be.
But there’s a small subset of lies that are not protected speech even under the First Amendment. The old shouting fire in a crowded theater, not necessarily protected. And similarly, lies that are defamatory aren’t protected.
In order for a statement to be defamatory, okay, for the most part, whoever’s publishing it has to know it’s untrue and it has to cause damage to the person or the institution the statement’s about. Reputational damage, emotional damage, or a lie could hurt someone’s business. The bar for proving defamation is high in the US. It can be hard to win those cases.
The key part here: while there’s some nuance, mostly the publisher has to know the statement is untrue. And that bar is very high. The knowledge standard is what allows defamation law to survive under the First Amendment.
It’s why booksellers can’t be held liable for “obscene” books on their shelves. It’s why publishers aren’t held liable for books they publish, even if those books lead people to eat poisonous mushrooms. The knowledge standard matters.
And even though Reed mentions the knowledge point, he seems to immediately forget it. Nor does he even attempt to deal with the question of how an algorithm can have the requisite knowledge (hint: it can’t). He just brushes past that kind of important part.
But it’s the key to why his entire premise is flawed: just making it so anyone can sue web platforms doesn’t mean anyone will win. Indeed, they’ll lose in most cases. Because if you get rid of 230, the First Amendment still exists. But, because of a bunch of structural reasons explained below, it will make the world of internet speech much worse for you and me (and the journalists Reed wants to help), while actually clearing the market of competitors to the Googles and Metas of the world that Reed is hoping to punish.
That’s Not How Section 230 Works
Reed’s summary is simply inaccurate. And not in the “well, we can differ on how we describe it” sense. He makes blatant factual errors. First, he claims that “only internet companies” get 230 protections:
These companies have a special protection that only internet companies get. We need to strip that protection away.
But that’s wrong. Section 230 applies to any provider of an interactive computer service (which is more than just “internet companies”) and their users. It’s right there in the law. Because of that latter part, it has protected people forwarding emails and retweeting content. It has been used repeatedly to protect journalists on that basis. It protects you and me. It is not exclusive to “internet companies.” That’s just factually wrong.
The law is not, and has never been, some sort of special privilege for certain kinds of companies, but a framework for protecting speech online, by making it possible for speech-distributing intermediaries to exist in the first place. Which helps journalists. And helps you and me. Without it, there would be fewer ways in which we could speak.
Reed also appears to misrepresent or conflate a bunch of things here:
Section 230, which Congress passed in 1996, it makes it so that internet companies can’t be sued for what happens on their sites. Facebook, YouTube, TikTok, they bear essentially no responsibility for the content they amplify and recommend to millions, even billions of people. No matter how much it harms people, no matter how much it warps our democracy. Under section 230, you cannot successfully sue tech companies for defamation, even if they spread lies about you. You can’t sue them for pushing a terror recruitment video on someone who then goes and kills your family member. You can’t sue them for bombarding your kids with videos that promote eating disorders or that share suicide methods or sexual content.
First off, much of what he describes is First Amendment protected speech. Second, he ignores that Section 230 doesn’t apply to federal criminal law, which is what things like terrorist content would likely cover (I’m guessing he’s confused based on the Supreme Court cases from a few years ago, where 230 wasn’t the issue—the lack of any traceability of the terrorist attacks to the websites was).
But, generally speaking, if you’re advocating for legal changes, you should be specific about what you want changed and why. Putting out a big list of stuff, some of which would be protected, some of which would not be, as well as some that the law covers and some it doesn’t… isn’t compelling. It suggests you don’t understand the basics. Furthermore, lumping things like eating disorders in with defamation and terrorist content suggests an unwillingness to deal with the specifics and the complexities. Instead, it suggests a desire for a general “why can’t we pass a law that says ‘bad stuff isn’t allowed online?’” But that’s a First Amendment issue, not a 230 issue (as we’ll explain in more detail below).
Reed also, unfortunately, seems to have been influenced by the blatantly false argument that there’s a platform/publisher distinction buried within Section 230. There isn’t. But it doesn’t stop him from saying this:
I’m going to keep reminding you what Section 230 is, as we covered on this show, because I want it to stick. Section 230, small provision in a law Congress passed in 1996, just 26 words, but words that were so influential, they’re known as the 26 words that created the internet.
Quick fact check: Section 230 is way longer than 26 words. Yes, subsection (c)(1) is 26 words. But the rest matters too. If you’re advocating to repeal a law, maybe read the whole thing?
Those words make it so that internet platforms cannot be treated as publishers of the content on their platform. It’s why Sandy Hook parents could sue Alex Jones for the lies he told, but they couldn’t sue the platforms like YouTube that Jones used to spread those lies.
And there is a logic to this that I think made sense when Section 230 was passed in the ’90s. Back then, internet companies offered chat rooms, message boards, places where other people posted, and the companies were pretty passively transmitting those posts.
Reed has this completely backwards. Section 230 was a direct response to Stratton Oakmont v. Prodigy, where a judge ruled that Prodigy’s active moderation to create a “family friendly” service made it liable for all content on the platform.
The two authors of Section 230, Ron Wyden and Chris Cox, have talked about this at length for decades. They wanted platforms to be active participants and not dumb conduits passively transmitting posts. Their fear was without Section 230, those services would be forced to just be passive transmitters, because doing anything to the content (as Prodigy did) would make them liable. But given the amount of content, that would be impossible.
So Cox and Wyden’s solution to encourage platforms to be more than passive conduits was to say “if you do regular publishing activities—such as promoting, rearranging, and removing certain content—then we won’t treat you like a publisher.”
The entire point was to encourage publisher-like behavior, not discourage it.
Reed has the law’s purpose exactly backwards!
That’s kind of shocking for someone advocating to overturn the law! It would help to understand it first! Because if the law actually did what Reed pretends it does, I might be in favor of repeal as well! The problem is, it doesn’t. And it never did.
One analogy that gets thrown around for this is that the platforms, they’re like your mailman. They’re just delivering somebody else’s letter about the Sandy Hook conspiracy. They’re not writing it themselves. And sure, that might have been true for a while, but imagine now that the mailman reads the letter he’s delivering, sees it’s pretty tantalizing. There’s a government conspiracy to take away people’s guns by orchestrating a fake school shooting, hiring child actors, and staging a massacre and a whole 911 response.
The mailman thinks, “That’s pretty good stuff. People are going to like this.” He makes millions of copies of the letter and delivers them to millions of people. And then as all those people start writing letters to their friends and family talking about this crazy conspiracy, the mailman keeps making copies of those letters and sending them around to more people.
And he makes a ton of money off of this by selling ads that he sticks into those envelopes. Would you say in that case the mailman is just a conduit for someone else’s message? Or has he transformed into a different role? A role more like a publisher who should be responsible for the statements he or she actively chooses to amplify to the world. That is essentially what YouTube and other social media platforms are doing by using algorithms to boost certain content. In fact, I think the mailman analogy is tame for what these companies are up to.
Again, the entire framing here is backwards. It’s based on Reed’s false assumption—an assumption that any expert in 230 would hopefully disabuse him of—that the reason for 230 was to encourage platforms to be “passive conduits” but it’s the exact opposite.
Cox and Wyden were clear (and have remained clear) that the purpose of the law was exactly the opposite. It was to give platforms the ability to create different kinds of communities and to promote/demote/moderate/delete at will.
The key point was that, because of the amount of content, no website would be willing and able to do any of this if they were potentially held liable for everything.
As for the final point, that social media companies are now way different from “the mailman,” both Cox and Wyden have talked about how wrong that is. In an FCC filing a few years back, debunking some myths about 230, they pointed out that this claim of “oh sites are different” is nonsense and misunderstands the fundamentals of the law:
Critics of Section 230 point out the significant differences between the internet of 1996 and today. Those differences, however, are not unanticipated. When we wrote the law, we believed the internet of the future was going to be a very vibrant and extraordinary opportunity for people to become educated about innumerable subjects, from health care to technological innovation to their own fields of employment. So we began with these two propositions: let’s make sure that every internet user has the opportunity to exercise their First Amendment rights; and let’s deal with the slime and horrible material on the internet by giving both websites and their users the tools and the legal protection necessary to take it down.
The march of technology and the profusion of e-commerce business models over the last two decades represent precisely the kind of progress that Congress in 1996 hoped would follow from Section 230’s protections for speech on the internet and for the websites that host it. The increase in user-created content in the years since then is both a desired result of the certainty the law provides, and further reason that the law is needed more than ever in today’s environment.
The Understanding of How Incentives Work Under the Law is Wrong
Here’s where Reed’s misunderstanding gets truly dangerous. He claims Section 230 removes incentives for platforms to moderate content. In reality, it’s the opposite: without Section 230, websites would have less incentive to moderate, not more.
Why? Because under the First Amendment, you need to show that the intermediary had actual knowledge of the violative nature of the content. If you removed Section 230, the best way to prove that you have no knowledge is not to look, and not to moderate.
You potentially go back to a Stratton Oakmont-style world, where the incentives are to do less moderation because any moderation you do introduces more liability. The more liability you create, the less likely someone is to take on the task. Any investigation into Section 230 has to start from understanding those basic facts, so it’s odd that Reed so blatantly misrepresents them and suggests that 230 means there’s no incentive to moderate:
We want to make stories that are popular so we can keep audiences paying attention and sell ads—or movie tickets or streaming subscriptions—to support our businesses. But in the world that every other media company occupies, aside from social media, if we go too far and put a lie out that hurts somebody, we risk getting sued.
It doesn’t mean other media outlets don’t lie or exaggerate or spin stories, but there’s still a meaningful guard rail there. There’s a real deterrent to make sure we’re not publishing or promoting lies that are so egregious, so harmful that we risk getting sued, such as lying about the deaths of kids who were killed and their devastated parents.
Social media companies have no such deterrent and they’re making tons of money. We don’t know how much money, in large part because the way that kind of info usually gets forced out of companies is through lawsuits, which we can’t file against these tech behemoths because of section 230. So, we don’t know, for instance, how much money YouTube made from content with the Sandy Hook conspiracy in it. All we know is that they can and do boost defamatory lies as much as they want, raking in cash without any risk of being sued for it.
But this gets at a fundamental flaw that shows up in these debates: the assumption that the only possible pressure on websites is the threat of being sued. That’s not just wrong; it, once again, gets the purpose and function of Section 230 backwards.
There are tons of reasons for websites to do a better job moderating: if your platform fills up with garbage, users start to go away. As do advertisers, investors, and other partners.
This is, fundamentally, the most frustrating part about every single new person who stumbles haphazardly into the Section 230 debate without bothering to understand how it works within the law. They get the incentives exactly backwards.
230 says “experiment with different approaches to making your website safe.” Taking away 230 says “any experiment you try to keep your website safe opens you up to ruinous litigation.” Which one do you think leads to a healthier internet?
It Misrepresents How Companies Actually Work
Reed paints tech companies as cartoon villains, relying on simplistic and misleading interpretations of leaked documents and outdated sources. This isn’t just sloppy—it’s the kind of manipulative framing he’d probably critique in other contexts.
For example, he grossly misrepresents (in a truly manipulative way!) what the documents Frances Haugen released said, just as much of the media did. Here’s how Reed characterizes some of what Haugen leaked:
Haugen’s document dump showed that Facebook leadership knew about the harms their product is causing, including disinformation and hate speech, but also product designs that were hurting children, such as the algorithm’s tendency to lead teen girls to posts about anorexia. Frances Haugen told lawmakers that top people at Facebook knew exactly what the company was doing and why it was doing it.
Except… that’s very much out of context. Here’s how misleading Reed’s characterization is. The actual internal research Haugen leaked—the stuff Reed claims shows Facebook “knew about the harms”—looked like this:
The headline of that slide sure looks bad, right? But then you look at the context, which shows that in nearly every single category they studied across boys and girls, they found that more users found Instagram made them feel better, not worse. The only category where that wasn’t true was teen girls and body image, where the split was pretty equal. That’s one category out of 24 studied! And this was internal research calling out that fact because the point was to convince the company to figure out ways to better deal with that one case, not to ignore it.
And, what we’ve heard over and over again since all this is that companies have moved away from doing this kind of internal exploration, because they know that if they learn about negative impacts of their own service, it will be used against them by the media.
Reed’s misrepresentation creates exactly the perverse incentive he claims to oppose: companies now avoid studying potential harms because any honest internal research will be weaponized against them by journalists who don’t bother to read past the headline. Reed’s approach of getting rid of 230’s protections would make this even worse, not better.
Because as part of any related lawsuit there would be discovery, and you can absolutely guarantee that a study like the one above that Haugen leaked would be used in court, in a misleading way, showing just that headline, without the necessary context of “we called this out to see how we could improve.”
So without Section 230 and with lawsuits, companies would have much less incentive to look for ways to improve safety online, because any such investigation would be presented as “knowledge” of the problem. Better not to look at all.
There’s a similar problem with the way Reed reports on the YouTube algorithm. Reed quotes Guillaume Chaslot but doesn’t mention that Chaslot left YouTube in 2013—12 years ago. That’s ancient history in tech terms. I’ve met Chaslot and been on panels with him. He’s great! And I think his insights on the dangers of the algorithm in the early days were important work and highlighted to the world the problems of bad algorithms. But it’s way out of date. And not all of the algorithms are bad.
Conspiracy theories are really easy to make. You can just make your own conspiracy theory in like one hour, shoot it, and then it can get millions of views. They’re addictive because people who live in this filter bubble of conspiracy theories don’t watch the classical media. So they spend more time on YouTube.
Imagine you’re someone who doesn’t trust the media, you’re going to spend more time on YouTube. So since you spend more time on YouTube, the algorithm thinks you’re better than anybody else. The definition of better for the algorithm, it’s who spends more time. So it will recommend you more. So there’s like this vicious circle.
It’s a vicious circle, Chaslot says, where the more conspiratorial the videos, the longer users stay on the platform watching them, the more valuable that content becomes, the more YouTube’s algorithm recommends the conspiratorial videos.
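To see why that dynamic compounds, here is a toy simulation of the loop Chaslot describes: a recommender that optimizes purely for watch time keeps shifting recommendations toward whatever category people watch longest. The numbers and category names are invented for illustration; this is a sketch of the claimed dynamic, not YouTube’s actual system.

```python
# Toy simulation (invented numbers, not YouTube's real system) of a
# watch-time-optimizing feedback loop: categories that keep people watching
# longer get recommended more, which earns them even more watch time.
watch_time = {"conspiracy": 40.0, "news": 30.0, "howto": 30.0}  # avg minutes per view
recommend_share = {k: 1 / 3 for k in watch_time}                # start out even

for _ in range(5):
    # Each category's "value" to the system is its share of recommendations
    # times how long people watch once they click.
    value = {k: recommend_share[k] * watch_time[k] for k in watch_time}
    total = sum(value.values())
    # Next round, recommendations follow the watch-time value.
    recommend_share = {k: v / total for k, v in value.items()}

print({k: round(v, 2) for k, v in recommend_share.items()})
# The highest-watch-time category steadily crowds out the rest of the feed.
```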
Since Chaslot left YouTube, there have been a series of studies that have shown that, while some of that may have been true back when Chaslot was at the company, it hasn’t been true in many, many years.
A study in 2019 (looking at data from 2016 onwards) found that YouTube’s algorithm actually pushed people away from radicalizing content. A further study a couple of years ago similarly found no evidence of YouTube’s algorithm sending people down these rabbit holes.
It turns out that things like Chaslot’s public berating of the company, as well as public and media pressure, not to mention political blowback, had helped the company re-calibrate the algorithm away from all that.
And you know what allowed them to do that? The freedom Section 230 provided, saying that they wouldn’t face any litigation liability for adjusting the algorithm.
A Total Misunderstanding of What Would Happen Absent 230
Reed’s fundamental error runs deeper than just misunderstanding the law—he completely misunderstands what would happen if his “solution” were implemented. He claims that the risk of lawsuits would make the companies act better:
We need to be able to sue these companies.
Imagine the Sandy Hook families had been able to sue YouTube for defaming them in addition to Alex Jones. Again, we don’t know how much money YouTube made off the Sandy Hook lies. Did YouTube pull in as much cash as Alex Jones, five times as much? A hundred times? Whatever it was, what if the victims were able to sue YouTube? It wouldn’t get rid of their loss or trauma, but it could offer some compensation. YouTube’s owned by Google, remember, one of the most valuable companies in the world. More likely to actually pay out instead of going bankrupt like Alex Jones.
This fantasy scenario has three fatal flaws:
First, YouTube would still win these cases. As we discussed above, there’s almost certainly no valid defamation suit here. Most complained about content will still be First Amendment-protected speech, and YouTube, as the intermediary, would still have the First Amendment and the “actual knowledge” standard to fall back on.
The only way to have actual knowledge of content being defamatory is for there to be a judgment in court about the content. So, YouTube couldn’t be on the hook in this scenario until after the plaintiffs had already taken the speaker to court and received a judgment that the content was defamatory. At that point, you could argue that the platform would then be on notice and could no longer promote the content. But that wouldn’t stop any of the initial harms that Reed thinks they would.
Second, Reed’s solution would entrench Big Tech’s dominance. Getting a case dismissed on Section 230 grounds costs maybe $50k to $100k. Getting the same case dismissed on First Amendment grounds? Try $2 to $5 million.
For a company like Google or Meta, with their buildings full of lawyers, this is still pocket change. They’ll win those cases. But it means that you’ve wiped out the market for non-Meta, non-Google sized companies. The smaller players get wiped out because a single lawsuit (or even a threat of a lawsuit) can be existential.
The end result: Reed’s solution gives more power to the giant companies he paints as evil villains.
Third, there’s vanishingly little content that isn’t protected by the First Amendment. Using the Alex Jones example is distorting and manipulative, because it’s one of the extremely rare cases where defamation has been shown (and that was partly just because Jones didn’t really fight the case).
Reed doubles down on these errors:
But on a wider scale, the risk of massive lawsuits like this, a real threat to these companies’ profits, could finally force the platforms to change how they’re operating. Maybe they change the algorithms to prioritize content from outlets that fact check because that’s less risky. Maybe they’d get rid of fancy algorithms altogether, go back to people getting shown posts chronologically or based on their own choice of search terms. It’d be up to the companies, but however they chose to address it, they would at least have to adapt their business model so that it incorporated the risk of getting sued when they boost damaging lies.
This shows Reed still doesn’t understand the incentive structure. Companies would still win these lawsuits on First Amendment grounds. And they’d increase their odds by programming algorithms and then never reviewing content—the exact opposite of what Reed suggests he wants.
And here’s where Reed’s pattern of using questionable sources becomes most problematic. He quotes Frances Haugen advocating for his position, without noting that Haugen has no legal expertise on these issues:
For what it’s worth, this is what Facebook whistleblower Frances Haugen argued for in Congress in 2021.
I strongly encourage reforming Section 230 to exempt decisions about algorithms. They have 100% control over their algorithms and Facebook should not get a free pass on choices it makes to prioritize growth and virality and reactiveness over public safety. They shouldn’t get a free pass on that because they’re paying for their profits right now with our safety. So, I strongly encourage reform of 230 in that way.
But, as we noted when Haugen said that, this is (again) getting it all backwards. At the very same time that Haugen was testifying with those words, Facebook was literally running ads all over Washington DC, encouraging Congress to reform Section 230 in this way. Facebook wants to destroy 230.
Why? Because Zuckerberg knows full well what I wrote above. Getting rid of 230 means a few expensive lawsuits that his legal team can easily win, while wiping out smaller competitors who can’t afford the legal bills.
Meta’s usage has been declining as users migrate to smaller platforms. What better way to eliminate that competition than making platform operation legally prohibitive for anyone without Meta’s legal budget?
Notably, not a single person Reed speaks to is a lawyer. He doesn’t talk to anyone who lays out the details of how all this works. He only speaks to people who dislike tech companies. Which is fine, because it’s perfectly understandable to hate on big tech companies. But if you’re advocating for a massive legal change, shouldn’t you first understand how the law actually works in practice?
For a podcast about improving journalism, this represents a spectacular failure of basic journalistic practices. Indeed, Reed admits at the end that he’s still trying to figure out how to do all this:
I’m still trying to figure out how to do this whole advocacy thing. Honestly, pushing for a policy change rather than just reporting on it is new to me, and I don’t know exactly what I’m supposed to be doing. Should I be launching a petition, raising money for like a PAC? I’ve been talking to marketing people about slogans for a campaign. We’ll document this as I stumble my way through. It’s all a bit awkward for me. So, if you have ideas for how you can build this movement to be able to sue big tech, please tell me.
There it is: “I’m still trying to figure out how to do this whole advocacy thing.” Reed has publicly committed to advocating for a specific legal change—one that would fundamentally reshape how the internet works—while admitting he doesn’t understand advocacy, hasn’t talked to experts, and is figuring it out as he goes. Generally it’s a bad idea to come up with a slogan when you still don’t even understand the thing you’re advocating for.
This is advocacy journalism in reverse: decide your conclusion, then do the research. It’s exactly the kind of shoddy approach that Reed would rightly criticize in other contexts.
I have no problem with advocacy journalism. I’ve been doing it for years. But effective advocacy starts with understanding the subject deeply, consulting with experts, and then forming a position based on that knowledge. Reed has it backwards.
The tragedy is that there are so many real problems with how big tech companies operate, and there are thoughtful reforms that could help. But Reed’s approach—emotional manipulation, factual errors, and backwards legal analysis—makes productive conversation harder, not easier.
Maybe next time, try learning about the law first, then deciding whether to advocate for its repeal.
Americans are not peasants. We are citizens of a republic founded on the revolutionary proposition that ordinary people can govern themselves. This isn’t poetry or aspiration—it’s the foundational premise of the American project. And right now, a faction of tech oligarchs is betting everything on proving that premise wrong.
They want to replace “We the People” with “We the Users.”
When Peter Thiel writes that democracy and freedom are incompatible, he’s not making a philosophical observation. He’s stating a preference. When Elon Musk guts federal agencies while posting American flags, he’s not reforming government. He’s replacing citizenship with administration. When Silicon Valley oligarchs speak about “optimization” and “efficiency,” they’re not talking about improving systems that serve citizens. They’re talking about managing peasants.
Because that’s what they think we are. Peasants. Masses incapable of self-governance. Users to be monetized. Workers to be replaced. Voters to be manipulated through algorithmic feeds designed to exploit our psychological vulnerabilities. Populations requiring management by those with superior intelligence and technological sophistication.
You see this in your daily life. An algorithm decides what news you see, not your own judgment about what matters. Your feed is curated by systems optimized for engagement rather than truth, designed to keep you scrolling rather than thinking. Your attention becomes their commodity. Your consciousness becomes their resource. Your capacity for independent judgment gets systematically eroded by platforms that treat you as a user to be optimized rather than a citizen capable of self-governance.
This represents the complete inversion of the American founding premise. The revolutionary generation staked everything on a radical proposition: that ordinary people could govern themselves, that citizenship was possible, that republican self-governance was superior to rule by kings, aristocrats, or anyone claiming the right to govern based on superior status, breeding, or intelligence.
“We hold these truths to be self-evident” means exactly what it says—not that kings acknowledge these truths, not that the intelligent agree with them, not that the powerful grant them, but that citizens assert them as the foundation of legitimate government. Self-evident to whom? To us. To the people who govern ourselves through collective deliberation rather than submitting to administration by our betters.
Lincoln understood what was at stake when he stood at Gettysburg and declared that the war would determine whether “government of the people, by the people, for the people, shall not perish from the earth.” Not government for the people by superior managers. Not government of the people by technological elites. But government by the people themselves—the radical proposition that citizens possess the capacity to govern rather than requiring governance by those who claim superior qualification.
The distinction between citizens and peasants isn’t semantic. It’s ontological. Peasants exist to be governed. Their role is obedience, tribute, and acceptance of decisions made by those qualified to make them. Citizens govern themselves. Their role is participation, judgment, and shared responsibility for collective outcomes.
We are not peasants. And yet every assault on American institutions over the past several years represents the systematic effort to transform us into exactly that.
The systematic elimination of civil service protections doesn’t improve government efficiency—it replaces professional judgment answerable to law with personal loyalty answerable to power. The attacks on independent agencies don’t reduce bureaucratic waste—they eliminate the institutional mechanisms through which citizens check oligarchic extraction. The celebration of “disruption” doesn’t foster innovation—it destroys the stable frameworks within which genuine self-governance becomes possible.
DOGE isn’t a government efficiency project. It’s the systematic replacement of citizenship with administration, democratic accountability with optimization metrics, collective self-governance with management by superior intelligence. When Elon Musk eliminates entire agencies staffed by career professionals and replaces them with political loyalists, he’s not improving government. He’s implementing his explicit belief that most people are incapable of meaningful judgment and require direction from those smart enough to know better.
This is why the flag-posting rings so hollow. Genuine patriotism implies reciprocal obligation—that loving your country means contributing to its maintenance as a collective project, that national pride entails responsibility for national institutions, that citizenship is something you participate in rather than perform. What the tech oligarchs demonstrate is nationalism without reciprocity: they want the aesthetic of belonging to a great nation while refusing every actual obligation that citizenship requires.
They love America as a brand, as an identity marker, as a territory they control. But they hate America as an actual collective project requiring their submission to democratic judgment, their participation in shared governance, their acceptance that other citizens possess equal standing to challenge their preferences and constrain their power.
Even Steve Bannon—nationalist populist, former Trump strategist, authoritarian movement builder—recognizes what the Silicon Valley faction represents. In a rare point of agreement across factional lines, Bannon has observed that the tech oligarchs aren’t patriots but post-national extractors using patriotic language to disguise systematic looting. When even authoritarian allies can see that you’re not engaged in national renewal but oligarchic capture, the performance has become too obvious to maintain.
Americans are not peasants. We are citizens of a republic founded on the revolutionary proposition that self-governance is possible, that ordinary people possess the capacity for judgment, that democratic deliberation beats optimization by superior intelligence. Every accommodation to oligarchic extraction, every acceptance of their framing, every failure to defend citizenship against those who would reduce us to subjects in their optimization experiments—all of it betrays the fundamental premise that makes America America.
We deserve better than this because citizenship is the foundation of what we are. Not subjects. Not users. Not populations to be managed. Citizens.
And citizens don’t wait for permission to defend what we are. We govern, or we lose everything that makes us who we are. The choice is here. The choice is now. History will not forgive us if we forget what we are—and surrender without a fight to those who would reduce us to peasants in a land our ancestors bled to make free.
We are not peasants. We are citizens. And citizenship is not a gift granted by superior intelligence. It is a responsibility we claim, a burden we carry, a right we defend—or lose forever to those who never believed we deserved it in the first place.
Mike Brock is a former tech exec who was on the leadership team at Block. Originally published at his Notes From the Circus.
Back in April 2023, when Substack CEO Chris Best refused to answer basic questions about whether his platform would allow racist content, I noted that his evasiveness was essentially hanging out a “Nazis Welcome” sign. By December, when the company doubled down and explicitly said they’d continue hosting and monetizing Nazi newsletters, they’d fully embraced their reputation as the Nazi bar.
Last week, we got a perfect demonstration of what happens when you build your platform’s reputation around welcoming Nazis: your recommendation algorithms start treating Nazi content not just as content worth tolerating, but as content worth promoting.
As Taylor Lorenz reported on User Mag’s Patreon account, Substack sent push notifications to users encouraging them to subscribe to “NatSocToday,” a newsletter that “describes itself as ‘a weekly newsletter featuring opinions and news important to the National Socialist and White Nationalist Community.'”
As you can see, the notification included the newsletter’s swastika logo, leading confused users to wonder why they were getting Nazi symbols pushed to their phones.
“I had [a swastika] pop up as a notification and I’m like, wtf is this? Why am I getting this?” one user said. “I was quite alarmed and blocked it.” Some users speculated that Substack had issued the push alert intentionally in order to generate engagement or that it was tied to Substack’s recent fundraising round. Substack is primarily funded by Andreessen Horowitz, a firm whose founders have pushed extreme far right rhetoric.
“I thought that Substack was just for diaries and things like that,” a user who posted about receiving the alert on his Instagram story told User Mag. “I didn’t realize there was such a prominent presence of the far right on the app.”
Substack’s response was predictable corporate damage control:
“We discovered an error that caused some people to receive push notifications they should never have received,” a spokesperson told User Mag. “In some cases, these notifications were extremely offensive or disturbing. This was a serious error, and we apologize for the distress it caused.”
But here’s the thing about algorithmic “errors”—they reveal the underlying patterns your system has learned. Recommendation algorithms don’t randomly select content to promote. They surface content based on engagement metrics: subscribers, likes, comments, and growth patterns. When Nazi content consistently hits those metrics, the algorithm learns to treat it as successful content worth promoting to similar users.
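To make that concrete, here is a deliberately simplified sketch of how an engagement-driven promotion pipeline behaves. This is not Substack’s actual code; the fields, weights, and names are invented. The point is that nothing in the scoring step looks at what the content actually says.

```python
# A toy illustration (not Substack's actual system) of why an engagement-driven
# recommender surfaces whatever performs well, with no notion of what the
# content actually is. All thresholds and field names here are invented.
from dataclasses import dataclass
from typing import List

@dataclass
class Newsletter:
    name: str
    subscribers: int
    likes: int
    weekly_growth: float  # fraction, e.g. 0.12 == 12% week over week

def promo_score(n: Newsletter) -> float:
    # Engagement-only scoring: nothing in this function reads the content.
    return n.subscribers * (1 + n.weekly_growth) + 0.5 * n.likes

def picks_for_push_notification(candidates: List[Newsletter], k: int = 1) -> List[Newsletter]:
    return sorted(candidates, key=promo_score, reverse=True)[:k]

catalog = [
    Newsletter("Quiet poetry journal", subscribers=900, likes=40, weekly_growth=0.01),
    Newsletter("Fast-growing extremist newsletter", subscribers=750, likes=300, weekly_growth=0.12),
]
print([n.name for n in picks_for_push_notification(catalog)])
# -> ['Fast-growing extremist newsletter']  # the metrics are the only "editor"
```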
There may be some randomness involved, and a single recommendation isn’t a perfect window into how a system has been trained, but this at least raises some serious questions about what Substack thinks people will like, based on its existing data.
As Lorenz notes, the Nazi newsletter that got promoted has “746 subscribers and hundreds of collective likes on Substack Notes.” More troubling, users who clicked through were recommended “related content from another Nazi newsletter called White Rabbit,” which has over 8,600 subscribers and “is also being recommended on the Substack app through its ‘rising’ leaderboard.”
This isn’t a bug. It’s a feature working exactly as designed. Substack’s recommendation systems are doing precisely what they’re built to do: identify content that performs well within the platform’s ecosystem and surface it to potentially interested users. The “error” isn’t that the algorithm malfunctioned—it’s that Substack created conditions where Nazi content could thrive well enough to trigger promotional systems in the first place.
When you build a platform that explicitly welcomes Nazi content, don’t act surprised when that content performs well enough to trigger your promotional systems. When you’ve spent years defending your decision to help Nazis monetize their content, you can’t credibly claim to be “disturbed” when your algorithms recognize that Nazi content is succeeding on your platform.
The real tell here isn’t the push notification itself—it’s that Substack’s discovery systems are apparently treating Nazi newsletters as content worth surfacing to new users. That suggests these publications aren’t just surviving on Substack, they’re thriving well enough to register as “rising” content worthy of algorithmic promotion.
This is the inevitable endpoint of Substack’s content moderation philosophy. You can’t spend years positioning yourself as the platform that won’t “censor” Nazi content, actively help those creators monetize, and then act shocked when your systems start treating that content as editorially valuable.
This distinction matters enormously in terms of what sort of speech you are endorsing: there’s a world of difference between passively hosting speech and actively promoting it. When Substack defended hosting Nazi newsletters, they could claim they were simply providing infrastructure for discourse. But push notifications and algorithmic recommendations are something different—they’re editorial decisions about what content deserves amplification and which users might be interested in it.
To be clear, that’s entirely protected speech under the First Amendment as all editorial choices are protected. Substack is allowed to promote Nazis. But they should really stop pretending they don’t mean to. They’ve made it clear that they welcome literal Nazis on their platform and now it’s been made clear that their algorithm recognizes that Nazi content performs well.
This isn’t about Substack “supporting free speech”—it’s about Substack’s own editorial speech and what it’s choosing to say. They’re not just saying “Nazis welcome.” They’re saying “we think other people will like Nazi content too.”
And the public has every right to use their own free speech to call out and condemn such a choice. And use their own free speech rights of association to say “I won’t support Substack” because of this.
All the corporate apologies in the world can’t change what their algorithms revealed: when you welcome Nazis, you become the Nazi bar. And when you become the Nazi bar, your systems start working to bring more customers to the Nazis.
Your reputation remains what you allow. But it’s even more strongly connected to what you actively promote.
Earlier this year, I was a part of a CNN documentary, Twitter: Breaking the Bird, which gave me much pause for reflection about the state of social media and how we got here. This year alone we’ve witnessed an unprecedented wave of disruption across these platforms.
Government workers, locked out of their jobs, struggled to organize securely. Protestors, seeking to plan No Kings marches, wondered which app could be the most trusted. Inbound international travelers have been deleting their social apps for fear that immigration officers will search their phones. And during major disasters, like the tragic Texas floods and the LA fires, emergency responders and volunteers find their critical updates buried by algorithms that prioritize engagement over urgency. On a daily basis, countless online communities face arbitrary deplatforming, surveillance, and loss of their digital spaces without recourse or explanation.
These aren’t isolated incidents: they’re symptoms of a fundamental crisis in how we’ve allowed our digital communities to be governed. We’ve unwittingly accepted a system where massive corporations control the public sphere, algorithms optimize for advertising revenue rather than human connection, and we the people have no real agency over our digital existence.
We’ve Lost Our Way
I’ve spent decades building social technologies, including working at Odeo, the company that ultimately pivoted to become Twitter. There I was the social app’s first employee and de facto CTO until late 2006, and I have since built numerous other community organizing platforms. I’ve watched with growing concern as our digital spaces have become increasingly toxic and hostile to genuine community needs. The promise of social media as we defined it in the early days—to connect and empower communities of people—has been subverted by a business model that treats human connection as a commodity to be monetized.
Today, if you run a Facebook Group with thousands of members, you have no real authority – your community exists at the whim of corporate policies you cannot influence. This is fundamentally at odds with how real-world communities have always operated. Your local gardening club, bowling league, or neighborhood association has democratic processes for leadership and decision-making. Why should our digital communities be any different?
It’s Time For a New Social Media Bill Of Digital Rights
I believe that the time has come for a new Social Media Bill of Digital Rights. Just as the original Bill of Rights protected individual freedoms from government overreach, we need fundamental protections for our digital communities from corporate control and surveillance capitalism.
So what could such a Social Media Bill of Rights include?
The right to privacy & security: The ability to communicate and organize without fear of surveillance or exploitation.
The right to own and control your identity: People and their communities must own their digital identities, connections and data. And, as the owner of an account, you can exercise the right to be forgotten.
The right to choose and understand algorithms (transparency): The ability to choose the algorithms that shape your interactions, with no more black-box systems optimizing for engagement at the expense of community well-being. (See the sketch after this list for what user-selectable ranking could look like.)
The right to community self-governance: Crucially, communities of users need the right to self govern, setting their own rules for behavior which are contextually relevant to their community. (Note: this does not preclude developer governance.)
The right to full portability – the right to exit: The freedom to port your community in its entirety to another app without losing your connections and content.
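As a thought experiment on the algorithm-choice right above, here is a minimal sketch of what a user-selectable ranking layer could look like. Everything here is hypothetical (no real platform’s API); the point is that ranking can be an inspectable, swappable function the user picks, rather than a black box.

```python
# A minimal, hypothetical sketch of "the right to choose your algorithm":
# the client exposes pluggable rankers and the user picks one.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable, Dict, List

@dataclass
class Post:
    author: str
    text: str
    created_at: datetime
    likes: int
    reshares: int

Ranker = Callable[[List[Post]], List[Post]]

def chronological(posts: List[Post]) -> List[Post]:
    """Newest first -- no engagement signals at all."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def engagement(posts: List[Post]) -> List[Post]:
    """Rank by a simple engagement score (what most feeds default to)."""
    return sorted(posts, key=lambda p: p.likes + 2 * p.reshares, reverse=True)

RANKERS: Dict[str, Ranker] = {
    "chronological": chronological,
    "engagement": engagement,
}

def build_feed(posts: List[Post], user_choice: str) -> List[Post]:
    # The key move: the *user* selects the ranking function, and it is
    # inspectable code rather than a black box on a server.
    return RANKERS[user_choice](posts)

posts = [
    Post("ana", "Community meeting notes", datetime(2025, 7, 1, tzinfo=timezone.utc), 3, 0),
    Post("bo", "Outrage bait", datetime(2025, 6, 30, tzinfo=timezone.utc), 50, 40),
]
print([p.author for p in build_feed(posts, "chronological")])  # ['ana', 'bo']
print([p.author for p in build_feed(posts, "engagement")])     # ['bo', 'ana']
```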
To determine whether these are the appropriate “Rights,” I’ve just launched a new podcast, Revolution.Social, where I invite my guests, including the likes of Jack Dorsey, Cory Doctorow, Yoel Roth, Kara Swisher and Renee DiResta, to share their feedback and debate where we need to head next.
Architecting For A Better Future
The good news is that the technical foundations for a better future already exist through open protocols that work like the web itself – interconnected and controlled by no single entity.
The Fediverse, powered by ActivityPub, enables platforms like Mastodon to create interconnected communities free from corporate control.
Nostr provides a foundation for decentralized, encrypted communication that no one can shut down.
Bluesky is pioneering user choice in algorithms.
Signal demonstrates that private, secure communication is possible at scale.
Unlike the walled gardens of Meta, TikTok, and Twitter (now X), these open protocols allow communities to connect across platforms while maintaining control of their spaces. When you use email or browse the web, you don’t worry about which email provider or browser your friends use – it just works. Our social spaces should function the same way.
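To make the “it just works” point concrete, here is a minimal sketch of how that interconnection works in practice for ActivityPub-based services: a handle like user@domain can be resolved by any client, on any server, through the standard WebFinger discovery step, with no central platform in the loop. The handle and domain below are hypothetical placeholders, and this is an illustrative sketch rather than a complete client.

```python
# Minimal sketch of Fediverse account discovery via WebFinger (RFC 7033).
# The handle "someone@example.social" is a hypothetical placeholder.
import json
import urllib.parse
import urllib.request


def resolve_fediverse_handle(handle: str):
    """Return the ActivityPub actor URL for a 'user@domain' handle, if one is advertised."""
    user, domain = handle.split("@", 1)
    query = urllib.parse.urlencode({"resource": f"acct:{user}@{domain}"})
    url = f"https://{domain}/.well-known/webfinger?{query}"
    with urllib.request.urlopen(url) as resp:
        jrd = json.load(resp)
    # The WebFinger response lists links; the "self" link with an
    # activity+json type points at the actor document any server can fetch.
    for link in jrd.get("links", []):
        if link.get("rel") == "self" and "activity+json" in link.get("type", ""):
            return link.get("href")
    return None


if __name__ == "__main__":
    print(resolve_fediverse_handle("someone@example.social"))  # hypothetical handle
```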
What’s missing is the bridge between these technical capabilities and the tools communities actually need to thrive. We need to move from closed, corporate platforms to open protocols that communities can shape and control. This isn’t just a technical challenge – it needs to become a social movement. We need to build systems that are co-designed with communities, that respect their autonomy, and that enable their authentic purposes.
Evan Henshaw-Plath, known as “rabble,” is an activist and technologist passionate about building commons-based social media apps that prioritize equity and sustainability.
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Zeve Sanderson, the founding Executive Director of the NYU Center for Social Media & Politics. Together, they cover:
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Modulate. In our Bonus Chat, we speak with Modulate CTO Carter Huffman about how their voice technology can actually detect fraud.
Earlier this year, California passed SB 976, yet another terrible and obviously unconstitutional bill with the moral panicky title “Protecting Our Kids from Social Media Addiction Act.” The law restricts minors’ access to social media and imposes burdensome requirements on platforms. It is the latest in a string of misguided attempts by California lawmakers to regulate online speech “for the children.” And like its predecessors, it is destined to fail a court challenge on First Amendment grounds.
The bill’s sponsor, Senator Nancy Skinner, has a history of relying on junk science and misrepresenting research to justify her moral panic over social media. Last year, in pushing for a similar bill, Skinner made blatantly false claims based on her misreading of already misleading studies. It seems facts take a backseat when there’s a “think of the children!” narrative to push.
The law builds on the Age Appropriate Design Code, without acknowledging that much of that law was deemed unconstitutional by an appeals court earlier this year (after being found similarly unconstitutional by the district court last year). This bill, like a similar one in New York, assumes (falsely and without any evidence) that “algorithms” are addictive.
As we just recently explained, if you understand the history of the internet, algorithms have long played an important role in making the internet usable. The idea that they’re “addictive” has no basis in reality. But the law insists otherwise. It would then ban these “addictive algorithms” if a website knows a user is a minor. It also has restrictions on when notifications can be sent to a “known” minor (basically no notifications during school hours or late at night).
California is again attempting to unconstitutionally regulate minors’ access to protected online speech—impairing adults’ access along the way. The restrictions imposed by California Senate Bill 976 (“Act” or “SB976”) violate bedrock principles of constitutional law and precedent from across the nation. As the United States Supreme Court has repeatedly held, “minors are entitled to a significant measure of First Amendment protection.” Brown v. Ent. Merchs. Ass’n, 564 U.S. 786, 794 (2011) (cleaned up) (quoting Erznoznik v. Jacksonville, 422 U.S. 205, 212-13 (1975)). And the government may not impede adults’ access to speech in its efforts to regulate what it deems acceptable for minors. Ashcroft v. ACLU, 542 U.S. 656, 667 (2004); Reno v. ACLU, 521 U.S. 844, 882 (1997). These principles apply with equal force online: Governments cannot “regulate [‘social media’] free of the First Amendment’s restraints.” Moody v. NetChoice, LLC, 144 S. Ct. 2383, 2399 (2024).
That is why courts across the country have enjoined similar state laws restricting minors’ access to online speech. NetChoice, LLC v. Reyes, 2024 WL 4135626 (D. Utah Sept. 10, 2024) (enjoining age-assurance, parental-consent, and notifications-limiting law); Comput. & Commc’n Indus. Ass’n v. Paxton, 2024 WL 4051786 (W.D. Tex. Aug. 30, 2024) (“CCIA”) (enjoining law requiring filtering and monitoring of certain content-based categories of speech on minors’ accounts); NetChoice, LLC v. Fitch, 2024 WL 3276409 (S.D. Miss. July 1, 2024) (enjoining age-verification and parental-consent law); NetChoice, LLC v. Yost, 716 F. Supp. 3d 539 (S.D. Ohio 2024) (enjoining parental-consent law); NetChoice, LLC v. Griffin, 2023 WL 5660155 (W.D. Ark. Aug. 31, 2023) (enjoining age-verification and parental-consent law).
This Court should similarly enjoin Defendant’s enforcement of SB976 against NetChoice members.
As we’ve discussed, the politics behind challenging these laws makes it a complex and somewhat fraught process. So I’m glad that NetChoice continues to step up and challenge many of these laws.
The complaint lays out that the parental consent requirements in the bill violate the First Amendment:
The Act’s parental-consent provisions violate the First Amendment. The Act requires that covered websites secure parental consent before allowing minor users to (1) access “feed[s]” of content personalized to individual users, § 27001(a); (2) access personalized feeds for more than one hour per day, § 27002(b)(2); and (3) receive notifications during certain times of day, § 27002(a). Each of these provisions restricts minors’ ability to access protected speech and websites’ ability to engage in protected speech. Accordingly, each violates the First Amendment. The Supreme Court has held that a website’s display of curated, personalized feeds is protected by the First Amendment. Moody, 144 S. Ct. at 2393. And it has also held that governments may not require minors to secure parental consent before accessing or engaging in protected speech. Brown, 564 U.S. at 799;
So too do the age assurance requirements:
The Act’s requirements that websites conduct age assurance to “reasonably determine” whether a user is a minor, §§ 27001(a)(1)(B), 27002(a)(2), 27006(b)-(c), also violate the First Amendment. Reyes, 2024 WL 4135626, at *16 n.169 (enjoining age-assurance requirement); Fitch, 2024 WL 3276409, at *11-12 (enjoining age-verification requirement); Griffin, 2023 WL 5660155, at *17 (same). All individuals, minors and adults alike, must comply with this age-assurance requirement—which would force them to hand over personal information or identification that many are unwilling or unable to provide—as a precondition to accessing and engaging in protected speech. Such requirements chill speech, in violation of the First Amendment. See, e.g., Ashcroft, 542 U.S. at 673; Reno, 521 U.S. at 882.
It also calls out that there’s an exemption for consumer review sites (good work, Yelp lobbyists!), which highlights how the law targets specific types of content, something the First Amendment does not allow.
“SB976 does not regulate speech,” Bonta’s office said in an emailed statement. “The same companies that have committed tremendous resources to design, deploy, and market social media platforms custom-made to keep our kids’ eyes glued to the screen are now attempting to halt California’s efforts to make social media safer for children,” the statement added, saying the attorney general’s office would respond in court.
Except he said that about the Age Appropriate Design Code and lost in court. He said that about the Social Media Transparency bill and lost in court. He said that about the recent AI Deepfake law… and lost in court.
See a pattern?
It would be nice if Rob Bonta finally sat down with actual First Amendment lawyers and learned how the First Amendment worked. Perhaps he and Governor Newsom could take that class together so Newsom stops signing these bills into law?
When it comes to Section 230, we’ve seen a parade of embarrassingly wrong takes over the years, all sharing one consistent theme: the authors confidently opine on the law despite clearly not understanding it. Each time I think we’ve hit bottom with the most ridiculously wrong take, along comes another challenger.
This week’s is a doozy.
I don’t want to come off as harsh in critiquing these articles, but it’s exasperating. It’s very possible for the people writing these articles to actually educate themselves. And in this case in particular, at least two of the authors have published something similar before and have been called out for their factual errors, and have chosen to double down, rather than educate themselves. So if the tone of this piece sounds angry, it’s exasperation that the authors are now deliberately choosing to misrepresent reality.
I’ve written twice about Professor Allison Stanger, both times in regards to her extraordinarily confused misunderstandings about Section 230 and how it intersects with the First Amendment. It appears that she has not taken the opportunity in the interim to learn literally anything about the law. Instead, she is now (1) using an association with Harvard’s prestigious Kennedy School to further push utter batshit nonsense disconnected from reality, and (2) sullying others’ reputations in the process.
I first wrote about her when she teamed up with infamous (and frequently wrong) curmudgeon Jaron Lanier to write a facts-optional screed against Section 230 in Wired magazine that got so much factually wrong that it was embarrassing. The key point that Stanger/Lanier claimed was that Section 230 somehow gave the internet an ad-based business model, which is not even remotely close to true. Among other things, that article confused Section 230 with the DMCA (two wholly different laws) and then tossed in a bunch of word salad about “data dignity,” a meaningless phrase.
Even weirder, the beginning of that article seems to complain that not enough content is moderated (too much bad content!), but by the end they’re complaining that too much good content is moderated. Somehow, the article suggests, if we got rid of Section 230, exactly the right kinds of content would be moderated, and somehow advertising would no longer be bad and harassment would disappear. Then they say websites should only moderate based on the First Amendment which would forbid sites from moderating a bunch of the things the article said needed moderating. I dunno, man. It made no sense.
Somehow, Stanger leveraged that absolute nonsense into a chance to appear before a congressional committee, where she falsely claimed that decentralized social media apps were the same thing as decentralized autonomous organizations. They’re wholly different things. She also told the committee that Wikipedia wouldn’t be sued without Section 230 because “their editing is done by humans who have first amendment rights.”
Which is quite an incredibly confusing thing to say. Humans with First Amendment rights still get sued all the time.
Unfortunately, this time, they’ve dragged along Audrey Tang as a co-author. I’ve met Tang and I have tremendous respect for her. As digital minister of Taiwan, she did some amazing things to use the internet for good in the world of civic tech. She’s also spoken about the importance of the internet to free speech in Taiwan, and of the open World Wide Web to democracy there. She’s very thoughtful about the intersection of technology, speech, and law.
But she is not an expert on Section 230 or the First Amendment, and it shows in this piece.
At least this article starts with a recognition of the First Amendment, but it even gets the very basics of that wrong:
The First Amendment is often misunderstood as permitting unlimited speech. In reality, it has never protected fraud, libel, or incitement to violence. Yet Section 230, in its current form, effectively shields these forms of harmful speech when amplified by algorithmic systems. It serves as both an unprecedented corporate liability shield and a license for technology companies to amplify certain voices while suppressing others. To truly uphold First Amendment freedoms, we must hold accountable the algorithms that drive harmful virality while protecting human expression.
Yes, some people misunderstand the First Amendment that way, but no, Section 230 does not shield “those forms of harmful speech.” Also, the “incitement to violence” exception is governed by the Brandenburg test and is technically “incitement to imminent lawless action,” which is not the same thing as “incitement to violence.” To meet the Brandenburg test, the speech has to be “intended to incite or produce imminent lawless action, and likely to incite such action.”
This is an extremely high bar, and nearly all harassment does not cross that bar.
Also, this completely misunderstands Section 230, which does not actually “shield these forms of harmful speech.” If the speech is actually unprotected under the First Amendment, Section 230 does absolutely nothing to “shield” it. All 230 does is say that we place the liability on the speaker. If the speech actually falls outside First Amendment protection (and, as we’ll get to, this piece plays fast and loose with how the First Amendment actually works), then 230 doesn’t stand in the way at all of holding the speaker liable.
Yet, this piece seems to argue that if we got rid of Section 230 and somehow forced websites to only moderate to the Brandenburg standard, it would somehow magically stop harassment.
The choice before us is not binary between unchecked viral harassment and heavy-handed censorship. A third path exists: one that curtails viral harassment while preserving the free exchange of ideas. This balanced approach requires careful definition but is achievable, just as we’ve defined limits on viral financial transactions to prevent Ponzi schemes. Current engagement-based optimization amplifies hate and misinformation while discouraging constructive dialogue.
To put it mildly, this is delusional. This “third path” is basically just advocating for dictatorial control over speech.
This is a common stance for people with literally zero experience with the challenges of trust & safety and content moderation. These people seem to think that if only they were put in charge of writing the rules, they could write perfect rules that stop the bad stuff but leave the good stuff.
That’s not possible. And anyone with any experience in a trust & safety role would know that. Which is why it would be great if non-experts stopped cosplaying as if they understand this stuff.
There’s a reason that we created two separate trust & safety and content moderation games to help people like the authors of this piece understand that it’s not so simple. People are complicated. So many things involve subjective calls in murky gray areas that even experts in the field who have spent years adjudicating these things rarely agree on how best to handle different situations.
Our proposed “repeal and renew” approach would remove the liability shield for social media companies’ algorithmic amplification while protecting citizens’ direct speech. This reform distinguishes between fearless speech—which deserves constitutional protection—and reckless speech that causes demonstrable harm. The evidence of such harm is clear: from the documented mental health impacts of engagement-optimized content to the spread of child sexual abuse material (CSAM) through algorithm-driven networks.
Ah, so your problem is with the First Amendment, not Section 230. The idea that only “fearless speech” deserves constitutional protection is a lovely fantasy for law professors, but it’s not the law. And never has been. You would need to first completely dismantle over a century’s worth of First Amendment jurisprudence before we even get to the question of 230, which wouldn’t do what you want it to do in the first place.
Under the First Amendment, “reckless speech” remains protected, except in some very specific, well-delineated cases. And you can’t just wave your arms and pretend otherwise, even though that’s what Stanger, Lanier, and Tang do here.
That’s not how it works.
And, because the three of them seem to be coming up with simplistically wrong solutions to inherently complex problems, let’s dig in a bit more on the examples they have. First off, CSAM is already extremely illegal and not protected by either the First Amendment or Section 230. So it’s bizarre that it’s even mentioned here (unless you don’t understand how any of this works).
But how about “the documented mental health impacts of engagement-optimized content”? That’s… not actually proven? This has been discussed widely over the last few years, but the vast majority of research finds no such causal links. Yes, you have a few folks who claim it’s proven, but many of the leading researchers in the field, and multiple meta-analyses of the research, have found no actual evidence to support a causal link between social media use and harm to mental health.
So… then what?
Stanger, Lanier, and Tang seem to take it as a given that such harm is there, even as the evidence disagrees with that claim. Do we wave a magic wand and say “well, because these three non-experts insist that social media is harmful to mental health, we suddenly make such content… no longer protected under the First Amendment?”
That’s not how the First Amendment works, and it’s not how anything works.
Or, how about we take a more specific example, even though it’s not directly raised in the article. One area of content that many people are very concerned about is “eating disorder content.” Based on what’s in this article, I’m pretty sure that Stanger, Lanier, and Tang would argue that, obviously, eating disorder content should be deemed “harmful” and therefore unprotected under the First Amendment (again, this would require a massive change to the First Amendment, but let’s leave that fantasyland in place for a moment.)
Okay, but now what?
Multiple studies have shown that (1) determining what actually is “eating disorder content” is way more difficult than most people think, because the language around it is so ever-changing, to the point that sometimes people argue that photos of gum are “eating disorder content” and (2) perhaps more importantly, simply removing eating disorder content has been shown to make eating disorder issues worse for some users!
Often, this is because eating disorder content is a demand-side issue, where people are looking for it, rather than being driven to eating disorders based on the content. Removing it often just drives those seeking it out into darker corners of the internet where, unlike in the mainstream areas of the internet, they’re less likely to see useful interventions and resources (including help from others who have recovered from eating disorders).
So, what should be done here? Under the Stanger/Lanier/Tang proposal, the answer is to make such content illegal and require websites to block it, even though that likely does even more harm to vulnerable people.
And that’s ignoring the whole First Amendment problem. Repeatedly throughout the article, Stanger/Lanier/Tang handwave around all this by suggesting that you can create a new law that concretely determines what content is allowed (and must be carried) and what content is not.
But that’s not how it works, in either direction. The law can no more compel websites to keep up speech they don’t want to host than it can force them to take down content the three authors think is “harmful” but that remains protected under the existing First Amendment tests for unprotected speech.
Given the authors’ many problems understanding speech law, it will not surprise you that they trot out the “fire in a crowded theater” line, which is the screaming siren of “this is written by people unfamiliar with the First Amendment.”
Just as someone shouting “fire” in a crowded theater can be held liable for resulting harm, operators of algorithms that incentivize harassment for engagement should face accountability.
Earlier in the piece, they pointed (incorrectly) to the Brandenburg test on incitement to imminent lawless action. Given that, you might think that someone might have pointed out to them that Brandenburg effectively rejected Schenck, the case in which the “fire in a crowded theater” line was uttered as dicta (i.e., not controlling or meaningful). But, nope. They pretend it’s the law (it’s not), just like they pretend the Brandenburg standard can magically be extended to harassment (it cannot).
The piece concludes with even more nonsense:
Section 230 today inadvertently circumvents the First Amendment’s guarantees of free speech, assembly, and petition. It enables an ad-driven business model and algorithmic moderation that optimize for engagement at the expense of democratic discourse. Algorithmic amplification is a product, not a public service. By sunsetting Section 230 and implementing new legislation that holds proprietary algorithms accountable for demonstrable harm, we can finally extend First Amendment protections to the digital public square, something long overdue.
Literally every sentence of that paragraph is wrong. Harvard should be ashamed for publishing something that would flunk a first-year Harvard Law class. Section 230 does nothing to “circumvent” the First Amendment. The First Amendment does not guarantee free speech, assembly, and petition on private property. It simply limits the government from suppressing it. Private property owners still have the editorial discretion to do as they wish, which is supported by Section 230.
As for the claim that you can magically apply liability to “algorithmic amplification” and not have that violate the First Amendment, that’s also wrong. We discussed that just last week, so I’m not going to rehash the entire argument. But algorithmic amplification is literally speech as well, and it is very much protected under the First Amendment as an opinion on “we think you’d like this.” You can’t just magically move that outside of the First Amendment. That’s not how it works.
The point is that this piece is not serious. It does not grapple with the realities of the First Amendment. It does not grapple with the impossibilities of content moderation. It does not grapple with the messiness of societal level problems with no easy solution. It ignores the evidence on social media’s supposed harms.
It sets up a fantasyland First Amendment that does not exist, it misrepresents what Section 230 does, it mangles the concept of “harms” in the online speech context, and it punts on what the simple “rules” they think they can write to get around all of that would be.
It’s embarrassing how disconnected from reality the article is.
Yet, Harvard’s Kennedy School was happy to put it out. And that should be embarrassing for everyone involved.
The NY Times has real difficulty not misrepresenting Section 230. Over and over and over and over and over again it has misrepresented how Section 230 works, even having to once run this astounding correction (to an article that had a half-page headline saying Section 230 was at fault):
A day later, it had to run another correction on a different article also misrepresenting Section 230:
You would think with all these mistakes and corrections that the editors at the NY Times might take things a bit more slowly when either a reporter or a columnist submits a piece purportedly about Section 230.
Apparently not.
Julia Angwin has done some amazing reporting on privacy issues in the past and has exposed plenty of legitimately bad behavior by big tech companies. But, unfortunately, she appears to have been sucked into nonsense about Section 230.
She recently wrote a terribly misleading opinion piece, bemoaning social media algorithms and blaming Section 230 for their existence. The piece is problematic and wrong on multiple levels. It’s disappointing that it ever saw the light of day without someone pointing out its many flaws.
A history lesson:
Before we get to the details of the article, let’s take a history lesson on recommendation algorithms, because it seems that many people have very short memories.
The early internet was both great and a mess. It was great because anyone could create anything and communicate with anyone. But it was a mess because that came with a ton of garbage and slop. There were attempts to organize that information and make it useful. Things like Yahoo became popular not because they had a search engine (that came later!) but because they were an attempt to “organize” the internet (Yahoo originally stood for “Yet Another Hierarchical Officious Oracle”, recognizing that there were lots of attempts to “organize” the internet at that time).
After that, searching and search algorithms became a central way of finding stuff online. In its simplest form, search is a recommendation algorithm based on the keywords you provide, run against an index. In the early days, Google cracked the code to make that recommendation algorithm work for content on the wider internet.
The whole point of a search recommendation is “the algorithm thinks these are the most relevant bits of content for you.”
The next generation of the internet was content in various silos. Some of those were user-generated silos of content, such as Facebook and YouTube. And some of them were professional content, like Netflix or iTunes. But, once again, it wasn’t long before users felt overwhelmed with the sheer amount of content at their fingertips. Again, they sought out recommendation algorithms to help them find the relevant or “good” content, and to avoid the less relevant “bad” content. Netflix’s algorithm isn’t very different from Google’s recommendation engine. It’s just that, rather than “here’s what’s most relevant for your search keywords,” it’s “here’s what’s most relevant based on your past viewing history.”
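To illustrate the point (as a toy sketch only, not a description of any company’s actual system), both kinds of recommendation reduce to the same basic operation: score a catalog of items against some signal about the user, whether that signal is a search query or past viewing history, and return the top matches. Everything in the snippet below is made up for illustration.

```python
# Toy recommender: the same scoring loop handles "search" and "history" signals.
from collections import Counter

CATALOG = {
    "gardening-basics": {"tags": ["gardening", "howto", "outdoors"]},
    "bowling-league-recap": {"tags": ["bowling", "sports", "local"]},
    "mushroom-guide": {"tags": ["foraging", "outdoors", "howto"]},
}


def score(item_tags, signal):
    """Count how many of an item's tags overlap with the user signal."""
    return sum(signal[t] for t in item_tags if t in signal)


def recommend(signal_terms, limit=2):
    """Rank catalog items against a bag of signal terms (search keywords or history tags)."""
    signal = Counter(signal_terms)
    ranked = sorted(CATALOG, key=lambda name: score(CATALOG[name]["tags"], signal), reverse=True)
    return ranked[:limit]


# "Search": the signal is the user's query keywords.
print(recommend(["outdoors", "howto"]))
# "Netflix-style": the signal is tags from things the user already watched.
print(recommend(CATALOG["gardening-basics"]["tags"]))
```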
Indeed, Netflix somewhat famously perfected the content recommendation algorithm in those years, even offering up a $1 million prize to anyone who could build a better version. Years later, a team of researchers won the award, but Netflix never implemented it, saying that the marginal gains in quality were not worth the expense.
Either way, though, it was clearly established that the benefit and the curse of the larger internet is that in enabling anyone to create and access content, too much content is created for anyone to deal with. Thus, curation and recommendation is absolutely necessary. And handling both at scale requires some sort of algorithms. Yes, some personal curation is great, but it does not scale well, and the internet is all about scale.
People also seem to forget that recommendation algorithms aren’t just telling you what content they think you’ll want to see. They’re also helping to minimize the content you probably don’t want to see. Search engines choosing which links show up first are also choosing which links they won’t show you. My email is only readable because of the recommendation engines I run against it (more than just a spam filter, I also run algorithms that automatically put emails into different folders based on likely importance and priority).
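As a concrete (and entirely hypothetical) example of that kind of filtering, the folder-sorting described above amounts to a few lines of curation logic that decide what shows up first and what you never see at all. The rules, addresses, and folder names below are invented for illustration.

```python
# Toy email triage: simple rules deciding which folder a message lands in.
def triage(message: dict) -> str:
    sender = message.get("from", "")
    subject = message.get("subject", "").lower()
    if message.get("spam_score", 0) > 0.9:
        return "spam"                       # hide what you almost certainly don't want
    if sender.endswith("@work.example"):    # hypothetical high-priority domain
        return "priority"
    if "newsletter" in subject or "unsubscribe" in subject:
        return "later"
    return "inbox"


print(triage({"from": "boss@work.example", "subject": "Q3 planning", "spam_score": 0.1}))
print(triage({"from": "deals@shop.example", "subject": "Newsletter: 50% off!", "spam_score": 0.2}))
```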
Algorithms aren’t just a necessary part of making the internet usable today. They’re a key part of improving our experiences.
Yes, sometimes algorithms get things wrong. They could recommend something you don’t want. Or demote something you do. Or maybe they recommend some problematic information. But sometimes people get things wrong too. Part of internet literacy is recognizing that what an algorithm presents to you is just a suggestion and not wholly outsourcing your brain to the algorithm. If the problem is people outsourcing their brain to the algorithm, it won’t be solved by outlawing algorithms or adding liability to them.
It being just a suggestion or a recommendation is also important from a legal standpoint, because recommendation algorithms are simply opinions. They are opinions about what content the algorithm thinks is most relevant to you, based on the information it has at that moment.
And opinions are protected free speech under the First Amendment.
If we held anyone liable for opinions or recommendations, we’d have a massive speech problem on our hands. If I go into a bookstore, and the guy behind the counter recommends a book to me that makes me sad, I have no legal recourse, because no law has been broken. If we say that tech company algorithms mean they should be liable for their recommendations, we’ll create a huge mess: spammers will be able to sue if email is filtered to spam. Terrible websites will be able to sue search engines for downranking their nonsense.
On top of that, First Amendment precedent has long been clear that the only way a distributor can be held liable for even a harmful recommendation is if the distributor had actual knowledge of the law-violating nature of the recommendation.
I know I’ve discussed this case before, but it always gets lost in the mix. In Winter v. G.P. Putnam’s Sons, the Ninth Circuit said a publisher was not liable for publishing a mushroom encyclopedia that literally “recommended” people eat poisonous mushrooms. The issue was that the publisher had no way to know that the mushroom was, in fact, inedible.
We conclude that the defendants have no duty to investigate the accuracy of the contents of the books it publishes. A publisher may of course assume such a burden, but there is nothing inherent in the role of publisher or the surrounding legal doctrines to suggest that such a duty should be imposed on publishers. Indeed the cases uniformly refuse to impose such a duty. Were we tempted to create this duty, the gentle tug of the First Amendment and the values embodied therein would remind us of the social costs.
It’s not hard to transpose this to the internet. If Google recommends a link that causes someone to poison themselves, precedent says we can hold the author liable, but not the distributor/recommender unless they have actual knowledge of the illegal nature of the content. Absent that, there is nothing to actually sue over.
And, that’s good. Because you can’t demand that anyone recommending anything know with certainty whether or not the content they are recommending is good or bad. That puts way too much of a burden on the recommender, and makes the mere process of recommending anything a legal minefield.
Note that the issue of Section 230 does not come up even once in this history lesson. All that Section 230 does is say that websites and users (that’s important!) are immune from liability for their editorial choices regarding third-party content. That doesn’t change the underlying First Amendment protections for their editorial discretion; it just allows them to get cases tossed out earlier (at the very earliest motion to dismiss stage) rather than having to go through expensive discovery/summary judgment and possibly even all the way to trial.
Section 230 isn’t the issue here:
Now back to Angwin’s piece. She starts out by complaining about Mark Zuckerberg talking up Meta’s supposedly improved algorithms. Then she takes the trite and easy route of dunking on that by pointing out that Facebook is full of AI slop and clickbait. That’s true! But… that’s got nothing to do with legal liability. That simply has to do with… how Facebook works and how you use Facebook? My Facebook feed has no AI slop or clickbait, perhaps because I don’t click on that stuff (and I barely use Facebook). If there was no 230 and Facebook were somehow incentivized to do less algorithmic recommendation, feeds would still be full of nonsense. That’s why the algorithms were created in the first place. Indeed, studies have shown that when you remove algorithms, feeds are filled with more nonsense, because the algorithms don’t filter out the crap any more.
But Angwin is sure that Section 230 is to blame and thinks that if we change it, it will magically make the algorithms better.
Our legal system is starting to recognize this shift and hold tech giants responsible for the effects of their algorithms — a significant, and even possibly transformative, development that over the next few years could finally force social media platforms to be answerable for the societal consequences of their choices.
Let’s back up and start with the problem. Section 230, a snippet of law embedded in the 1996 Communications Decency Act, was initially intended to protect tech companies from defamation claims related to posts made by users. That protection made sense in the early days of social media, when we largely chose the content we saw, based on whom we “friended” on sites such as Facebook. Since we selected those relationships, it was relatively easy for the companies to argue they should not be blamed if your Uncle Bob insulted your strawberry pie on Instagram.
So, again, this is wrong. From the earliest days of the internet, we always relied on recommendation systems and moderation, as noted above. And “social media” didn’t even come into existence until years after Section 230 was created. So, it’s not just wrong to say that Section 230’s protections made sense for early social media, it’s backwards.
Also, it is somewhat misleading to call Section 230 “a snippet of law embedded in the 1996 Communications Decency Act.” Section 230 was an entirely different law, designed to be a replacement for the CDA. It was the Internet Freedom and Family Empowerment Act and was put forth by then-Reps. Cox and Wyden as an alternative to the CDA. Then, Congress, in its infinite stupidity, took both bills and merged them.
But it was also intended to help protect companies from being sued for recommendations. Indeed, two years ago, Cox and Wyden explained this to the Supreme Court in a case about recommendations:
At the same time, Congress drafted Section 230 in a technology-neutral manner that would enable the provision to apply to subsequently developed methods of presenting and moderating user-generated content. The targeted recommendations at issue in this case are an example of a more contemporary method of content presentation. Those recommendations, according to the parties, involve the display of certain videos based on the output of an algorithm designed and trained to analyze data about users and present content that may be of interest to them. Recommending systems that rely on such algorithms are the direct descendants of the early content curation efforts that Congress had in mind when enacting Section 230. And because Section 230 is agnostic as to the underlying technology used by the online platform, a platform is eligible for immunity under Section 230 for its targeted recommendations to the same extent as any other content presentation or moderation activities.
So the idea that 230 wasn’t meant for recommendation systems is wrong and ahistorical. It’s strange that Angwin would just claim otherwise, without backing up that statement.
Then, Angwin presents a very misleading history of court cases around 230, pointing out cases where Section 230 has been successful in getting bad cases dismissed at an early stage, but in a way that makes it sound like the cases would have succeeded absent 230:
But again, these links misrepresent and misunderstand how Section 230 functions under the umbrella of the First Amendment. None of those cases would have succeeded under the First Amendment, again because the companies had no actual knowledge of the underlying issues, and thus could not be held liable. All Section 230 did was speed up the resolution of those cases, without stopping the plaintiffs from taking legal action against those actually responsible for the harms.
And, similarly, we could point to another list of cases where Section 230 “shielded tech firms from consequences” for things we want them shielded from consequences on, like spam filters, kicking Nazis off your platform, fact-checking vaccine misinformation and election denial disinformation, removing hateful content and much, much more. Remove 230 and you lose that ability as well. And those two functions are tied together at the hip. You can’t get rid of the protections for the stuff Julia Angwin says is bad without also losing the protections for things we want to protect. At least not without violating the First Amendment.
This is the part that 230 haters refuse to understand. Platforms rely on the immunity from liability that Section 230 gives them to make editorial decisions on all sorts of content. Yet, somehow, they think that taking away Section 230 would magically lead to more removals of “bad” content. That’s the opposite of true. Remove 230 and things like removing hateful information, putting in place spam filters, and stopping medical and election misinfo becomes a bigger challenge, since it will cost much more to defend (even if you’d win on First Amendment grounds years later).
Angwin’s issue (as is the issue with so many Section 230 haters) is that she wants to blame tech companies for harms created by users of those technologies. At its simplest level, Section 230 just puts the liability on the party actually responsible. Angwin’s mad because she’d rather blame tech companies than the people actually selling drugs, sexually harassing people, selling illegal arms or engaging in human trafficking. And I get the instinct. Big tech companies suck. But pinning liability on them won’t fix that. It’ll just push them to abandon important editorial discretion (making everything worse) while simultaneously building up bigger legal teams that make sure competitors can never enter the space.
That’s the underlying issue.
Because if you blame the tech companies, you don’t get less of those underlying activities. You get companies that won’t even look to moderate such content, because that would be used in lawsuits against them as a sign of “knowledge.” Or, if the companies do decide to moderate more aggressively, you would get any attempt to speak out about sexual harassment blocked (goodbye to the #MeToo movement… is that what Angwin really wants?).
Changing 230 would make things worse, not better:
From there, Angwin points to the absolutely batshit crazy Third Circuit opinion in Anderson v. TikTok, which explicitly ignored a long list of other cases based on a misreading of a non-binding throwaway line in a Supreme Court ruling and gave no other justification, and presents it as a good thing:
If the court holds platforms liable for their algorithmic amplifications, it could prompt them to limit the distribution of noxious content such as nonconsensual nude images and dangerous lies intended to incite violence. It could force companies, including TikTok, to ensure they are not algorithmically promoting harmful or discriminatory products. And, to be fair, it could also lead to some overreach in the other direction, with platforms having a greater incentive to censor speech.
Except, it won’t do that. Because of the First Amendment, it does the opposite. The First Amendment requires actual knowledge of the violative actions and content, so this will push companies in one of two directions: taking a much less proactive stance, or becoming much quicker to remove any controversial content (so goodbye #MeToo, #BlackLivesMatter or protests against the political class).
Even worse, Angwin seems to have spoken to no one with actual expertise on this if she thinks this is the end result:
My hope is that the erection of new legal guardrails would create incentives to build platforms that give control back to users. It could be a win-win: We get to decide what we see, and they get to limit their liability.
As someone who is actively working to help create systems that give control back to users, I will say flat out that Angwin gets this backwards. Without Section 230 it becomes way more difficult to do so. Because the users themselves would now face much greater liability, and unlike the big companies, the users won’t have buildings full of lawyers willing and able to fight such bogus legal threats.
If you face liability for giving users more control, users get less control.
And, I mean, it’s incredible to say we need legal guardrails and less 230 and then say this:
In the meantime, there are alternatives. I’ve already moved most of my social networking to Bluesky, a platform that allows me to manage my content moderation settings. I also subscribe to several other feeds — including one that provides news from verified news organizations and another that shows me what posts are popular with my friends.
Of course, controlling our own feeds is a bit more work than passive viewing. But it’s also educational. It requires us to be intentional about what we are looking for — just as we decide which channel to watch or which publication to subscribe to.
As a board member of Bluesky, I can say that those content moderation settings, and the ability of others to build feeds that Angwin can then choose from, are possible in large part due to Section 230. Without Section 230 protecting both Bluesky and its users, defending lawsuits over those feeds becomes much more difficult.
Angwin literally has this backwards. Without Section 230, is Bluesky as open to offering up third-party feeds? Is it as open to allowing users to create their own feeds? Under the world that Angwin claims to want, where platforms have to crack down on “bad” content, it would be a lot more legally risky to allow user control and third-party feeds. Not because providing the feeds would lead to legal losses, but because, without 230, doing so would invite more bogus lawsuits that cost far more to get tossed out under the First Amendment.
Bluesky doesn’t have a building full of lawyers like Meta has. If Angwin got her way, Bluesky would need one to keep offering the features she claims to find so encouraging.
This is certainly not the first time that the NY Times has directly misled the public about how Section 230 works. But Angwin certainly knows many of the 230 experts in the field. It appears she spoke to none of them and wrote a piece that gets almost everything backwards. Angwin is a powerful and important voice for fixing many of the downstream problems of tech companies. I just wish she would spend some time understanding the nuances of 230 and the First Amendment so that her recommendations could be more accurate.
I’m quite happy that Angwin likes Bluesky’s approach to giving power to end users. I only wish she wasn’t advocating for something that would make that way more difficult.
Professor Eric Goldman continues to be the best at tracking any and all developments regarding internet regulations. He recently covered a series of cases in which the contours of Section 230’s liability immunity are getting chipped away in all sorts of dangerous ways. As it’s unlikely that I would have the time to cover any of these cases myself, Eric has agreed to let me repost it here. That said, his post is written for an audience that already understands Section 230 and the nuances related to it, so be aware that it doesn’t go as deep into the details. If you’re just starting to understand Section 230, here’s a good place to start, though, as Eric notes, the old knowledge may be increasingly less important.
Section 230 cases are coming faster than I can blog them. This long blog post rounds up five defense losses, riddled with bad judicial errors. Given the tenor of these opinions, how are any plaintiffs NOT getting around Section 230 at this point?
District of Columbia v. Meta Platforms, Inc., 2024 D.C. Super. LEXIS 27 (D.C. Superior Ct. Sept. 9, 2024)
The lawsuit alleges Meta addicts teens and thus violates DC’s consumer protection act. Like other cases in this genre, it goes poorly for Facebook.
Section 230
The court distills and summarizes the conflicting precedent: “The immunity created by Section 230 is thus properly understood as protection for social media companies and other providers from “intermediary” liability—liability based on their role as mere intermediaries between harmful content and persons harmed by it…. But-for causation, however, is not sufficient to implicate Section 230 immunity…. Section 230 provides immunity only for claims based on the publication of particular third-party content.”
I don’t know what “particular” third-party content means, but the statute doesn’t support any distinction based on “particular” and “non-particular” third-party content. It refers to information provided by another information content provider, which divides the world into first-party content and third-party content. Section 230 applies to all claims based on third-party content, whether that’s an individual item or the entire class.
Having manufactured the requirement that the claim must be based on “particular” content to trigger Section 230, the court says none of the claims do that.
With respect to the deceptive omissions claims, Section 230 doesn’t apply because “Meta can simply stop making affirmative misrepresentations about the nature of the third-party content it publishes, or it can disclose the material facts within its possession to ensure that its representations are not misleading or deceptive within the meaning of the CPPA.”
With respect to a different deceptive omissions claim, the court says Facebook “could avoid liability for such claims in the future without engaging in content moderation. It could disclose the information it has about the prevalence of sexual predators operating on its platforms, and it could take steps to block adult strangers from contacting minors over its apps.” I’d love for the court to explain how blocking users from contacting each other on apps differs from “content moderation.”
With respect to yet other deceptive omissions claims, the court says “If the claim seeks to hold Meta liable for omissions that make its statements about eating disorders misleading, then, as with the omissions regarding the prevalence of harmful third-party content on Meta’s platforms, the claim seeks to hold Meta liable for its own false, incomplete, and otherwise misleading representations, not for its publication of any particular third-party content. If the claim seeks to hold Meta liable for breaching a duty to disclose the harms of its platforms’ features, including the plastic surgery filter, then the claim is based on Meta’s own conduct, not on any third-party content published on its platforms.”
First Amendment
“Meta’s counsel was unable to articulate any message expressed or intended through Meta’s implementation and use of the challenged design features.” The court distinguishes a long list of precedents that it says don’t apply because they “involved state action that interfered with messaging or other expressive conduct—a critical element that is not present in the case before this court.” I don’t see how the court could possibly say that a government agency suing Facebook for not complying with government rules about the design of speech venues isn’t state action that interferes with expressive conduct. (Also, the “expressive conduct” phrase doesn’t apply here. It’s called “publishing”).
Deprioritizing content relates to “the organizing and presenting” of content, as do the design features at issue here. But the reason deprioritizing specific content or content providers can be expressive is not that it affects the way content is displayed; it can be expressive because it indicates the provider’s relative approval or disapproval of certain messages.
I don’t understand how the court can acknowledge that Facebook’s design features relate to the “organizing and presenting” of content and still say those features are not expressive.
The court continues with its odd reading of Moody:
The Supreme Court, moreover, expressly limited the reach of its holding in Moody to algorithms and other features that broadly prioritize or deprioritize content based on the provider’s preferences, and it emphasized that it was not deciding whether the First Amendment applies to algorithms that display content based on the user’s preferences
Huh? Every algorithm encodes the “provider’s preferences.” If the court is trying to say that Facebook didn’t intend to preference harmful content, that ignores the inevitability that the algorithm will make Type I/Type II errors. The court sidesteps this:
the District’s unfair trade practice claims challenge Meta’s use of addictive design features without regard to the content Meta provides, and Meta has failed to articulate even a broad or vague message it seeks to convey through the implementation of its design features. So although regulations of community norms and standards sometimes implicate expressive choices, the design features at issue here do not.
Every “design feature” implicates expressive choices. Perhaps Facebook should have done a better job articulating this, but the judge was far too eager to disrespect the editorial function.
The court adds that if the First Amendment applied, the enforcement action will be subject to, and survive, intermediate scrutiny. “The District’s stated interest in prosecuting its claims is the protection of children from the significant adverse effects of the addictive design features on Meta’s social media platforms. The District’s interest has nothing to do with the subject matter or viewpoint of the content displayed on Meta’s platforms; indeed, the complaint alleges that the harms arise without regard to the content served to any individual user. ”
It’s impossible to say with a straight face that the district is uninterested in the subject matter or viewpoint of the content displayed on Meta’s platforms. Literally, other parts of the complaint target specific subject matters.
Prima Facie Elements
The court says that the provision of Internet services constitutes a “transfer” for purposes of the consumer protection statute, “even though Meta does not charge a fee for the use of its social media platforms.”
The court says that the alleged health injuries caused by the services are sufficient harm for statutory purposes, even if no one lost money or property.
The court says some of Meta’s public statements may have been puffery, and other statements may not have been issued publicly, but “many of the statements attributed to Meta and its top officials in the complaint are not so patently hyperbolic that it would be implausible for a reasonable consumer to be misled by them. Others are sufficiently detailed, quantifiable, and capable of verification that, if proven false, they could support a deceptive trade practice claim.”
State v. Meta Platforms, Inc., 2024 Vt. Super. LEXIS 146 (Vt. Superior Ct. July 29, 2024)
Similar to the DC case, the lawsuit alleges Meta addicts teens and thus violates Vermont’s consumer protection act. This goes as well for Facebook as it did in DC.
With respect to Section 230, the court says:
Meta may well be insulated from liability for injuries resulting from bullying or sexually inappropriate posts by Instagram users, but the State at oral argument made clear that it asserts no claims on those grounds….
The State is not seeking to hold Meta liable for any content provided by another entity. Instead, it seeks to hold the company liable for intentionally leading Young Users to spend too much time on-line. Whether they are watching porn or puppies, the claim is that they are harmed by the time spent, not by what they are seeing. The State’s claims do not turn on content, and thus are not barred by Section 230.
The State’s deception claim is also not barred by Section 230 for the same reason—it does not depend on third party content or traditional editorial functions. The State alleges that Meta has failed to disclose to consumers its own internal research and findings about Instagram’s harms to youth, including “compulsive and excessive platform use.” The alleged failure to warn is not “inextricably linked to [Meta’s] alleged failure to edit, monitor, or remove [] offensive content.”
Facebook’s First Amendment defense fails because it “fails to distinguish between Meta’s role as an editor of content and its alleged role as a manipulator of Young Users’ ability to stop using the product. The First Amendment does not apply to the latter.” Thus, the court characterizes the claims as targeting conduct, not content, which only get rational basis scrutiny. “Unlike Moody, where the issue was government restrictions on content…it is not the substance of the speech that is at issue here.”
This is an extremely long (116 pages), tendentious, and very troubling opinion. The case involves a minor, TV, who used Grindr’s services to match with sexual abusers and then committed suicide. The estate sued Grindr for the standard tort claims plus a FOSTA claim. The court dismisses the FOSTA claim but rejects Grindr’s Section 230 defense for the remaining claims. It’s a rough ruling for Grindr and for the Internet generally, twisting many standard industry practices and statements into reasons to impose liability and doing a TAFS-judge-style reimagining of Section 230. Perhaps this ruling will be fixed in further proceedings, or perhaps this is more evidence we are nearing the end of the UGC era.
FOSTA
The court dismissed the FOSTA claim:
T.V., like the plaintiffs in Red Roof Inns, fails to allege facts to make Grindr’s participation in a sex trafficking venture plausible. T.V. alleges in a conclusory manner that the venture consisted of recruiting, enticing, harboring, transporting, providing, or obtaining by other means minors to engage in sex acts, without providing plausible factual allegations that Grindr “took part in the common undertaking of sex trafficking.”…, the allegations that Grindr knows minors use Grindr, knows adults target minors on Grindr, and knows about the resulting harms are insufficient.
This is the high-water mark of the opinion for Grindr. It’s downhill from here.
Causation
The court says the plaintiff adequately alleged that Grindr was the proximate cause of TV’s suicide:
reasonable persons could differ on whether Grindr’s conduct was a substantial factor in producing A.V.’s injuries or suicide or both and whether the likelihood adults would engage in sexual relations with A.V. and other minors using Grindr was a hazard caused by Grindr’s conduct
Strict Liability
The court doesn’t dismiss the strict liability claim, because it concludes the Grindr “service” was a “product.” (The plaintiff literally called Grindr a service). The court says:
Like Lyft in Brookes, Grindr designed the Grindr app for its business; made design choices for the Grindr app; placed the Grindr app into the stream of commerce; distributed the Grindr app in the global marketplace; marketed the Grindr app; and generated revenue and profits from the Grindr app….
Grindr designed and distributed the Grindr app, making Grindr’s role different from a mere service provider, putting Grindr in the best position to control the risk of harm associated with the Grindr app, and rendering Grindr responsible for any harm caused by its design choices in the same way designers of physically defective products are responsible
This is not a good ruling for virtually every Internet service. You can see how problematic this is from this passage:
T.V. is not trying to hold Grindr liable for “users’ communications,” about which the pleading says nothing. T.V. is trying to hold Grindr liable for Grindr’s design choices, like Grindr’s choice to forego age detection tools, and Grindr’s choice to provide an interface displaying the nearest users first
These “design choices” are Grindr’s speech, and they facilitate user-to-user speech. The court’s anodyne treatment of the speech considerations doesn’t bode well for Grindr.
The court says TV adequately pleaded that Grindr’s design choices were “unreasonably dangerous”:
Grindr designed its app so anyone using it can determine who is nearby and communicate with them; to allow the narrowing of results to users who are minors; and to forego age detection tools in favor of a minor-based niche market and resultant increased market share and profitability, despite the publicized danger, risk of harm, and actual harm to minors. At a minimum, those allegations make it plausible that the risk of danger in the design outweighs the benefits.
Remember, this is a strict liability claim, and these alleged “defects” could apply to many UGC services. In other words, the court’s analysis raises the spectre of industry-wide strict liability–an unmanageable risk that will necessarily drive most or all players out of the industry. Uh oh.
Also, every time I see the argument that services didn’t deploy age authentication tools, when the legal compulsion to do so has been in conflict with the First Amendment for over a quarter-century, I wonder how we got to the point where the courts so casually disregard the constitutional limits on their authority.
Grindr tried a risky argument: everyone knows it’s a dangerous app, so basically, caveat user. With the argument flipped in Grindr’s favor, all of a sudden the court doesn’t find the offline analogies so persuasive:
Grindr fails to offer convincing reasons why this Court should liken the Grindr app to alcohol and tobacco—products used for thousands of years—and rule that, as a matter of Florida law, there is widespread public knowledge and acceptance of the dangers associated with the Grindr app or that the benefits of the Grindr app outweigh the risk to minors.
Duty of Care
The court says TV adequately alleged that Grindr violated its duty of care:
Grindr’s alleged conduct created a foreseeable zone of risk of harm to A.V. and other minors. That alleged conduct, some affirmative in nature, includes launching the Grindr app “designed to facilitate the coupling of gay and bisexual men in their geographic area”; publicizing users’ geographic locations; displaying the image of the geographically nearest users first; representing itself as a “safe space”; introducing the “Daddy” “Tribe,” as well as the “Twink” “Tribe,” allowing users to “more efficiently identify” users who are minors; knowing through publications that minors are exposed to danger from using the Grindr app; and having the ability to prevent minors from using Grindr Services but failing to take action to prevent minors from using Grindr Services. These allegations describe a situation in which “the actor”—Grindr—”as a reasonable [entity], is required to anticipate and guard against the intentional, or even criminal, misconduct of others….
considering the vulnerabilities of the potential victims, the ubiquitousness of smartphones and apps, and the potential for extreme mental and physical suffering of minors from the abuse of sexual predators, the Florida Supreme Court likely would rule that public policy “lead[s] the law to say that [A.V. was] entitled to protection,” and that Grindr “should bear [the] given loss, as opposed to distributing the loss among the general public.”…Were Grindr a physical place people could enter to find others to initiate contact for sexual or other mature relationships, the answer to the question of duty of care would be obvious. That Grindr is a virtual place does not make the answer less so.
That last sentence is so painful. There are many reasons why a “virtual” place may have different affordances, and warrant different legal treatment, than a “physical” one. For example, every aspect of a virtual space is defined by editorial choices about speech, which isn’t true in the offline world. The court’s statement implicates Internet Law Exceptionalism 101, yet this judge–who was so thorough in other discussions–oddly chose to ignore this critical question.
IIED/NIED
It’s almost never IIED, and here there’s no way Grindr intended to inflict emotional distress on its users…right?
Wrong. The court says Grindr engaged in outrageous conduct based on the allegation that Grindr “served [minors] up on a silver platter to the adult users of Grindr Services intentionally seeking to sexually groom or engage in sexual activity with persons under eighteen.” I understand the court was making all inferences in favor of the plaintiff, but “silver platter”–seriously? The court ought to push back on such rhetorical overclaims rather than rubberstamp them to discovery.
The court also says that Grindr directed the emotional distress at TV, yet it never discusses Grindr’s intent at all. I’m not sure how a claim can be IIED without that intent, but the court didn’t seem perturbed.
The NIED claim survives because the assailants had physical contact with TV, however far removed that contact is from anything Grindr did.
Negligent Misrepresentations
The court says that Grindr’s statement that it “provides a safe space where users can discover, navigate, and interact with others in the Grindr Community” isn’t puffery, especially when combined with Grindr’s express “right to remove content.” Naturally, this is a troubling legal conclusion. Every TOS reserves the right to remove content (and the First Amendment provides that right as well), while the word “safe” has no well-accepted definition, could mean pretty much anything, and certainly doesn’t act as a guarantee that no harm will ever befall a Grindr user. Grindr’s TOS also expressly said that it didn’t verify users, yet the court said it was still justifiable to rely on the word “safe” over the express statements about why the site might not be safe.
Section 230
The prior discussion shows just how impossible it will be for Internet services to survive their tort exposure without Section 230 protection. If Section 230 doesn’t apply, plaintiffs’ lawyers can always find a range of legal doctrines that might fit, with existential damages at stake if any of the claims stick. Because services can never plaintiff-proof their offerings to the plaintiffs’ lawyers’ satisfaction, they either have to settle quickly to avoid those existential damages or exit the industry, because any profit will be turned over to the plaintiffs’ lawyers.
Given the tenor of the court’s discussion about the prima facie claims, any guess how the Section 230 analysis goes?
The court starts with the premise that it’s not bound by any prior decisions:
The undersigned asked T.V. to state whether binding precedent exists on the scope of § 230(c)(1). T.V. responded, “This appears to be an issue of first impression in the Eleventh Circuit[.]” Grindr does not dispute that response.
The court is playing word games here. It is discounting a well-known precedential case, Almeida v. Amazon from 2006, by saying that Almeida’s 230(c)(1) discussion–precisely on point–was dicta. That ruling focused primarily on 230(e)(2), the IP exception to 230, but it reached that issue only after first finding that 230(c)(1) applied. In addition, there are at least three non-precedential 11th Circuit cases interpreting Section 230(c)(1), including McCall v. Zotos, Dowbenko v. Google, and Whitney v. Xcentric (the court acknowledges the first two and ignores the Whitney case). These rulings may not be precedential, but they are indicators of how the 11th Circuit thinks about Section 230 and deserved some engagement rather than being ignored. This Florida federal court could also have looked to Florida state law, which includes the old Doe v. AOL decision from the Florida Supreme Court and numerous Florida intermediate appellate court rulings.
The court acknowledges an almost identical Florida district court case, Doe v. Grindr, where Grindr prevailed on Section 230 grounds. This court says that judge relied on “non-binding cases”–but if there are no binding 11th Circuit rulings, what else was that court supposed to do? And this court has already established that it will also rely on non-binding cases, so doesn’t pointing this out undercut its own opinion? The court also acknowledges MH v. Omegle, not quite identical to Grindr but pretty close and also a 230 defense-side win. This court disregards it, too, because it relied on “non-binding cases.”
This explains how the court treats ALL precedent as presumptively irrelevant, so that it can treat Section 230 as a blank interpretative slate despite hundreds of precedents. The court thus forges its own path, redoing 230 analyses that dozens of prior courts have done better and cherrypicking the precedents that support its predetermined conclusion–a surefire recipe for problematic decisions. So unfortunate.
The court says “The meaning of § 230(c)(1) is plain. The provision, therefore, must be enforced according to its terms.” Because the language is so plain 🙄, the court uses dictionary definitions of “publisher” and “speaker” (seriously). It says that the CDA “sought to protect minors and other users from offensive content and internet-based crimes” (basically ignoring the legislative history), and because the CDA exhibited schizophrenia about its goals (something explained extensively in the literature, but the court didn’t look), the court concludes that it should interpret the provision “neither broadly nor narrowly” in order to “avoid the predominance of some congressional purposes over others.”
Reminder: the Almeida opinion, in language this court chooses to ignore, said “The majority of federal circuits have interpreted the CDA to establish broad ‘federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service’” (citing Zeran, emphasis added).
Having gone deeply rogue, the court says none of the plaintiff’s common law claims treat Grindr as the publisher of third-party content. “Grindr is responsible, in whole or in part, for the “Daddy” “Tribe,” the “Twink” “Tribe,” the filtering code, the “safe space” language, and the geolocation interface. To the extent the responsible persons or entities are unclear, discovery, not dismissal, comes next.”
The court acknowledges that “Grindr brings to the Court’s attention many cases” supporting Grindr’s Section 230 arguments, including the Fifth Circuit’s old Doe v. MySpace case. To “explain” why these “many cases” don’t count, the court marshals the following citations: Justice Thomas’ statement in Malwarebytes, Justice Thomas’ statement in Doe v. Snap, Judge Katzmann’s dissent in Force v. Facebook, Judge Gould’s concurrence/dissent in Gonzalez v. Google (which was likely rendered moot by the Supreme Court’s punt on the case), and, randomly, a single district court case from Oregon (AM v. Omegle). Notice a theme here? The court is relying exclusively on non-binding authority–indeed, other than the Omegle ruling, not even “precedent” at all.
With zero trace of irony, after this dubious stack of citations, the court says it can ignore Grindr’s citations because “MySpace and the other cases on which Grindr relies are non-binding and rely on non-binding precedent.” Hey judge…the call is coming from inside the house…
(I could have sworn this was the work of a TAFS judge, especially with the shoutouts to Justice Thomas’ non-binding statements, the poorly researched conclusions, and cherrypicked citations. But no, Magistrate Judge Barksdale appears to be an Obama appointee).
Because this is a magistrate report, it will be reviewed by the supervising judge. For all of its prolixity, it’s shockingly poorly constructed and has many sharp edges. Grindr has unsurprisingly filed objections to the report. I’m sure this case will be appealed to the 11th Circuit regardless of what the supervising judge says.
Another FOSTA sex trafficking case against Salesforce for providing services to Backpage. The court previously rejected the Section 230 defense in a factually identical case (SMA v. Salesforce) and summarily rejects it this time.
In yet another baroque and complex opinion that’s typical for FOSTA cases, the court greenlights one claim of tertiary liability against Salesforce but rejects a different tertiary liability claim. If I thought there was value in trying to reconcile those conclusions, I would do it to benefit my readers. Instead, I was baffled by the court’s razor-thin distinctions about the various ecosystem players’ mens rea and actus reus (another common attribute of FOSTA decisions).
The plaintiffs were heavy Twitter advertisers, spending over $1M promoting their accounts. Twitter suspended all of the accounts in 2022 (pre-Musk) for alleged manipulation and spam. The plaintiffs claim they were targeted by a brigading attack, but allegedly Twitter disregarded their evidence of that. Eventually, the brigading attack took out the plaintiffs’ personal accounts too. The plaintiffs claim Twitter breached its implied covenant of good faith and fair dealing. Twitter filed an anti-SLAPP motion to strike.
The court says that Twitter’s actions related to a matter of public interest, satisfying the anti-SLAPP statute’s first prong. However, the court says the plaintiffs’ claims have enough minimal merit to satisfy the second prong and overcome the anti-SLAPP motion.
Twitter argued that Section 230 protected its decisions. The court disagrees: “the duty Twitter allegedly violated derives from its Advertising Contracts with plaintiffs, not from Twitter’s status as a publisher of plaintiffs’ content.”
Twitter cited directly relevant California state court decisions in Murphy and Prager that said Section 230 could apply to contract-based claims that would override the service’s editorial discretion, but the court distinguishes them: “These cases, however, do not address claims that a provider breached a separate enforceable agreement for which consideration was paid, like the Advertising Contracts here.” This makes no sense. Whether or not cash was involved, the Murphy and Prager cases involved mutual promises supported by contract consideration. In other words, in each case, the defendant had a contract agreeing to provide services to the plaintiff that the plaintiff valued, so I don’t see any basis to distinguish among these cases. The court might have found better support by citing the also-on-point Calise and YOLO Ninth Circuit cases, but neither case was cited.
Beyond the Section 230 argument, Twitter said that its contracts reserved unrestricted discretion to deny services. The court says that even unrestricted discretion might still be subject to the implied covenant of good faith and fair dealing: “the purpose of the Advertising Contracts here was not to give Twitter discretion—its purpose, as alleged in plaintiffs’ complaint, was to buy advertising for plaintiffs’ accounts on Twitter’s platform.” In other words, the court effectively reads the reservation of discretion out of the contract entirely.
How bad a loss is this for Twitter? The plaintiffs had moved to voluntarily dismiss the case while it was on appeal, so they no-showed at the appeal, and the court ruled against Twitter on uncontested papers filed only by Twitter. Ouch. The voluntary dismissal also makes this decision into something of an advisory opinion, and I’m surprised the court decided to issue it rather than deem the appeal moot.
BONUS: Corner Computing Solutions v. Google LLC, 2024 WL 4290764 (W.D. Wash. Sept. 25, 2024). This is also an implied covenant of good faith and fair dealing case. The plaintiff thinks Google should have removed some allegedly fake reviews. The court says Google’s TOS never promised to remove those reviews, but some ancillary disclosures might have implied that Google would. Thus, despite dismissing the case, the court has some sharp words for Google:
It may be misleading for Defendant to state in a policy that fake engagement will be removed while admitting in its briefing that its policies are merely aspirational. But that does not make Defendant’s actions here a breach of contract.