Mike Masnick's Techdirt Profile

Mike Masnick

About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Bluesky at bsky.app/profile/masnick.com, on Mastodon at mastodon.social/@mmasnick, and still a little bit (but less and less) on Twitter at www.twitter.com/mmasnick.

Posted on Techdirt - 13 February 2026 @ 11:57am

News Publishers Are Now Blocking The Internet Archive, And We May All Regret It

Last fall, I wrote about how the fear of AI was leading us to wall off the open internet in ways that would hurt everyone. At the time, I was worried about how companies were conflating legitimate concerns about bulk AI training with basic web accessibility. Not surprisingly, the situation has gotten worse. Now major news publishers are actively blocking the Internet Archive—one of the most important cultural preservation projects on the internet—because they’re worried AI companies might use it as a sneaky “backdoor” to access their content.

This is a mistake we’re going to regret for generations.

Nieman Lab reports that The Guardian, The New York Times, and others are now limiting what the Internet Archive can crawl and preserve:

When The Guardian took a look at who was trying to extract its content, access logs revealed that the Internet Archive was a frequent crawler, said Robert Hahn, head of business affairs and licensing. The publisher decided to limit the Internet Archive’s access to published articles, minimizing the chance that AI companies might scrape its content via the nonprofit’s repository of over one trillion webpage snapshots.

Specifically, Hahn said The Guardian has taken steps to exclude itself from the Internet Archive’s APIs and filter out its article pages from the Wayback Machine’s URLs interface. The Guardian’s regional homepages, topic pages, and other landing pages will continue to appear in the Wayback Machine.

The Times has gone even further:

The New York Times confirmed to Nieman Lab that it’s actively “hard blocking” the Internet Archive’s crawlers. At the end of 2025, the Times also added one of those crawlers — archive.org_bot — to its robots.txt file, disallowing access to its content.

“We believe in the value of The New York Times’s human-led journalism and always want to ensure that our IP is being accessed and used lawfully,” said a Times spokesperson. “We are blocking the Internet Archive’s bot from accessing the Times because the Wayback Machine provides unfettered access to Times content — including by AI companies — without authorization.”
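For those unfamiliar with the mechanics: robots.txt is a voluntary convention. A site publishes a list of rules, and well-behaved crawlers check those rules before fetching pages. Here's a minimal sketch of how a rule like the one the Times describes works, using Python's standard-library parser. (The example.com URL and the exact rule text are illustrative assumptions, not the Times' actual file.)

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt entry of the kind described above,
# disallowing the Internet Archive's crawler site-wide:
ROBOTS_TXT = """\
User-agent: archive.org_bot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A crawler identifying itself as archive.org_bot is told to stay out...
print(parser.can_fetch("archive.org_bot", "https://example.com/2026/02/some-article"))  # False

# ...while any other user agent is unaffected by this rule.
print(parser.can_fetch("SomeOtherBot", "https://example.com/2026/02/some-article"))  # True
```

Note that nothing technically enforces this: the rule only works because the Internet Archive, as one of the "good guys," honors it. That's the asymmetry running through this whole story.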

I understand the concern here. I really do. News publishers are struggling, and watching AI companies hoover up their content to train models that might then, in some ways, compete with them for readers is genuinely frustrating. I run a publication myself, remember.

But blocking the Internet Archive isn’t going to stop AI training. What it will do is ensure that significant chunks of our journalistic record and historical cultural context simply… disappear.

And that’s bad.

The Internet Archive is the most famous nonprofit digital library, and has been operating for nearly three decades. It isn’t some fly-by-night operation looking to profit off publisher content. It’s trying to preserve the historical record of the internet—which is way more fragile than most people comprehend. When websites disappear—and they disappear constantly—the Wayback Machine is often the only place that content still exists. Researchers, historians, journalists, and ordinary citizens rely on it to understand what actually happened, what was actually said, what the world actually looked like at a given moment.

In a digital era when few things end up printed on paper, the Internet Archive’s efforts to permanently preserve our digital culture are essential infrastructure for anyone who cares about historical memory.

And now we’re telling them they can’t preserve the work of our most trusted publications.

Think about what this could mean in practice. Future historians trying to understand 2025 will have access to archived versions of random blogs, sketchy content farms, and conspiracy sites—but not The New York Times. Not The Guardian. Not the publications that we consider the most reliable record of what’s happening in the world. We’re creating a historical record that’s systematically biased against quality journalism.

Yes, I’m sure some will argue that the NY Times and The Guardian will never go away. Tell that to the readers of the Rocky Mountain News, which published for 150 years before shutting down in 2009, or to the 2,100+ newspapers that have closed since 2004. Institutions—even big, prominent, established ones—don’t necessarily last.

As one computer scientist quoted in the Nieman piece put it:

“Common Crawl and Internet Archive are widely considered to be the ‘good guys’ and are used by ‘the bad guys’ like OpenAI,” said Michael Nelson, a computer scientist and professor at Old Dominion University. “In everyone’s aversion to not be controlled by LLMs, I think the good guys are collateral damage.”

That’s exactly right. In our rush to punish AI companies, we’re destroying public goods that serve everyone.

The most frustrating bit of all of this: The Guardian admits they haven’t actually documented AI companies scraping their content through the Wayback Machine. This is purely precautionary and theoretical. They’re breaking historical preservation based on a hypothetical threat:

The Guardian hasn’t documented specific instances of its webpages being scraped by AI companies via the Wayback Machine. Instead, it’s taking these measures proactively and is working directly with the Internet Archive to implement the changes.

And, of course, as one of the “good guys” of the internet, the Internet Archive is willing to do exactly what these publishers want. They’ve always been good about removing content or not scraping content that people don’t want in the archive. Sometimes to a fault. But you can never (legitimately) accuse them of malicious archiving (even if music labels and book publishers have).

Either way, we’re sacrificing the historical record not because of proven harm, but because publishers are worried about what might happen. That’s a hell of a tradeoff.

This isn’t even new, of course. Last year, Reddit announced it would block the Internet Archive from archiving its forums—decades of human conversation and cultural history—because Reddit wanted to monetize that content through AI licensing deals. The reasoning was the same: can’t let the Wayback Machine become a backdoor for AI companies to access content Reddit is now selling. But once you start going down that path, it leads to bad places.

The Nieman piece notes that, in the case of USA Today/Gannett, it appears that there was a company-wide decision to tell the Internet Archive to get lost:

In total, 241 news sites from nine countries explicitly disallow at least one out of the four Internet Archive crawling bots.

Most of those sites (87%) are owned by USA Today Co., the largest newspaper conglomerate in the United States formerly known as Gannett. (Gannett sites only make up 18% of Welsh’s original publishers list.) Each Gannett-owned outlet in our dataset disallows the same two bots: “archive.org_bot” and “ia_archiver-web.archive.org”. These bots were added to the robots.txt files of Gannett-owned publications in 2025.

Some Gannett sites have also taken stronger measures to guard their contents from Internet Archive crawlers. URL searches for the Des Moines Register in the Wayback Machine return a message that says, “Sorry. This URL has been excluded from the Wayback Machine.”

A Gannett spokesperson told Nieman Lab that it was about “safeguarding our intellectual property,” but that’s nonsense. The whole point of libraries and archives is to preserve such content, and they’ve always preserved materials that were protected by copyright law. The claim that they have to be blocked to safeguard such content is both technologically and historically illiterate.

And here’s the extra irony: blocking these crawlers may not even serve publishers’ long-term interests. As I noted in my earlier piece, as more search becomes AI-mediated (whether you like it or not), being absent from training datasets increasingly means being absent from results. It’s a bit crazy to think about how much effort publishers put into “search engine optimization” over the years, only to now block the crawlers that feed the systems a growing number of people are using for search. Publishers blocking archival crawlers aren’t just sacrificing the historical record—they may be making themselves invisible in the systems that increasingly determine how people discover content in the first place.

The Internet Archive’s founder, Brewster Kahle, has been trying to sound the alarm:

“If publishers limit libraries, like the Internet Archive, then the public will have less access to the historical record.”

But that warning doesn’t seem to be getting through. The panic about AI has become so intense that people are willing to sacrifice core internet infrastructure to address it.

What makes this particularly frustrating is that the internet’s openness was never supposed to have asterisks. The fundamental promise wasn’t “publish something and it’s accessible to all, except for technologies we decide we don’t like.” It was just… open. You put something on the public web, people can access it. That simplicity is what made the web transformative.

Now we’re carving out exceptions based on who might access content and what they might do with it. And once you start making those exceptions, where do they end? If the Internet Archive can be blocked because AI companies might use it, what about research databases? What about accessibility tools that help visually impaired users? What about the next technology we haven’t invented yet?

This is a real concern. People say “oh well, blocking machines is different from blocking humans,” but that’s exactly why I mention assistive tech for the visually impaired. Machines accessing content are frequently tools that help humans—including me. I use an AI tool to help fact check my articles, and part of that process involves feeding it the source links. But increasingly, the tool tells me it can’t access those articles to verify whether my coverage accurately reflects them.

I don’t have a clean answer here. Publishers genuinely need to find sustainable business models, and watching their work get ingested by AI systems without compensation is a legitimate grievance—especially when you see how much traffic some of these (usually less scrupulous) crawlers dump on sites. But the solution can’t be to break the historical record of the internet. It can’t be to ensure that our most trusted sources of information are the ones that disappear from archives while the least trustworthy ones remain.

We need to find ways to address AI training concerns that don’t require us to abandon the principle of an open, preservable web. Because right now, we’re building a future where historians, researchers, and citizens can’t access the journalism that documented our era. And that’s not a tradeoff any of us should be comfortable with.

Posted on Techdirt - 13 February 2026 @ 09:27am

Judge Accuses DOJ Of Telling Court To “Pound Sand,” In Case Over Venezuelans Sent To Salvadoran Concentration Camp

Judge Boasberg got his vindication in the frivolous “complaint” the DOJ filed against him, and now he’s calling out the DOJ’s bullshit in the long-running case that caused them to file the complaint against him in the first place: the JGG v. Trump case regarding the group of Venezuelans the US government shipped off to CECOT, the notorious Salvadoran concentration camp.

Boasberg, who until last year was generally seen as a fairly generic “law and order” type judge who was extremely deferential to any “national security” claims from the DOJ (John Roberts had him lead the FISA Court, for goodness’ sake!), has clearly had enough of this DOJ and the games they’ve been playing in his court.

In a short but quite incredible ruling, he calls out the DOJ for deciding to effectively ignore the case while telling the court to “pound sand.”

On December 22, 2025, this Court issued a Memorandum Opinion finding that the Government had denied due process to a class of Venezuelans it deported to El Salvador last March in defiance of this Court’s Order. See J.G.G. v. Trump, 2025 WL 3706685, at *19 (D.D.C. Dec. 22, 2025). The Court offered the Government the opportunity to propose steps that would facilitate hearings for the class members on their habeas corpus claims so that they could “challenge their designations under the [Alien Enemies Act] and the validity of the [President’s] Proclamation.” Id. Apparently not interested in participating in this process, the Government’s responses essentially told the Court to pound sand.

From a former FISC judge—someone who spent years giving national security claims every benefit of the doubt—”pound sand” is practically a primal scream.

Due to this, he orders the government to work to “facilitate the return” of these people it illegally shipped to a foreign concentration camp (that is, assuming any of them actually want to come back).

Believing that other courses would be both more productive and in line with the Supreme Court’s requirements outlined in Noem v. Abrego Garcia, 145 S. Ct. 1017 (2025), the Court will now order the Government to facilitate the return from third countries of those Plaintiffs who so desire. It will also permit other Plaintiffs to file their habeas supplements from abroad.

Boasberg references the Donald Trump-led invasion of Venezuela and the unsettled situation there for many of the plaintiffs. He points out that the lawyers for the plaintiffs have been thoughtful and cautious in how they approach this case. That is in contrast to the US government.

Plaintiffs’ prudent approach has not been replicated by their Government counterparts. Although the Supreme Court in Abrego Garcia upheld Judge Paula Xinis’s order directing the Government “to facilitate and effectuate the return of” that deportee, see 145 S. Ct. at 1018, Defendants at every turn have objected to Plaintiffs’ legitimate proposals without offering a single option for remedying the injury that they inflicted upon the deportees or fulfilling their duty as articulated by the Supreme Court.

Boasberg points to the Supreme Court’s ruling regarding Kilmar Abrego Garcia, saying that it’s ridiculous that the DOJ is pretending that case doesn’t exist or doesn’t say what it says. Then he points out that the DOJ keeps “flagrantly” disobeying courts.

Against this backdrop, and mindful of the flagrancy of the Government’s violations of the deportees’ due-process rights that landed Plaintiffs in this situation, the Court refuses to let them languish in the solution-less mire Defendants propose. The Court will thus order Defendants to take several discrete actions that will begin the remedial process for at least some Plaintiffs, as the Supreme Court has required in similar circumstances. It does so while treading lightly, as it must, in the area of foreign affairs. See Abrego Garcia, 145 S. Ct. at 1018 (recognizing “deference owed to the Executive Branch in the conduct of foreign affairs”)

Even given all this, the specific remedy is not one that many of the plaintiffs are likely to accept: he orders the US government to facilitate the return of any of the men who want it, but only those not currently in Venezuela. Since most of them were eventually released from CECOT into Venezuela, that may mean this ruling doesn’t really apply to many of them. On top of that, Boasberg points out that anyone who does qualify and takes up the offer will likely be detained by immigration officials upon getting here. But, if they want, the US government has to pay for their plane flights back to the US. And, in theory, the plaintiffs should then be given the due process they were denied last year.

Plaintiffs also request that such boarding letter include Government payment of the cost of the air travel. Given that the Court has already found that their removal was unlawful — as opposed to the situation contemplated by the cited Directive, which notes that “[f]acilitating an alien’s return does not necessarily include funding the alien’s travel,” Directive 11061.1, ¶ 3.1 (emphasis added) — the Court deems that a reasonable request. It is unclear why Plaintiffs should bear the financial cost of their return in such an instance. See Ms. L. v. U.S. Immig. & Customs Enf’t (“ICE”), 2026 WL 313340, at *4 (S.D. Cal. Feb. 5, 2026) (requiring Government to “bear the expense of returning these family units to the United States” given that “[e]ach of the removals was unlawful, and absent the removals, these families would still be in the United States”). It is worth emphasizing that this situation would never have arisen had the Government simply afforded Plaintiffs their constitutional rights before initially deporting them.

I’m guessing not many are eager to re-enter the US and face deportation again. Of course, many of these people left Venezuela for the US in the first place for a reason, so perhaps some will take their chances on coming back. Even against a very vindictive US government.

The frustrating coda here is the lack of any real consequences for DOJ officials who treated this entire proceeding as a joke—declining to seriously participate and essentially daring the court to do something about it. Boasberg could have ordered sanctions. He didn’t. And that’s probably fine with this DOJ, which has learned that contempt for the courts carries no real cost.

Unfortunately, that may be the real story here. Judge gets fed up, once again, with a DOJ that thumbs its nose at the court, says extraordinary things in a ruling that calls out the DOJ’s behavior… but does little that will lead to actual accountability for those involved, beyond having them “lose” the case. We’ve seen a lot of this, and it’s only going to continue until judges figure out how to impose real consequences for DOJ lawyers for treating the court with literal contempt.

Posted on Techdirt - 12 February 2026 @ 04:03pm

Ctrl-Alt-Speech: Panic! At The Discord

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Ben is joined by Dr. Blake Hallinan, Professor of Platform Studies in the Department of Media & Journalism Studies at Aarhus University. Together, they discuss:

Play along with Ctrl-Alt-Speech’s 2026 Bingo Card and get in touch if you win!

Posted on Techdirt - 12 February 2026 @ 11:55am

Bondi Spying On Congressional Epstein Searches Should Be A Major Scandal

Yesterday, Attorney General Pam Bondi appeared before the House Judiciary Committee. Among the more notable exchanges was when Rep. Pramila Jayapal asked some of Jeffrey Epstein’s victims who were in the audience to stand up and indicate whether Bondi’s DOJ had ever contacted them about their experiences. None of them had heard from the Justice Department. Bondi wouldn’t even look at the victims as she frantically flipped through her prepared notes.

And that’s when news organizations, including Reuters, caught something alarming: one of the pages Bondi held up clearly showed searches that Jayapal herself had done of the Epstein files:

A Reuters photographer captured this image of a page from Pam Bondi's "burn book," which she used to counter any questions from Democratic lawmakers during an unhinged hearing today. It looks like the DOJ monitored members of Congress’s searches of the unredacted Epstein files. Just wow.

Christopher Wiggins (@cwnewser.bsky.social) 2026-02-11T23:06:45.578Z

The Department of Justice—led by an Attorney General who is supposed to serve the public but has made clear her only role is protecting Donald Trump’s personal interests—is actively surveilling what members of Congress are searching in the Epstein files. And then bringing that surveillance data to a congressional hearing to use as political ammunition.

This should be front-page news. It should be a major scandal. Honestly, it should be impeachable.

There is no legitimate investigative purpose here. No subpoena. Nothing at all. Just the executive branch tracking the oversight activities of the legislative branch, then weaponizing that information for political culture war point-scoring. The DOJ has no business whatsoever surveilling what members of Congress—who have oversight authority over the Justice Department—are searching.

Jayapal is rightly furious:

Pam Bondi brought a document to the Judiciary Committee today that had my search history of the Epstein files on it. The DOJ is spying on members of Congress. It’s a disgrace and I won’t stand for it.

Congresswoman Pramila Jayapal (@jayapal.house.gov) 2026-02-12T01:14:57.174494904Z

We’ve been here before. Way back in 2014, the CIA illegally spied on searches by Senate staffers who were investigating the CIA’s torture program. It was considered a scandal at the time—because it was one. The executive branch surveilling congressional oversight is a fundamental violation of separation of powers. It’s the kind of thing that, when it happens, should trigger immediate consequences.

And yet.

Just a few days ago, Senator Lindsey Graham—who has been one of the foremost defenders of government surveillance for years—blew up at a Verizon executive for complying with a subpoena that revealed Graham’s call records (not the contents, just the metadata) from around January 6th, 2021.

“If the shoe were on the other foot, it’d be front-page news all over the world that Republicans went after sitting Democratic senators’ phone records,” said Republican Sen. Lindsey Graham of South Carolina, who was among the Republicans in Congress whose records were accessed by prosecutors as they examined contacts between the president and allies on Capitol Hill.

“I just want to let you know,” he added, “I don’t think I deserve what happened to me.”

This is the same Lindsey Graham who, over a decade ago, said he was “glad” that the NSA was collecting his phone records because it magically kept him safe from terrorists. But now he’s demanding hundreds of thousands of dollars for being “spied” on (he wasn’t—a company complied with a valid subpoena in a legitimate investigation, which is how the legal system is supposed to work).

So here’s the contrast: Graham is demanding money and media attention because a company followed the law. Meanwhile, the Attorney General is actually surveilling a Democratic member of Congress’s oversight activities—with no legal basis whatsoever—and using that surveillance for political theater in a manner clearly designed as a warning shot to congressional reps investigating the Epstein Files. Pam Bondi wants you to know she’s watching you.

Graham claimed that if the shoe were on the other foot, it would be “front-page news all over the world.” Well, Senator, here’s your chance. The shoe is very much on the other foot. It’s worse than what happened to you, because what happened to you was legal and appropriate, and what’s happening to Jayapal is neither.

But we all know Graham won’t speak out against this administration. He’s had nearly a decade to show whether or not the version of Lindsey Graham who said “if we elected Donald Trump, we will get destroyed… and we will deserve it” still exists, and it’s clear that Lindsey Graham is long gone. This one only serves Donald Trump and himself, not the American people.

But this actually matters: if the DOJ can surveil what members of Congress search in oversight files—and then use that surveillance as a weapon in public hearings—congressional oversight of the executive branch is dead. That’s the whole point of separation of powers. The people who are supposed to watch the watchmen can’t do their jobs if the watchmen are surveilling them.

And remember: Bondi didn’t hide this. She brought it to the hearing. She held it up when she knew cameras would catch what was going on. She wanted Jayapal—and every other member of Congress—to see exactly what she’s doing.

This administration doesn’t fear consequences for this kind of vast abuse of power because there haven’t been any. And the longer that remains true, the worse it’s going to get.

Posted on Techdirt - 12 February 2026 @ 09:26am

Joseph Gordon-Levitt Goes To Washington DC, Gets Section 230 Completely Backwards

You may have heard last week that actor Joseph Gordon-Levitt went to Washington DC and gave a short speech at an event put on by Senator Dick Durbin calling for the sunsetting of Section 230. It’s a short speech, and it gets almost everything wrong about Section 230. Watch it here:

Let me first say that, while I’m sure some will rush to jump in and say “oh, it’s just some Hollywood actor guy, jumping into something he doesn’t understand,” I actually think that’s a little unfair to JGL. Very early on, he started his own (very interesting, very creative) user-generated content platform called HitRecord, and over the years I’ve followed many of his takes on copyright and internet policy. While I don’t always agree, I do believe he legitimately takes this stuff seriously and actually wants to understand the nuances (unlike some).

But it appears he’s fallen for advice that’s not just bad, but blatantly incorrect. He’s also posted a followup video where he claims to explain his position in more detail, but it only makes things worse, because it compounds the blatant factual errors that underpin his entire argument.

First let’s look at the major problems with his speech in DC:

So I understand what Section 230 did to bring about the birth of the internet. That was 30 years ago. And I also understand how the internet has changed since then because back then message boards and other websites with user-generated content, they really were more like telephone carriers. They were neutral platforms. That’s not how things work anymore.

So, that’s literally incorrect. If JGL is really interested in the actual history here, I did a whole podcast series where I spoke to the people behind Section 230, including those involved in the early internet and the various lawsuits at the time.

Section 230 was never meant for “neutral” websites. As the authors (and the text of the law itself!) make clear: it was created so that websites did not need to be neutral. It was literally written in response to the Stratton Oakmont v. Prodigy case (for JGL’s benefit: Stratton Oakmont is the company portrayed in Wolf of Wall Street), where the boiler room operation sued Prodigy because someone posted claims in its forums about how sketchy Stratton Oakmont was (which, you know, was true).

But Stratton sued, and the judge ruled that because Prodigy moderated, because it wanted to have a family-friendly site, because it was not neutral, it was liable for anything it decided to leave up. The judge effectively said “because you’re not neutral, and because you moderate, you are effectively endorsing this content, and thus if it’s defamatory you’re liable for defamation.”

Section 230 (originally the “Internet Freedom and Family Empowerment Act”) was never about protecting platforms for being neutral. It was literally the opposite of that. It was about making sure that platforms felt comfortable making editorial decisions. It was about letting companies decide what to share, what not to share, what to amplify, and what not to amplify, without being held liable as a publisher of that content.

This is important, but it’s a point that a bunch of bad faith people, starting with Ted Cruz, have been lying about for about a decade, pretending that the intent of 230 was to protect sites that are “neutral.” It’s literally the opposite of that. And it’s disappointing that JGL would repeat this myth as if it’s fact. Courts have said this explicitly—I’ll get to the Ninth Circuit’s Barnes decision later, where the court said Section 230’s entire purpose is to protect companies because they act as publishers—but first, let’s go through the rest of what JGL got wrong.

He then goes on to talk about legitimate problems with internet giants having too much power, but falsely attributes that to Section 230.

Today, the internet is dominated by a small handful of these gigantic businesses that are not at all neutral, but instead algorithmically amplify whatever gets the most attention and maximizes ad revenue. And we know what happens when we let these engagement optimization algorithms be the lens that we see the world through. We get a mental health crisis, especially amongst young people. We get a rise in extremism and a rise in conspiracy theories. And then of course we get these echo chambers. These algorithms, they amplify the demonization of the other side so badly that we can’t even have a civil conversation. It seems like we can’t agree on anything.

So, first of all, I know that the common wisdom is that all of this is true, but as we’ve detailed, actual experts have been unable to find any support for a causal connection. Studies on “echo chambers” have found that the internet decreases echo chambers, rather than increases them. The studies on mental health show the opposite of what JGL (and Jonathan Haidt) claim. Even the claims about algorithms focused solely on engagement don’t seem to have held up (or, generally, it was true early on, but the companies found that maximizing solely on engagement burned people out quickly and was actually bad for business, and so most social media adjusted the algorithms away from just that).

So, again, almost every assertion there is false (or, at the very least, much more nuanced than he makes it out to be).

But the biggest myth of all is the idea that getting rid of 230 will somehow tame the internet giants. Once again, the exact opposite is true. As we’ve discussed hundreds of times, the big internet companies don’t need Section 230.

The real benefit of 230 is that it gets vexatious lawsuits tossed out early. That matters a lot for smaller companies. To put it in real terms: with 230, companies can get vexatious lawsuits dismissed for around $100,000 to $200,000 (I used to say $50k, but my lawyer friends tell me it’s getting more expensive). That is a lot of money. But it’s generally survivable. To get the same cases dismissed on First Amendment grounds (as almost all of them would be), you’re talking $5 million and up.

That’s pocket change for Meta and Google, which have buildings full of lawyers. It’s existential for smaller competitive sites.

So the end result of getting rid of 230 is not getting rid of the internet giants. It’s locking them in and giving them more power. It’s why Meta literally has run ads telling Congress it’s time to ditch 230.

What is Mark Zuckerberg’s biggest problem right now? Competition from smaller upstarts chipping away at his userbase. Getting rid of 230 makes it harder for smaller providers to survive, and limits the drain from Meta.

On top of that, getting rid of 230 gives them less reason to moderate. Because, under the First Amendment, the only way they can possibly be held liable is if they had actual knowledge of content that violates the law. And the best way to avoid having knowledge is not to look.

It means not doing any research on harms caused by your site, because that will be used as evidence of “knowledge.” It means limiting how much moderation you do so that (a la Prodigy three decades ago) you’re not seen to be “endorsing” any content you leave up.

Getting rid of Section 230 literally makes Every Single Problem JGL discussed in his speech worse! He got every single thing backwards.

And he closes out with quite the rhetorical flourish:

I have a message for all the other senators out there: [Yells]: I WANT TO SEE THIS THING PASS 100 TO 0. There should be nobody voting to give any more impunity to these tech companies. Nobody. It’s time for a change. Let’s make it happen. Thank you.

Except it’s not voting to give anyone “more impunity.” It’s a vote to say “stop moderating, and unleash a flood of vexatious lawsuits that will destroy smaller competitors.”

The Follow-Up Makes It Worse

Yesterday, JGL posted a longer video, noting that he’d heard a bunch of criticism about his speech and he wanted to respond to it. Frankly, it’s a bizarre video, but go ahead and watch it too:

It starts out with him saying he actually agrees with a lot of his critics, because he wants an “internet that has vibrant, free, and productive public discourse.” Except… that’s literally what Section 230 enables. Because without it, you don’t have intermediaries willing to host public discourse. You ONLY have giant companies with buildings full of lawyers who will set the rules of public discourse.

Again, his entire argument is backwards.

Then… he does this weird half backdown, where he says he doesn’t really want the end of Section 230, but he just wants “reform.”

 Here’s the first thing I’ll say. I’m in favor of reforming section 230. I’m not in favor of eliminating all of the protections that it affords. I’m going to repeat that because it’s it’s really the crux of this. I’m in favor of reforming, upgrading, modernizing section 230 because it was passed 30 years ago. I am not in favor of eliminating all of the protections that it affords.

Buddy, you literally went to Washington DC, got up in front of Senators, and told everyone you wanted the bill that literally takes away every one of those protections to pass 100 to 0. Don’t then say “oh I just want to reform it.” Bullshit. You said get rid of the damn thing.

But… let’s go through this, because it’s a frequent thing we hear from people. “Oh, let’s reform it, not get rid of it.” As our very own First Amendment lawyer Cathy Gellis has explained over and over again, every proposed reform to date is really repeal.

The reason for this is the procedural benefit we discussed above. Because every single kind of “reform” requires long, expensive lawsuits to determine if the company is liable. In the end, those companies will still win, because of the First Amendment. Just like how one of the most famous 230 “losses” ended up. Roommates.com lost its Section 230 protections, which resulted in many, many years in court… and then they eventually won anyway. All 230 does is make it so you don’t have to pay lawyers nearly as much to reach the same result.

So, every single reform proposal basically resets the clock, so that old court precedents go out the window and vexatious lawsuits cost companies a lot more. That will mean some companies won’t even start. Others will go out of business.

Or, worse, many companies will just enable a heckler’s veto. Donald Trump doesn’t like what people are saying on a platform? Threaten to sue. The cost without 230 (even a reformed 230 where a court can’t rely on precedent) means it’s cheaper to just remove the content that upsets Donald Trump. Or your landlord. Or some internet troll.

You’re basically giving everyone a veto through the mere threat of a lawsuit. I’m sorry, but that is not the recipe for a “vibrant, free, and productive public discourse.”

Calling for reform of 230 is, in every case we’ve seen to date, really a call for repeal, whether the reformers recognize that or not. Is there a possibility that you could reform it in a way that isn’t that? Maybe? But I’ve yet to see any proposal, and the only ones I can think of would be going in the other direction (e.g., expanding 230’s protections to include intellectual property, or rolling back FOSTA).

JGL then talks about small businesses and agrees that sites like HitRecord require 230. Which sure makes it odd that he’s supporting repeal. However, he seems to have bought into the logic of the argument memeified by internet law professor Eric Goldman—who has catalogued basically every single Section 230 lawsuit as well as every single “reform” proposal ever made and found them all wanting—that “if you don’t amend 230 in unspecified ways, we’ll kill this internet.”

That is… generally not a good way to make policy. But it’s how JGL thinks it should be done:

Well, there have been lots of efforts to reform section 230 in the past and they keep getting killed uh by the big tech lobbyists. So, this section 230 sunset act is as far as I understand it a strategy towards reform. It’ll force the tech companies to the negotiating table. That’s why I supported it.

Again, this is wrong. Big tech is always at the freaking negotiating table. You don’t think they’re there? Come on. As I noted, Zuck has been willing to ditch 230 for almost a decade now. It makes him seem “cooperative” to Congress while at the same time destroying the ability of competitors to survive.

The reason 230 reform bills fail is because enough grassroots folks actually show up and scream at Congress. It ain’t the freaking “big tech lobbyists.” It’s people like the ACLU and the EFF and Fight for the Future and Demand Progress speaking up and sending calls and emails to Congress.

Also, about those “efforts at reform” getting “killed by big tech lobbyists”: this is FOSTA erasure, JGL. In 2018, Congress passed FOSTA—a Section 230 reform bill—with the explicit support of Meta. Remember?

And how did that work out? Did it make Meta and Google better? No.

But did it destroy online spaces used by sex workers? Did it lead to real world harm for sex workers? Did it make it harder for law enforcement to capture actual human traffickers? Did it destroy online communities? Did it hide historical LGBTQ content because of legal threats?

Yes to literally all of those things.

So, yeah, I’m freaking worried about “reform” to 230, because we’ve seen it already. Many of us warned about the harms while “big tech” supported the law. And we were right. The harms did occur: FOSTA took away competitive online communities and suppressed sex-positive and LGBTQ content.

Is that what you want to support, JGL? No? Then maybe speak to some of the people who actually work on this stuff, who understand the nuances, not the slogans.

Speaking of which, JGL then doubles down on his exactly backwards Ted Cruz-inspired version of Section 230:

Section 230 as it’s currently written or as it was written 30 years ago distinguishes between what it calls publishers and carriers. So a publisher would be, you, a person, saying something or a company saying something like the New York Times say or you know the Walt Disney Company publishers. Then carriers would be somebody like AT&T or Verizon, you know, the the the companies that make your phone or or your telephone service. So basically what Section 230 said is that these platforms for user-generated content are not publishers. They are carriers. They are as neutral as the telephone company. And if someone uses the telephone to commit a crime, the telephone company shouldn’t be held liable. And that’s true about a telephone company. But again, there’s a third category that we need to add to really reflect how the internet works today. And that third category is amplification.

Again, I need to stress that this is literally wrong. Like, fundamentally, he has it backwards and inside out. This is a pretty big factual error.

First, Section 230 does not, in any way, distinguish between “what it calls publishers and carriers.” This is the “publisher/platform” myth all over again.

I mean, you can look at the law. It makes no such distinction at all. The only distinction it makes is between “interactive computer services” and “information content providers.” Now some (perhaps JGL) will claim that’s the same thing as “publishers” and “carriers.” But it’s literally not.

“Carriers” (as in common carrier law) implies the neutrality that JGL mentioned earlier. And perhaps that’s why he’s confused. But the purpose of 230 was to enable “interactive computer services” to act as publishers, without being held liable as publishers. It was NOT saying “don’t be a publisher.” It was saying “we want you to be a publisher, not a neutral carrier, but we know that if you face liability as a publisher, you won’t agree to publish. So, for third-party content, we won’t hold you liable for your publishing actions.”

Again, go back to the Stratton Oakmont case. Prodigy “acted as a publisher” in trying to filter out non-family-friendly content. And the judge said “okay now you’re liable.” The entire point of 230 was to say “don’t be neutral, act as a publisher, but since it’s all third-party content, we won’t hold you liable as the publisher.”

In the Barnes case in the Ninth Circuit, the court was quite clear about this. The entire purpose of Section 230 is to encourage interactive computer services to act like publishers by removing the liability that comes with being a publisher. Here’s a key part in which the court explains why Yahoo deserves 230 protections for third-party content because it acted as the publisher:

In other words, the duty that Barnes claims Yahoo violated derives from Yahoo’s conduct as a publisher—the steps it allegedly took, but later supposedly abandoned, to de-publish the offensive profiles. It is because such conduct is publishing conduct that we have insisted that section 230 protects from liability….

So let me repeat this again: the point of Section 230 is not to say “you’re a carrier, not a publisher.” It’s literally to say “you can safely act as a publisher, because you won’t face liability for content you had no part in creating.”

JGL has it backwards.

He then goes on to make a weird and meaningless distinction between “free speech” and “commercial amplification” as if it’s legally meaningful.

At the crux of their article is a really important distinction and that distinction is between free speech and commercial amplification. Free speech meaning what a human being says. commercial amplification, meaning when a platform like Instagram or YouTube or Tik Tok or whatever uses an algorithm to uh maximize engagement and ad revenue to hook you, keep you and serve you ads. And this is a really important difference that section 230 does not appreciate.

The article he’s talking about is this very, very, very, very, very badly confused piece in ACM. It’s written by Jaron Lanier, Allison Stanger, and Audrey Tang. If those names sound familiar, it’s because they’ve been publishing similar pieces that are just fundamentally wrong for years. Here’s one piece I wrote picking apart one, here’s another picking apart another.

None of those three individuals understands Section 230 at all. Stanger gave testimony to Congress that was so wrong on basic facts it should have been retracted. I truly do not understand why Audrey Tang sullies her own reputation by continuing to sign on to pieces with Lanier and Stanger. I have tremendous respect for Audrey, who I’ve learned a ton from over the years. But she is not a legal expert. She was Digital Minister in Taiwan (where she did some amazing work!) and has worked at tech companies.

But she doesn’t know 230.

I’m not going to do another full breakdown of everything wrong with the ACM piece, but just look at the second paragraph:

Much of the public’s criticism of Section 230 centers on the fact that it shields platforms from liability even when they host content such as online harassment of marginalized groups or child sexual abuse material (CSAM).

What? CSAM is inherently unprotected speech. Section 230 does not protect CSAM. Section 230 literally has section (e)(1) that says “no effect on criminal law.” CSAM, as you might know, is a violation of criminal law. Websites all have strong incentives to deal with CSAM to avoid criminal liability, and they tend to take that pretty seriously. The additional civil liability that might come from a change in the law isn’t going to have much, if any, impact on that.

And “online harassment of marginalized groups” is mostly protected by the First Amendment anyway—so if 230 didn’t cover it, companies would still win on First Amendment grounds. But here’s the thing: most of us think that harassment is bad and want platforms to stop it. You know what lets them do that? Section 230. Take it away and companies have less incentive to moderate. Indeed, in Lanier and Stanger’s original piece in Wired, they argued platforms should be required to use the First Amendment as the basis for moderation—which would forbid removing most harassment of marginalized groups.

These are not serious critiques.

I could almost forgive Lanier/Stanger/Tang if this were the first time they were writing about this subject, but they have now written this same factually incorrect thing multiple times, and each time I’ve written a response pointing out the flaws.

I can understand that a well meaning person like JGL can be taken in by it. He mentions having talked to Audrey Tang about it. But, again, as much as I respect Tang’s work in Taiwan, she is not a US legal expert, and she has this stuff entirely backwards.

I do believe that JGL legitimately wants a free and open internet. I believe that he legitimately would like to see more upstart competitors and less power and control from the biggest providers. In that we agree.

But he has been convinced by some people who are either lying to him or simply do not understand the details, and thus he has become a useful tool for enabling greater power for the internet giants, and greater online censorship. The exact opposite of what he claims to support.

I hope he realizes that he’s been misled—and I’d be happy to talk this through with him, or put him in touch with actual experts on Section 230. Because right now, he’s lending his star power to one of the most dangerous ideas around for the open internet.

Posted on Techdirt - 11 February 2026 @ 01:28pm

Peter Mandelson Invokes Press Harassment Protections To Dodge Questions About His Support Of Jeffrey Epstein

Peter Mandelson—the former UK cabinet minister who was just sacked as Britain’s ambassador to the United States over newly revealed emails with Jeffrey Epstein—has found a novel way to avoid answering questions about why he told a convicted sex offender “your friends stay with you and love you” and urged him to “fight for early release.” He got the UK press regulator to send a memo to all UK media essentially telling them to leave him alone.

The National published what they describe as the “secret notice” that went out:

CONFIDENTIAL – STRICTLY NOT FOR PUBLICATION: Ipso has asked us to circulate the following advisory:

Ipso has today been contacted by a representative acting on behalf of Peter Mandelson.

Mr Mandelson’s representatives state that he does not wish to speak to the media at this time. He requests that the press do not take photos or film, approach, or contact him via phone, email, or in-person. His representatives ask that any requests for his comment are directed to [REDACTED]

We are happy to make editors aware of his request. We note the terms of Clause 2 (Privacy) and 3 (Harassment) of the Editors’ Code, and in particular that Clause 3 states that journalists must not persist in questioning, telephoning, pursuing or photographing individuals once asked to desist, unless justified in the public interest.

Clauses 2 and 3 of the UK Editor’s Code—the privacy and harassment provisions—exist primarily to protect genuinely vulnerable people from press intrusion. Grieving families. Crime victims. People suffering genuine harassment.

Mandelson is invoking them to avoid answering questions about his documented friendship with one of history’s most notorious pedophiles—a friendship so extensive and problematic that it just cost him his job as ambassador to the United States, days before a presidential state visit.

According to Politico, the UK Foreign Office withdrew Mandelson “with immediate effect” after emails showed the relationship was far deeper than previously known:

In a statement the U.K. Foreign Office said Mandelson had been withdrawn as ambassador “with immediate effect” after emails showed “the depth and extent” of his relationship with Epstein was “materially different from that known at the time of his appointment.”

“In particular Peter Mandelson’s suggestion that Jeffrey Epstein’s first conviction was wrongful and should be challenged is new information,” the statement added.

So we have a senior political figure who just got fired over revelations that he told a convicted sex offender his prosecution was “wrongful” and should be challenged, who maintained this friendship for years longer than he’d admitted, and his response is to invoke press harassment protections?

The notice does include the important qualifier “unless justified in the public interest.” And it’s hard to imagine a clearer case of public interest: a senior diplomat, just sacked from his post, over previously undisclosed communications with a convicted pedophile, in which he expressed support for challenging that pedophile’s conviction. If that’s not public interest, the term has no meaning.

But the mere act of circulating this notice creates a chilling effect. It puts journalists on notice that pursuing this story could result in complaints to the regulator. It’s using the machinery of press regulation as a shield against legitimate accountability journalism.

Now, to be fair, one could imagine scenarios where even a disgraced public figure might legitimately invoke harassment protections—it wasn’t that long ago there was a whole scandal in the UK with journalists hacking the voicemails of famous people. But that’s not what’s happening here. Mandelson is invoking these provisions to avoid being asked questions at all. “Please don’t inquire about why I told a convicted pedophile his prosecution was wrongful” is not the kind of harm these rules were designed to prevent.

This is who Mandelson has always been: someone who sees regulatory and governmental machinery as tools to be deployed on behalf of whoever he’s serving at the moment. Back in 2009, we covered how he returned from a vacation with entertainment industry mogul David Geffen and almost immediately started pushing for aggressive new copyright enforcement measures, including kicking people off the internet for file sharing. As we wrote at the time, he had what we called a “sudden conversion” to Hollywood’s position on internet enforcement that happened to coincide suspiciously with his socializing with entertainment industry executives.

Back then, the machinery was deployed to serve entertainment executives who wanted harsher copyright enforcement. Now it’s being deployed to serve Mandelson himself.

There’s a broader pattern here that goes beyond one UK politician. The Epstein revelations have been remarkable not just for what they’ve revealed about who associated with him, but for how consistently the response from the powerful has been to deflect, deny, and deploy every available mechanism to avoid genuine accountability. Some have used their media platforms to try to reshape the narrative. Some have simply refused to comment.

Mandelson is trying to use the press regulatory system itself.

It’s worth noting that The National chose to publish the “confidential – strictly not for publication” memo anyway, explicitly citing the public interest. Good for them. Because if there’s one thing that absolutely serves the public interest, it’s shining a light on attempts by the powerful to use systems meant to protect the vulnerable as shields against accountability.

Mandelson’s representatives say he “does not wish to speak to the media at this time.” That’s his right to request—but no media should have to agree to his terms. Weaponizing press regulation to create a cone of silence around questions of obvious public interest is something else entirely. It’s elite impunity dressed up in the language of press ethics.

Posted on Techdirt - 11 February 2026 @ 09:27am

An 18-Million-Subscriber YouTuber Just Explained Section 230 Better Than Every Politician In Washington

Over the years, we’ve written approximately one million words explaining why Section 230 of the Communications Decency Act is essential to how the internet functions. We’ve corrected politicians who lie about it. We’ve debunked myths spread by mainstream media outlets that should know better. We’ve explained, re-explained, and then explained again why gutting this law would be catastrophic for online speech.

And now I find myself in the somewhat surreal position of saying: you know who nailed this explanation better than most policy experts, pundits, and certainly better than any sitting member of Congress? A YouTuber named Cr1TiKaL.

If you’re not familiar with Charles “Cr1TiKaL” White Jr., he runs the penguinz0 YouTube channel with nearly 18 million subscribers and over 12 billion total views. He’s known for deadpan commentary on internet culture and video games. He’s not a policy wonk. He’s not a lawyer. He’s just a guy who apparently bothered to actually understand what Section 230 says and does—something that puts him leagues ahead of the United States Congress.

In this 13-minute video responding to actor Joseph Gordon-Levitt’s call to “sunset” Section 230, Cr1TiKaL laid out the case for why 230 matters with a clarity that most mainstream coverage hasn’t managed in a decade:

Dismantling section 230 would fundamentally change the internet as you know it. And that’s not an exaggeration to say it. Put it even more simply, section 230 allows goobers like me to post whatever they want, saying whatever they want, and the platform itself is not liable for whatever I’ve made or said. 

That is on me personally. 

The platform isn’t going to be, you know, fucking dragged through the streets with legs spread like a goddamn Thanksgiving turkey for it and getting blasted by lawsuits or whatever. Now, of course, there are limitations in place when it comes to illegal content, things that actually break the law. That is, of course, a very different set of circumstances. That’s a different can of worms, and that’s handled differently. But it should be obvious why section 230 is so important because if these platforms were held liable for every single thing people post on their platforms, they would get into a lot of hot water and they would just not allow people to post things. Full stop. because it would be too dangerous to do so. They would need to micromanage and control every single thing that hits the platform in order to protect themselves. No matter how you spin it, this would ruin the internet. It’s a pile of dogshit. No matter how much perfume gets sprayed on it or how they want to repackage it, it still stinks. 

Yes, the metaphors are colorful. But the underlying point is exactly correct. Section 230 places liability where it belongs: on the person who actually created the content. Not on the platform that hosts it. This is how the entire internet works. Every comment section, every social media post, every forum—all of it depends on this basic principle.

Also, he actually reads the 26 words in the video! This is something that so many other critics of 230 skip over, because then they can pretend it says things it doesn’t say.

And unlike the politicians who keep pretending this is some kind of special gift to “Big Tech,” Cr1TiKaL correctly notes that 230 protects everyone:

This would affect literally every platform that has anything user submitted in any capacity at all. 

Every. Single. One. Your local newspaper’s comment section. The neighborhood Facebook group. The subreddit for your favorite hobby. The Discord server where you talk about video games. The email you forward. All of it.

He’s also refreshingly clear-eyed about why politicians from both parties keep attacking 230:

Since the advent of the internet, section 230 has been a target for people that want to control your speech and infringe on your First Amendment rights.

This observation tracks with what we’ve pointed out repeatedly: the bipartisan hatred of Section 230 is one of the most remarkable examples of political unity in modern American governance—and it’s driven largely by politicians who want platforms to moderate content in ways that favor their particular political preferences.

Democrats have attacked 230 claiming it enables “misinformation” and hate speech. Republicans have attacked it claiming it enables “censorship” of conservative voices. Both cannot simultaneously be true, and yet both parties have introduced legislation to gut the law. Cr1TiKaL captures this perfectly:

When Democrats were in charge, it caught a lot of scrutiny, claiming that it was enabling the spread of racism and harming children. With Republicans in power, they’re claiming that it’s spreading misinformation and anti-semitism. This is a bipartisan punching bag that they desperately want to just beat down.

The critics always trot out the same tired arguments about algorithms and echo chambers and extremism. As if removing 230 would somehow make speech better rather than making it disappear entirely or become heavily controlled by whoever has the most money and lawyers. Cr1TiKaL cuts right through this:

There are people that are paying a lot of money to try and plant this idea in your brain that section 230 is a bad thing. It only leads to things like extremism and conspiracy theories and demonization and that kind of thing. That’s not true. 

Anyone who stops and thinks about this for even just a moment, firing on a few neurons, should be able to recognize how outrageous this proposal is. How would shutting down conversation and shutting down the ability to express thoughts and opinions somehow help combat the rise of extremism and conspiracies? that would only exacerbate the problem. Censorship doesn’t solve these issues. It makes them worse. 

He even anticipates the point we’ve made countless times about what the internet would look like without 230:

Platforms would not allow just completely unfiltered usage of normal people expressing their thoughts because those thoughts might go against the official narrative from the curated source and then the curated source might go after the platform saying this is defamatory. These people have just said something hosted on your platform and we’re coming after you with lawsuits. So they just wouldn’t allow it. 

This is a point we keep repeating, and one you never hear in the actual policy debates, because supporters of a 230 repeal have no answer for it beyond “nuh-uh.”

The people who most want to control online speech are exactly the people you’d expect: governments and powerful interests who don’t like being criticized. Section 230 is one of the things standing in their way.

And when critics inevitably dust off the “think of the children” argument, Cr1TiKaL delivers the response that shouldn’t be controversial but apparently is:

Be a parent. It is not the internet’s job to cater to your lack of parenting by just letting your kid online. Fucking lazy trash ass parents just sit a kid in front of a computer or an iPad and then are stunned when apparently they find bad shit. Be a parent. Be involved in your kids’ life. Raise your children. Don’t make it the internet’s job to do that for you. 

Is this delivered with the diplomatic nuance of a congressional hearing? No. Is it correct? Absolutely. The “protect the children” argument for dismantling 230 has always been a dodge—a way to make critics of the bill seem heartless while ignoring that Section 230 doesn’t protect illegal content and maybe, just maybe, the primary responsibility for what media children consume should rest with the adults responsible for those children.

We’ve been writing about Section 230 for years, trying to explain to policymakers and the general public why it matters. And most of the time, it feels like shouting into the void. Politicians keep lying about it. Journalists keep getting it wrong. The mythology around 230 persists no matter how many times it gets corrected.

And we’ve heard from plenty of younger people who now believe that 230 is bad. I recently guest-taught a college class where students were split into two groups—one to argue in favor of 230 and one against—and I was genuinely dismayed when the group told to argue in favor of 230 argued that 230 “once made sense” but doesn’t anymore.

So there’s something genuinely hopeful about seeing a young creator with an audience of nearly 18 million people—an audience that skews young and is probably not spending a lot of time reading policy papers—get it right. Not just right in a general sense, but right in the specifics. He read the law. He understood what it does. He correctly identified why it matters and who benefits from dismantling it.

Maybe the generation that grew up on the internet actually understands what’s at stake when politicians threaten to fundamentally reshape how it works. Maybe they’re not buying the moral panic narratives that have been trotted out to justify every bad piece of tech legislation for the past decade.

Or maybe I’m being optimistic. Either way, Cr1TiKaL’s video is worth watching. It’s profane, it’s casual, and it’s more correct about Section 230 than anything you’ll hear from the halls of Congress.

Posted on Techdirt - 10 February 2026 @ 12:03pm

How To Think About AI: Is It The Tool, Or Are You?

We live in a stupidly polarizing world where nuance is apparently not allowed. Everyone wants you to be for or against something—and nowhere is this more exhausting than with AI. There are those who insist that it’s all bad and there is nothing of value in it. And there are those who think it’s all powerful, the greatest thing ever, and will replace basically every job with AI bots who can work better and faster.

I think both are wrong, but it’s important to understand why.

So let me lay out how I actually think about it. When it’s used properly, as a tool to assist a human being in accomplishing a goal, it can be incredibly powerful and valuable. When it’s used in a way where the human’s input and thinking are replaced, it tends to do very badly.

And that difference matters.

I think back to a post from Cory Doctorow a couple months ago where he tried to make the same point using a different kind of analogy: centaurs and reverse-centaurs.

Start with what a reverse centaur is. In automation theory, a “centaur” is a person who is assisted by a machine. You’re a human head being carried around on a tireless robot body. Driving a car makes you a centaur, and so does using autocomplete.

And obviously, a reverse centaur is a machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine.

Like an Amazon delivery driver, who sits in a cabin surrounded by AI cameras, that monitor the driver’s eyes and take points off if the driver looks in a proscribed direction, and monitors the driver’s mouth because singing isn’t allowed on the job, and rats the driver out to the boss if they don’t make quota.

The driver is in that van because the van can’t drive itself and can’t get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance. But the driver is human, so the van doesn’t just use the driver. The van uses the driver up.

Obviously, it’s nice to be a centaur, and it’s horrible to be a reverse centaur.

As Doctorow notes in his piece, some of the companies embracing AI tech are doing so with the goal of building reverse-centaurs. Those are the ones that people are, quite understandably, uncomfortable with, and that deserve to be mocked. But the reality is that those efforts also seem quite likely to fail.

And they’ll fail not just because they’re dehumanizing—though they are—but because the output is garbage. Hallucinations, slop, confidently wrong answers: that’s what happens when nobody with actual knowledge is checking whether any of it makes sense. When AI works well, it’s because a human is providing the knowledge and the creativity.

The reverse-centaur doesn’t just burn out the human. It produces worse work, because it assumes that the AI can provide the knowledge or the creativity. It can’t. That requires a human. The power of AI tools is in enabling a human to take their own knowledge and their own creativity and enhance them, to do more with them, based on what the person actually wants.

To me it’s a simple question of “what’s the tool?” Is it the AI, used thoughtfully by a human to do more than they otherwise could have? If so, that’s a good and potentially positive use of AI. It’s the centaur in Doctorow’s analogy.

Or is the human the tool? Is it a “reverse centaur”? I think nearly all of those are destined to fail.

This is why I tend not to get particularly worked up by those who claim that AI is going to destroy jobs and wipe out the workforce, replacing everyone with bots. It just… doesn’t work that way.

At the same time, I find it ridiculous to see people still claiming that the technology itself is no good and does nothing of value. That’s just empirically false. Plenty of people—including myself—get tremendous use out of the technology. I am using it regularly in all different ways. It’s been two years since I wrote about how I use it as a first-pass editor.

The tech has gotten dramatically better since then, but the key insight to me is what it takes to make it useful: context is everything. My AI editor doesn’t just get my draft writeup and give me advice based on that and its training—it also has a sampling of the best Techdirt articles, a custom style guide with details about how I write, a deeply customized system prompt (the part of AI tools that is often hidden from public view) and a deeply customized starting prompt. It also often includes the source articles I’m writing about. With all that context, it’s an astoundingly good editor. Sometimes it points out weak arguments I missed entirely. Sometimes it has nothing to say.
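To make the “context is everything” point concrete, here is a hypothetical sketch of that kind of context assembly. The function name, section labels, and prompt structure are all my illustrative assumptions, not the actual Techdirt setup:

```python
# Hypothetical sketch: wrap a draft in heavy context (style guide, strong
# sample posts, source articles) before handing it to a model as an editor.
# Every label and instruction here is illustrative, not a real configuration.

def build_editor_prompt(draft: str, style_guide: str,
                        sample_posts: list[str], sources: list[str]) -> str:
    """Assemble an editing prompt that surrounds the draft with context."""
    sep = "\n\n---\n\n"  # divider between multiple sample posts or sources
    return "\n\n".join([
        "You are a first-pass editor. Follow the style guide strictly.",
        "STYLE GUIDE:\n" + style_guide,
        "EXAMPLES OF STRONG POSTS:\n" + sep.join(sample_posts),
        "SOURCE MATERIAL:\n" + sep.join(sources),
        "DRAFT TO EDIT:\n" + draft,
        "Point out weak arguments; say nothing if the draft is fine.",
    ])
```

The design point is that the draft is one small slice of the prompt; the surrounding material is what turns a generic model into a useful editor for one specific writer.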

(As an aside, while editing this article, it suggested I had gone on way too long explaining all the context I give it, and so I shortened that to just the paragraph above this one.)

It’s not always right. Its suggestions are not always good. But that’s okay, because I’m not outsourcing my brain to it. It’s a tool. And way more often than not, it pushes me to be a better writer.

This is why I get frustrated every time people point out a single AI fail or hallucination without context.

The problem only comes in when people outsource their brains. When they become reverse centaurs. When they are the tool instead of using AI as the tool. That’s when hallucinations or bad info matter.

But if the human is in control, if they’re using their own brain, if they’re evaluating what the tool is suggesting or recommending and making the final decision, then it can be used wisely and can be incredibly helpful.

And this gets at something most people miss entirely: when they think about AI, they’re still imagining a chatbot. They think every AI tool is ChatGPT. A thing you talk to. A thing that generates text or images for you to copy-paste somewhere else.

That’s increasingly not where the action is. The more powerful shift is toward agentic AI—tools that don’t just generate content, but actually do things. They write code and run it. They browse the web and synthesize what they find. They execute multi-step tasks with minimal hand-holding. This is a fundamentally different model than “ask a chatbot a question and get an answer.”

I’ve been using Claude Code recently, and this distinction matters. It’s an agent that can plan, execute, and iterate on actual software projects, rather than just a tool talking to me about what to do. But, again, that doesn’t mean I just outsource my brain to it.

I often put Claude Code into plan mode, where it works out a plan before acting. But then I spend quite a lot of time exploring why it made certain decisions, asking it to lay out the pros and cons of those decisions, and even to point me to outside sources so I can understand the trade-offs of what it’s recommending. That back and forth has been educational for me, and it also leaves me with a better understanding of, and more comfort with, the projects I eventually use Claude Code to build.

I am using it as a tool, and part of that is making sure I understand what it’s doing. I am not outsourcing my brain to it. I am using it, carefully, to do things that I simply could not have done before.

And that’s powerful and valuable.

Yes, there are so many bad uses of AI tools. And yes, there is a concerted, industrial-scale effort to convince the public to use AI in ways that they probably shouldn’t, or in ways that are actively harmful. And yes, there are real questions about what it costs to train and run the foundation models. We should discuss all of that and call it out for what it is.

But the people who insist the tools are useless and provide nothing of value are just wrong. Similarly, anyone who thinks the tech is going to go away is entirely wrong. There likely is a funding bubble, and some companies will absolutely suffer as it deflates. But that won’t make the tech go away.

When used properly, it’s just too useful.

As Cory notes in his centaur piece, AI can absolutely help you do your job, but the industry’s entire focus is on convincing people it can replace your job. That’s the con. The tech doesn’t replace people. But it can make them dramatically more capable—if they stay in the driver’s seat.

The key to understanding the good and the bad of the AI hype is understanding that distinction. Cory explains this in reference to AI coding:

Think of AI software generation: there are plenty of coders who love using AI, and almost without exception, they are senior, experienced coders, who get to decide how they will use these tools. For example, you might ask the AI to generate a set of CSS files to faithfully render a web-page across multiple versions of multiple browsers. This is a notoriously fiddly thing to do, and it’s pretty easy to verify if the code works – just eyeball it in a bunch of browsers. Or maybe the coder has a single data file they need to import and they don’t want to write a whole utility to convert it.

Tasks like these can genuinely make coders more efficient and give them more time to do the fun part of coding, namely, solving really gnarly, abstract puzzles. But when you listen to business leaders talk about their AI plans for coders, it’s clear they’re not looking to make some centaurs.

They want to fire a lot of tech workers – they’ve fired 500,000 over the past three years – and make the rest pick up their work with coding, which is only possible if you let the AI do all the gnarly, creative problem solving, and then you do the most boring, soul-crushing part of the job: reviewing the AIs’ code.

Criticize the hype. Mock the replace-your-workforce promises. Call out the slop factories and the gray goo doomsaying. But don’t mistake the bad uses for the technology itself. When a human stays in control—thinking, evaluating, deciding—it’s a genuinely powerful tool. The important question is just whether you’re using it, or it’s using you.

Posted on Techdirt - 10 February 2026 @ 09:28am

Hey Rep. Gonzales, Finish The Thought: What About That Five-Year-Old US Citizen?

Republican Rep. Tony Gonzales from Texas went on Face the Nation on Sunday and said a lot of silly things, doing his best as a loyal Trump foot soldier to defend the indefensible, to make sense of the nonsensical, and to lie about all the rest.

However, I wanted to focus on one bit of the clip that I’ve watched over a dozen times, and still can’t figure out what Rep. Gonzales meant. And I’m writing this in hopes that some DC or Texas reporter asks Gonzales to explain. Here’s the clip:

Gonzales on Liam Ramos and his family: "They're not gonna qualify for asylum. So what do you do with all the people that go through the process and do not qualify for asylum? You deport them. I understand that 5-year-old and it breaks my heart. I also think, what about that 5-year-old US citizen?"

Aaron Rupar (@atrupar.com) 2026-02-08T16:09:49.039Z

And here’s the transcript from CBS. I’m including a bit more than is in the clip just to get the full context of what he’s saying:

MARGARET BRENNAN: You have this facility, though, in your district, Dilley, and that is for family detentions. That’s where little five-year-old Liam Ramos from Minnesota was held before a judge, that’s the picture of him there, ordered him released. He was ordered released because his family has a pending asylum claim, a legal process. He had entered with U.S. government permission through a process that the Biden administration had deemed legal. The current administration does not. The CBPOne app. Liam’s father gave an interview to Telemundo and you read the transcript, he’s talking about this five-year-old. He’s not okay. He’s waking up at night crying. He’s worried he’s going to be taken again. It’s psychological trauma, according to the father. And the administration is still trying to deport him. Do you understand why they are so focused on this five-year-old and his dad if they did come in through the front door with U.S. government permission? 

REP. GONZALES: Well, the front door was via an app that Biden knew exactly what he was doing, and he created this huge mess, and now President Trump is there to clean up.

MARGARET BRENNAN: –but he came in the front door, he wasn’t–

REP. GONZALES: –through an app–

MARGARET BRENNAN: –across the border–

REP. GONZALES: –through an app that wasn’t vetted. And bottom line is, he’s likely- they’re not going to qualify for asylum. So what do you do with all the people that go through the process and do not qualify for asylum? You deport them. I understand the five-year-old and it, you know, it breaks my heart. I have a five year old at home. I also think, what about that five-year-old U.S. citizen–

MARGARET BRENNAN: –You feel comfortable defending that? 

REP. GONZALES: I feel comfortable- we have to have a nation of laws. If we don’t have a nation of laws–

MARGARET BRENNAN: –They were following the- the law that is- that is that’s the rub, is that a new administration deemed the last administration’s regulation not to be legal.

Again, there’s a lot of nonsense in there, including Gonzales trying to pretend that Liam Ramos and his father had not entered the right way, following the laws of the US for those seeking to come here, just because it was “through an app.” That app was the legal process. They followed the law. They did it the right way. Magically recasting that as lawbreaking because the next administration no longer wants to support that path doesn’t change the underlying fact that they were doing things the legal way.

But, again, let’s leave that aside. I simply want to focus in on the question of what the fuck Gonzales meant when he said:

I understand the five-year-old and it, you know, it breaks my heart. I have a five year old at home. I also think, what about that five-year-old U.S. citizen–

What about them? Under what scenario, process, or idea is that hypothetical five-year-old US citizen harmed? I’ve been unable to think of a single possible scenario in which a five-year-old US citizen could be harmed by allowing Liam Ramos to go through the asylum process.

Perhaps Rep. Gonzales can enlighten us by completing his thought and explaining.

Seriously: what is the scenario here? Is pre-kindergarten a zero-sum game now? Does Liam Ramos’s presence in a classroom somehow harm the US citizen in the next seat?

Brennan cut him off before he could finish the thought, and nobody followed up. So we don’t know. But I’d really like someone in the DC or Texas press corps to ask him to complete that sentence. Because I can think of one very obvious way that five-year-old US citizens are being harmed right now—and it’s not by Liam Ramos.

It’s by watching their government kidnap their classmates.

Nicholas Grossman talked about how his own child is distraught because some of his classmates can no longer come to school for fear their parents may be kidnapped by ICE:

My first grader (a US citizen) came home from school crying because a friend from class (also a US citizen) hasn’t been coming to school because his parents (one of whom is not a citizen) are afraid of ICE. Little kids don’t have concepts of racism and xenophobia. That has to be taught. Or imposed.

Nicholas Grossman (@nicholasgrossman.bsky.social) 2026-02-08T17:11:41.156Z

Indeed, the NY Times went and actually spoke with Liam Ramos’ classmates, and they seem legitimately distraught that government agents kidnapped their friend and sent him halfway across the country to a dangerous concentration camp. The video on that page is absolutely heartbreaking. I don’t see how anyone with a soul could possibly support or justify what is being done to Ramos. And to claim it’s in the name of his US citizen classmates is even more obnoxious. Just a couple of the quotes from five-year-olds:

“You are scaring schools, people, and the world. You should be kind, helpful, and caring like normal police. Not dangerous, scary, and stealing people. I think you should make friends with the world.”

“You, right now, you’re making people really sad because you’re just taking them away without them doing anything.”

So, please, Rep. Gonzales, tell us what you were thinking. What about those five-year-olds? What about kidnapping their classmate makes them better off? What about any of this makes sense? They’re not criminals. They followed the official legal process. They came in through “the front door,” following the official process of the government at the time.

At no point have they done anything wrong.

So please, Rep. Gonzales: finish the thought. What about that five-year-old US citizen?

Because those five-year-old US citizens have already given their answer. They’re not being harmed by Liam Ramos. They’re being harmed by a government that just taught them their friends can disappear without warning.

That’s “what about” them.

Posted on Techdirt - 9 February 2026 @ 01:32pm

NBC Hid The Boos For JD Vance. Where’s Trump’s ‘Unfair Editing’ Lawsuit?

If you watched NBC’s prime time broadcast of the Winter Olympics opening ceremony on Friday, you saw Vice President JD Vance in the stands at San Siro Stadium in Milan with his wife, Usha. The commentary team said “JD Vance” and moved on. Pleasant enough.

But if you were watching literally any other country’s broadcast—or were actually in the stadium—you heard something else: the crowd booing. Loudly. Jeering. Whistling. CBC’s commentator captured the moment awkwardly:

There is the vice-president JD Vance and his wife Usha – oops, those are not … uh … those are a lot of boos for him. Whistling, jeering, some applause.

Multiple journalists on the ground reported the same thing. The Guardian’s Sean Ingle noted the boos. USA Today’s Christine Brennan noted the boos. The boos were, by all accounts, quite audible to anyone actually present in the stadium.

Timothy Burke put together clips of many other countries’ broadcasts, many of which called out the boos or discussed criticism of the Trump administration:

JD Vance getting booed, as called around the world (auto transcribed & translated, mostly):

Timothy Burke (@bubbaprog.xyz) 2026-02-08T06:33:29.885Z

Mexico’s broadcast went on at length, including discussing how the US had to change the name of their Olympic village from “ice house” to “winter house” knowing how it would be perceived.

I didn't forget Mexico, BTW, it's just that I had to make Mexico as its own separate video because they were talking about Vance and ICE through the entire U.S. arrival at each of the locations and WELL INTO FRANCE. TWO AND A HALF MINUTES

Timothy Burke (@bubbaprog.xyz) 2026-02-08T17:17:53.411Z

But if you were watching NBC’s broadcast in the United States? Crickets. As the Guardian reported:

However, on the NBC broadcast the boos were not heard or remarked upon when Vance appeared on screen, with the commentary team simply saying “JD Vance”. That didn’t stop footage of the boos being circulated and shared on social media in the US. The White House posted a clip of Vance applauding on NBC’s broadcast without any boos.

For what it’s worth, NBC denies that it “edited” out the crowd booing the Vances. But the analysis on that page by the folks at Awful Announcing shows pretty clearly that NBC (which ran a live feed of the opening ceremony as well as a prime time version) turned up the sound of the music at the moment the Vances were shown on screen.

Now, look. As a technical and legal matter, NBC has every right to make that editorial choice. Broadcasters exercise editorial discretion over their coverage all the time. They choose camera angles, they choose what to amplify and what to downplay, they shape narratives. That’s not illegal. It’s not even unusual. It’s called being a media company. The First Amendment protects editorial discretion—including editorial discretion that results in coverage that makes politicians look better than reality would suggest.

Of course, that principle cuts both ways. Or at least it should.

We’ve now spent months watching Donald Trump file lawsuit after lawsuit against news organizations for what he claims is “unfair” editing. The theory in these cases is that editing footage in ways that make Trump or his allies look bad is somehow actionable defamation or election interference. It’s a theory that, if accepted, would basically mean the president gets veto power over how he’s portrayed in any news coverage.

Remember, Trump sued CBS over a “60 Minutes” interview with Kamala Harris, claiming that the way the interview was edited amounted to “election and voter interference.” That lawsuit was, to put it charitably, legally incoherent nonsense. We covered it at the time, noting that Trump’s supposed smoking gun was that CBS edited an answer for time—you know, the thing every television program in history does, including cutting out the bits that make Trump look bad.

Then there was the $10 billion lawsuit against the BBC over a documentary that didn’t even air in the United States. Trump’s legal team actually cited VPN download statistics as evidence of damages, apparently believing that Americans who went out of their way to circumvent geographic restrictions to watch a documentary they weren’t supposed to see somehow constitutes harm to Trump.

Of course, as you already know, CBS, facing the Trump lawsuit while also trying to get FCC approval for the Paramount merger, decided to just… pay up. We called it what it was at the time: a $16 million bribe. Not because CBS thought Trump had a valid legal claim—the lawsuit was obviously baseless—but because CBS was terrified that an angry Trump administration would tank its merger if it didn’t make the lawsuit go away.

And that’s the point. The lawsuits aren’t really about winning in court. They’re about establishing a new norm: favorable coverage or else.

So now we have NBC, which happens to have a rather large interest in staying on the good side of this administration (what with the LA Olympics coming up in 2028 and all the broadcast rights that entails, and you already have Trump and FCC boss Brendan Carr threatening NBC’s late-night comedy hosts), making an editorial choice to mute crowd boos directed at the vice president. And I will bet you every meager dollar I have that no one in Trump’s orbit will say a single word about NBC’s “unfair” editing. No tweets from Trump about “fake news NBC” cutting audio to misrepresent crowd reactions. No lawsuits alleging that NBC’s editorial choices constitute fraud on the American public.

Because the “unfair editing” complaints were never actually about editing. They were about whether the editing made Trump look good or bad. Editing that cuts out boos? That’s just good production values. Editing that makes Harris’s answer seem more coherent? That’s election interference worthy of billions in damages.

This is what an attack on press freedom looks like. It’s not a single dramatic moment. It’s a slow accretion of pressure—lawsuits that are expensive to fight even when you win, regulatory approvals that get held hostage, implicit threats that keep executives up at night—until media companies internalize the lesson. The lesson isn’t “be accurate” or “be fair.” The lesson is: make us look good, or face the consequences.

And NBC appears to have learned the lesson well.

More posts from Mike Masnick >>