Mike Masnick's Techdirt Profile

Mike Masnick

About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Bluesky at bsky.app/profile/masnick.com, on Mastodon at mastodon.social/@mmasnick, and still a little bit (but less and less) on Twitter at www.twitter.com/mmasnick.

Posted on Techdirt - 3 April 2026 @ 11:02am

The Social Media Addiction Verdicts Are Built On A Scientific Premise That Experts Keep Telling Us Is Wrong

Last week, I wrote about why the social media addiction verdicts against Meta and YouTube should worry anyone who cares about the open internet. The short version: plaintiffs’ lawyers found a clever way to recharacterize editorial decisions about third-party content as “product design defects,” effectively gutting Section 230 without anyone having to repeal it. The legal theory will be weaponized against every platform on the internet, not just the ones you hate. And the encryption implications of the New Mexico decision alone should terrify everyone. You can read that post for more details on the legal arguments.

But there’s a separate question lurking underneath the legal one that deserves its own attention: is the scientific premise behind all of this even right? Are these platforms actually causing widespread harm to kids? Is “social media addiction” a real thing that justifies treating Instagram like a pack of Marlboros? We’ve covered versions of this debate in the past, mostly looking at studies. But there are other forms of expert analysis as well.

Long-time Techdirt reader and commenter Leah Abram pointed us to a newsletter from Dr. Katelyn Jetelina and Dr. Jacqueline Nesi that digs into exactly this question with the kind of nuance that’s been almost entirely absent from the mainstream coverage. Jetelina runs the widely read “Your Local Epidemiologist” newsletter, and Nesi is a clinical psychologist and professor at Brown who studies technology’s effects on young people.

And what they’re saying lines up almost perfectly with what we’ve been saying here at Techdirt for years, often to enormous pushback: social media does not appear to be inherently harmful to children. What appears to be true is that there is a small group of kids for whom it’s genuinely problematic. And the interventions that would actually help those kids look nothing like the blanket bans and sweeping product liability lawsuits that politicians and trial lawyers are currently pursuing. And those broad interventions do real harm to many more people, especially those who are directly helped by social media.

Let’s start with the “addiction” question, since that’s the framework on which these verdicts were built. Here’s Nesi:

There is much debate in psychology about whether social media use (or, really, any non-substance-using behavior outside of gambling) can be called an “addiction.” There is no clear neurological or diagnostic criteria, like a blood test, to make this easy, so it’s up for debate:

  • On one hand, some researchers argue that compulsive social media use shares enough features (loss of control, withdrawal-like symptoms, continued use despite harm) to warrant the diagnosis for treatment.
  • Others say the evidence for true neurological dependency is still weak and inconsistent because research relies on self-reported data, findings haven’t been replicated, and many heavy users don’t show true clinical impairment without pre-existing issues.

Her bottom line is measured and careful in a way that you almost never hear from the politicians and lawyers who claim to be acting on behalf of children:

Here’s my current take: There are a small number of people whose social media use is so extreme that it causes significant impairment in their lives, and they are unable to stop using it despite that impairment. And for those people, maybe addiction is the right word.

For the vast majority of people (and kids) using social media, though, I do not think addiction is the right word to use.

That’s a leading expert on technology and adolescent mental health, someone who has personally worked with hospitalized suicidal teenagers, telling you that for the vast majority of kids, “addiction” is the wrong word. And she has a specific, evidence-based reason for why that distinction matters — one that should be of particular interest to anyone who actually wants platforms held accountable for the kids who are being harmed.

Nesi argues that overusing the addiction label doesn’t just lack scientific precision. It actively weakens the case for meaningful platform accountability:

Preserving the precision of the addiction label — reserving it for the small number of kids whose use is genuinely compulsive and impairing — actually strengthens the case for platform accountability, rather than weakening it. It’s that targeted claim that has driven legal action and regulatory pressure. Expanding it to average use shifts focus from systemic design fixes to individual diagnosis, and dilutes the very argument that holds platforms responsible.

This is a vital point that runs counter to the knee-jerk reactions of both the trial lawyers and the moral panic crowd. If you say every kid using social media is an addict, you’ve made the concept of addiction meaningless, and you’ve made it dramatically harder to identify and help the kids who are actually suffering. You’ve also given platforms an easy out: if everyone’s addicted, then it’s just a feature of how humans interact with technology, and nobody is specifically responsible for anything. Precision is what creates accountability. Vagueness destroys it.

We highlighted something similar back in January, when a study published in Nature’s Scientific Reports found that simply priming people to think about their social media use in addiction terms — such as using language from the U.S. Surgeon General’s report — reduced their own perceived control, increased their self-blame, and made them recall more failed attempts to change their behavior. The addiction framing itself was creating a feeling of helplessness that made it harder for people to change their habits. As the researchers in that study put it:

It is impressive that even the two-minute exposure to addiction framing in our research was sufficient to produce a statistically significant negative impact on users. This effect is aligned with past literature showing that merely seeing addiction scales can negatively impact feelings of well-being. Presumably, continued exposure to the broader media narrative around social media addiction has even larger and more profound effects.

So we’re stuck with a situation where the dominant public narrative — “social media is addicting our children” — appears to be both scientifically imprecise and actively counterproductive for the people it claims to help. That’s a real problem. And it would be nice if the moral panic crowd would start to recognize the damage they’re doing.

None of this means there are no risks. Nesi is quite clear about that, drawing on her own clinical work:

A few years ago, I ran a study with adolescents experiencing suicidal thoughts in an inpatient hospital unit. Many of the patients I spoke to had complex histories of abuse, neglect, bullying, poverty, and other major stressors. Some of these patients used social media in totally benign, unremarkable ways. A few of them, though, were served with an endless feed of suicide-related posts and memes, some romanticizing or minimizing suicide. For those patients, it would be very hard to argue that social media did not contribute to their symptoms, even with everything else going on in their lives.

Nobody who has paid serious attention to this issue disputes that. There are kids for whom social media is a contributing factor in genuine mental health crises. The question has always been whether that reality justifies treating social media as an inherently dangerous product that harms all children — the premise on which these lawsuits and legislative bans are built.

The evidence consistently says no. When it comes to whether social media actually causes mental health issues, the newsletter is direct:

The scientific community has substantial correlational evidence and some, but not much, causal evidence of harm. Studies that randomly assigned people to stop using social media show mixed results, depending on how long they stopped, whether they quit entirely or just reduced use, and what they were using it for.

And:

It is still the case that if you take an average, healthy teen and give them social media, this is highly unlikely to create a mental illness.

This is consistent with what we’ve been reporting on for years, including two massive studies covering 125,000 kids that found either a U-shaped relationship (where moderate use was associated with the best outcomes and no use was sometimes worse than heavy use) or flat-out zero causal effect on mental health. Every time serious researchers go looking for the inherent-harm story that politicians keep telling, they come up empty.

One of the most fascinating details in the newsletter is the Costa Rica comparison. Costa Rica ranks #4 in the 2026 World Happiness Report. Its residents use just as much social media as Americans. And yet:

It doesn’t necessarily have fewer mental illnesses. And it certainly doesn’t have less social media use. What it has is a deep social fabric, and that may mean social media use reinforces real-world connections in Costa Rica, whereas in English-speaking countries, it may be replacing them.

In other words, cultural factors appear to be protective. The underlying challenges to social foundations — trust, connection, belonging, and safety — are what drive happiness. Friendships, being known by someone, the sense that you belong somewhere: these are the actual load-bearing pillars of mental health, more predictive of wellbeing than income, and more protective against mental illness than almost any intervention we have.

If social media were inherently harmful — if the “addictive design” of infinite scroll and autoplay and algorithmic recommendations were the core problem — Costa Rica would be suffering the same outcomes as the United States. They have the same platforms, same features, and same engagement mechanics. What actually differs is the strength of the social fabric, not the tools themselves.

This is similar to a point I raised in my review of Jonathan Haidt’s book two years ago. If you go past his cherry-picked data, you can find tons of countries with high social media use where rates of depression and suicide have gone down. There are clearly many other factors at work here, and little evidence that social media is a key factor at all.

That realization completely changes how we should think about policy. If the problem is weak social foundations — not enough connection, not enough belonging, not enough adults showing up for kids — then banning social media or suing platforms into submission won’t fix it. You’ll have addressed the wrong variable. And in the process, you’ll have made the platforms worse for the many kids (including LGBTQ+ teens in hostile communities, kids with rare diseases, teens in rural areas) who rely on them for the connection and community that their physical environment doesn’t provide.

Nesi’s column has some practical advice that is pretty different from what that best-selling book might tell you:

If you know your teen is vulnerable, perhaps due to existing mental health challenges or social struggles, you may want to be extra careful.

If your teen is using social media in moderation, and it does not seem to be affecting them negatively, it probably isn’t.

That sounds so obvious it feels almost silly to type out. And yet it is the exact opposite of the approach we see in the lawsuits and bans currently dominating the policy landscape, which assume social media is a universally dangerous product requiring universal restrictions.

The newsletter closes with a key line that highlights the nuance that so many people ignore:

Social media may be one piece of the puzzle, but it’s certainly not the whole thing.

We’ve been making this point at Techdirt for a long time now, often in the face of considerable hostility from people who are deeply invested in the simpler narrative. I’ve written about Danah Boyd’s useful framework of understanding the differences between risks and harms, and how moral panics confuse those two things. I’ve covered so many studies that find no causal link that I’ve lost count. I’ve pointed out how the “addiction” framing may be doing more damage than the platforms themselves.

That’s why it’s encouraging to see credentialed, independent researchers — people who work directly with the most vulnerable kids — end up in the same place through their own work. Because this conversation desperately needs more voices willing to acknowledge both realities: that some kids are genuinely harmed and need targeted help, and that the sweeping narrative of universal social media harm is not supported by the science and leads to policy responses that may hurt far more people than they help.

The kids who are in that small, genuinely vulnerable group deserve interventions designed for them — better mental health funding and access along with better identification of at-risk youth. What they don’t deserve is to have their suffering used as a blunt instrument and a prop to reshape the entire internet through lawsuits built on a scientific premise that the actual scientists keep telling us is wrong.

Posted on Techdirt - 2 April 2026 @ 03:20pm

Ctrl-Alt-Speech: Age Old Questions

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

If you’ve got Elon Musk in your Ctrl-Alt-Speech 2026 Bingo Card this week, you’re in luck.

Posted on Techdirt - 2 April 2026 @ 11:04am

Meta Caves To The MPAA Over Instagram’s Use Of ‘PG-13,’ Ending A Dispute That Was Silly From The Start

Back in October, Meta announced that its new Instagram Teen Accounts would feature content moderation “guided by the PG-13 rating.” On its face, this made a certain kind of sense as a communication strategy: parents know what PG-13 means (or at least think they do), and Meta was clearly trying to borrow that cultural familiarity to signal that it was taking teen safety seriously.

The Motion Picture Association, however, was not amused. Within hours of the announcement, MPA Chairman Charles Rivkin fired off a statement. Then came a cease-and-desist letter. Then a Washington Post op-ed whining about the threat to its precious brand. The MPA was very protective of its trademark, and very unhappy that Meta was freeloading off the supposed credibility of its widely mocked rating system.

And now, this week, the two sides have announced a formal resolution in which Meta has agreed to “substantially reduce” its references to PG-13 and include a rather remarkable disclaimer:

“There are lots of differences between social media and movies. We didn’t work with the MPA when updating our content settings, and they’re not rating any content on Instagram, and they’re not endorsing or approving our content settings in any way. Rather, we drew inspiration from the MPA’s public guidelines, which are already familiar to parents. Our content moderation systems are not the same as a movie ratings board, so the experience may not be exactly the same.”

In Meta’s official response, you can practically hear the PR team gritting their teeth:

“We’re pleased to have reached an agreement with the MPA. By taking inspiration from a framework families know, our goal was to help parents better understand our teen content policies. We rigorously reviewed those policies against 13+ movie ratings criteria and parent feedback, updated them, and applied them to Teen Accounts by default. While that’s not changing, we’ve taken the MPA’s feedback on how we talk about that work. We’ll keep working to support parents and provide age-appropriate experiences for teens,” said a Meta spokesperson.

Translation: we’re still doing the same thing, we’re just no longer allowed to call it what we were calling it.

There are several layers of nonsense worth unpacking here. First, there’s the MPA getting all high and mighty about its rating system. Let’s remember how the MPA’s film rating system came into existence in the first place: it was a voluntary self-regulation scheme created in the late 1960s specifically to head off government regulation after the government started making noises about the harm Hollywood was doing to children with the content it platformed. Sound familiar? The studios decided that if they rated their own content, maybe Congress would leave them alone. As the MPA explains in their own boilerplate:

For nearly 60 years, the MPA’s Classification and Rating Administration’s (CARA) voluntary film rating system has helped American parents make informed decisions about what movies their children can watch… CARA does not rate user-generated content. CARA-rated films are professionally produced and reviewed under a human-centered system, while user-generated posts on platforms like Instagram are not subject to the same rating process.

Sure, there’s a trademark issue here, but let’s be real: no one thought Instagram was letting a panel of Hollywood parents rate the latest influencer videos.

Next, the PG-13 analogy never actually made much sense for social media. As we discussed on Ctrl-Alt-Speech back when this whole thing started, the context and scale are just completely different. At the time, I pointed out that a system designed to rate a 90-minute professionally produced film — reviewed in its entirety by a panel of parents — is a wholly different beast than moderating hundreds of millions of short-form posts generated by individuals (and AI) every single day.

So, yes, calling the system “PG-13” was a marketing gimmick, meant to trade on a familiar brand while obscuring how differently social media actually works — but the idea that this somehow dilutes the MPA’s marks is still pretty silly.

Then there’s the rating system’s well-documented arbitrariness. The MPA’s ratings have been criticized for decades for their seemingly incoherent standards. On that same podcast, I noted that the rating system is famous for its selective prudishness — nudity gets you an R rating, but two hours of violence can skate by with a PG-13.

There was a whole documentary about this — This Film Is Not Yet Rated — that exposed just how subjective and inconsistent the whole process was. Meta was effectively borrowing credibility from a system that was itself created as a regulatory dodge, is famously inconsistent, and was designed for an entirely different medium. And the MPA’s response was essentially: “Hey, that’s our famously inconsistent regulatory dodge, and you can’t have it.”

The whole thing was silly. And now it’s been formally resolved with Meta agreeing to stop doing the thing it had already mostly stopped doing back in December. So even the resolution is anticlimactic.

But there’s a more substantive point buried under all this trademark squabbling: the whole approach reflects a flawed assumption that one company can set a universal standard for every teen on the planet.

As I argued on the podcast, the deeper issue is that the whole framework is wrong for the medium. Applying the logic of rating a single, professionally produced film to hundreds of millions of short-form posts generated by people across wildly different cultural contexts — a kid in rural Kansas, a teenager in Berlin, a twelve-year-old in Lagos — was never going to produce anything coherent. Different kids, different families, and different communities have different standards, and no single company should be setting a universal threshold for all of them. The smarter approach is giving parents and users real controls with customizable defaults, rather than having Zuckerberg (or a Hollywood trade association) decide what counts as age-appropriate for every teenager on the planet.

This whole dispute was silly from start to finish.

Posted on Techdirt - 1 April 2026 @ 12:58pm

The EU Killed Voluntary CSAM Scanning. West Virginia Is Trying To Compel It. Both Cause Problems.

Last week, the European Parliament voted to let a temporary exemption lapse that had allowed tech companies to scan their services for child sexual abuse material (CSAM) without running afoul of strict EU privacy regulations. Meanwhile, here in the US, West Virginia’s Attorney General continues to press forward with a lawsuit designed to force Apple to scan iCloud for CSAM, apparently oblivious to the fact that succeeding would hand defense attorneys the best gift they’ve ever received.

Two different jurisdictions. Two diametrically opposed approaches, both claiming to protect children, and both making it harder to actually do so.

I’ll be generous and assume people pushing both of these views genuinely think they’re doing what’s best for children. This is a genuinely complex topic with real, painful tradeoffs, and reasonable people can weigh them differently. What’s frustrating is watching policymakers on both sides of the Atlantic charge forward with approaches that seem driven more by vibes than by any serious engagement with how the current system actually works — or why it was built the way it was.

The European Parliament just voted against extending a temporary regulation that had exempted tech platforms from GDPR-style privacy rules when they voluntarily scanned for CSAM. This exemption had been in place (and repeatedly extended) for years while Parliament tried to negotiate a permanent framework. Those negotiations have been going on since November 2023 without resolution, and on Thursday MEPs decided they were done extending the stopgap.

To be clear, Parliament didn’t pass a law banning CSAM scanning. Companies can still technically scan if they want to. But without the exemption, they’re now exposed to massive privacy liability under EU law for doing so. Scanning private messages and stored content to look for CSAM is, after all, mass surveillance — and European privacy law treats mass surveillance seriously (which, in most cases, it should!). So the practical effect is a chilling one: companies that were voluntarily scanning now face significant legal risk if they continue.

The digital rights organization EDRi framed the issue in stark terms:

“This is actually just enabling big tech companies to scan all of our private messages, our most intimate details, all our private chats so it constitutes a really, really serious interference with our right to privacy. It’s not targeted against people that are suspected of child abuse — It’s just targeting everyone, potentially all of the time.”

And that argument is compelling. Hash-matching systems that compare uploaded images against databases of known CSAM are more targeted than, say, keyword scanning of every message, but they still fundamentally involve examining every unencrypted piece of content that passes through the system. When EDRi says it targets “everyone, potentially all of the time,” that’s an accurate description of how the technology works.
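To make that tradeoff concrete, here is a minimal sketch of hash-matching in TypeScript. It illustrates the general technique, not any platform’s actual code: production systems use perceptual hashes (PhotoDNA is the best-known) that survive resizing and re-encoding, while this sketch uses a plain SHA-256 digest, an empty placeholder hash list, and function names invented for the example.

    import { createHash } from "crypto";

    // Hypothetical set of digests for already-identified images, of the kind
    // a clearinghouse like NCMEC distributes to platforms. Left empty here.
    const knownHashes = new Set<string>();

    // Real systems use perceptual hashes that tolerate re-encoding;
    // SHA-256 just keeps this sketch simple.
    function hashUpload(fileBytes: Buffer): string {
      return createHash("sha256").update(fileBytes).digest("hex");
    }

    // Every unencrypted upload passes through this check, but only matches
    // against the known list would ever be reported.
    function checkUpload(fileBytes: Buffer): "report" | "ignore" {
      return knownHashes.has(hashUpload(fileBytes)) ? "report" : "ignore";
    }

The sketch shows why both sides have a point: nothing gets reported unless it matches a known image, but every piece of unencrypted content still has to be run through the check.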

But… the technology also works to find and catch CSAM. Europol’s executive director, Catherine De Bolle, pointed to concrete numbers:

Last year alone, Europol processed around 1.1 million of so-called CyberTips, originating from the National Center for Missing & Exploited Children (NCMEC), of relevance to 24 European countries. CyberTips contain multiple entities (files, videos, photos etc.) supporting criminal investigation efforts into child sexual abuse online.

If the current legal basis for voluntary detection by online platforms were to be removed, this is expected to result in a serious reduction of CyberTip referrals. This would undermine the capability to detect relevant investigative leads on CSAM, which in turn will severely impair the EU’s security interests of identifying victims and safeguarding children.

The companies that have been doing this scanning — Google, Microsoft, Meta, Snapchat, TikTok — released a joint statement saying they are “deeply concerned” and warning that the lapse will leave “children across Europe and around the world with fewer protections than they had before.”

So the EU’s privacy advocates aren’t wrong about the surveillance problem. Europol isn’t wrong about the child safety consequences. Both things are true — which is what makes this genuinely tricky rather than a case of one side being obviously right.

Now flip to the United States, where the problem is precisely inverted.

In the US, the existing system has been carefully constructed around a single, critical principle: companies voluntarily choose to scan for CSAM, and when they find it, they’re legally required to report it to NCMEC. The word “voluntarily” is doing enormous load-bearing work in that sentence — and most of the people currently shouting about CSAM don’t seem to know it. As Stanford’s Riana Pfefferkorn explained in detail on Techdirt when a private class action lawsuit against Apple tried to compel CSAM scanning:

While the Fourth Amendment applies only to the government and not to private actors, the government can’t use a private actor to carry out a search it couldn’t constitutionally do itself. If the government compels or pressures a private actor to search, or the private actor searches primarily to serve the government’s interests rather than its own, then the private actor counts as a government agent for purposes of the search, which must then abide by the Fourth Amendment, otherwise the remedy is exclusion.

If the government – legislative, executive, or judiciary – forces a cloud storage provider to scan users’ files for CSAM, that makes the provider a government agent, meaning the scans require a warrant, which a cloud services company has no power to get, making those scans unconstitutional searches. Any CSAM they find (plus any other downstream evidence stemming from the initial unlawful scan) will probably get excluded, but it’s hard to convict people for CSAM without using the CSAM as evidence, making acquittals likelier. Which defeats the purpose of compelling the scans in the first place.

In the US, if the government forces Apple to scan, that makes Apple a government agent. Government agents need warrants. Apple can’t get warrants. So the scans are unconstitutional. So the evidence gets thrown out. So the predators walk free. All because someone thought “just make them scan!” was a simple solution to a complex problem.

Congress apparently understood this when it wrote the federal reporting statute — that’s why the law explicitly disclaims any requirement that providers proactively search for CSAM. The voluntariness of the scanning is what preserves its legal viability. Everyone involved in the actual work of combating CSAM — prosecutors, investigators, NCMEC, trust and safety teams — understands this and takes great care to preserve it.

Everyone, apparently, except the Attorney General of West Virginia. As we discussed recently, West Virginia just filed a lawsuit demanding that a court order Apple to “implement effective CSAM detection measures” on iCloud. The remedy West Virginia seeks — a court order compelling scanning — would spring the constitutional trap that everyone who actually works on this issue has been carefully avoiding for years.

As Pfefferkorn put it:

Any competent plaintiff’s counsel should have figured this out before filing a lawsuit asking a federal court to make Apple start scanning iCloud for CSAM, thereby making Apple a government agent, thereby turning the compelled iCloud scans into unconstitutional searches, thereby making it likelier for any iCloud user who gets caught to walk free, thereby shooting themselves in the foot, doing a disservice to their client, making the situation worse than the status quo, and causing a major setback in the fight for child safety online.

The reason nobody’s filed a lawsuit like this against Apple to date, despite years of complaints from left, right, and center about Apple’s ostensibly lackadaisical approach to CSAM detection in iCloud, isn’t because nobody’s thought of it before. It’s because they thought of it and they did their fucking legal research first. And then they backed away slowly from the computer, grateful to have narrowly avoided turning themselves into useful idiots for pedophiles.

The West Virginia complaint also treats Apple’s abandoned NeuralHash client-side scanning project as evidence that Apple could scan but simply chose not to. What it skips over is why the security community reacted so strongly to NeuralHash in the first place. Apple’s own director of user privacy and child safety laid out the problem:

Scanning every user’s privately stored iCloud content would in our estimation pose serious unintended consequences for our users… Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types (such as images, videos, text, or audio) and content categories. How can users be assured that a tool for one type of surveillance has not been reconfigured to surveil for other content such as political activity or religious persecution? Tools of mass surveillance have widespread negative implications for freedom of speech and, by extension, democracy as a whole.

Once you create infrastructure capable of scanning every user’s private content for one category of material, you’ve created infrastructure capable of scanning for anything. The pipe doesn’t care what flows through it. Governments around the world — some of them not exactly champions of human rights — have a well-documented habit of demanding expanded use of existing surveillance capabilities. This connects directly to the perennial fights over end-to-end encryption backdoors, where the same argument applies: you cannot build a door that only the good guys can walk through.

And then there’s the scale problem. Even the best hash-matching systems can produce false positives, and at the scale of major platforms, even tiny error rates translate into enormous numbers of wrongly flagged users.
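To put rough numbers on that, here is a quick back-of-the-envelope calculation. Both figures are illustrative assumptions, not numbers from Europol or from any filing.

    // Illustrative assumptions only; neither number comes from this article.
    const falsePositiveRate = 1e-6;    // one mistaken match per million items
    const itemsScannedPerYear = 10e9;  // ten billion uploads scanned per year

    // Expected number of innocent items flagged per year at that scale.
    const wronglyFlagged = falsePositiveRate * itemsScannedPerYear;
    console.log(wronglyFlagged);       // 10000

Even with an error rate that sounds vanishingly small, that works out to thousands of wrongly flagged items a year, each one potentially an innocent person accused of something horrific.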

This is one of those frustrating stories where you can… kinda see all sides, and there’s no easy or obvious answer:

Scanning works, at least somewhat. 1.1 million CyberTips from Europol in a single year. Some number of children identified and rescued because platforms voluntarily detected CSAM and reported it. The system produces real results.

Scanning is mass surveillance. Every image, every message gets examined (algorithmically), not just those belonging to suspected offenders. The privacy intrusion is real, not hypothetical, and it falls on everyone.

Compelled scanning breaks prosecutions. In the US, the Fourth Amendment means that government-ordered scanning creates a get-out-of-jail card for the very predators everyone claims to be targeting. The voluntariness of the system is what makes it legally functional.

Scanning infrastructure is repurposable. A system built to detect CSAM can be retooled to detect political speech, religious content, or anything else. This concern is not paranoid; it’s an engineering reality.

False positives at scale are inevitable. Even highly accurate systems will flag innocent content when processing billions of items, and the consequences for wrongly accused individuals are severe.

People can and will weigh these tradeoffs differently, and that’s legitimate. The tension described in all this is real and doesn’t resolve neatly.

But what both the EU Parliament’s vote and West Virginia’s lawsuit share is an unwillingness to sit with that tension. The EU stripped legal cover from the voluntary system that was actually producing results, without having a workable replacement ready. West Virginia is trying to compel what must remain voluntary, apparently without bothering to read the constitutional case law that makes compelled scanning self-defeating. From opposite directions, both approaches attack the same fragile voluntary architecture that currently threads the needle between these competing interests.

The status quo in the United States — voluntary scanning, mandatory reporting, no government compulsion to search — is far from perfect. But the system functions: it produces leads, preserves prosecutorial viability, and does so precisely because it was designed by people who understood the tradeoffs and built accordingly.

It would be nice if more policymakers engaged with why the system works the way it does before trying to blow it up from either direction. In tech policy, the loudest voices in the room are rarely the ones who’ve done the reading.

Posted on Techdirt - 31 March 2026 @ 03:34pm

Free Speech Experts: Jonathan Haidt’s Moral Panic Is As Old As Democracy Itself

We’ve been saying for years now that Jonathan Haidt’s crusade against social media and kids is a moral panic dressed up in academic robes, and that the evidence simply does not support the sweeping claims he’s been making. A new piece in the Wall Street Journal by Jacob Mchangama and Jeff Kosseff drives that point home with a framing that cuts straight to the absurdity of it all: this fear of new ideas “corrupting the youth” is literally as old as democracy itself.

In 399 BCE, Socrates was put on trial before a jury of some 500 of his fellow Athenians. The indictment accused him of impiety and added, “Socrates is…also guilty of corrupting the youth.” Despite the Athenian democracy’s commitment to free and equal speech, Socrates was found guilty and sentenced to death.

Two and a half millennia later, democracies are still deeply concerned about dangerous ideas corrupting the youth. This time, the target isn’t dangerous philosophy but an increase in teen mental-health issues blamed on social media.

Mchangama and Kosseff are particularly well-positioned to make this argument (and are both former Techdirt podcast guests). Mchangama’s prior book, Free Speech: A History from Socrates to Social Media, traced the full arc of free speech battles across civilizations, and the two of them have a forthcoming co-authored book, The Future of Free Speech, on the global decline of free speech protections. Meanwhile Kosseff’s three previous books all cover related free speech territory: The Twenty-Six Words that Created the Internet, Liar in a Crowded Theater, and The United States of Anonymous. These are people who have spent their careers studying exactly these patterns — the recurring cycle of moral panic, political opportunism, and the quiet erosion of rights that tends to follow.

Their piece walks through the problems with both the evidence and the policy responses that have sprung from Haidt’s work. On the evidence:

In 2024, a review of the scientific literature by a committee at the National Academies of Sciences, Engineering, and Medicine had found that despite some “potential harms,” the review “did not support the conclusion that social media causes changes in adolescent health at the population level.” A 2026 longitudinal study in the Journal of Public Health reached a similar conclusion. 

We covered these studies at the time, noting that they were far from the only such studies to go hunting for the alleged evidence of inherent harms to children using social media — and coming up empty. It is amazing how little attention these studies get compared to Haidt’s book. So it’s good to see Mchangama and Kosseff call them out.

They also highlight what gets lost when you reduce this to a simple “social media = bad” story:

“Social media has the potential to connect friends and family. It may also be valuable to teens who otherwise feel excluded or lack offline support,” according to the National Academies of Science report. It also highlights the possible benefits of online access for “young people coping with serious illness, bereavement, and mental health problems” as well as opportunities for learning and developing interests. 

That point is especially important for vulnerable teenagers whose offline environments may be isolating or hostile. This is why comparing social media to tobacco is questionable: The scientific consensus on smoking’s harms is unanimous and no one claims smoking has benefits. Neither is true for social media.

This is consistent with what experts told TES Magazine last fall — actual researchers in the field described Haidt’s work as “fear” rather than science, said they couldn’t believe a fellow academic wrote it, and pointed out basic logical flaws in his causal claims. It’s also consistent with what I found in my own detailed review of the book when it came out two years ago, where the cherry-picked data, the ignored contrary evidence, and the policy proposals based on gut feelings rather than research were all on full display.

What makes this even worse than a standard “well-meaning but wrong” situation is a study we wrote about earlier this year showing that the social media “addiction” narrative itself may be more harmful than social media. Researchers found that very few people show signs consistent with actual addiction, but every time the media amplifies stories about social media addiction, more people claim they’re addicted. And that belief makes them feel helpless — convincing them they have a pathological condition rather than habits they could simply change.

In other words, the moral panic is doing the exact same thing it accuses social media of doing: making people anxious, helpless, and convinced they can’t control their own behavior.

The cost of being wrong here is that parents, politicians, and schools ignore the real causes of teen mental health struggles: poverty, the closure of youth services, reduced access to mental health care, and the erasure of community support systems. It’s also that kids who genuinely rely on online communities — LGBTQ+ youth, kids with chronic illnesses, kids in hostile home environments — lose a lifeline. Mchangama and Kosseff make the same point, and now we can see the policy consequences playing out in real time.

And it goes even further. As Mchangama and Kosseff note, authoritarian governments are already using the “protect the children” framework as cover for broader censorship:

Authoritarian and illiberal states provide a grim window into how the protection of children can be weaponized to suppress dissent. In 2012, Russia enacted an internet blacklist law, with the stated intention of protecting children from harmful content. The law laid the groundwork for Russia’s heavily censored “Red Web” that now entirely prohibits many foreign social-media platforms.

The same goes in Indonesia which this month announced a ban on social media for those under 16. But Indonesia is also a country that has used the pretext of child protection to block and censor gay social networking apps and content.  

It’s a remarkable blind spot for those pushing Haidt’s arguments. They never seem to consider that these are the exact same tools authoritarian governments use to silence marginalized voices. You would think that politicians championing this book — particularly Democrats who claim to care about civil liberties and LGBTQ rights — might pause when they see Russia and Indonesia deploying identical justifications.

And yet politicians across the spectrum continue to treat Haidt’s book like scripture, despite an overwhelming expert consensus that his claims don’t hold up.

Mchangama and Kosseff close with what should be obvious, but apparently still needs to be said:

Democracies have always worried about dangerous ideas corrupting the young. Intellectuals and lawmakers should absolutely be concerned about how and when our children navigate social media. But they should also be concerned about whether, in our rush to protect our children, we are building an infrastructure of surveillance and censorship that will ultimately threaten the hard-won freedoms we want future generations to enjoy.

Speech is powerful. Ideas have consequences. But we protect such speech from legal liability for that very reason. The power of speech to change minds and influence people is exactly why those in power are so often afraid of it and looking to tamp it down. It’s also why Mchangama and Kosseff can tie the urge back all the way to Socrates.

Every generation gets its moral panic. Every time, someone insists “this time it’s different.” Every time, the evidence eventually catches up and the panic looks ridiculous in retrospect. The tragedy is how much damage gets done in the meantime — to kids who lose a real lifeline, to free expression, to privacy, and to the actual causes of teen suffering that never get addressed because everyone was too busy blaming the latest app.

The verdict from the people who actually study this stuff has been clear for a while now. Maybe it’s time for politicians to put down Haidt’s book and pick up the actual research.

Posted on Techdirt - 31 March 2026 @ 11:09am

Weeks After Denouncing Government Censorship On Rogan, Zuckerberg Texted Elon Musk Offering To Take Down Content For DOGE

On January 10th, 2025, Mark Zuckerberg sat down with Joe Rogan and put on quite a performance. He detailed how the Biden administration had apparently pressured Meta to take down content — how officials called and screamed and cursed — and how, going forward, he was a changed man. A champion of free expression, done forever with government demands to remove content. And a whole bunch of people (especially MAGA folks) cheered all this on. Zuckerberg was a protector of free speech against government suppression!

Twenty-four days later, he texted Elon Musk — a senior government official at the time — to volunteer to remove content the government wouldn’t like. Unprompted.

As I wrote at the time, the whole Rogan interview was an exercise in misdirection. The “pressure” Zuck kept describing was the kind of thing the Supreme Court explicitly found, in the Murthy case, was standard-issue government communication — the kind of thing Justice Kagan said happens “literally thousands of times a day in the federal government.” The Court called the lower court’s findings of “censorship” clearly erroneous. And Zuck himself kept admitting, over and over, that Meta’s response to the Biden administration was to tell them no. He said so explicitly:

And basically it just got to this point where we were like, no we’re not going to. We’re not going to take down things that are true. That’s ridiculous…

In other words, the Biden administration asked, Meta said “nah,” and that was that. The Supreme Court agreed this fell well short of coercion. Indeed, the only documented instance of the Biden administration making an actual specific takedown request to a social media platform was to flag an account impersonating one of Biden’s grandchildren. That was it. That was the “massive government censorship operation.”

But Zuck milked it beautifully on the podcast, and Rogan ate it up. The narrative was established: Zuckerberg, defender of free expression, standing tall against the censorious government, vowing to never again let officials dictate what stays up and what comes down on his platforms.

That was January 10th.

On February 3rd, Zuckerberg texted Elon Musk:

Looks like DOGE is making progress. I’ve got our teams on alert to take down content doxxing or threatening the people on your team. Let me know if there’s anything else I can do to help.

So the man who spent three hours performing righteous indignation about government censorship proactively reached out to a senior government official to let him know Meta was already taking action to remove content on behalf of that official’s government operation — including truthful information like the names of public servants working for the federal government.

“Let me know if there’s anything else I can do to help.”

Weeks after vowing never again to let the government dictate what stays up and what comes down, he volunteered exactly that kind of service, unprompted, to the same government. Just with a different party in power.

The Biden administration’s alleged “coercion” amounted to strongly worded emails that Meta freely ignored, and its only documented specific takedown request was for an account literally pretending to be the president’s grandchild. Zuckerberg’s response to that: three hours on the world’s biggest podcast denouncing government censorship. His response to Musk’s DOGE operation: a proactive late-night text offering to suppress information identifying the federal employees doing the dismantling.

And Zuck’s framing of “doxxing” is doing a lot of work here. The DOGE staffers whose identities were being shared on social media were federal employees exercising enormous government power — canceling grants, accessing sensitive government databases, making decisions that affected millions of Americans. The administration went to great lengths to hide who these people were, precisely because what they were doing was controversial and, in many cases, potentially illegal. Identifying who is wielding government power on your behalf has a name, and that name is accountability, not “doxxing.”

Notably, the Zuckerberg text came the day after Wired started naming DOGE bros. Which is reporting. Not doxxing. Doxxing is revealing private info, such as an address. A federal employee’s name is not private info. It’s just journalism.

Also notice how Zuckerberg bundles “doxxing or threatening” — conflating two very different things. Removing credible threats of violence is something every platform already does; it’s in every terms of service. But by packaging the identification of public servants alongside actual threats, Zuck makes the whole thing sound like a routine trust-and-safety operation rather than what it actually was: volunteering to help the government hide its own employees from public scrutiny.

Compare the two scenarios directly. The Biden administration flagged a fake account impersonating a minor family member of the president — a clear-cut case of impersonation that every platform’s rules already cover. In other cases, they simply asked Facebook to explain its policies for dealing with potential health misinformation in the middle of a pandemic. Zuckerberg’s response, per his Rogan narrative, was to tell them to pound sand, and then go on a podcast to brag about it. Meanwhile, when it came to Musk and DOGE, it looks like Zuck didn’t wait to be asked. He texted Elon Musk at 10 PM on a Monday night to let him know the teams were already mobilized. He closed with “let me know if there’s anything else I can do to help,” which is really more “eager intern” energy than “principled defender of free expression” energy.

It’s also worth noting the broader context of the relationship here. These two were, at least publicly, supposed to be rivals. Remember the whole cage match fiasco? The very public trash-talking? And yet here’s Zuck texting Musk late at night, opening with flattery (“Looks like DOGE is making progress”), offering content suppression as a gift, and then — in literally the next breath in the text exchange — Musk pivots to asking Zuck if he wants to join a bid to buy OpenAI’s intellectual property.

“Are you open to the idea of bidding on the OpenAI IP with me and some others?” Musk asked. Zuck suggested they discuss it live. Just a couple of billionaires doing billionaire things at 10:30 PM after one of them volunteered censorship services to the other’s government operation.

We only know about any of this, by the way, because of Musk’s quixotic lawsuit against OpenAI. These texts were designated as a trial exhibit by OpenAI’s lawyers. Musk’s team is now trying to get them excluded from evidence. The motion seeking to suppress this evidence opens with one of the more entertaining paragraphs you’ll find in a legal filing:

President Trump. Burning Man. Rhino ketamine. These are all inflammatory and highly irrelevant topics that Defendants are trying to improperly make the subject of this litigation. Throughout fact discovery, Defendants have gratuitously probed these topics, and their trial evidence disclosures make clear that they intend to use the same scandalizing tactics at trial. Defendants should not be allowed to exploit Musk’s political involvement, social or recreational choices, or gratuitous details of his personal life at trial. As detailed below, Musk is the subject of daily, often-fabricated media scrutiny.

The filing goes on to argue that the Zuckerberg text exchange has “nothing to do with Musk’s claims” and amounts to an attempt to “stoke negative sentiments toward Musk because of his association with Zuckerberg.” Which is a fun way to describe a text message in which a tech CEO volunteers content moderation favors to a government official. Musk’s lawyers aren’t wrong that it’s embarrassing — just not for the reasons they think.

The hypocrisy, though, is almost beside the point. The entire Rogan performance was designed to establish a narrative: that the Biden administration engaged in some kind of unprecedented censorship campaign, and that Zuckerberg was bravely standing up to it. That narrative was then used to justify Meta’s decision to end its fact-checking programs and loosen its content policies — framed as a return to “free expression” principles.

But the Zuck-Musk texts show what those “free expression” principles actually look like in practice. Zuck is more than happy to suppress speech when he supports the person in the White House. It’s only when he doesn’t like the person in the White House that he gets to pretend he’s a free speech warrior.

This has nothing to do with free expression. It’s about power. Who has it, who Zuckerberg thinks he needs to stay on the right side of, and who he thinks he can safely perform outrage against. The Biden administration was on its way out the door when Zuck did the Rogan interview, making them a perfectly safe target for his “never again” act. Musk was ascendant, running a government operation backed by a president who had directly threatened to throw Zuckerberg in prison.

So the principled free speech stance lasted less than a month before Zuck was back to volunteering content suppression — this time without even being asked, for the people who actually had the power to hurt him. And that’s just the text message that surfaced in an unrelated lawsuit. The rest of the ledger isn’t public.

Some defender of free expression.

Posted on Techdirt - 30 March 2026 @ 11:00am

The White House App’s Propaganda Is The Least Alarming Thing About It

Call me crazy, but I don’t think an official government app should be loading executable code from a random person’s GitHub account. Or tracking your GPS location in the background. Or silently stripping privacy consent dialogs from every website you visit through its built-in browser. And yet here we are.

The White House released a new app last week for iOS and Android, promising “unparalleled access to the Trump Administration.” A security researcher, who goes by Thereallo, pulled the APKs and decompiled them — extracting the actual compiled code and examining what’s really going on under the hood. The propaganda stuff that Engadget covered — cherry-picked news, a one-tap button to report your neighbors to ICE, a text that auto-populates “Greatest President Ever!” — is embarrassing enough. The code underneath is something else entirely.

Let’s start with the most alarming behavior. Every time you open a link in the app’s built-in browser, the app silently injects JavaScript and CSS into the page. Here’s what it does:

It hides:

  • Cookie banners
  • GDPR consent dialogs
  • OneTrust popups
  • Privacy banners
  • Login walls
  • Signup walls
  • Upsell prompts
  • Paywall elements
  • CMP (Consent Management Platform) boxes

It forces body { overflow: auto !important } to re-enable scrolling on pages where consent dialogs lock the scroll. Then it sets up a MutationObserver to continuously nuke any consent elements that get dynamically added.
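For anyone curious what that technique looks like in practice, here is a minimal sketch of WebView-injected consent stripping, written as TypeScript running in the page context. This is an illustration of the behavior the researcher describes, not code from the app itself, and the selectors are placeholders chosen for the example.

    // Illustrative sketch of the technique described above; not the app's
    // actual source. Runs inside every page the in-app browser loads.
    const SELECTORS = [
      "#onetrust-consent-sdk",   // OneTrust popups
      ".cookie-banner",          // generic cookie / GDPR banners
      ".paywall",                // paywall elements
      '[class*="consent"]',      // CMP boxes
    ];

    function hideAll(root: ParentNode): void {
      for (const sel of SELECTORS) {
        root.querySelectorAll<HTMLElement>(sel).forEach((el) =>
          el.style.setProperty("display", "none", "important")
        );
      }
    }

    // Re-enable scrolling that consent dialogs lock down.
    document.body.style.setProperty("overflow", "auto", "important");

    // Hide whatever is already on the page, then keep watching for consent
    // elements added dynamically and hide those as well.
    hideAll(document);
    new MutationObserver(() => hideAll(document)).observe(document.body, {
      childList: true,
      subtree: true,
    });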

An official United States government app is injecting CSS and JavaScript into third-party websites to strip away their cookie consent dialogs, GDPR banners, login gates, and paywalls.

Yiiiiiiiiiiiiikes.

And, yes, I can already hear a certain subset of readers thinking: “Sounds great, actually. Cookie banners are annoying.” And sure, there are good reasons why millions of people use browser extensions like uBlock Origin to do exactly this kind of thing. In fact, if you don’t use tools like that, you probably should. Those consent dialogs are frequently implemented as obnoxious dark patterns, and stripping them out is a perfectly reasonable personal choice.

But the key word there is choice. When you install an ad blocker or a consent-banner nuker, you’re making an informed decision about your own browsing experience. When the White House app does it silently, on every page load, without telling you — that’s the government making that decision for you in a deceptive and technically concerning way. And those consent dialogs exist in the first place because of legal requirements, in many cases requirements that governments themselves have enacted and enforce. There’s something almost comically stupid about the executive branch of the United States shipping code that silently destroys the legal compliance infrastructure of every website you visit through its app.

Then there’s the location tracking. The researcher found that OneSignal’s full GPS tracking pipeline is compiled into the app:

Latitude, longitude, accuracy, timestamp, whether the app was in the foreground or background, and whether it was fine (GPS) or coarse (network). All of it gets written into OneSignal’s PropertiesModel, which syncs to their backend.

The White House app. Tracking your location. Synced to a commercial third-party server. For press releases.

Oh and:

There’s also a background service that keeps capturing location even when the app isn’t active.

To be clear — and the researcher is careful to be precise about this — there are several gates before this tracking activates. The user has to grant location permissions, and a flag called _isShared has to be set to true in the code. Whether the JavaScript bundle currently flips that flag is something that can’t be determined from the decompiled native code alone. What can be determined is that, as the researcher puts it:

the entire pipeline including permission strings, interval constants, fused location requests, capture logic, background scheduling, and the sync to OneSignal’s API, all of them are fully compiled in and one setLocationShared(true) call away from activating. The withNoLocation Expo plugin clearly did not strip any of this.

So at best, the people who built this app tried to disable location tracking and failed. At worst, they have it set up to actually use. The plumbing is all there, fully functional, waiting to be turned on. And this is detailed, accurate GPS data, collected every four and a half minutes when you’re using the app and every nine and a half minutes when you’re not, synced to OneSignal’s commercial servers. For a government app. That’s supposed to show you press releases.
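To make the “one call away” point concrete, here is a conceptual sketch of the gating the researcher describes. This is not OneSignal’s actual API and not the app’s real code; the type, the flag, and the sync function are stand-ins used to show how a fully built pipeline can sit dormant behind a single boolean.

    // Conceptual illustration only; not OneSignal's API, not the app's code.
    interface LocationPoint {
      lat: number;
      lng: number;
      accuracy: number;     // meters
      timestamp: number;    // epoch milliseconds
      background: boolean;  // captured while the app was backgrounded?
      coarse: boolean;      // network-derived rather than a GPS fix
    }

    // The decompiled code gates syncing on a flag like this one.
    let isShared = false;

    function setLocationShared(value: boolean): void {
      isShared = value;     // the single call that would switch the pipeline on
    }

    function maybeSync(point: LocationPoint, hasOsPermission: boolean): void {
      // Both gates must be open before anything leaves the device.
      if (!hasOsPermission || !isShared) return;
      sendToVendorBackend(point);  // stand-in for the SDK's backend sync
    }

    function sendToVendorBackend(point: LocationPoint): void {
      console.log("syncing location", point);  // placeholder
    }

That is the researcher’s point: the capture and sync machinery is all compiled in, and the only open question is whether anything ever flips the flag.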

While it’s true that the continued lack of a federal privacy law probably means this is all technically legal, it’s still a wild thing for an app from the federal government to do.

And it gets better. Or worse, depending on your perspective. The app embeds YouTube videos by loading player HTML from… a random person’s GitHub Pages account:

The app embeds YouTube videos using the react-native-youtube-iframe library. This library loads its player HTML from:

https://lonelycpp.github.io/react-native-youtube-iframe/iframe_v2.html

That’s a personal GitHub Pages site. If the lonelycpp GitHub account gets compromised, whoever controls it can serve arbitrary HTML and JavaScript to every user of this app, executing inside the WebView context.

This is a government app loading code from a random person’s GitHub Pages.

Cool, cool. Totally normal dependency for critical government infrastructure.
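What makes this extra galling is that it’s avoidable. If I’m reading the library’s documentation correctly, react-native-youtube-iframe exposes a baseUrlOverride prop that lets you serve the player HTML from infrastructure you actually control instead of the default GitHub Pages URL. A minimal sketch, with a hypothetical self-hosted URL standing in for whatever the government would actually host:

import React from 'react';
import YoutubePlayer from 'react-native-youtube-iframe';

// Sketch only. The self-hosted URL below is a hypothetical placeholder.
export const PressBriefingVideo = ({ videoId }: { videoId: string }) => (
  <YoutubePlayer
    height={220}
    videoId={videoId}
    // Without this override, every user's WebView loads executable HTML/JS
    // from lonelycpp.github.io, infrastructure nobody on the project controls.
    baseUrlOverride={'https://example.gov/youtube-iframe.html'}
  />
);

One prop. That’s the gap between “we reviewed our supply chain” and “we hope a stranger’s GitHub account never gets popped.”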

It also loads JavaScript from Elfsight, a commercial SaaS widget company, with no sandboxing. It sends email addresses to Mailchimp. It hosts images on Uploadcare. It has a hardcoded Truth Social embed pulling from static CDN URLs. None of this is government-controlled infrastructure. The list goes on and on and on.

There’s way more in the full breakdown by Thereallo — this is just the highlights. The app is a toxic waste dump of code you should not trust.

Each of these findings individually might have a charitable explanation. Libraries ship with unused code all the time. Lots of apps use third-party services. Dev artifacts occasionally slip through. But stack them all together — the silent consent stripping, the fully compiled location tracking pipeline, the random GitHub dependency, the commercial third-party data flows, the dev artifacts in production, the zero certificate pinning — and the picture that emerges is of software built by people who either don’t know or don’t care about the standards government software is supposed to meet.

Which brings us to the part that makes all of this even more inexcusable. The United States government used to have people whose entire job was to prevent exactly this kind of thing.

The U.S. Digital Service was created after the Healthcare.gov disaster during the Obama administration, specifically to bring real software engineering talent into the federal government. For over a decade, across three administrations — including Trump’s first term — USDS and its sibling organization 18F recruited experienced engineers, designers, and product managers from the private sector to build government technology that actually worked. These were people who would have caught a full GPS tracking pipeline sitting one function call from activation in what is supposed to be a press release reader, and who would never have loaded executable code from a random person’s GitHub account.

DOGE fired them. Elon Musk’s “Department of Government Efficiency” gutted USDS and 18F — the organizations that were actually doing what DOGE claimed to be doing — and replaced their expertise with… whatever this is. An app built by an outfit called “forty-five-press” according to the Expo config, running on WordPress, with “Greatest President Ever!” hardcoded in the source, loading code from some random person’s GitHub Pages, and shipping the developer’s home IP address to the public.

This is what you get when you fire the people who know what they’re doing and replace them with loyalists: a government app that strips privacy consent dialogs, has a GPS tracking pipeline ready to flip on, depends on infrastructure the government doesn’t control, and ships with the digital equivalent of leaving your house keys taped to the front door. But hey, at least it makes it easy to report your neighbors to ICE.

Posted on Techdirt - 27 March 2026 @ 12:56pm

Turns Out That Advertisers Not Wanting To Fund Neo-Nazi-Adjacent Content Isn’t An Antitrust Violation

Remember when Elon Musk told advertisers to “go fuck” themselves and then sued them for the crime of taking his advice? A federal judge has now dismissed that lawsuit — with prejudice — confirming what anyone with a passing familiarity with antitrust law already knew: companies deciding they don’t want their brands plastered next to extremist content aren’t engaged in an illegal conspiracy. They’re just making basic (probably pretty smart) business decisions.

When X Corp filed this case back in August of 2024, we walked through in great detail why the legal theory was fundamentally broken. Not broken in a “they pleaded it badly” kind of way, but broken in a “this theory does not describe an antitrust violation no matter how many drugs you’re taking or how convinced you are that the world owes you advertising dollars” kind of way. Judge Jane Boyle of the Northern District of Texas has now agreed, and the key section of her ruling is worth reading in full, because it says what we said at the outset: X has not suffered antitrust injury.

The court laid out the standard for what counts as antitrust injury, quoting Fifth Circuit precedent that channels the Supreme Court:

The Supreme Court has distilled antitrust injury as being “injury of the type the antitrust laws were intended to prevent and that flows from that which makes defendants’ acts unlawful.” … “The antitrust laws … were enacted for ‘the protection of competition not competitors.'” … “Typical” antitrust injury thus “include[s] increased prices and decreased output.” … “This circuit has narrowly interpreted the meaning of antitrust injury, excluding from it the threat of decreased competition.” … “Loss from competition itself—that is, loss in the form of customers[] choosing the competitor’s goods and services over the plaintiff’s—does not constitute an antitrust injury.” … In short, the question underlying antitrust injury is whether consumers—not competitors—have been harmed.

Antitrust law protects competition, not competitors. X’s entire argument boiled down to: “advertisers chose to spend their money somewhere other than our platform, and that hurt us.” But that’s just… the market. That’s how markets work. Customers choosing not to buy from you because they don’t like what you’re selling has never been an antitrust violation, and the court made short work of explaining why.

Amusingly, the GOP — whose campaigns Musk has bankrolled extensively — spent decades pushing for exactly this narrow definition of antitrust injury, precisely to make cases like this harder to bring. Perhaps one of those politicians could have mentioned that before Elon filed.

But this case was never actually about winning an antitrust case. It was a warning shot at advertisers: give Elon your money or we’ll drag you through an expensive court process. A shakedown dressed up in legal filings. Indeed, after the lawsuit was filed, it was reported that part of X’s “sales” process was to threaten companies that they’d be added to the lawsuit if they didn’t advertise on the platform.

The court examined X’s theory from two different angles, and it failed both times. First, if the conspiracy was supposed to benefit competing social media platforms (like Pinterest, one of the defendants), X hadn’t alleged that any competitor was actually behind the boycott or pressuring advertisers to exclude X so the competitor could corner the market:

X has not alleged that the advertisers chose to do business with Pinterest—or any other social media company—as part of an agreement not to do business with X. Unlike the large hospital in Doctor’s Hospital, Pinterest is not alleged to be X’s competitor that wanted to exclude X from the market so that it could charge higher prices. In turn, unlike the network in Doctor’s Hospital, the advertisers did not decide to boycott X at Pinterest’s—or any other X competitor’s—behest to secure the competitor’s business. Instead, X alleges a conspiracy driven by advertisers not to further X-competitor social media companies’ interests but to pursue their own collective interests as to where they place their advertisements.

Second, if the conspiracy was supposed to eliminate competition at the advertiser level, the court found that GARM wasn’t acting as some kind of gatekeeper blocking X from accessing customers. It was just… advertisers deciding for themselves:

GARM is not an economic intermediary like the retailers in Eastern States. GARM did not buy advertising space from X to sell to advertisers nor did it, in such an arrangement, tell X not to sell directly to GARM’s customers. Rather, GARM was organized by advertisers and reflected their “avowed commitment to furthering [their] economic interests . . . as a group.” … Thus, if GARM is the obstacle to X reaching its advertiser-customers directly, then it is the equivalent of the advertiser-customers themselves deciding not to deal.

That’s the ballgame. Advertisers collectively deciding they don’t want to spend money on your platform — especially after you’ve told them to go fuck themselves and your platform has become a haven for content that damages their brands — just doesn’t state an antitrust claim. Imagine being so entitled that, when the marketplace rejects your offering, you insist it must be an antitrust conspiracy against your right to their money.

The court was so confident in this conclusion that it dismissed the case with prejudice and denied X the opportunity to replead, noting that the 165-paragraph complaint was already plenty detailed:

The 165-paragraph Second Amended Complaint contains no dearth of detail: if facts existed that GARM operated at an X competitor’s behest to put X out of business or that GARM advertisers sought to unfairly exclude competing advertisers from doing business, X would have pleaded those facts. The very nature of the alleged conspiracy does not state an antitrust claim, and the Court therefore has no qualm dismissing with prejudice.

When a court tells you the nature of your theory doesn’t work, that’s about as definitive a loss as you can get.

As we noted when the case was filed, the evidence X submitted in its own complaint actually undermined the case. One of X’s own exhibits showed GARM’s lead, Rob Rakowitz, explicitly telling an advertiser that GARM doesn’t make recommendations and that advertising decisions are “completely within the sphere of each member and subject to their own discretion.” Another email showed Rakowitz telling an advertiser asking about Twitter that “you may want to connect with Twitter directly to understand their progress on brand safety and make your own decisions.” This is the supposedly nefarious conspiracy that X spent years and untold legal fees litigating.

Separately, I have to mention the blatant forum shopping: X filed this case in the Wichita Falls Division of the Northern District of Texas, which was widely understood as a transparent attempt to land in front of Judge Reed O’Connor, known for partisan rulings and already presiding over Elon’s SLAPP suit against Media Matters. That didn’t work out — O’Connor recused himself, not because of his ownership of Tesla stock, but because of his holdings in some of the advertising firms named as defendants. The case got reassigned to Judge Boyle, and X still lost. In an ironic twist, X then tried to transfer the case to the Southern District of New York, only to have the court deny that motion because X couldn’t even show it did business in that specific district. So X handpicked a forum, lost its judge, and then couldn’t escape to a different one. Great lawyering.

But the legal dismissal, satisfying as it is, doesn’t capture the most important part of what actually happened here. Because while the court correctly found that X suffered no antitrust injury, GARM itself suffered a very real injury: it was killed.

GARM shut down within days of the lawsuit being filed, following Rep. Jim Jordan’s misleading congressional investigation that painted the organization as some kind of anti-conservative censorship machine. Jordan’s pressure campaign, combined with the threat of expensive litigation from the world’s richest man, made it untenable for GARM to continue operating. The organization that existed to help advertisers make informed decisions about brand safety — a fundamentally expressive activity, protected by the First Amendment — was destroyed through government jawboning and litigation threats.

There was only one attack on free speech involved here and it came from Jim Jordan and Elon Musk, not GARM or its advertiser members.

X filed this lawsuit wrapped in the language of free speech. Former X CEO Linda Yaccarino literally wore a necklace that said “free speech” while announcing the case, claiming that advertisers not giving X money was somehow an attack on users’ ability to express themselves. The actual speech suppression ran the other direction entirely. A private organization exercising its speech rights to help its members make informed business decisions was bullied out of existence through a combination of congressional intimidation and frivolous litigation.

Jordan celebrated GARM’s dissolution as a victory for free speech — par for the course for the censorial MAGA GOP. A congressman used the weight of his office to pressure a private organization into shutting down, and called that free speech. Meanwhile, the lawsuit that was part of that same ecosystem of intimidation has now been found to have no legal merit whatsoever.

This is what actual jawboning looks like in practice. The lawsuit didn’t need to succeed to accomplish its goal. GARM is gone. The organization that facilitated conversations among advertisers about how to protect their brands has been silenced. The chilling effect on any future organization that might want to do something similar is obvious and intentional. Any industry group that tries to coordinate around brand safety now knows that it might face a billionaire-funded lawsuit and a congressional investigation for its trouble.

The court’s ruling is a vindication of basic antitrust law. But the more important point is about what the actual free speech dynamics were in this whole saga.

X can appeal, of course, and given that this falls within the Fifth Circuit, stranger things have happened. But the fundamental problem remains what it’s always been: the theory that advertisers owe you their business because you exist, and that organizing around brand safety is a criminal conspiracy, has never been a viable legal argument. The court said so plainly. Dismissed with prejudice. Nothing to fix, because the whole premise was broken from the start.

Posted on Techdirt - 27 March 2026 @ 09:27am

The Missouri v. Biden ‘Settlement’ Is A Fake Victory For A Case They Lost

Last week, Senator Eric Schmitt of Missouri got into a heated exchange during a Senate hearing with Stanford’s Daphne Keller. Schmitt, who, as Missouri’s Attorney General, originally filed the Missouri v. Biden lawsuit, was berating Keller over Stanford’s supposed role in helping the Biden administration censor social media during the 2020 election (see if you can spot the time-space continuum problem with that sentence). When Keller pushed back on his characterization of events, Schmitt got increasingly agitated and told her she could “read all about it in Missouri v. Biden.” Keller’s response was instant and devastating: “The one you lost?”

He did not take it well, immediately throwing an embarrassing Senatorial temper tantrum.

And so maybe it’s not surprising that just a week later, Schmitt was doing a victory lap over a “settlement” that his friends in the Trump administration very conveniently worked out with the remaining plaintiffs in the case. The framing, of course, was triumphant. From his post on social media:

Shorter version:

We just won Missouri v. Biden.

As Missouri’s Attorney General, I sued the Biden regime for brazenly colluding with Big Tech to silence Missouri families — censoring the truth about COVID, the Hunter Biden laptop, the open border, and the 2020 election. They tried to turn Facebook, X, YouTube, and the rest into their private speech police, labeling dissent “misinformation” while they pushed their narrative on the American people.

Missouri struck first—and Missouri won big.

And the New Civil Liberties Alliance, which represented many of the plaintiffs, was even more grandiose in its description of the settlement:

The federal government’s social media censorship was the most massive suppression of speech in the nation’s history, it was profoundly important to resist it.

Even the Washington Post editorial board got taken in, writing about the settlement as a “forceful affirmation of First Amendment principles.” Reclaim the Net went even further, claiming the decree represented a “formal, court-enforceable admission: the federal government pressured social media platforms to silence protected speech.”

There’s just one fairly big problem. None of this is true. The case was a dud. While it is true that the district court hyped it up as (a line the NCLA repeated) “the most massive attack against free speech in United States’ history,” no other court agreed. The Fifth Circuit found most of the claims flimsy and cut back nearly the entire injunction, and the Supreme Court threw the case out completely (“the one that you lost”), not only pointing out five separate times that there was “no evidence” to support the claims of censorship, but also calling out the district court’s findings, noting that they “appear to be clearly erroneous.”

It’s quite a misleading victory lap to keep quoting the judge whom both higher courts called out for claiming the evidence said things it clearly did not say (it was actually worse than that: the judge fabricated quotes to make it sound like there was evidence when there was not).

As for this “settlement,” anyone who actually reads it would realize that it doesn’t support any of the claims making the rounds.

Now the reason Schmitt claims he didn’t “lose” the case is because, technically, the Supreme Court rejected the case on “standing” grounds — meaning the plaintiffs hadn’t shown they had a legal right to bring the case. But the reason they didn’t have standing was devastating to the plaintiffs’ entire theory. The opinion methodically dismantled the conspiracy theory at the heart of the case:

We reject this overly broad assertion. As already discussed, the platforms moderated similar content long before any of the Government defendants engaged in the challenged conduct. In fact, the platforms, acting independently, had strengthened their pre-existing content-moderation policies before the Government defendants got involved. For instance, Facebook announced an expansion of its COVID–19 misinformation policies in early February 2021, before White House officials began communicating with the platform. And the platforms continued to exercise their independent judgment even after communications with the defendants began. For example, on several occasions, various platforms explained that White House officials had flagged content that did not violate company policy.

The Court further called out how the lower courts had built their case on lies and misrepresentations:

The District Court found that the defendants and the platforms had an “efficient report-and-censor relationship.”… But much of its evidence is inapposite. For instance, the court says that Twitter set up a “streamlined process for censorship requests” after the White House “bombarded” it with such requests. The record it cites says nothing about “censorship requests.” Rather, in response to a White House official asking Twitter to remove an impersonation account of President Biden’s granddaughter, Twitter told the official about a portal that he could use to flag similar issues. This has nothing to do with COVID–19 misinformation.

In other words, the Supreme Court looked at the actual record, found a pile of conspiratorial nonsense, and told the lower courts they got played. This was a loss. A clear, unambiguous loss.

But of course, with Trump back in office and the same crew of ideologues now running the government, it was time to manufacture a win. And so we get this “consent decree.”

On paper, it sounds dramatic. The NCLA breathlessly announced that the settlement “prohibits the U.S. Surgeon General, Centers for Disease Control and Prevention (CDC), and Cybersecurity and Infrastructure Security Agency (CISA) from threatening social media companies into removing or suppressing constitutionally protected speech.” Schmitt claimed the decree means “no more threats of legal, regulatory, or economic punishment. No more coercion. No more unilateral direction or veto of platform decisions.”

But if you actually read the consent decree (and I encourage you to do so, because clearly many of the people celebrating it haven’t), you find something remarkable: the decree prohibits conduct that the Supreme Court found no evidence was happening, while explicitly carving out everything that actually was happening.

First, the decree only applies to three remaining individual plaintiffs (Dr. Aaron Kheriaty, Jill Hines, and Jim Hoft) and two states, and only on five specific platforms. It doesn’t protect anyone else. If you’re a random American whose content gets moderated on social media, this decree does absolutely nothing for you. That certainly doesn’t match what Schmitt claimed.

Second, and more importantly, paragraph 24 of the decree is where the whole thing collapses:

This prohibition does not extend to providing Social-Media Companies with information that the companies are free to use as they wish. Nor does it extend to statements by government officials that posts on Social Media Companies’ platforms are inaccurate, wrong, or contrary to the Administration’s views, unless those statements are otherwise coupled with a threat of punishment within the meaning of the above provision.

That paragraph basically describes exactly what the Biden administration was actually doing — and declares it fine. The government can still share information with social media companies. It can still tell companies that content on their platforms is wrong or inaccurate. It can still express displeasure. It just can’t couple those statements with threats of punishment.

Which is… exactly what the First Amendment already requires. And exactly what the Supreme Court found was not happening in the first place. The consent decree literally codifies the Biden administration’s actual conduct as permissible while grandly prohibiting a phantom version of events that the Supreme Court found no evidence of.

Even better, paragraph 17 of the decree says the quiet part out loud:

The parties acknowledge that this Agreement is entered into solely for the purpose of settling and compromising any remaining claims in this action without further litigation, and, except as stated explicitly in the text of the Agreement, it shall not be construed as evidence or as an admission regarding any issues of law or fact, or regarding the truth or validity of any allegation or claim raised in this action or in any other action.

So the decree is explicitly not an admission of anything. It cannot be construed as evidence of any wrongdoing. The government didn’t admit to censorship. Reclaim the Net’s headline — “US Government Admits Pressuring Social Media Platforms to Censor Protected Speech” — is directly contradicted by the text of the document they’re supposedly celebrating. Did they not read it?

Yes, the preamble quotes Trump’s executive order making grand accusations about Biden-era censorship. But that’s a political document, not a finding of fact. The Trump administration saying the Biden administration did bad things is hardly the same as the Biden administration admitting it did bad things, or a court finding that it did bad things. In fact, the only court to substantively examine the evidence — the Supreme Court — found no evidence to support these claims.

So what we have here is a neat little trick: the Trump administration negotiates a settlement with friendly plaintiffs (some of whom had to drop out of the case because they joined the Trump administration), quotes Trump’s own executive order as if it were established fact, and everyone involved pretends this vindicates the original claims — despite the Supreme Court (and a clean reading of the evidence) having rejected them.

Speaking of those former plaintiffs, let’s talk about the delicious absurdity of how this case ate itself. Dr. Jay Bhattacharya, one of the original individual plaintiffs who claimed he was censored by the Biden administration, had to drop out of the case because he was confirmed as Director of the National Institutes of Health — the agency he claimed (without evidence) had “censored him” even though his lawyers somehow forgot to add NIH as a defendant. Dr. Martin Kulldorff similarly withdrew because of his new role within the Department of Health and Human Services. The supposed victims of government censorship are now running the very agencies they accused of censoring them. And, again, I have to reinforce that the Supreme Court called out the lack of actual “censorship” for either of these guys.

Both Bhattacharya and Kulldorff were mad that Facebook restricted access to the Great Barrington Declaration, a document they co-authored. But they fail to mention that the person running the Great Barrington Declaration website has publicly revealed that the reason Facebook blocked it was that anti-vaxxers mass-reported the site — because they misread the declaration as supporting “forced vaccinations.” (There are more details at the link above.)

Despite all of this, the fact that they became top officials in the Trump administration should naturally raise questions about how the administration so suddenly worked out a friendly settlement with their friends who were still plaintiffs. What a coincidence.

But the real tell is what’s happening right now, while MAGA is celebrating: the Trump administration is doing things far worse than anything Biden was even accused of. Yes, even as the administration and its gullible friends pat themselves on the back for supposedly defending free speech from the horrors of the Biden administration sharing information with social media companies, it is engaged in conduct that goes far, far beyond anything alleged in Missouri v. Biden.

As you’ll certainly recall, the Trump administration’s FCC Chair Brendan Carr went on a podcast and explicitly threatened Disney with regulatory retaliation over Jimmy Kimmel’s monologues, telling them “we can do this the easy way or the hard way.” Hours later, the show was pulled. That’s textbook coercion — exactly the kind that the Supreme Court in both Murthy and Vullo said would violate the First Amendment if proven. Unlike the conduct in the case that just settled, where the Supreme Court found no such proof.

And then we have the even clearer violation: Pam Bondi’s Department of Justice demanded that Apple and Google remove the ICEBlock app from their stores… and bragged about it! That’s the federal government literally ordering private companies to suppress an application. Not sending mean emails. Not sharing information platforms are free to use as they wish. An explicit demand for removal.

“We reached out to Apple today demanding they remove the ICEBlock app from their App Store—and Apple did so,” Bondi added, according to the Fox report.

Where’s Schmitt’s outrage? Where’s the NCLA lawsuit? Where’s Philip Hamburger’s condemnation of “the most massive suppression of speech in the nation’s history”?

Nowhere. Because this was never really about free speech. This was about building a narrative that the Biden administration censored conservatives, manufacturing a legal document that appears to vindicate that claim (despite explicitly saying it doesn’t), and then using it as political cover while engaging in an even more extreme version of the conduct you claimed to oppose.

This perfectly matches the pattern Renee DiResta documented in her Lawfare review of Schmitt’s book — which he subtitled “how to beat the left in court” — where she noted his habit of presenting cases he lost as if he won them. The book apparently describes multiple lawsuits where Schmitt failed to achieve his stated legal objectives but then spun the results as massive victories for the narrative benefit. Missouri v. Biden is the crown jewel of this approach: lose at the Supreme Court, negotiate a meaningless consent decree with a friendly administration, declare total victory.

Even the Washington Post editorial board, which gave the decree far more credit than it deserved, couldn’t quite look away from the obvious:

The unfortunate catch is that the settlement only applies to the specific plaintiffs in this particular case. In other words, only the people who initially sued the Biden administration, and public officials from Louisiana and Missouri, will enjoy the court-ordered protections from government censorship. It’s unlikely the current administration would target right-leaning individuals or states, but the consent decree will apply for 10 years.

The settlement also applies only to government pressure on five companies: Facebook, Instagram, X (formerly Twitter), LinkedIn and YouTube. That means, for example, Federal Communications Commission Chairman Brendan Carr’s efforts to bully broadcasters to toe the administration’s political line will be unaffected.

So even the Post recognizes that the decree does nothing about actual, current, obvious government coercion of media companies. But somehow this is still a “forceful affirmation of First Amendment principles”? How so? A consent decree that protects three specific people from conduct that wasn’t happening, while the government signing the decree is actively coercing media companies in ways that obviously violate the First Amendment?

The consent decree is a press release disguised as a legal document. It prohibits First Amendment violations the Supreme Court found no evidence of, permits everything the evidence shows the Biden administration was actually doing, and was signed by an administration currently engaged in the exact conduct the decree pretends to prohibit.

The one you lost, indeed.

Posted on Techdirt - 26 March 2026 @ 04:20pm

Ctrl-Alt-Speech: For Meta Or Worse

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

Don’t forget to listen along with Ctrl-Alt-Speech’s 2026 Bingo Card and drop us a line if you win or have ideas for new squares.
