When a school district sues social media companies claiming they can’t educate kids because Instagram filters exist, that district is announcing to the world that it has fundamentally failed at its core mission. That’s exactly what New York City just did with its latest lawsuit against Meta, TikTok, and other platforms.
The message is unmistakable: “We run the largest school system in America with nearly a million students, but we’re unable to teach children that filtered photos aren’t real or help them develop the critical thinking skills needed to navigate the modern world. So we’re suing someone else to fix our incompetence.”
This is what institutional failure looks like in 2025.
NYC first got taken in by this nonsense last year, when Mayor Adams declared all social media a health hazard and toxic waste. That suit was rolled into the sprawling, almost impossible to follow consolidated case in California that currently has over 2300 filings on the docket. So, apparently, NYC dropped that version and has elected to sue, sue again. It’s working with the same damn law firm, Keller Rohrback, that kicked off this trend and is behind a big chunk of these lawsuits.
The actual complaint is bad, and everyone behind it should feel bad. It’s also 327 pages, and there’s no fucking way I’m going to waste my time going through all of it, watching my blood pressure rise as I have to keep yelling at my screen “that’s not how any of this works.”
The complaint leads with what should be Exhibit A for why NYC schools are failing their students—a detailed explanation of adolescent brain development that perfectly illustrates why education matters:
Children and adolescents are especially vulnerable to developing harmful behaviors because their prefrontal cortex is not fully developed. Indeed, it is one of the last regions of the brain to mature. In the images below, the blue color depicts brain development.
Because the prefrontal cortex develops later than other areas of the brain, children and adolescents, as compared with adults, have less impulse control and less ability to evaluate risks, regulate emotions and regulate their responses to social rewards.
Stop right there. NYC just laid out the neurological case for why education exists. Kids have underdeveloped prefrontal cortexes? They struggle with impulse control, risk evaluation, and emotional regulation? THAT’S LITERALLY WHY WE HAVE SCHOOLS.
The entire premise of public education is that we can help children develop these exact cognitive and social skills. We teach them math because their brains can learn mathematical reasoning. We teach them history so they can evaluate evidence and understand cause and effect. We teach them literature so they can develop empathy and critical thinking.
But apparently, when it comes to digital literacy—arguably one of the most important skills for navigating modern life—NYC throws up its hands and sues instead of teaches.
This lawsuit is a 327-page confession of educational malpractice.
The crux of the lawsuit is, effectively, “kids like social media, and teachers just can’t compete with that shit.”
In short, children find it particularly difficult to exercise the self-control required to regulate their use of Defendants’ platforms, given the stimuli and rewards embedded in those platforms, and as a foreseeable and probable consequence of Defendants’ design choices tend to engage in addictive and compulsive use. Defendants engaged in this conduct even though they knew or should have known that their design choices would have a detrimental effect on youth, including those in NYC Plaintiffs’ community, leading to serious problems in schools and the community.
By this logic, basically any products that children like are somehow a public nuisance.
This lawsuit is embarrassing to the lawyers who brought it and to the NYC school system.
Take the complaint’s hysterical reaction to Instagram filters, which perfectly captures the educational opportunity NYC is missing:
Defendants’ image-altering filters cause mental health harms in multiple ways. First, because of the popularity of these editing tools, many of the images teenagers see have been edited by filters, and it can be difficult for teenagers to remain cognizant of the use of filters. This creates a false reality wherein all other users on the platforms appear better looking than they actually are, often in an artificial way. As children and teens compare their actual appearances to the edited appearances of themselves and others online, their perception of their own physical features grows increasingly negative. Second, Defendants’ platforms tend to reward edited photos, through an increase in interaction and positive responses, causing young users to prefer the way they look using filters. Many young users believe they are only attractive when their images are edited, not as they appear naturally. Third, the specific changes filters make to individuals’ appearances can cause negative obsession or self-hatred surrounding particular aspects of their appearance. The filters alter specific facial features such as eyes, lips, jaw, face shape, and face slimness—features that often require medical intervention to alter in real life
Read that again. The complaint admits that “it can be difficult for teenagers to remain cognizant of the use of filters” and that kids struggle to distinguish between edited and authentic images.
That’s not a legal problem. That’s a curriculum problem.
A competent school system would read that paragraph and immediately start developing age-appropriate digital literacy programs. Media literacy classes. Critical thinking exercises about online authenticity. Discussions about self-image and social comparison that have been relevant since long before Instagram existed.
Instead, NYC read that paragraph and decided the solution is to sue the companies rather than teach the kids.
This is educational malpractice masquerading as child protection. If you run a million-student school system and your response to kids struggling with digital literacy is litigation rather than education, you should resign and let someone competent take over.
They’re also getting sued for… not providing certain features, like age verification. Even though, as we keep pointing out, age verification is (1) likely unconstitutional outside of the narrow realm of pornographic content, and (2) a privacy and security nightmare for kids.
The broader tragedy here extends beyond one terrible lawsuit. NYC is participating in a nationwide trend of school districts abandoning their educational mission in favor of legal buck-passing. These districts, often working with the same handful of contingency-fee law firms, have decided it’s easier to blame social media companies than to do the hard work of preparing students for digital citizenship.
This represents a fundamental misunderstanding of what schools are supposed to do. We don’t shut down the world to protect children from it—we prepare children to navigate the world as it exists. That means teaching them to think critically about online content, understand privacy and security, develop healthy relationships with technology, and build the cognitive skills to resist manipulation.
Every generation gets a moral panic or two, and apparently “social media is destroying kids’ brains” is our version of moral panics of years past. We’ve seen this movie before: the waltz would corrupt young women’s morals, chess would stop kids from going outdoors, novels would rot their brains on useless fiction, bicycles would cause moral decay, radio would destroy family conversation, pinball machines would turn kids into delinquents, television would make them violent, comic books would corrupt their minds, and Dungeons & Dragons would lead them to Satan worship.
Society eventually calmed down after each of those, and we now look back on them as silly, hysterical overreactions. You would hope a modern education system would take note and treat these new forms of media as a learning opportunity.
But faced with social media, America’s school districts have largely given up on education and embraced litigation. That should terrify every parent more than any Instagram filter ever could.
The real scandal isn’t that social media exists. It’s that our schools have become so risk-averse and educationally bankrupt that they’ve forgotten their core purpose: preparing young people to be thoughtful, capable adults in the world they’ll actually inherit.
It would be something of an understatement to say that Alphabet, Google’s holding company, is big and successful. Some Wall Street analysts are even predicting it could become the world’s most valuable corporation. Of course, even for business giants, enough is never enough. They always want more: more money, more power. As part of that tendency, Google seems to have decided that F-Droid, the free and open source app store for the Android platform, is a threat to the official Google Play Store that needs to be neutralized. At least that is likely to be the effect of Google’s announcement that it will require all Android developers to register and be verified before their apps can be allowed to run on certified Android devices. A post on the F-Droid blog explains what the problem is:
In addition to demanding payment of a registration fee and agreement to their (non-negotiable and ever-changing) terms and conditions, Google will also require the uploading of personally identifying documents, including government ID, by the authors of the software, as well as enumerating all the unique “application identifiers” for every app that is to be distributed by the registered developer.
According to the blog post, the impact on the F-Droid project would be severe:
the developer registration decree will end the F-Droid project and other free/open-source app distribution sources as we know them today, and the world will be deprived of the safety and security of the catalog of thousands of apps that can be trusted and verified by any and all. F-Droid’s myriad users will be left adrift, with no means to install — or even update their existing installed — applications.
Google says registration is needed to “better protect users from repeat bad actors spreading malware and scams”. Registration, it claims, “creates crucial accountability, making it much harder for malicious actors to quickly distribute another harmful app after we take the first one down.” Slightly less convenient for those actors, perhaps, but hardly much harder. The F-Droid blog post points out that its open source app store already has a far better approach to security than Google’s proposed registration and verification:
every [F-Droid] app is free and open source, the code can be audited by anyone, the build process and logs are public, and reproducible builds ensure that what is published matches the source code exactly. This transparency and accountability provides a stronger basis for trust than closed platforms, while still giving users freedom to choose. Restricting direct app installation not only undermines that choice, it also erodes the diversity and resilience of the open-source ecosystem by consolidating control in the hands of a few corporate players.
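That reproducible-builds point is worth making concrete. Here is a minimal sketch of the verification idea in Python; the file names are hypothetical, and F-Droid’s actual verification pipeline has to handle more details (notably APK signing), but the core check is just this: build the app yourself from the published source and confirm the bytes match what the store ships.

```python
# Minimal sketch of reproducible-build verification (hypothetical file
# names; the real F-Droid process also accounts for APK signatures).
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Build the app from the published source code, then compare your
# result against the binary the store actually distributes.
local_build = sha256_of("app-built-from-source.apk")
published = sha256_of("app-downloaded-from-store.apk")

if local_build == published:
    print("Reproducible: the published binary matches the source code.")
else:
    print("Mismatch: the published binary was not built from this source.")
```

Anyone can run that check themselves, which is why F-Droid can offer accountability without collecting anyone’s government ID.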
Google is at pains to emphasize that “Verified developers will have the same freedom to distribute their apps directly to users through sideloading or through any app store they prefer.” But that’s not true: their “freedom” will soon be conditional, subject to Google’s whim and veto (as the company’s recent removal of the ICE-spotting app ‘Red Dot’ demonstrates). As a special concession, the company says:
we are also introducing a free developer account type that will allow teachers, students, and hobbyists to distribute apps to a limited number of devices without needing to provide a government ID.
But again that is subject to Google’s approval, and only allows distribution to a “limited number of devices” – a circumscribed “freedom”, in other words. And for F-Droid it’s not even an option, because of the following:
How many F-Droid users are there, exactly? We don’t know, because we don’t track users or have any registration: “No user accounts, by design”
As the F-Droid post comments, Google’s move is not credibly about “security”, but actually about “consolidating power and tightening control over a formerly open ecosystem”:
If you own a computer, you should have the right to run whatever programs you want on it. This is just as true with the apps on your Android/iPhone mobile device as it is with the applications on your Linux/Mac/Windows desktop or server. Forcing software creators into a centralized registration scheme in order to publish and distribute their works is as egregious as forcing writers and artists to register with a central authority in order to be able to distribute their creative works. It is an offense to the core principles of free speech and thought that are central to the workings of democratic societies around the world.
Google’s attack on F-Droid is ironic. At the heart of Android, and the key element that allowed it to become so successful so quickly, is the GPL-licensed Linux kernel. Over the years, Google has increased its control over Android by adding more non-free elements. If, as seems likely, its latest move leads to the shutdown of the 15-year-old F-Droid platform, it would represent a further betrayal of the open source world it once supported.
Look, if you want to cut to the chase: the lawyers working for Google and Meta know that the MAGA world is very, very stupid and very, very gullible, and it’s very, very easy to tell them something that they know will be interpreted as a “victory” while actually signaling something very, very different. You could just reread my analysis of Meta and Mark Zuckerberg’s silly misleading caving to Rep. Jim Jordan last year, because this is more of the same.
This time it’s Google doing the caving, in a manner it absolutely knows doesn’t admit to the things Jordan and the MAGAverse will insist it admits. If anything, it admits the reverse. Specifically, Google sent a letter replying to some Jim Jordan subpoenas, which Jordan is claiming as a victory for free speech because the letter says things he can misrepresent as such.
Lots of very silly people (including Jordan) have been running around all week falsely claiming that Google has “admitted” that the Biden administration illegally censored people, and in response, they’re now reinstating accounts of people who were “unfairly censored.”
To be fair, this is what Google wants Jim Jordan and MAGA people to believe because it feeds into their pathetic victim narrative.
But it’s not what Google actually said for people who can read (and comprehend basic English). I won’t go through the entire letter, but let’s cover the supposed admission of censorship from the Biden admin:
Senior Biden Administration officials, including White House officials, conducted repeated and sustained outreach to Alphabet and pressed the Company regarding certain user-generated content related to the COVID-19 pandemic that did not violate its policies. While the Company continued to develop and enforce its policies independently, Biden Administration officials continued to press the Company to remove non-violative user-generated content.
It is not new, nor is it all that controversial, that the Biden administration did some outreach regarding COVID-19 content. But note what Google says here: “the Company continued to develop and enforce its policies independently.” In other words, Biden folks reached out, Google said “thanks, but that doesn’t violate our policies, so we’re not doing anything about it.”
Now, we can say that the government shouldn’t be in the business of telling private companies anything at all, but that’s a bit rich coming from the MAGA world that spent the last week focused on getting Disney to “moderate” Jimmy Kimmel out of a fucking job with actual threats of punishment if they failed to do so.
And that, once again, is the key issue: as the Supreme Court has long held, government officials are allowed to use “the bully pulpit” to seek to persuade companies as long as there is no implicit or explicit threat. Some will argue that the message here must have come with an implicit threat, and that’s an area where people can reasonably debate and differ, though the fact that Google flat out admits it basically told the Biden admin “no” seems to undermine the idea that any threat was involved.
As online platforms, including Alphabet, grappled with these decisions, the Administration’s officials, including President Biden, created a political atmosphere that sought to influence the actions of platforms based on their concerns regarding misinformation.
Again, this is not new. The Biden admin did this publicly and many of us called them out for it. The question is whether or not they reached the level of coercion.
Meanwhile, this is either accidental irony, or Google’s lawyers know that Jim Jordan would totally miss the sarcasm included in this next bit:
It is unacceptable and wrong when any government, including the Biden Administration, attempts to dictate how the Company moderates content, and the Company has consistently fought against those efforts on First Amendment grounds.
Why do I say it’s ironic? Because Jim Jordan’s subpoenas and demands to Google are very much a government official attempting to dictate how Google moderates content (in that he wants them to not moderate content he favors).
Indeed, right after this, Google starts groveling about how it’s so, so sorry that YouTube took moderation actions on conspiracy theory and nonsense peddler accounts that Jordan likes and thus will begin to reinstate them.
Yes, in the very letter where Google tells Jim Jordan “it’s wrong for the government to tell us how to moderate,” it also says “thank you for telling us how to moderate, we are following your demands.” Absolutely incredible.
Perhaps even more incredible is the discussion of fact checking. The company mentions that it doesn’t employ third-party fact checkers for YouTube to review content for moderation purposes:
In contrast to other large platforms, YouTube has not operated a fact-checking program that identifies and compensates fact-checking partners to produce content to support moderation. YouTube has not and will not empower fact-checkers to take action on or label content across the Company’s services.
Which in turn led Jordan to crow about how this was a huge success:
But that’s not all. YouTube is making changes to its platform to prevent future censorship. YouTube is committing to the American people that it will NEVER use outside so-called “fact-checkers” to censor speech. No more telling Americans what to believe and not believe.
But fact checking is not “censorship.” It’s literally “more speech.” It’s not telling anyone what to believe or what not to believe. It’s providing additional information. You know, that whole “marketplace of ideas” that they keep telling us is so important.
Then Jordan crowed directly about how his own efforts caused YouTube to reinstate people. In other words, the same letter he insists supports him says it is “unacceptable and wrong” for government officials “to dictate how the Company moderates content,” yet he excitedly claims credit for dictating how YouTube should moderate content:
“Because of our work.” So you are flat out admitting that you have told Google how to moderate, and it is complying by reinstating accounts that you wanted them to reinstate.
That certainly would raise questions about unconstitutional jawboning if we didn’t live in a world in which it has been decided “it’s okay when Republicans do it” but not okay when Democrats do something much less direct or egregious.
It’s almost like there’s a double standard, and it’s very much like Google is willing to suck up to MAGA folks to take advantage of that double standard… just as Mark Zuckerberg did.
FTC Chair Andrew Ferguson has apparently decided his latest form of politically motivated lawfare (the thing he insisted he would end once he took over) should be threatening Google over… checks notes… having spam filters that work too well at blocking actual spam. In a letter sent to Google CEO Sundar Pichai last week, Ferguson claims the company may be violating the FTC Act because Gmail’s spam detection system catches Republican fundraising emails.
This isn’t just bad policy—it’s a rehash of thoroughly debunked claims from 2022, dressed up with new threats and an alarming misunderstanding of both the First Amendment and the FTC’s actual authority.
The Letter That Shouldn’t Exist
Ferguson’s letter reads like it was written by someone who’s never encountered a spam filter in their life. He claims Gmail’s spam detection constitutes potential “unfair or deceptive acts or practices” because:
My understanding from recent reporting is that Gmail’s spam filters routinely block messages from reaching consumers when those messages come from Republican senders but fail to block similar messages sent by Democrats. Indeed, according to recent reporting, Alphabet has “been caught this summer flagging Republican fundraising emails as ‘dangerous’ spam— keeping them from hitting Gmail users’ inboxes—while leaving similar solicitations from Democrats untouched….”
Let’s be real here: Republican political organizations have a long history of sending emails that look exactly like spam because, well, they often are spam. They use deceptive subject lines, aggressive tactics, and mass-mailing techniques that trigger spam filters not because of political bias, but because they’re using spammy tactics.
Even pro-MAGA commentators have called out their own team for this behavior.
Ferguson then tries to shoehorn this into FTC authority by claiming:
Alphabet’s alleged partisan treatment of comparable messages or messengers in Gmail to achieve political objectives may violate both of these prohibitions under the FTC Act. And the partisan treatment may cause harm to consumers.
This is legal nonsense wrapped in political theater. The FTC has never policed “political bias” in private companies’ editorial decisions, and for good reason—the First Amendment prohibits exactly this kind of government interference.
We’ve Been Here Before (And It Was Stupid Then Too)
This entire controversy stems from a 2022 study by political consultants who discovered that Gmail caught more Republican emails in spam filters. What Ferguson conveniently omits is what the study’s own authors admitted: this only happened on completely untrained accounts. Once users actually used their spam filters—you know, the way normal people do—the difference disappeared entirely.
The study also found that other email providers caught more Democratic emails as spam, but Republicans laser-focused on Gmail because it fit their victimization narrative better.
Republicans then filed both lawsuits and FEC complaints (both of which failed easily) claiming this was somehow an “in-kind contribution” to Democrats. Never mind that when given a chance to weigh in on this matter, the public—including many Republicans—don’t want political spam cluttering their inboxes and wish politicians would stop sending so much of it.
There’s also the fact that Google has offered Republicans a system to have their emails whitelisted… and Republicans never seem to take them up on it.
Why This Is Legally Bankrupt
Tech lawyer Berin Szoka demolished Ferguson’s legal theory in a thread explaining why this investigation violates the FTC’s own authority:
Bias can’t be “unfair” because Section 5(n) requires the FTC to show that “substantial injury” is “not outweighed by countervailing benefits,” and the First Amendment bars the government from weighing a spammer’s right to “speech” against a website’s right to editorial control over how to define and block spam.
Szoka also notes that claiming Google “deceived” users would require showing the company made specific promises about spam handling that it then broke. Ferguson’s letter contains no such allegations… because they don’t exist.
The real tell is in Ferguson’s breathless claim that:
Hearing from candidates and receiving information and messages from political parties is key to exercising fundamental American freedoms and our First Amendment rights.
This fundamentally misunderstands how the First Amendment works. Google has its own First Amendment right to decide what content to host and how to organize it. The government can’t force private companies to amplify speech they’d rather not carry—that would be compelled speech, which the Supreme Court has repeatedly ruled violates the First Amendment.
Political Theater, Not Law Enforcement
Ferguson barely bothers making an actual legal case here, probably because he knows it’s garbage. This is political posturing designed to keep the White House happy by appearing to “do something” about conservative claims of “censorship.”
The letter is particularly rich coming from an administration that spent months threatening tech companies over fact-checking and content moderation, then celebrated when those companies caved to the pressure. Apparently free speech principles only matter when they benefit the right people.
Here’s what Ferguson and his allies refuse to acknowledge: if Republican fundraising emails are getting caught in spam filters more often, maybe the problem isn’t Google’s algorithms. Maybe the problem is that Republican organizations keep using tactics that trigger legitimate spam detection.
Political emails are explicitly exempt from the CAN-SPAM Act, which means political fundraisers can get away with behavior that would be illegal for commercial senders. They often use deceptive subject lines, fake urgency (“FINAL NOTICE”), and other tactics that any reasonable spam filter would catch.
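To see why those tactics trip filters regardless of party, consider a toy content-based scorer. The patterns and weights below are invented purely for illustration; Gmail’s actual filtering is a machine-learned system over vastly more signals, including sender reputation and per-user training. But the basic point stands: filters score tactics, not politics.

```python
# Toy spam scorer. The rules and weights are invented for illustration;
# real filters use machine-learned models over many more signals.
import re

URGENCY_PATTERNS = [
    r"\bFINAL NOTICE\b",
    r"\bACT NOW\b",
    r"\bEXPIRES? (TODAY|AT MIDNIGHT)\b",
]

def spam_score(subject: str, body: str) -> int:
    score = 0
    text = f"{subject} {body}"
    # Fake urgency is a classic spam signal.
    for pattern in URGENCY_PATTERNS:
        if re.search(pattern, text, re.IGNORECASE):
            score += 2
    # Shouting subject lines and piles of exclamation points score too.
    if subject.isupper():
        score += 2
    score += min(text.count("!"), 5)
    return score

# A fundraiser using these tactics gets flagged no matter who sent it.
print(spam_score("FINAL NOTICE: your 500% match EXPIRES AT MIDNIGHT!!!",
                 "Act now or the other side wins!"))  # scores high
```

Nothing in that logic knows, or cares, which party the sender belongs to.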
The solution isn’t to threaten tech companies with government investigation for having effective spam filters. The solution is for political organizations to stop acting like spammers.
Ferguson’s letter represents yet another in a long line of attempts at dangerous expansions of FTC authority into areas where it has no business. The FTC is supposed to protect consumers from actual fraud and deception, not police private companies’ editorial decisions based on political considerations.
If this theory of FTC authority were accepted, it would open the door for government officials to threaten any tech company whose algorithms don’t produce politically favorable results. That’s not consumer protection—that’s garden variety authoritarianism.
The First Amendment exists precisely to prevent government officials from using their power to coerce private companies into amplifying preferred political messages. Ferguson’s letter is exactly the kind of government overreach the founders sought to prevent.
Ferguson’s not dumb. He knows this investigation is legally baseless. He knows the FTC lacks authority to police political bias in private editorial decisions. He knows the First Amendment protects Google’s right to determine its own spam filtering policies.
This letter isn’t about consumer protection or fair trade practices. It’s about using government power to intimidate a private company for making editorial decisions that favor users who don’t want spam over Republican politicians. That’s not just bad policy—it’s a violation of everything the First Amendment is supposed to protect.
The real scandal here isn’t that Gmail’s spam filters work too well. It’s that the chairman of a federal agency thinks threatening private companies over their editorial decisions is somehow part of his job description.
Last summer, when Judge Amit Mehta ruled that Google had violated antitrust laws through its search distribution agreements, I was left wondering what the hell any reasonable remedy would look like. The case always struck me as weird—Google was paying billions to Apple and Mozilla to be the default search engine because users actually wanted Google as the default. Any remedy seemed likely to either do nothing useful or actively harm the very competitors it was supposed to help.
Well, Mehta just dropped his remedial ruling, and honestly? It’s more reasonable than I expected, though still messy in predictable ways.
The Big Picture: No Chrome Breakup Or Android Sell Off, But Real Constraints
The DOJ had pushed for some truly bonkers structural remedies, including forcing Google to sell off Chrome or Android. Mehta wasn’t having it:
Google will not be required to divest Chrome; nor will the court include a contingent divestiture of the Android operating system in the final judgment. Plaintiffs overreached in seeking forced divestiture of these key assets, which Google did not use to effect any illegal restraints.
This makes sense. As discussed before, under antitrust law, structural breakups should relate to the actual violation. The problem wasn’t Chrome or Android—it was the exclusive deals that locked up search distribution. Breaking up unrelated business units would be pure punishment without purpose and could (again) do more damage to competitors than to Google itself.
The Exclusive Deals Ban: Logical But Concerning
The core remedy targets the actual problem—Google’s exclusive distribution agreements:
Google will be barred from entering or maintaining any exclusive contract relating to the distribution of Google Search, Chrome, Google Assistant, and the Gemini app.
This tracks the violation, which is good. But here’s where it gets tricky. The ruling also says:
Google will not be barred from making payments or offering other consideration to distribution partners for preloading or placement of Google Search, Chrome, or its GenAI products.
So Google can still pay Apple and Mozilla, just not exclusively? That seems like a distinction that might not make much practical difference. If Google can outbid everyone else (which they can), and Apple/Mozilla have admitted users get pissed when they don’t use Google as default, what exactly changes here?
The court was clearly aware of this problem. In fact, Mehta’s analysis of the downstream effects reads like a catalog of unintended consequences that would make any antitrust reformer wince:
The complete loss or reduction of payments to distributors is likely to have significant downstream effects on multiple fronts, some possibly dire. They could include:
Lost competition and innovation from small developers in the browser market. … (stating that for Opera the loss of payments from Google “would make it hard for [it] to continue to invest in innovative solutions that [it] provide[s] for the US audience”). Mozilla, in particular, fears that lower revenue share payments could “potentially start a downward spiral of usage as people defected from our browser, which . . . could at the end of the day put Firefox out of business.” … (“Mozilla has repeatedly made clear that without these [revenue share] payments, it would not be able to function as it does today.”).
Fewer products and less product innovation from Apple. … (Cue) (stating that the loss of revenue share would “impact [Apple’s] ability at creating new products and new capabilities into the [operating system] itself”). The loss of revenue share “just lets [Apple] do less.”…
Less investment in the U.S. market by Android OEMs, which would reduce competition in the U.S. mobile phone market with Apple. …(“[I]f [Samsung is] not getting paid from Google in the revenue share that [it’s] currently getting, I think it will probably make [Samsung’s] position much weaker to innovate and provide . . . the latest technology and better services to our customer. . . . [W]e might face . . . a very difficult situation to continue our business.”); … (“If [Motorola] were not to receive [revenue share payments], it would have significant financial burdens on [its] business. . . . [A]dvanced resources in North America . . . would be put at risk if [it] were to lose this funding.”); … (“It is much more costly for [Verizon] to promote an [Apple] device than an Android device . . . . So the more the Android ecosystem loses share in the Verizon customer base, the more costly it is for Verizon, and that weighs on our [profit and loss].”).
Higher mobile phone prices and less innovative phone features. … (“[S]ome of [Samsung] product[s] could end up increasing prices or defeature our product[s] to manage the profit, which will make our position very weaker in the market and especially in U.S.”); … (“[O]ne of the ways [AT&T] can help offset some of the cost of th[e] device subsidy and make the devices more affordable to consumers is to have the ability to seek distribution or revenue share agreements with search, but also other services.”); … (“[T]hose restrictions would prevent Google from entering into agreements similar to what [T-Mobile] ha[s] with the Android Activation Agreement, . . . the revenues from which [it] use[s] to help prop up the Android ecosystem through subsidies . . . et cetera.”); … (stating that Verizon’s RSA with Google “help[s] and fund[s] the promotion of devices and offset[s]” billions in subsidies).
The court cannot predict to any degree of certainty that one or more of these effects will in fact occur. But the risk is far from small, which is reason enough not to proceed with the remedy.
Think about the weird logic here: Google’s current payment structure has created an ecosystem where cutting off those payments would likely kill Firefox (a key browser competitor), leave Samsung and other Android manufacturers financially weakened against Apple, and potentially raise phone prices for consumers. Meanwhile, Google would save billions in payments and still likely retain most users anyway.
In such a scenario, keeping the money flowing is actually essential to greater competition.
Data Sharing: The Actually Interesting Bit
But here’s where Mehta may have found the real lever for change. Google will have to share search index and user interaction data with “Qualified Competitors”:
Google will have to make available to Qualified Competitors certain search index and user-interaction data, though not ads data, as such sharing will deny Google the fruits of its exclusionary acts and promote competition.
This could be genuinely transformative, but there are lots of questions about how it will actually work in practice. The biggest barrier to competing with Google isn’t just the exclusive deals—it’s the chicken-and-egg problem of needing massive scale to build a decent search index, but needing a decent search index to attract users that create scale. Google’s search index represents decades of crawling, indexing, and learning from user interactions across billions of queries. No startup can replicate that from scratch.
As DuckDuckGo noted in their remedies proposal, access to Google’s search results via API could actually level the playing field in ways that breaking up Chrome or Android never could (though DuckDuckGo has said that this remedy ruling is insufficient in its eyes). A competitor could potentially build a differentiated search experience—better privacy, different ranking algorithms, specialized vertical search—while leveraging Google’s underlying index as a foundation.
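To picture what that could look like, here’s a purely hypothetical sketch. None of the names or fields below correspond to any real Google API; they only illustrate the architecture the remedy contemplates: pull candidate results from a syndicated index, then differentiate on your own ranking signals (privacy, in this example).

```python
# Hypothetical sketch of building a differentiated search product on a
# syndicated index. Every name and field here is invented; no real
# Google API is being described.
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    relevance: float    # upstream relevance score from the syndicated index
    tracker_count: int  # the competitor's own signal, e.g. from its own crawler

def fetch_candidates(query: str) -> list[Result]:
    # In reality this would call a syndication API offered on "ordinary
    # commercial terms"; here we just fake a response.
    return [
        Result("https://example.com/a", relevance=0.92, tracker_count=14),
        Result("https://example.org/b", relevance=0.88, tracker_count=0),
    ]

def rerank(results: list[Result]) -> list[Result]:
    # A privacy-focused competitor's differentiation: penalize pages
    # loaded with trackers, even if the upstream index ranks them higher.
    return sorted(results,
                  key=lambda r: r.relevance - 0.01 * r.tracker_count,
                  reverse=True)

for r in rerank(fetch_candidates("best browser")):
    print(r.url)
```

The competitor never has to replicate decades of crawling; it rents the index and competes on everything layered above it.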
The court was careful to limit this:
The court, however, has narrowed the datasets Google will be required to share to tailor the remedy to its anticompetitive conduct.
The key word here is “narrowed.” Mehta isn’t requiring Google to hand over everything—which would raise legitimate privacy and security concerns—but specifically the datasets that flow from the scale advantages Google gained through its anticompetitive conduct. It’s an elegant solution that addresses the actual harm without creating new ones.
Google will also have to offer search and ads syndication services to qualified competitors:
Google shall offer Qualified Competitors search and search text ads syndication services to enable those firms to deliver high-quality search results and ads to compete with Google while they develop their own search technologies and capacity. Such syndication, however, shall occur largely on ordinary commercial terms that are consistent with Google’s current syndication services.
Think of this as mandated training wheels for search competitors. Google has to help rivals build their own search capacity using Google’s infrastructure, but only until they can develop their own. The “ordinary commercial terms” language is crucial—it prevents Google from pricing competitors out while ensuring the remedy doesn’t become a permanent subsidy.
The AI Wrinkle
What’s fascinating is how much generative AI looms over this entire ruling. As Mehta notes (“GSEs” are general search engines):
The emergence of GenAI changed the course of this case. No witness at the liability trial testified that GenAI products posed a near-term threat to GSEs. The very first witness at the remedies hearing, by contrast, placed GenAI front and center as a nascent competitive threat. These remedies proceedings thus have been as much about promoting competition among GSEs as ensuring that Google’s dominance in search does not carry over into the GenAI space. Many of Plaintiffs’ proposed remedies are crafted with that latter objective in mind.
This timing accident may have saved the case from irrelevance. When the DOJ first filed this lawsuit, Google’s search dominance seemed unshakeable. By the time Mehta was crafting remedies, generative AI had created the first credible alternative to traditional search in decades. Suddenly, preventing Google from extending its search monopoly into AI distribution became just as important as addressing its existing dominance.
Dozens of pages are devoted to the rise of LLM technology, as well as chatbots and agents. While the ruling notes the limits of comparing generative AI tech to search, it also highlights just how competitive that market is:
The GenAI space is highly competitive. See id. at 503:25–504:4 (Turley) (Q. And let’s talk about the [GenAI] space . . . . You consider that space to be very competitive; correct? A. Yes, absolutely.”); id. at 3335:19-23 (Collins) (“[Q.] How would you describe the current level of competition with respect to foundation models as compared to the course of competition over the years that you’ve seen? A. [It] is the most competitive market I’ve ever worked in.”); id. at 685:4-8 (Hsiao) (“Q. How would you describe the competitive space that the Gemini app occupies? A. I would say I don’t think I’ve seen a more fierce competition ever in my 20-some years of working in technology.”).
There have been numerous new market entrants. See id. at 685:9-13 (Hsiao) (“It’s explosive growth. There’s new entrants. . . . You know, Grok, DeepSeek, all sort of new emerging models that are really, really strong.” …. (Hitt) (“You see entrants like Grok or DeepSeek, that may not have existed six months ago, are now able to reach the level of performance to wind up in the top ten of these models.”); id. at 2459:21-23 (Pichai) (“You have seen over the last few months as many people have launched chatbots. Very quickly, these chatbots reach tens of millions of users.”).
Again, the ruling makes it clear that Generative AI tools and search aren’t exactly direct competitors yet, but there are signs of the market heading that way:
GenAI products may be having some impact on GSE usage. … (Cue) (testifying that the volume of Google Search queries in Apple’s Safari web browser declined for the first time in 22 years perhaps due to the emergence of GenAI chatbots). But GenAI products have not eliminated the need for GSEs. … (“ChatGPT already expanded what is possible for parts of Search, but users don’t yet use ChatGPT for the full range of Search needs.”); … (Hsiao) (testifying that Google tracks so-called “cannibalization” of Google Search by GenAI chatbots and the Gemini app is not diverting queries from Google Search to a significant degree today); … (Cue) (attributing the recent decline in Safari’s search volume to increasing usage of GenAI apps but recognizing these apps must improve to compete with Google Search); … (Opening Arg.) (Plaintiffs’ counsel acknowledging that general search and GenAI “are different but overlapping products” and that GenAI “is not a replacement for [s]earch today);
Again, it seems like Judge Mehta is properly trying to respond to the actual violations here, making sure any remedies match them without getting in the way of actual market forces at work.
Some Judicial Humility Is Nice To See
Throughout the ruling, Mehta acknowledges the fundamental challenge of antitrust remedies:
Notwithstanding this power, courts must approach the task of crafting remedies with a healthy dose of humility. This court has done so. It has no expertise in the business of GSEs, the buying and selling of search text ads, or the engineering of GenAI technologies. And, unlike the typical case where the court’s job is to resolve a dispute based on historic facts, here the court is asked to gaze into a crystal ball and look to the future. Not exactly a judge’s forte.
This is refreshingly honest. Courts suck at designing technology markets. The best they can do is try to remove barriers and let competition happen, rather than micromanage outcomes.
Still A Long Road Ahead
Of course, none of this matters immediately. Google may still appeal (though, honestly, the result here might be good enough that it’s not worth the cost and uncertainty an appeal would bring), and we’re looking at years more litigation before anything actually happens. By then, the entire search landscape might have been transformed by AI anyway.
But if this ruling does eventually stick, it’s not the disaster I feared it might be. It targets the actual problem (exclusive distribution deals), creates some potentially useful competitive tools (data sharing and syndication with proper limitations for privacy reasons), and avoids the worst structural remedies that would have helped no one.
The question remains whether any of this will actually create more competitive search engines. But at least it’s not actively making things worse, which, honestly, was my biggest fear going in. I had feared that the court wouldn’t properly thread the needle on remedies, and yet… this seems to have been done very thoughtfully and strikes what is likely a good balance.
One of the more frustrating things about content streaming has been how quickly we went from having a conversation about cord-cutting to the realization that all of the streaming services that enabled said cord-cutting have morphed into the very cable providers that people wanted to escape. You can see this in a variety of ways. More packaged bundles that include content people don’t actually want. Stupid local blackouts of content, particularly when it comes to live sports. Subscription fees that rapidly shift higher with no value add for the customer. And, of course, carriage disputes.
I could write up an explanation as to what these kind of disputes are, but Karl Bode put it together so beautifully that I’ll just borrow his words instead.
For years cable TV has been plagued by retrans feuds and carriage disputes that routinely end with users losing access to TV programming they pay for. Basically, broadcasters will demand a rate hike in new content negotiations, the cable TV provider will balk, and then each side blames the other for failing to strike a new agreement on time like reasonable adults. That repeatedly results in content being blacked out for months, without consumers ever getting a refund. After a few months, the two sides strike a new confidential deal, your bill goes up, and nobody much cares how that impacts the end user. Rinse, wash, repeat.
The only thing I’d really want to add to that is how the blame game played by both sides is typically directed at the actual customer. The goal is usually to damage the other side’s goodwill with the public by calling them greedy or whatever, or sometimes to get the public to join the pressure campaign themselves by calling one side or the other to complain. It’s a rather remarkable thing to watch two wealthy entities use their own customers as pawns in a chess match over just how much money each side will make from those same pawns.
Well, we’re at it again, it seems, this time as YouTube TV and the Fox network are at odds over carriage fees. And the timing, on the eve of the NFL season, isn’t lost on anyone.
YouTube TV could soon lose access to Fox channels, it announced on its official blog, mere days before the 2025 NFL season begins. It warned users that it’s actively negotiating with Fox now that the renewal date for their partnership is approaching, but Fox is allegedly asking for an amount “far higher than what partners with comparable content offerings receive.” YouTube TV says it’s aiming to reach an agreement that “reflects the value of their content and is fair for both sides” without the service having to raise its prices to be able to offer Fox channels.
If both sides aren’t able to come to an agreement by 5PM Eastern time on August 27, subscribers will no longer be able to access all Fox news and business programs, as well as all sporting events (like NFL games) broadcast on Fox channels. The content from the channels saved in their library will also disappear. In case YouTube TV fails to reach a deal with Fox and the network’s channels become unavailable for “an extended period of time,” it will give subscribers a $10 credit.
Who knows what an “extended period of time” means, but I’ll say that the offer of any kind of credit is better than what usually occurs. As for how out of whack the ask from Fox is, I don’t have those details, but I’m not terribly surprised that it’s unpalatable to YouTube. Between the leverage the network has as football season is about to start, the stranglehold Fox News has on about a third of the country’s cable news viewership, and the fact that Fox is probably still feeling the pain of a nearly $800 million settlement over its defamatory news content, well, I imagine the ask is quite large.
But not so large that YouTube couldn’t absorb it if it wanted to. Instead, both sides are doing some mild public sniping and PR campaigning against each other, while the customer is left to await their fate.
If we were going to keep doing this sort of thing, what was the point of cutting the cord to begin with?
When politicians immediately blamed social media for the horrific 2022 Buffalo mass shooting—despite zero evidence linking the platforms to the attack—it was obvious deflection from actual policy failures. The scapegoating worked: survivors and victims’ families sued the social media companies, and last year a confused state court wrongly ruled that Section 230 didn’t protect them.
Thankfully, an appeals court recently reversed that decision in a ruling full of good quotes about how Section 230 actually works, while simultaneously demonstrating why it’s good that it works this way.
The plaintiffs conceded they couldn’t sue over the shooter’s speech itself, so they tried the increasingly popular workaround: claiming platforms lose Section 230 protection the moment they use algorithms to recommend content. This “product design” theory is seductive to courts because it sounds like it’s about the platform rather than the speech—but it’s actually a transparent attempt to gut Section 230 by making basic content organization legally toxic.
The NY appeals court saw right through this litigation sleight of hand.
Here, it is undisputed that the social media defendants qualify as providers of interactive computer services. The dispositive question is whether plaintiffs seek to hold the social media defendants liable as publishers or speakers of information provided by other content providers. Based on our reading of the complaints, we conclude that plaintiffs seek to hold the social media defendants liable as publishers of third-party content. We further conclude that the content-recommendation algorithms used by some of the social media defendants do not deprive those defendants of their status as publishers of third-party content. It follows that plaintiffs’ tort causes of action against the social media defendants are barred by section 230.
Even assuming, arguendo, that the social media defendants’ platforms are products (as opposed to services), and further assuming that they are inherently dangerous, which is a rather large assumption indeed, we conclude that plaintiffs’ strict products liability causes of action against the social media defendants fail because they are based on the nature of content posted by third parties on the social media platforms.
The plaintiffs leaned on the disastrous Third Circuit ruling in Anderson v. TikTok—which essentially held that any algorithmic curation transforms third-party content into first-party content. The NY court demolishes this reasoning by pointing out its absurd implications:
We do not find Anderson to be persuasive authority. If content-recommendation algorithms transform third-party content into first-party content, as the Anderson court determined, then Internet service providers using content-recommendation algorithms (including Facebook, Instagram, YouTube, TikTok, Google, and X) would be subject to liability for every defamatory statement made by third parties on their platforms. That would be contrary to the express purpose of section 230, which was to legislatively overrule Stratton Oakmont, Inc. v Prodigy Servs. Co. (1995 WL 323710, 1995 NY Misc LEXIS 229 [Sup Ct, Nassau County 1995]), where “an Internet service provider was found liable for defamatory statements posted by third parties because it had voluntarily screened and edited some offensive content, and so was considered a ‘publisher’ ” (Shiamili, 17 NY3d at 287-288; see Free Speech Coalition, Inc. v Paxton, — US —, —, 145 S Ct 2291, 2305 n 4 [2025]).
Although Anderson was not a defamation case, its reasoning applies with equal force to all tort causes of action, including defamation. One cannot plausibly conclude that section 230 provides immunity for some tort claims but not others based on the same underlying factual allegations. There is no strict products liability exception to section 230.
Furthermore, it points out (just as we had said after the Anderson ruling) that Anderson misreads the Supreme Court’s decision in the Moody case. That case was about Florida’s social media content moderation law, and the Supreme Court noted that content moderation decisions are editorial discretion protected by the First Amendment. The Third Circuit in Anderson incorrectly took that to mean such editorial discretion could not be protected under Section 230, because Moody made it “first party speech” rather than third party.
But the NY appeals court points out how that’s complete nonsense, because having your editorial discretion protected by the First Amendment is entirely consistent with saying you can’t hold a platform liable for the underlying third-party content that the editorial discretion covers:
In any event, even if we were to follow Anderson and conclude that the social media defendants engaged in first-party speech by recommending to the shooter racist content posted by third parties, it stands to reason that such speech (“expressive activity” as described by the Third Circuit) is protected by the First Amendment under Moody. While TikTok did not seek protection under the First Amendment, our social media defendants do raise the First Amendment as a defense in addition to section 230.
In Moody, the Supreme Court determined that content-moderation algorithms result in expressive activity protected by the First Amendment (see 603 US at 744). Writing for the majority, Justice Kagan explained that “[d]eciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items—is expressive activity of its own” (id. at 731). While the Moody Court did not consider social media platforms “with feeds whose algorithms respond solely to how users act online—giving them the content they appear to want, without any regard to independent content standards” (id. at 736 n 5 [emphasis added]), our plaintiffs do not allege that the algorithms of the social media defendants are based “solely” on the shooter’s online actions. To the contrary, the complaints here allege that the social media defendants served the shooter material that they chose for him for the purpose of maximizing his engagement with their platforms. Thus, per Moody, the social media defendants are entitled to First Amendment protection for third-party content recommended to the shooter by algorithms.
Although it is true, as plaintiffs point out, that the First Amendment views expressed in Moody are nonbinding dicta, it is recent dicta from a supermajority of Justices of the United States Supreme Court, which has final say on how the First Amendment is interpreted. That is not the type of dicta we are inclined to ignore even if we were to disagree with its reasoning, which we do not.
The majority opinion cites the Center for Democracy and Technology’s amicus brief, which points out the obvious: at internet scale, every platform has to do some moderation and some algorithmic ranking, and that cannot and should not somehow remove protections. And the majority uses some colorful language to explain (as we have said before) that Section 230 and the First Amendment work perfectly well together:
As the Center for Democracy and Technology explains in its amicus brief, content-recommendation algorithms are simply tools used by social media companies “to accomplish a traditional publishing function, made necessary by the scale at which providers operate.” Every method of displaying content involves editorial judgments regarding which content to display and where on the platforms. Given the immense volume of content on the Internet, it is virtually impossible to display content without ranking it in some fashion, and the ranking represents an editorial judgment of which content a user may wish to see first. All of this editorial activity, accomplished by the social media defendants’ algorithms, is constitutionally protected speech.
Thus, the interplay between section 230 and the First Amendment gives rise to a “Heads I Win, Tails You Lose” proposition in favor of the social media defendants. Either the social media defendants are immune from civil liability under section 230 on the theory that their content-recommendation algorithms do not deprive them of their status as publishers of third-party content, per Force and M.P., or they are protected by the First Amendment on the theory that the algorithms create first-party content, as per Anderson. Of course, section 230 immunity and First Amendment protection are not mutually exclusive, and in our view the social media defendants are protected by both. Under no circumstances are they protected by neither.
There is a dissenting opinion that bizarrely relies heavily on a dissenting Second Circuit opinion in the very silly Force v. Facebook case (in which the family of a victim of a Hamas attack argued that, because some Hamas members used Facebook, Facebook could be blamed for the attack—an argument that was mostly laughed out of court). The majority points out what a silly world it would be if that were actually how things worked:
To the extent that Chief Judge Katzmann concluded that Facebook’s content-recommendation algorithms similarly deprived Facebook of its status as a publisher of third-party content within the meaning of section 230, we believe that his analysis, if applied here, would ipso facto expose most social media companies to unlimited liability in defamation cases. That is the same problem inherent in the Third Circuit’s first-party/third-party speech analysis in Anderson. Again, a social media company using content-recommendation algorithms cannot be deemed a publisher of third-party content for purposes of libel and slander claims (thus triggering section 230 immunity) and not at the same time a publisher of third-party content for strict products liability claims.
And the majority calls out the basic truth: all of these cases are bullshit attempts to hold social media companies liable for the speech of their users—exactly the thing Section 230 was put in place to prevent:
In the broader context, the dissenters accept plaintiffs’ assertion that these actions are about the shooter’s “addiction” to social media platforms, wholly unrelated to third-party speech or content. We come to a different conclusion. As we read them, the complaints, from beginning to end, explicitly seek to hold the social media defendants liable for the racist and violent content displayed to the shooter on the various social media platforms. Plaintiffs do not allege, and could not plausibly allege, that the shooter would have murdered Black people had he become addicted to anodyne content, such as cooking tutorials or cat videos.
Instead, plaintiffs’ theory of harm rests on the premise that the platforms of the social media defendants were defectively designed because they failed to filter, prioritize, or label content in a manner that would have prevented the shooter’s radicalization. Given that plaintiffs’ allegations depend on the content of the material the shooter consumed on the Internet, their tort causes of action against the social media defendants are “inextricably intertwined” with the social media defendants’ role as publishers of third-party content….
If plaintiffs’ causes of action were based merely on the shooter’s addiction to social media, which they are not, they would fail on causation grounds. It cannot reasonably be concluded that the allegedly addictive features of the social media platforms (regardless of content) caused the shooter to commit mass murder, especially considering the intervening criminal acts by the shooter, which were “not foreseeable in the normal course of events” and therefore broke the causal chain (Tennant v Lascelle, 161 AD3d 1565, 1566 [4th Dept 2018]; see Turturro v City of New York, 28 NY3d 469, 484 [2016]). It was the shooter’s addiction to white supremacy content, not to social media in general, that allegedly caused him to become radicalized and violent.
From there, the majority opinion reminds everyone why Section 230 is so important to free speech:
At stake in these appeals is the scope of protection afforded by section 230, which Congress enacted to combat “the threat that tort-based lawsuits pose to freedom of speech [on the] Internet” (Shiamili, 17 NY3d at 286-287 [internal quotation marks omitted]). As a distinguished law professor has noted, section 230’s immunity “particularly benefits those voices from underserved, underrepresented, and resource-poor communities,” allowing marginalized groups to speak up without fear of legal repercussion (Enrique Armijo, Section 230 as Civil Rights Statute, 92 U Cin L Rev 301, 303 [2023]). Without section 230, the diversity of information and viewpoints accessible through the Internet would be significantly limited.
And the court points out that ruling the other way would “result in the end of the Internet as we know it.”
We believe that the motion court’s ruling, if allowed to stand, would gut the immunity provisions of section 230 and result in the end of the Internet as we know it. This is so because Internet service providers who use algorithms on their platforms would be subject to liability for all tort causes of action, including defamation. Because social media companies that sort and display content would be subject to liability for every untruthful statement made on their platforms, the Internet would over time devolve into mere message boards.
It also calls out how important the immunity part of Section 230 is: getting these kinds of frivolous cases tossed out early is the whole point, because if you have to fully litigate every such accusation, you lose most of the benefits Section 230 was meant to provide.
Although the motion court stated that the social media defendants’ section 230 arguments “may ultimately prove true,” dismissal at the pleading stage is essential to protect free expression under Section 230 (see Nemet Chevrolet, Ltd., 591 F3d at 255 [the statute “protects websites not only from ‘ultimate liability,’ but also from ‘having to fight costly and protracted legal battles’”]). Dismissal after years of discovery and litigation (with ever mounting legal fees) would thwart the purpose of section 230.
Law professor Eric Goldman, whose own research and writings seem to be infused throughout the majority’s opinion, also wrote a blog post about this ruling, celebrating the majority for getting this one right at a time when so many courts are getting it wrong. But (importantly) he notes that the 3-2 split on this ruling, along with the usual nonsense justifications in the dissent, means that (1) this is almost certainly going to be appealed, possibly to the Supreme Court, and (2) it’s unlikely to persuade many other judges who seem totally committed to the techlash view that says “we can ignore Section 230 if we decide the internet is just, like, really bad.”
I do think it’s likely he’s right (as always) but I still think it’s worth highlighting not just the thoughtful ruling, but how these judges actually understood the full implications of ruling the other way: that it would end the internet as we know it and do massive collateral damage to the greatest free speech platform ever.
Microsoft-owned LinkedIn has quietly joined the parade of tech giants rolling back basic protections for transgender users, removing explicit prohibitions against deadnaming and misgendering from its hate speech policies this week. The change, first spotted by the nonprofit Open Terms Archive, eliminates language that previously listed “misgendering or deadnaming of transgender individuals” as examples of prohibited hateful content.
LinkedIn removed transgender-related protections from its policy on hateful and derogatory content. The platform no longer lists “misgendering or deadnaming of transgender individuals” as examples of prohibited conduct. While “content that attacks, denigrates, intimidates, dehumanizes, incites or threatens hatred, violence, prejudicial or discriminatory action” is still considered hateful, addressing a person by a gender and name they ask not be designated by is not anymore.
Similarly, the platform removed “race or gender identity” from its examples of inherent traits for which negative comments are considered harassment. That qualification of harassment is now kept only for behaviour that is actively “disparaging another member’s […] perceived gender”, not mentioning race or gender identity anymore.
The move is particularly cowardly because LinkedIn made the change with zero public announcement or explanation. When pressed by a reporter at The Advocate, the company offered the classic corporate non-answer: “We regularly update our policies” and insisted that “personal attacks or intimidation toward anyone based on their identity, including misgendering, violates our harassment policy.”
But here’s the thing: if your policies haven’t actually changed, why remove the explicit protections? Why make it harder for users and moderators to understand what’s prohibited? The answer is as obvious as it is pathetic: LinkedIn is preemptively capitulating to political pressure in this era of MAGA culture war.
This follows the now-familiar playbook we’ve seen from Meta, YouTube, and others. Meta rewrote its policies in January to allow content calling LGBTQ+ people “mentally ill” and portraying trans identities as “abnormal.” YouTube quietly scrubbed “gender identity” from its hate speech policies, then had the audacity to call it “regular copy edits.” Now LinkedIn is doing the same cowardly dance.
What makes this particularly infuriating is the timing. These companies aren’t even waiting for actual government threats. They’re just assuming that sucking up to the Trump administration’s anti-trans agenda will somehow protect them from regulatory scrutiny. It’s the corporate equivalent of rolling over and showing your belly before anyone even raises their voice.
And it won’t help. The Trump administration will still target them and demand more and more, knowing that these companies will just roll over again.
And let’s be clear about what deadnaming and misgendering actually are: they’re deliberate acts of dehumanization designed to erase transgender people’s identities and make them feel unwelcome in public spaces. When platforms explicitly protect against these behaviors, it sends a message that trans people belong in these spaces. When they quietly remove those protections, they’re sending the opposite message. They’re saying “we don’t care about your humanity, and we will let people attack you for your identity.”
LinkedIn’s decision is especially disappointing because professional networking platforms should be spaces where people can present their authentic selves without fear of hateful harassment. Trans professionals already face discrimination in hiring and workplace environments. The last thing they need is for LinkedIn to signal that it’s open season for harassment on its platform.
The company is trying to argue that it still prohibits harassment and hate speech generally. But vague, general policies are much harder to enforce consistently than specific examples. When you remove explicit guidance about what constitutes anti-trans harassment, you make it easier for bad actors to push boundaries and harder for moderators to draw clear lines.
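A toy sketch makes the enforcement difference concrete. This is hypothetical (LinkedIn obviously doesn’t publish its real rule engine, and everything named here is invented for illustration), but it shows why enumerated examples matter: with them, some reports resolve mechanically and consistently; without them, every report becomes a discretionary call:

```python
# Hypothetical moderation triage, for illustration only. The enumerated
# examples mirror the language LinkedIn removed from its policy.

EXPLICIT_POLICY = {
    "catch_all": "content that attacks, denigrates, or dehumanizes based on identity",
    "examples": {
        "misgendering of transgender individuals",
        "deadnaming of transgender individuals",
    },
}

VAGUE_POLICY = {
    "catch_all": "content that attacks, denigrates, or dehumanizes based on identity",
    "examples": set(),  # explicit guidance removed
}

def triage(report_tags: set[str], policy: dict) -> str:
    # With enumerated examples, a matching report resolves mechanically.
    if report_tags & policy["examples"]:
        return "remove"
    # Otherwise a moderator has to decide, case by case, whether the
    # catch-all applies, which is exactly where bad actors push boundaries.
    return "escalate for discretionary review"

report = {"deadnaming of transgender individuals"}
print(triage(report, EXPLICIT_POLICY))  # remove
print(triage(report, VAGUE_POLICY))     # escalate for discretionary review
```

Same catch-all, same report, very different outcomes. That’s the whole game when a platform quietly deletes its examples.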
This is exactly the wrong moment for tech companies to be weakening protections for vulnerable communities. Anti-trans rhetoric and legislation have reached fever pitch, with the Trump administration making attacks on transgender rights a central part of its agenda. This is when platforms should be strengthening their commitment to protecting people from harassment, not quietly rolling back safeguards.
Sure, standing up for what’s right when there’s political pressure to do otherwise is hard. But that’s exactly when it matters most. These companies have billions in revenue and armies of lawyers. If anyone can afford to take a principled stand, it’s them.
Instead, we’re watching them fold like cheap suits at the first sign of political headwinds. They’re prioritizing their relationships with authoritarian politicians over the safety of their users. And they’re doing it in the most cowardly way possible: quietly, without explanation, hoping no one will notice.
The message this sends to transgender users is clear: you’re expendable. Your safety and dignity are less important than our political calculations. And that message isn’t just coming from fringe platforms or obvious bad actors—it’s coming from mainstream services owned by some of the world’s largest companies.
This isn’t just bad for transgender users. It’s bad for everyone who believes that online spaces should be governed by consistent principles rather than political opportunism. When platforms start making policy decisions based on which way the political winds are blowing, they undermine their own credibility and the trust users place in them.
Hell, for years, all we heard from the MAGA world was how supposedly awful it is when platforms make moderation decisions based on political pressure.
Where are all of those people now?
The irony is that these companies are probably making themselves less safe, not more. By signaling that they’ll cave to political pressure, they’re inviting more of it. Authoritarians don’t respect weakness—they exploit it.
LinkedIn, Meta, YouTube, and the rest need to understand: there’s no appeasing the anti-trans mob. No matter how many protections you strip away, it will never be enough. Stick to your principles and protect your users regardless of political pressure.
But instead of showing backbone, these companies are racing to see who can capitulate fastest. It’s a disgraceful display of corporate cowardice at exactly the moment when courage is most needed.
We all deserve better than watching supposedly values-driven companies abandon their principles the moment it becomes politically inconvenient to maintain them.
I wasn’t wrong when I wrote that Apple, Google, Akamai, and others faced tremendous liability risk if they continued to provide any of their hosting services to TikTok. Not, of course, because providing them should be illegal – the operative law is incredibly unconstitutional, despite the trite reasoning by which the Supreme Court found otherwise. But because, as long as it remains an enforceable law, it includes terms that make providing these services to TikTok punishable by exorbitant sanctions that can potentially run into the billions of dollars.
And yet, here all these companies are, nevertheless providing these services, as if there were no law telling them they can’t. So what happened?
Apparently, Trump and US Attorney General Pam Bondi are what happened.
Some of this we knew already. The TikTok ban was a ticking time bomb for whoever won the 2024 Presidential Election, because from almost the very first moment of the new term the law’s teeth would be fully sharpened, effectively banning TikTok in America and penalizing anyone who helped it provide service anyway. As a now fully-ripe law, the President would have no choice but to enforce it, consistent with his constitutional obligation to “take Care that the Laws be faithfully executed,” no matter how crummy, stupid, or illiberal those laws may be. Presidents have sometimes refused to enforce laws they consider unconstitutional and thus inconsistent with their oath to uphold the Constitution, but taking that position looks extremely dubious when the law in question has already been found constitutional by the Supreme Court (no matter how speciously). And in no circumstance does the President have the constitutional authority to change a law duly passed by Congress, which holds exclusive legislative authority in our constitutional system: the President can neither pass legislation nor modify legislation that has been passed. Thus the President’s discretion with respect to this law is limited, except possibly in two ways, neither of which allows what has happened here.
One way is by the terms of the law itself, which allowed the President to give TikTok a short reprieve before it got fully banned, but only if certain conditions were met, namely that negotiations for its imminent sale were significantly underway. Trump has now issued several executive orders purporting to give TikTok and its third party enablers a stay of execution, yet never permissibly, because the statutory conditions that would have entitled him to grant those reprieves were never met. These “extensions” were therefore the abuse of an imaginary power Trump does not actually have, under either the terms of the law itself or any other constitutional authority. They are thus a legal nullity no one can safely rely on.
Then there is the other way, which is through the exercise of prosecutorial discretion. The Constitution itself does not actually authorize the President to pick and choose which laws will get enforced—in fact, per its plain language, his job is to enforce all of them—but the realities of law enforcement mean that these sorts of choices effectively happen all the time, at least to some extent. Prosecutors are always deciding whom to charge and how because it can’t realistically be “everyone for everything” nor would we want it to be. Still, there have been other rules and norms that have tried to ensure that federal prosecutions would not be arbitrary and unjust, including the long-recognized separation between the President and the Department of Justice, which helped to ensure that prosecutions would be consistent with the rule of law and not vulnerable to the President’s political whims.
Yet here we are, now learning that, at Trump’s behest, AG Bondi has exercised this supposed prosecutorial discretion by sending letters to these third party companies promising not to enforce the law against them. For example, here is some language from one letter to Apple, with the promise phrased as a(n extremely ludicrous if not also illiterate) determination that there is no liability that could be prosecuted:
Based on the Attorney General’s review of the facts and circumstances, Apple Inc. has committed no violation of the Act and Apple Inc. has incurred no liability under the Act during the Covered Period or the Extended Covered Period. Apple Inc. may continue to provide services to TikTok as contemplated by these Executive Orders without violating the Act, and without incurring any legal liability.
The bigger problem, however, is that the letters do more than tell the companies that Trump will not prosecute them, probably because that promise alone would not be enough to ameliorate the legal risk the companies face by providing TikTok services in violation of the statutory language telling them not to. After all, at five years, the statute of limitations (the window during which a violation of the law can still be prosecuted) extends beyond a single presidential term. If there’s a new president, with a new Attorney General, violations happening now could still be prosecuted then.
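The arithmetic here is worth spelling out. A quick sketch (the limitations period is the five years described above; the specific dates are my hypothetical assumptions, not anything from the letters) shows how far past this administration the exposure runs:

```python
# Back-of-the-envelope exposure window, illustrative dates only.
from datetime import date

LIMITATIONS_YEARS = 5                 # limitations period, as described above

violation = date(2025, 7, 1)          # hypothetical date services resumed
term_ends = date(2029, 1, 20)         # end of the current presidential term
last_prosecutable = violation.replace(year=violation.year + LIMITATIONS_YEARS)

# Days a successor administration would still have to bring a case:
print((last_prosecutable - term_ends).days)   # 527 days, about a year and a half
```

However the dates shake out, any violation happening now stays prosecutable for roughly a year and a half into the next presidency, under an Attorney General who made no promises at all.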
Perhaps realizing that this promise not to prosecute would probably not be enough to induce the platforms to continue to provide their services, Bondi, on behalf of Trump, attempted to sweeten the pot by offering an irrevocable guarantee that no one could ever prosecute any of the third party companies for continuing to provide services to TikTok (despite any pesky statutory language to the contrary):
The Department of Justice is also irrevocably relinquishing any claims the United States might have had against Apple Inc. for the conduct proscribed in the Act during the Covered Period and Extended Covered Period, with respect to TikTok and the larger family of ByteDance Ltd. and TikTok, Inc. applications covered under the Act. This is derived from the Attorney General’s plenary authority over all litigation, civil and criminal, to which the United States, its agencies, or departments, are parties, as well as the Attorney General’s authority to enter settlements limiting the future exercise of executive branch discretion.
It appears that this “no backsies forever!” promise has done the trick, as everyone’s back in business. But the question is why, because this sort of promise is not a thing that she, or anyone else, can make under American constitutional law. What she calls a “plenary power” (aka a thing she thinks her job entitles her to do) is what Steve Vladeck calls a “dispensing power,” which is most definitely something that neither she nor Trump gets to exercise. As he explains, this sort of law-by-regal-decree was a creature of the English monarchy before America’s founding, one that pro-democracy forces in England eventually did away with and that America’s founders refused to allow from the start.
The “dispensing” power claimed by pre-18th-century English kings was the power to decide, on an ad hoc basis, which laws could and should be set aside in individual cases—to exempt the King’s favorites not just from the retrospective operation of criminal laws (for which after-the-fact pardons could have the same effect), but from the retrospective and prospective application of civil laws, as well. The idea was that the King could literally “dispense” with application of whichever laws he wanted, for whatever reasons he wanted, in whatever cases he wanted.
Here in America, our Constitution provides no room for such executive power. Laws emanate from the people as expressed through Congress, and the President of the United States has no power to mess with that democratic authority. That Trump, via Bondi, has tried anyway is yet another unconstitutional power grab, and thus yet another legal nullity.
Which means that the third party companies violating the law by providing services are still in just as much legal jeopardy as they would have been had they provided the services without the Bondi letter, which is devoid of legal effect. These companies are openly violating the law, and not only do they still have to worry about enforcement from the next president, but given that none of Trump’s promises are worth anything, they are still in jeopardy from this one too! In fact, given further news that Trump is currently unhappy with TikTok and a lot more keen to see it banned, it looks like a lot of jeopardy.
Of course, perhaps in this new era of apparently tolerable corruption by the Chief Executive of the United States, the third party companies made the pragmatic decision that they might be tempting more trouble from the Trump Administration if they did not go back to providing the services he at least once seemed to want them to provide, as suggested by the letters. Perhaps they decided it would be better to go along to get along, even though, if they were to lose the bet and find themselves on the receiving end of an enforcement action, in any administration, it would likely result in enormous financial liability.
On the other hand, should that day come, the third party companies would still have some cards to play to try to fight back. One might be based on reliance harms, in light of Bondi’s promises, although given how facially void those promises are, a court could fairly ask how the companies could have been so dumb as to rely on them. Courts are usually only sympathetic to reliance that is reasonable, and relying on an unconstitutional blessing of unlawful activity is arguably not particularly reasonable. Then again, since we are so far through the looking glass with unlawful, unconstitutional, and corrupt executive behavior, the companies might also be able to raise some sort of defense based on duress. Perhaps even plausibly, but it is a rather bet-the-company decision to presume it will work.
And the better argument is likely the one suggested in the earlier post, which has so far, disappointingly, never been raised in court at all: that this law is still massively unconstitutional, particularly as applied to them, the third party companies. So far the Supreme Court has only said it is perfectly fine for Congress to ban a platform; it has not said that it is equally fine to bar other companies from providing services to that platform. And given lots of other precedent, including the pretty fresh Moody v. NetChoice decision, which acknowledged their own First Amendment rights to provide their facilitating services, it is not clear that the Court would find it okay.
But these companies have now bet billions that the Court won’t bless the law with respect to them, even though, should that argument eventually have to be made, they would be in the dubious position of never having chosen to challenge the law and instead having only openly defied it.