In a special episode of Ctrl-Alt-Speech, Ben and Mike discuss (with apologies to Tay-Tay) the three eras of content moderation in the media and what comes next.
Together, they unpack three distinct phases: The Strange Fascination Era (2003–2015), when newsrooms powered platform growth and treated social media as an exciting new frontier; The “We’re Watching You” Era (2016–2020), when investigative reporting exposed online harms and pushed platforms to formalise Trust & Safety; and The Mask Off Era (2021–present), as platforms retreat from working with the media and their commitment to moderation wanes.
In less than a week, the Pentagon blacklisted an AI company for having ethics, declared it a supply chain risk, watched its preferred replacement face a massive user revolt, and then sat down to amend the replacement’s contract to address the very concerns the blacklisted company had been raising all along. Meanwhile, the blacklisted company is reportedly back in negotiations with the same Pentagon that tried to destroy it, because—wouldn’t you know—its models are apparently better for what the military actually needs.
On Monday night, Sam Altman posted on X that OpenAI had amended its Defense Department agreement to include new language explicitly addressing domestic surveillance:
We have been working with the DoW to make some additions in our agreement to make our principles very clear.
1. We are going to amend our deal to add this language, in addition to everything else:
“Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.
For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”
Is this better than the original contract language we flagged earlier this week? Probably! The explicit mention of “commercially acquired personal or identifiable information” is new and addresses the exact data type—geolocation, browsing history, the stuff data brokers sell about all of us—that reportedly was the final sticking point in the Anthropic negotiations. The language about “deliberate tracking, surveillance, or monitoring” is more concrete than the original contract’s vague reference to “unconstrained monitoring.”
Altman also noted that the Defense Department “affirmed that our services will not be used by Department of War intelligence agencies (for example, the NSA)” and that any such use “would require a follow-on modification to our contract.”
This sounds better than where they were before, but it’s genuinely hard to tell from the outside. And that difficulty—the opaque nature of what any of this means in practice—is the actual story here.
Because the problem with OpenAI’s deal was never just about the specific contract language. As we laid out earlier this week, the intelligence community has spent decades engineering legal definitions that let it conduct what any reasonable person would call mass surveillance while truthfully claiming otherwise. Whether this new amendment survives contact with those definitions is a question no outside observer can answer right now.
The bigger issue is what happens to innovation when the rules can change based on a cabinet secretary’s mood. The contract still references compliance with existing legal authorities—the same authorities that have been stretched and reinterpreted for years to permit exactly the kinds of data collection the new language purports to prohibit.
Anthropic’s Dario Amodei was characteristically blunt about the gap between OpenAI’s public framing and what the contract language actually delivers. In a memo to staff that has since leaked:
“The main reason [OpenAI] accepted [the DoD’s deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses.”
Damn.
He called OpenAI’s messaging around the deal “straight up lies” and described the whole thing as “safety theater.” You can dismiss some of that as competitive sniping, but Amodei was in the room for the Anthropic negotiations, and his characterization of what the Pentagon was actually demanding lines up with what the New York Times separately reported. His criticism is specific and technical: the Pentagon asked Anthropic to delete a “specific phrase about ‘analysis of bulk acquired data'” that was “the single line in the contract that exactly matched this scenario we were most worried about.” OpenAI’s original contract conspicuously lacked any such language. The amendment addresses this, at least on its face. Whether it does so in a way that actually binds the Pentagon’s behavior is a different question.
But the contract language debate, as important as it is, obscures the much larger problem.
“So maybe you think the Iran strike was good and the Venezuela invasion was bad…. You don’t get to weigh in on that.”
That’s the CEO of one of the most important AI companies on the planet telling his workforce that operational decisions about how their technology gets used in military actions are entirely up to Defense Secretary Pete Hegseth. The same Pete Hegseth who, just days earlier, tried to nuke an entire company for asking that AI not make autonomous kill decisions. The same Hegseth whose idea of contract negotiation was to issue what we described earlier this week as a “corporate death penalty” against Anthropic.
Speaking of Anthropic, that situation has gone from tragedy to farce and back again. The Financial Times reports that Amodei is now in direct talks with Emil Michael, a Hegseth lackey, to try to salvage a deal. This is the same Emil Michael (a scandal-ridden former Uber exec) who, just last week, called Amodei a “liar” with a “God complex”. And the same Defense Department that designated Anthropic a supply chain risk. The same administration that directed every federal agency to “immediately cease” all use of Anthropic’s technology.
And yet here they are, back at the table. Because, as multiple reports have made clear, Anthropic’s Claude models were already deployed on the Pentagon’s classified network and were quite useful for the Defense Department. The Pentagon apparently needs Anthropic’s technology because it’s actually good at the job. This just highlights how monumentally stupid the whole “supply chain risk” gambit was. You don’t issue a corporate death penalty against a company whose product you’re actively relying on for military operations unless you’re operating on pure spite rather than strategy.
The public, meanwhile, is making its own calculations under this cloud of uncertainty. ChatGPT uninstalls spiked 295% the day after the OpenAI deal was announced, while downloads dropped significantly. Anthropic’s Claude app jumped to the top of the App Store. One-star reviews of ChatGPT surged nearly 775% over the weekend.
Users who have zero ability to evaluate the legal intricacies of EO 12333 or the practical significance of “commercially acquired personal or identifiable information” are making choices based on the clear understanding that something has gone seriously wrong.
Call it the uncertainty tax: when users can’t verify whether a company’s principles are real, they treat visible conflict with authority as proof of authenticity. Unable to tell whose safety commitments are genuine, they default to the company that got punished for having them, because at least that suggests some principles were at play.
Getting punished for having principles is, perversely, the clearest indication that you had any, whether or not it’s true.
Altman himself seems to recognize that the rollout was a disaster. From his post:
One thing I think I did wrong: we shouldn’t have rushed to get this out on Friday. The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.
“Looked” opportunistic is doing a lot of work in that sentence. But okay.
The deeper issue here goes beyond any one contract or any one company. What we’ve watched unfold over the past week is a case study in why you cannot build a functional technology industry under a petulant, arbitrary authoritarian regime.
This is now what every AI company knows: if you tell the government “no” on something—even something as basic as “our AI shouldn’t make autonomous kill decisions without human oversight”—the Defense Secretary may try to destroy your company, publicly call you treasonous, and bar anyone doing business with the military from working with you. If you tell the government “yes,” you may face a massive consumer backlash, lose hundreds of thousands of users, and find yourself amending contracts on the fly to address concerns you should have thought about before signing.
Seems like a rough way to encourage innovation in the AI space.
And the rules can change at any moment. This week it’s “give us unrestricted access for all lawful purposes.” Next week, the definition of “lawful” might shift. The week after that, maybe the administration decides it doesn’t like something else about your company and the threats start anew. Altman told his employees that Hegseth made clear OpenAI doesn’t “get to make operational decisions.” So the company writes the safety stack, crosses its fingers, and hopes the people who just tried to destroy its largest competitor over basic ethical commitments will honor the contract language.
This is the environment the AI industry’s biggest Trump boosters created for themselves. For months, the refrain on certain VC bro podcasts was that the Biden administration was going to destroy AI and hand the industry to China. In reality, Biden’s AI policy amounted to a toothless set of principles and some extra paperwork. It was annoying, sure. It did not involve the Defense Secretary threatening to obliterate companies or the president directing all federal agencies to stop using a specific American company’s technology.
And the irony of it all is that the market seems to be figuring this out even as the companies’ leadership teams scramble to pretend everything is fine. The same users who were happily using ChatGPT a week ago are fleeing to Claude—the product of the company the government tried to destroy—because they’ve correctly identified that a company that got punished for standing up to an authoritarian government is probably more trustworthy than one that rushed to fill the void.
Innovation requires predictability. It requires the ability to plan, to hire, to build product roadmaps that extend beyond next Friday’s presidential tweet. It requires knowing that if you build something good and compete fairly, the government won’t try to destroy you because you annoyed a cabinet secretary during contract negotiations. Every AI company—even the ones currently benefiting from Anthropic’s punishment—should be deeply unsettled by what happened last week.
Because the leopard that ate Anthropic’s face last Friday can eat yours next Friday. All it takes is one disagreement, one insufficiently sycophantic response, one moment of “duplicity” defined as “having principles.”
Altman seems to partially grasp this. He publicly stated that the decision to designate Anthropic as a supply chain risk was “a very bad decision” and that the Pentagon should offer Anthropic the same terms OpenAI agreed to. That’s the right thing to say when facing a PR crisis like this. But saying it while simultaneously benefiting from the decision, while telling your employees they don’t get to have opinions about how their technology gets used in military operations, sends a somewhat mixed signal.
The lesson here has less to do with the specifics of any contract than with the fact that an impetuous, arbitrary, out-of-control authoritarian government is bad for innovation. I mean, it’s also bad for the public, society, and (arguably) the military as well. The US has led in innovation for decades in part because we had stable institutions and predictable rule of law.
But hey, at least nobody’s asking them to fill out compliance forms anymore. That was the real threat to American AI leadership.
Homeland Security Secretary Kristi Noem misled Congress on Tuesday about the powers of her controversial top aide Corey Lewandowski, according to records reviewed by ProPublica and four current and former DHS officials.
Lewandowski has an unusual role at DHS, where he is not a paid government employee but is nonetheless acting as a top official, helping Noem run the sprawling agency. For months, members of Congress have asked the agency to detail the scope of his work and authority.
At a Senate Judiciary Committee hearing on Tuesday, Sen. Richard Blumenthal, D-Conn., asked Noem whether Lewandowski has “a role in approving contracts” at DHS. Noem responded with a flat denial: “No.”
But internal DHS records reviewed by ProPublica contradict Noem’s Senate testimony. The records show Lewandowski personally approved a multimillion-dollar equipment contract at the agency last summer.
That was not a one-off. Lewandowski has approved numerous contracts at DHS and often needs to sign off on large ones before any money goes out the door, the current and former department employees said.
Last year, Noem imposed a new policy that consolidated her and her top aides’ power over all spending at DHS, requiring that she personally review and approve all contracts above $100,000. Before the contracts reach Noem, they must be approved by a series of political appointees, who each sign or initial a checklist sometimes referred to internally as a routing sheet. Typically, the last name on the checklist before Noem’s is Lewandowski’s, the DHS officials said.
Under federal law, it is a crime to “knowingly and willfully” make a false statement to Congress. But in practice, it is rarely prosecuted.
In a statement, a DHS spokesperson reiterated Noem’s claim. “Mr. Lewandowski does NOT play a role in approving contracts,” the spokesperson said. “Mr. Lewandowski does not receive a salary or any federal government benefits. He volunteers his time to serve the American people.” Lewandowski did not respond to a request for comment.
Several news outlets, including Politico, have previously reported on aspects of Lewandowski’s involvement in contracting at DHS.
There have been widespread reports of delays caused by the new contract approval process at the agency, which has responsibilities spanning from immigration enforcement to disaster relief to airport security. DHS has asserted that the review process saved taxpayers billions of dollars.
A similar sign-off process exists for other policy decisions at DHS. One of the checklists, about rolling back protections for Haitians in the U.S., emerged in litigation last year. It featured the signatures of several top DHS advisers. Under them was Lewandowski’s signature, and then Noem’s.
An internal Department of Homeland Security policy document from February 2025 shows agency officials, including top aide Corey Lewandowski and Noem (referred to as “S1”), signing off on a policy change. U.S. District Court for the District of Maryland. Scrim added by ProPublica for clarity.
Lewandowski is what’s known as a “special government employee,” a designation historically used to let experts serve in government for limited periods without having to give up their outside jobs. (At the beginning of the Trump administration, Elon Musk was one, too.) Special government employees have to abide by only some of the same ethics rules as normal officials and are permitted to have sources of outside income.
Lewandowski has declined to disclose whether he is being paid by any outside companies and, if so, who.
Not a day goes by that the Trump administration’s hypocrisy isn’t exposed. Here’s the latest, which certainly isn’t the last: the DOJ’s insistence that government employees be given preferential treatment in court.
Multiple bullshit prosecutions are underway, with AG Pam Bondi’s DOJ hoping to convert regular protest stuff into long-lasting federal felony charges. This hasn’t gone well for the DOJ, which tends to find itself rejected by grand juries when not getting its vindictive prosecutions tossed because they’ve been brought by prosecutors who don’t have legal claim to the positions they’re holding.
While the government continues to make social media hay by tweeting out wild allegations and the personal information of people who have yet to have their day in court, it simultaneously claims it should be illegal to identify federal officers and post their information to social media.
And while that’s just the government being hypocritical in terms of social media blasts, it’s engaging in another level of hypocrisy that’s not as easily dismissed. As Josh Gerstein reports for Politico, Attorney General Pam Bondi’s personal participation in this form of hypocrisy is not only inexcusable, but it’s also on the wrong side of the law.
Two federal judges have raised concerns about Attorney General Pam Bondi’s use of social media to publicize a wave of arrests last month of people charged with interfering with federal officers during an immigration enforcement surge in Minnesota.
When the government seeks protective orders to shield the details of cases from the public eye, the order applies to the government as much as it does to the defendants. But since Bondi can’t keep herself from scoring internet points on behalf of the Trump administration, she’ll be lucky to keep these particular prosecutions going.
That’s the upshot of this court order [PDF], handed down by Minnesota federal judge Dulce Foster:
As a threshold matter, the government’s claimed concern about the victim/agents’ “dignity and privacy” and the risk of doxxing is eyebrow-raising, to say the least. On January 28, 2026, at 12:53 p.m., Attorney General Pam Bondi publicly posted a tweet on X announcing, to a national audience, that Ms. Flores was arrested along with 15 other people as “rioters” who “have been resisting and impeding our law enforcement officers.” […] In publicly posting that information, the government failed to respect Ms. Flores’s dignity and privacy, exposed her to a risk of doxxing, and generally thumbed its nose at the notion that defendants are innocent until proven guilty. The post also directly violated a court order sealing the case (ECF No. 6), which was not lifted until the Court conducted initial appearances later that day (see ECF No. 7).
If the argument is that it’s dangerous for federal officers to be publicly identified but perfectly fine for random citizens to be exposed to threats of violence, the argument is deeply flawed. At worst, it’s the most powerful people arguing that the least powerful people should be exposed to the same sort of stuff they claim federal officers might be exposed to if their names are made public.
At best, it’s a tacit admission that more people are opposed to this administration’s actions than are opposed to the actions of those who engage in protests. If the DOJ really believed what the government is doing was good and supported by a majority of the public, it wouldn’t seek protective orders preventing the release of personal information.
But that’s not the case it made in court. And courts are now refusing to pretend the government is operating in good faith when it says some personal information is more equal than other personal information.
This determination was echoed in another court decision dealing with a Minneapolis-based prosecution:
At a hearing in a separate Minneapolis case last week, another magistrate judge, Shannon Elkins, directed prosecutors to “address whether the public posting of photographs violated the Court’s sealing order.” The government missed a deadline Tuesday to respond. Elkins later agreed to extend the deadline until Monday.
In the first case, the judge gave the government what it wanted, but applied those desires to both parties in the prosecution. If the defense team is barred from publicly revealing information about the government officers, the government is likewise barred from making information about the defendants public. It doesn’t get to have it both ways.
While it would have been somewhat refreshing to see the court allow the defendants to release whatever information they’d gathered about the federal officers to, I guess, make things even, I also recognize “two wrongs make a right” is no way to run a judicial system. I do say that very hesitantly, however. After all, we’re being governed by people who believe that even if they purposefully do wrong, there’s no power that can stop them. But there’s little that’s more satisfying than beating cheaters at their own game while playing by the rules. Hopefully, this great nation will be able to weather the constant attacks on what makes it great by people who are seeking to destroy it from the inside.
Become a language expert with a Babbel Language Learning subscription. With the app, you can use Babbel on desktop and mobile, and your progress is synchronized across devices. Want to practice where you won’t have Wi-Fi? Download lessons before you head out, and you’ll be good to go. However you choose to access your 10K+ hours of online language education, you’ll be able to choose from 14 languages. And you can tackle one or all in 10-to-15-minute bite-sized lessons, so there’s no need to clear hours of your weekend to gain real-life conversation skills. Babbel was developed by over 100 expert linguists to help users speak and understand languages quickly. With Babbel, it’s easy to find the right level for you — beginner, intermediate, or advanced — so that you can make progress while avoiding tedious drills. Within as little as a month, you could be holding down conversations with native speakers about transportation, dining, shopping, directions, and more, making any trip you take so much easier. It’s on sale for $159 when you use the code LEARN at checkout.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Section 230 remains one of the most misunderstood laws in America, and that misunderstanding keeps producing policy proposals that would make the internet worse, not better. Last year, I wrote a lengthy response to reporter Brian Reed’s claims about Section 230, and this week Sam Seder brought us both onto The Majority Report to hash it out directly for over an hour.
The format was conversational rather than structured, which means some points didn’t get as cleanly laid out as they would in a written piece. But the back-and-forth surfaced some of the real underlying disagreements about what people think 230 does versus what it actually does, and I think that makes it worth the watch. The discussion kicks off around the 30-minute mark.
Thanks to Sam and Emma for having me on, and to Brian for the discussion.
I stand by the larger point I keep trying to make: even if I agree that there are elements of the present internet I dislike, removing or reforming Section 230 will almost certainly make all of those things worse, not better. Without 230’s protections, the compliance costs alone would further entrench the biggest platforms while crushing any smaller competitor or new entrant that might actually offer users something better. People overindex on 230 as “the cause” of everything bad online, when it’s not what’s actually responsible—and that misdiagnosis leads to policy proposals that would deepen the very problems they claim to solve. It still strikes me as odd that 230 is the one law everyone is fixated on when there are far more deserving targets: the CFAA, the DMCA, patent law, and the continued absence of meaningful privacy legislation.
As a condition of the deal, the two companies have promised to be more racist and sexist. More specifically, they’ve promised the FCC they’ll eliminate already fairly pathetic corporate programs acknowledging that systemic racism and sexism exist. The FCC posted this handy infographic on social media that breaks down all of their lies about the deal in an easily digestible way:
Literally none of those claims are true. These deals never result in any of these benefits. There’s more than forty years of concrete evidence proving it. Consolidation across U.S. telecom has consistently resulted in spotty service, high prices, and routinely abysmal customer service.
Cox and Charter don’t directly compete, but their combined scale and political influence ensure they’ll have more power than ever to lobby against robust competition more generally.
The debt from these kinds of deals is also always predominantly paid off by labor and consumers in the form of mass layoffs and higher prices. These deals never meaningfully serve the public interest, but our captured regulators, consolidated telecoms, and shitty press work together to help pretend otherwise.
The FCC explains the “DEI” provisions this way in their news release:
“Charter has committed to new safeguards to protect against DEI discrimination and has reaffirmed the merged entity’s commitment to equal opportunity and nondiscrimination. Specifically, Charter commits to recruiting, hiring, and promoting individuals based on the factors that matter most: skills, qualifications, and experience.”
You’re not told what these “safeguards” actually are. Just that Cox and Charter won’t try to give minorities or women a leg up in a country full of systemic hatred and intolerance, because that might be unfair to a dude.
The Trump administration has repeatedly tried to insist that simply acknowledging that systemic racism and sexism exist — or doing literally anything about it — is somehow discriminatory to white men. This is diseased white supremacist thinking; the sheer delusional hubris to think this way, let alone integrate this ignorance into already problematic pro-monopoly policy, is the mind garbage of simpletons.
If you pop around and read the news coverage of this deal (see: this piece from Reuters or this piece at CNN), you’ll notice the consolidated corporate press helps sell the lie that more consolidation somehow serves the public interest. These kinds of stories will parrot the companies’ claims, ignore their history of monopolistic predation, and even downplay the mandated racism as a droll policy bullet point.
CNN author Jordan Valinsky even went so far as to write this sentence with a straight face:
“The transaction is contingent on regulatory approval and could be a litmus test for President Donald Trump’s views on major companies combining.”
Trump’s FCC has rubber stamped every single shitty telecom merger that has crossed its desk. We know Trump loves harmful consolidation, provided he can personally get something from it. A cornerstone of GOP policy has been to coddle monopoly power for literally fifty fucking years! Across every industry in America. None of this is really up for debate. Any “litmus” test was failed long ago.
If this country is going to have any sort of real future, it has to seriously come to grips with the fact that it’s broadly too corrupt to function in the public interest (across government, media, policy, and culturally). If future federal and state governments don’t make antitrust and corruption reform a central pillar of all policy in every sector, we’re quite literally cooked.
It was plainly obvious when RFK Jr. decided to fully remake ACIP, the CDC committee that advises the nation on immunization schedules and practices, that he did so to place Kennedy sycophants who would enact his batshit theories on vaccinations. ACIP, now chockablock with anti-vaxxer, anti-science grift-gremlins, has been slowly chipping away at decades of good medical practice around immunization. The administration has already altered the recommended vaccine schedules for COVID and Hep B, while appearing to potentially question polio vaccines as well. It has been, to be pointed, an unmitigated shitshow thus far.
But at least ACIP has managed to color inside the lines of its own mandate to date. That appears to be about to change, as reporting indicates that ACIP’s meeting next month will put COVID vaccine injuries on the agenda.
Dorit Reiss, a vaccine policy expert at the University of California Law San Francisco, said the panel does not typically focus on vaccine injuries.
“Vaccine injuries are not a direct part of the committee’s mandates,” Reiss said in an email. “When they make vaccine recommendations, they should consider vaccines risks, and new risks may lead to changed recommendations; but that’s not directly about vaccine injuries.”
This isn’t to suggest that ACIP completely disregards risks associated with vaccinations, as Reiss mentions. ACIP does make changes to vaccination schedules and recommendations based on the macro-level data it is provided for specific vaccines. But discussions about the prevalence and validity of claims of vaccine injury are well outside ACIP’s purview. To use but one simple piece of evidence: you can review the CDC’s own webpage describing ACIP’s purpose and programs, and you will notice that there is not a single reference to vaccine injury within it. Neither does the ACIP page that outlines its own charter mention it. There you will see vague references to ACIP’s duties including the “consideration” of “vaccine safety,” but that is the macro-level data I referenced earlier, not a deep dive into the specific topic of vaccine injury.
Vaccine injury is a serious topic, for which the National Vaccine Injury Compensation Program (VICP) was created in the 1980s. Consulting in lawsuits and writing about vaccine injuries is how Kennedy made millions of dollars. Expanding the VICP, a stated goal of his, and using ACIP to lend validity to those expansions, is a great way for Kennedy and his allies to make more money from these types of lawsuits, either immediately or once he’s out of government.
It’s just another grift, powered by hand-picked muppets willing to do his bidding in ACIP.
“Some committee members have made repeated claims about Covid vaccine harms that were either unsupported by verifiable data or reflected clear mischaracterizations of the existing scientific literature,” said Michael Osterholm, director of the Center for Infectious Disease Research and Policy at the University of Minnesota. Last year, Osterholm launched the Vaccine Integrity Project, which serves as an alternative source of vaccine information to the CDC.
“If the committee intends to revisit vaccine safety questions, it has an obligation to do so transparently and rigorously,” he said. “Given past misstatements, members do not deserve the benefit of the doubt.”
No, they most certainly do not. You may not think that questions about COVID vaccines are all that important any longer. We’ve moved on, you may think, from this novel virus being a major issue in our lives. And for some of us, that is true. I am very pro-vaccination, but I’m not getting every booster out there.
But that’s not really what this is about. Kennedy wants ACIP to spotlight supposed COVID vaccine injuries in a way that will certainly come with questionable evidence at best. Not out of concern for public health, mind you. But almost certainly for money.
Several high-ranking federal election officials attended a summit last week at which prominent figures who worked to overturn Donald Trump’s loss in the 2020 election pressed the president to declare a national emergency to take over this year’s midterms.
Election experts say that the meeting reflects an intensifying push to persuade Trump to take unprecedented actions to affect the vote in November. Courts have largely blocked his efforts to reshape elections through an executive order, and legislation has stalled in Congress that would mandate strict voter ID requirements across the country.
The Washington Post reported Thursday that activists associated with those at the summit have been circulating a draft of an executive order that would ban mail-in ballots and get rid of voting machines as part of a federal takeover. Peter Ticktin, a lawyer who worked on the executive order and had a client at the summit, told ProPublica these actions were “all part of the same effort.”
The summit followed other meetings and discussions between administration officials and activists — many not previously reported — stretching back to at least last fall, according to emails and recordings obtained by ProPublica. The coordination between those inside and outside the government represents a breakdown of crucial guardrails, experts on U.S. elections said.
“The meeting shows that the same people who tried to overturn the 2020 election have only grown better organized and are now embedded in the machinery of government,” said Brendan Fischer, a director at the Campaign Legal Center, a nonpartisan pro-democracy organization. “This creates substantial risk that the administration is laying the groundwork to improperly reshape elections ahead of the midterms or even go against the will of the voters.”
Five of six federal officials who attended the summit didn’t answer questions about the event from ProPublica.
A White House official, speaking on the condition of anonymity, said federal officials’ attendance at the gathering shouldn’t be construed as support for a national emergency declaration and that it was “common practice” for staffers to communicate with outside advocates who want to share policy ideas. The official pointed to comments Trump made to PBS News denying he was considering a national emergency or had read the draft executive order. “Any speculation about policies the administration may or may not undertake is just that — speculation,” the official said.
Mitchell did not respond to questions from ProPublica about the summit. A spokesperson for Flynn responded to detailed questions from ProPublica by disparaging experts who expressed concerns, texting, “LOL ‘EXPERTS.’”
The 30-person roundtable discussion on Feb. 19, at an office building in downtown Washington, D.C., was sponsored by the Gold Institute for International Strategy, a conservative think tank. Afterward, activists and government officials dined together, photos reviewed by ProPublica showed.
Flynn, the institute’s chair, told a social media personality why he’d arranged the event.
“I wanted to bring this group together physically, because most of us have met online” while “fighting battles” in swing states from Arizona to Georgia, Flynn said to Tommy Robinson on the gathering’s sidelines. Robinson posted videos of these interactions online. “The overall theme of this event was to make sure that all of us aren’t operating in our own little bubbles.”
Flynn has repeatedly advocated for Trump to declare a national emergency and posted on social media after the event addressing Trump, “We The People want fair elections and we know there is only one office in the land that can make that happen given the current political environment in the United States.”
In addition to Olsen and Honey, four other federal officials from agencies that will shape the upcoming elections attended the event. At least four of the six attended the dinner.
One is Clay Parikh, a special government employee at the Office of the Director of National Intelligence who’s helping Olsen with the 2020 inquiry. A spokesperson at ODNI said Parikh had attended the summit “in his personal capacity.”
Another, Mac Warner, handled election litigation at the Justice Department. A department spokesperson said that Warner had resigned the day after the event and had not received the required approval from agency ethics officials to participate.
The department “remains committed to upholding the integrity of our electoral system and will continue to prioritize efforts to ensure all elections remain free, fair, and transparent,” the spokesperson said in an email.
A third administration official who attended the summit, Marci McCarthy, directs communications for the nation’s cyber defense agency, which oversees the security of elections infrastructure like voting machines.
Kari Lake, whom Trump appointed as senior adviser to the U.S. Agency for Global Media, was a featured speaker. Lake worked with Olsen and Parikh in her unsuccessful bid to overturn her loss in the 2022 Arizona gubernatorial election.
Lake said in an email that she “showed up to the event, spoke for about 20 minutes about the overall importance of election integrity, a non-partisan issue that matters to all citizens — both in the United States and abroad. I left without listening to any other speeches.”
“Elections should be free from fraud or any other malfeasance that subverts the will of the people,” she added.
At the meeting, activists presented on ways to transform American elections that would help conservatives, according to social media posts and interviews they gave on conservative media, such as LindellTV, a streaming platform created by the pillow mogul Mike Lindell. They said the group broke down into two camps: those who wanted to pursue a more incremental legal and legislative strategy and those who wanted Trump to declare a national emergency.
Multiple activists left the meeting convinced Trump should do the latter, a step they believe would allow the president to get around the Constitution’s directive that elections should be run by states.
Former Overstock.com CEO Patrick Byrne, a prominent funder of efforts to overturn the 2020 election, told LindellTV that Trump has “played nice” so far in not seizing control of American elections. “But at some point,” Byrne said, “he’s got to do something, the muscular thing: declare a national emergency.”
Byrne responded to questions from ProPublica by sending a screenshot of a poll that he said suggested “2/3 of Americans correctly do not trust” voting machines, which the proposed national emergency declaration aims to do away with.
Will Huff, who has advocated for doing away with voting machines, told a conservative vlogger that Olsen, the White House lawyer, and other administration representatives would take the “consensus” from the gathering back to Trump. “It’s got to be a national emergency,” said Huff, the campaign manager for a Republican candidate for Arkansas secretary of state.
In response to questions from ProPublica, Huff said in an email that Olsen and Trump would use their judgment to decide whether to declare a national emergency.
“The President has been briefed on findings of shortcomings in election infrastructure,” Huff wrote. “I believe there are steady hands around the President wanting to ensure that any action taken is, first, constitutional and legal, but also backed by evidence.”
McCarthy, the cybersecurity official, expressed more general solidarity with fellow attendees in a post on social media about the summit. “Grateful for friendships forged through years of standing shoulder-to-shoulder, united by purpose and conviction,” she wrote. “The mission continues… and so does the fellowship.”
Marci McCarthy, second from left, Heather Honey, fourth from right, and Cleta Mitchell, third from right, were among the conservative activists and officials who attended the summit. McCarthy posted about the event on LinkedIn. Screenshot by ProPublica. Redactions by ProPublica.
Last week’s gathering was the latest in a string of private interactions between conservative election activists and administration officials, according to emails, documents and recordings obtained by ProPublica. Many have involved Mitchell’s Election Integrity Network. Before taking her government post, Honey was a leader in the Election Integrity Network, ProPublica has reported, as was McCarthy.
Previously unreported emails obtained by ProPublica show that just weeks after Honey started at the Department of Homeland Security, she briefed election activists, a Republican secretary of state and another federal official on a conference call arranged by her former boss, Mitchell.
“We are excited to welcome her on our call this morning to hear about her work for election integrity inside DHS,” Mitchell wrote in an email introducing presenters on the call.
Honey didn’t respond to questions from ProPublica about the call. Experts said Honey’s briefing gave her former employer access that likely would have violated ethics rules in place under previous administrations, including the first Trump administration — though not this one.
The prior “ethics guardrails would have prevented some of the revolving door issues we’re seeing between the election denial movement and the government officials,” said Fischer, the Campaign Legal Center director. Those prior rules “were supposed to prevent former employers and clients from receiving privileged access.”
We’ve been pointing out the fundamental contradiction at the heart of mandatory age verification laws for years now. To verify someone’s age online, you have to collect personal data from them. If that someone turns out to be a child, congratulations: you’ve just collected personal data from a child without parental consent. Which is a direct violation of the Children’s Online Privacy Protection Act (COPPA)—the very law that’s supposed to be protecting kids.
So what happens when the agency charged with enforcing COPPA finally notices this obvious problem? If you guessed “they admit the conflict and then just promise not to enforce the law,” you’d be exactly right.
The Federal Trade Commission issued a policy statement today announcing that the Commission will not bring an enforcement action under the Children’s Online Privacy Protection Rule (COPPA Rule) against certain website and online service operators that collect, use, and disclose personal information for the sole purpose of determining a user’s age via age verification technologies.
The FTC appears to be explicitly acknowledging that age verification technologies involve collecting personal information from users—including children—in a way that would otherwise trigger COPPA liability. If the technology didn’t create a COPPA problem, there would be no need for a policy statement promising non-enforcement. You don’t issue a formal announcement saying “we won’t sue you for this” unless “this” is something you could, in fact, sue people for.
The statement itself tries to dress this up by noting that age verification tech “may require the collection of personal information from children, prompting questions about whether such activities could violate the COPPA Rule.” But “prompting questions” is doing an awful lot of work in that sentence. The answer to those questions is pretty obviously “yes, collecting personal information from children without parental consent violates the rule that says you can’t collect personal information from children without parental consent.” The FTC just doesn’t want to say that part out loud, because then the follow-up question becomes: “so why are you encouraging companies to do it?”
Instead, they’ve decided to create an enforcement carve-out. Do the thing that violates the law, but pinky-promise you’ll only use the data to check the kid’s age, delete it afterward, and keep it secure. Then we won’t come after you. This is the FTC solving a legal contradiction not by asking Congress to fix the underlying law or admitting the technology is fundamentally flawed, but by deciding to selectively not enforce the law it’s supposed to be enforcing.
The honest approach would have been to tell Congress that age verification, as currently conceived, cannot be squared with existing privacy law—and that if lawmakers want it anyway, they need to resolve that conflict themselves rather than asking the FTC to pretend it doesn’t exist.
No such luck.
And boy, do they seem proud of themselves. Here’s Christopher Mufarrige, Director of the FTC’s Bureau of Consumer Protection:
“Age verification technologies are some of the most child-protective technologies to emerge in decades…. Our statement incentivizes operators to use these innovative tools, empowering parents to protect their children online.”
“The most child-protective technologies to emerge in decades.”
Excuse me, what?
This is the kind of statement that sounds authoritative right up until you spend thirty seconds thinking about it. Anyone with any knowledge of security and privacy knows that age verification is anything but “child protective.” It involves a huge invasion of privacy, for extremely faulty technology, that has all sorts of downstream effects that put kids at risk.
Oh, and the FTC seems proud that the vote for this was unanimous—though it’s worth noting that Donald Trump fired the two Democratic members of the FTC and has made no apparent effort to replace them, despite Congress designing the agency to have five commissioners, no more than three from the same party. A unanimous vote among the remaining Republicans is a strange thing to brag about.
The FTC even posted about this on X, and the response was… well, let me just show you:
If you can’t see that, the main part to pay attention to is not the tweet from the FTC itself, but the Community Note appended to it (under the way Community Notes works, a note needs widespread consensus among users before it gets attached to a public tweet):
Readers added context they thought people might want to know
Contrary to their claim, using age verification has numerous issues, including but not limited to:
1. Easily bypassed
2. Risks of security data breach
3. Inaccuracies (Placing adults into underage groups, vice versa)
And many more… (sigh, I need a break).
Yeah, we all need a break.
That Community Note does a better job explaining the state of age verification technology than the FTC’s entire Bureau of Consumer Protection. It methodically lists out the problems: kids easily bypass these systems, the collected data creates massive security breach risks, and the technology produces wildly inaccurate results that lock adults out while letting kids through (and vice versa). When the consensus-driven crowdsourced fact-check on your own announcement is more informative than the announcement itself, maybe it’s time to reconsider the announcement.
But let’s say, for the sake of argument, that the technology worked perfectly. Would mandatory age verification still be a good idea?
Even then, it wouldn’t resolve the harm this approach does to kids. Even UNICEF (UNICEF!) has been warning that age restriction approaches can actively harm the children they’re supposed to protect. After Australia’s social media ban for under-16s went into effect, UNICEF put out a statement that could not have been more clear about the risks:
“While UNICEF welcomes the growing commitment to children’s online safety, social media bans come with their own risks, and they may even backfire,” the agency said in a statement.
For many children, particularly those who are isolated or marginalised, social media is a lifeline for learning, connection, play and self-expression, UNICEF explained.
Moreover, many will still access social media – for example, through workarounds, shared devices, or use of less regulated platforms – which will only make it harder to protect them.
So the actual child welfare experts are saying that age verification can backfire, push kids into less safe spaces, and should never be treated as a substitute for real safety measures. Meanwhile, the FTC is calling the same technology “the most child-protective” thing to come along in a generation and is waiving its own enforcement authority to encourage more of it.
What we have here is a federal agency that has identified a direct conflict between the law it enforces and the policy outcome it wants. Rather than grappling with what that conflict means—maybe age verification as currently conceived just doesn’t work within the existing legal framework, and for good reason—the FTC has chosen to simply look the other way. The message to companies is clear: go ahead and collect data from kids to figure out if they’re kids. We know that violates COPPA. We don’t care. We like age verification more than we like enforcing our own rules.
That’s a hell of a policy position for the agency that’s supposed to be the last line of defense for children’s privacy online.