This may not be an actual “Wyden siren,” but it still has his name attached to it. What’s being said here isn’t nearly as ominous as this single sentence he sent to CIA leadership earlier this year:
I write to alert you to a classified letter I sent you earlier today in which I express deep concerns about CIA activities.
Few people are capable of saying so much with so little. This one runs a bit longer, but it has implications that likely run deeper than the surface-level issue raised by Wyden and others in a recent letter to Trump’s (satire is dead) Director of National Intelligence, Tulsi Gabbard. Here are the details, as reported by Dell Cameron for Wired:
In a letter sent Thursday to Director of National Intelligence Tulsi Gabbard, the lawmakers say that because VPNs obscure a user’s true location, and because intelligence agencies presume that communications of unknown origin are foreign, Americans may be inadvertently waiving the privacy protections they’re entitled to under the law.
Several federal agencies, including the FBI, the National Security Agency, and the Federal Trade Commission, have recommended that consumers use VPNs to protect their privacy. But following that advice may inadvertently cost Americans the very protections they’re seeking.
The letter was signed by members of the Democratic Party’s progressive flank: Senators Ron Wyden, Elizabeth Warren, Edward Markey, and Alex Padilla, along with Representatives Pramila Jayapal and Sara Jacobs.
That’s alarming. It’s also a conundrum. VPN use (often required for remote logins to corporate systems) is a great way to secure connections that are otherwise insecure, like those originating from people’s homes (to log into their work stuff) or made over public Wi-Fi. There are also more off-the-books uses, like circumventing regional content restrictions or just ensuring your internet activity can’t be tied to your physical location.
The trade-off depends on the threat you’re trying to mitigate. It’s kind of like the trade-off in cell phone security. Using biometric markers to unlock your phone might be the best option if what you’re mainly concerned about is theft of your device. A thief might be able to guess a password, but they won’t be able to duplicate an iris or a fingerprint.
But if the threat you’re more worried about is this government, you’ll want the passcode. Courts have generally found that fingerprints and eyeballs aren’t “testimonial,” so if you’re worried about being compelled to unlock your device, the Fifth Amendment tends to favor passwords, at least as far as the courts are concerned.
It’s almost the same thing here. VPNs might protect you against garden-variety criminals, but the intentional commingling of origin/destination points by VPNs could turn purely domestic communications into “foreign” communications the NSA can legally intercept (and the FBI, somewhat less legally, can dip into at will).
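The mechanism is easy to sketch. The toy Python below illustrates why routing through an overseas exit server changes how traffic gets classified: an interceptor only sees the IP address a packet arrives from, and with a VPN, that’s the exit server’s address, not the user’s. The IPs come from ranges reserved for documentation, and the geolocation table is entirely made up; this mimics the presumption described in the letter, not any agency’s actual logic.

```python
# Hypothetical mapping of IP prefixes to countries, for illustration only.
# (Both prefixes are reserved documentation ranges, not real networks.)
GEO_TABLE = {
    "203.0.113.": "US",   # the user's real home connection
    "198.51.100.": "NL",  # a pretend overseas VPN exit server
}

def apparent_country(ip: str) -> str:
    """Country an observer would infer from the source IP it sees."""
    for prefix, country in GEO_TABLE.items():
        if ip.startswith(prefix):
            return country
    return "UNKNOWN"

def surveillance_presumption(ip: str) -> str:
    """Mimics the presumption described in the letter: traffic that
    doesn't geolocate to the US is treated as foreign."""
    return "domestic" if apparent_country(ip) == "US" else "presumed foreign"

# Direct connection: the observer sees the user's real US address.
print(surveillance_presumption("203.0.113.7"))    # domestic

# Same user, same traffic, exiting through the overseas VPN server:
print(surveillance_presumption("198.51.100.42"))  # presumed foreign
```

The point of the sketch: nothing about the user or the content changed between the two calls, only the address an observer happens to see, which is exactly the gap the lawmakers are flagging.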
That’s the substance of the letter sent to Gabbard, in which the legislators ask the DNI to issue public guidance on VPN usage that makes it clear that doing so might subject users to (somewhat inadvertent) domestic surveillance:
Americans reportedly spend billions of dollars each year on commercial VPN services, many of which are offered by foreign-headquartered companies using servers located overseas. According to the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency, VPNs have the potential to be vulnerable to surveillance by foreign adversaries. While Americans should be warned of these risks, they should also be told if these VPN services, which are advertised as a privacy protection, including by elements of the federal government, could, in fact, negatively impact their rights against U.S. government surveillance. To that end, we urge you to be more transparent with the American public about whether the use of VPNs can impact their privacy with regard to U.S. government surveillance, and clarify what, if anything, American consumers can do to ensure they receive the privacy protections they are entitled to under the law and Constitution.
I wouldn’t expect a response from ODNI. I mean, I wouldn’t expect one in any case, but I especially don’t expect Tulsi Gabbard to respond to a letter sent by a handful of Democratic Party members.
A warning would be nice, but even an Intelligence Community overseen by competent professionals, rather than loyalists and Fox News commentators, would be hard-pressed to present a solution. To be fair, this letter isn’t asking for a fix, but rather telling the Director of National Intelligence to inform the public of the risks of VPN usage, including increasing their odds of being swept up in NSA dragnets.
Certainly the NSA isn’t concerned about “incidental collection.” It’s never been too concerned about its consistent “incidental” collection of US persons’ communications and data in the past, and this isn’t going to move the needle, especially since addressing it would mean the NSA has to do more work to filter out domestic communications, and the FBI would be less than thrilled with any effort to deny it access to communications it doesn’t have the legal right to obtain on its own.
Since the government won’t do this, it’s up to the general public, starting with everyone sharing the contents of this letter with others. VPNs can still offer considerable security benefits. But everyone needs to know that domestic surveillance is one of the possible side effects of utilizing this tech.
A quick refresher: there was originally $42.5 billion in broadband grants headed to the states thanks to the 2021 infrastructure bill most Republicans voted against (yet routinely try to take credit for among their constituents).
Money given to Bezos and Musk is money not spent on better, faster, local fiber optics (especially popular community-owned networks). A serious broadband policy would ensure that open access fiber is the priority, followed by wireless, with satellite filling the gaps. Satellite was never intended to be the primary delivery mechanism for broadband because of obvious congestion and capacity constraints.
The Trump NTIA is doing all of this under the pretense that giving taxpayer money to billionaires (for satellite service they already planned to deploy) instead of spending it on high quality fiber is “saving taxpayers money.” That’s generally resulted in widespread delays for this BEAD (Broadband Equity, Access, and Deployment) program, despite Republicans spending much of last election season complaining this program was taking too long.
The Trump NTIA hijacking of the program has also created a $21 billion pool of “non-deployment funds” made up of the fake savings Republicans claim they created by screwing up the program. There’s a looming fight emerging over what happens to that money. The infrastructure law, as passed by Congress, specifically states that this money is supposed to be dedicated to expanding broadband access.
States would obviously like to use this money either for broadband or for local infrastructure. But you get the sense that this giant wad of cash is very tempting for the Trump administration to just hijack and use as an unaccountable slush fund, doled out to its most loyal red state allies (or just kept by the “Treasury”).
After delays and excuses extending into last summer, the Trump NTIA was supposed to provide guidance to states earlier this month on how this money could be used, but has been a no-show:
“Under pressure from senators at an appropriations hearing, Commerce Secretary Howard Lutnick last month sought to calm fears when he said that so-called “non-deployment” funds under the Broadband Equity, Access and Deployment, or BEAD, program would not be rescinded.
But with no guidance so far from the department’s National Telecommunications and Information Administration, which was expected but delayed this week, lawmakers and others are pushing to have their voice heard on exactly how states will be able to use the $21 billion pot of money.”
It’s not clear if states can trust the word of Lutnick (who’s been a little distracted by Epstein allegations). The Trump administration has threatened (quite illegally) to withhold BEAD funding entirely from states that attempt to stand up to telecom monopolies or insist that taxpayer-funded broadband is affordable. There were also several initiatives to withhold BEAD funds if states tried to regulate AI.
Unsurprisingly, many states are afraid to be honest in the press about what a cock-up this whole hijacking has been, for fear of losing billions in potential (and already technically awarded) funding.
There’s a real potential here that taxpayer money originally earmarked for future-proof, ultra-fast fiber networks is going to be repurposed into a general free-for-all slush fund that gets redirected to whoever praises the Trump administration the most. And I wouldn’t be surprised if this ultimately results in state lawsuits against the federal government for redirecting funds.
“I think the state officials who think they’re going to be made whole, need to reread the Merchant of Venice, because [NTIA boss Arielle] Roth is coming for her pound of flesh,” Sascha Meinrath, Palmer Chair in Telecommunications at Penn State University, told me in an email. “I wouldn’t be at all surprised if it’s operationalized in a way to directly target or disadvantage blue states — whether in what it does, or what’s tied to the acceptance of the funding.”
One last side note: last election season the “abundance” folks like Ezra Klein spent ample time parroting GOP criticism of the admitted delays and problems with this BEAD program (ignoring why the program took so long, as well as other examples of similar broadband grant programs from the same year doing well) as an example of Democratic bureaucratic dysfunction.
But I’ve noticed that since Trump hijacked the program, introduced massive delays, redirected billions to billionaires, and even tried to run off with half the funding, Klein hasn’t revisited the subject. More generally (since infrastructure just doesn’t get those clicks), press coverage of this whole mess has ranged from nonexistent to positively tepid.
Last summer, a group of officials from the Department of Energy gathered at the Idaho National Laboratory, a sprawling 890-square-mile complex in the eastern desert of Idaho where the U.S. government built its first rudimentary nuclear power plant in 1951 and continues to test cutting-edge technology.
On the agenda that day: the future of nuclear energy in the Trump era. The meeting was convened by 31-year-old lawyer Seth Cohen. Just five years out of law school, Cohen brought no significant experience in nuclear law or policy; he had just entered government through Elon Musk’s Department of Government Efficiency team.
As Cohen led the group through a technical conversation about licensing nuclear reactor designs, he repeatedly downplayed health and safety concerns. When staff brought up the topic of radiation exposure from nuclear test sites, Cohen broke in.
“They are testing in Utah. … I don’t know, like 70 people live there,” he said.
“But … there’s lots of babies,” one staffer pushed back. Babies, pregnant women and other vulnerable groups are thought to be potentially more susceptible to cancers brought on by low-level radiation exposure, and they are usually afforded greater protections.
“They’ve been downwind before,” another staffer joked.
“This is why we don’t use AI transcription in meetings,” another added.
ProPublica reviewed records of that meeting, providing a rare look at a dramatic shift underway in one of the most sensitive domains of public policy. The Trump administration is upending the way nuclear energy is regulated, driven by a desire to dramatically increase the amount of energy available to power artificial intelligence.
Career experts have been forced out and thousands of pages of regulations are being rewritten at a sprint. A new generation of nuclear energy companies — flush with Silicon Valley cash and boasting strong political connections — wield increasing influence over policy. Figures like Cohen are forcing a “move fast and break things” Silicon Valley ethos on one of the country’s most important regulators.
The Trump administration has been particularly aggressive in its attacks on the Nuclear Regulatory Commission, the bipartisan independent regulator that approves commercial nuclear power plants and monitors their safety. The agency is not a household name. But it’s considered the international gold standard, often influencing safety rules around the world.
The NRC has critics, especially in Silicon Valley, where the often-cautious commission is portrayed as an impediment to innovation. In an early salvo, President Donald Trump fired NRC Commissioner Christopher Hanson last June after Hanson spoke out about the importance of agency independence. It was the first time an NRC commissioner had been fired.
During that Idaho meeting, Cohen shot down any notion of NRC independence in the new era.
“Assume the NRC is going to do whatever we tell the NRC to do,” he said, records reviewed by ProPublica show. In November, Cohen was made chief counsel for nuclear policy at the Department of Energy, where he oversees a broad nuclear portfolio.
The aggressive moves have sent shock waves through the nuclear energy world. Many longtime promoters of the industry say they worry recklessness from the Trump administration could discredit responsible nuclear energy initiatives.
“The regulator is no longer an independent regulator — we do not know whose interests it is serving,” warned Allison Macfarlane, who served as NRC chair during the Obama administration. “The safety culture is under threat.”
A ProPublica analysis of staffing data from the NRC and the Office of Personnel Management shows a rush to the exits: Over 400 people have left the agency since Trump took office. The losses are particularly pronounced in the teams that handle reactor and nuclear materials safety and among veteran staffers with 10 or more years of experience. Meanwhile, hiring of new staff has proceeded at a snail’s pace, with nearly 60 new arrivals in the first year of the Trump administration compared with nearly 350 in the last year of the Biden administration.
Some nuclear power supporters say the administration is providing a needed level of urgency given the energy demands of AI. They also contend the sweeping changes underway aren’t as dangerous or dire as some experts suggest.
“I think the NRC has been frozen in time,” said Brett Rampal, the senior director of nuclear and power strategy at the investment and strategy consultancy Veriten. “It’s a great time to get unfrozen and aim to work quickly.”
The White House referred most of ProPublica’s questions to the Department of Energy, where spokesperson Olivia Tinari said the agency is committed to helping build more safe, high-quality nuclear energy facilities.
“Thanks to President Trump’s leadership, America’s nuclear industry is entering a new era that will provide reliable, abundant power for generations to come,” she wrote. The DOE is “committed to the highest standards of safety for American workers and communities.”
Cohen did not respond to multiple requests for comment. The NRC declined to comment.
Blindsided by DOGE
The U.S. has not had a serious nuclear incident since the Three Mile Island partial meltdown in 1979, a track record many experts attribute to a rigorous regulatory environment and an intense safety culture.
Major nuclear incidents around the world have only strengthened the resolve of past regulators to stay independent from industry and from political winds. A chief cause of Japan’s Fukushima accident, investigators found, was the cozy relationship between the country’s industry and oversight body, which opened the door for thin safety assessments and inaccurate projections overlooking the possible impact of a major tsunami.
“We knew regulatory capture led directly to Fukushima and to Chernobyl,” said Kathryn Huff, who was assistant secretary for the Office of Nuclear Energy during the Biden administration.
The U.S. has barely built any nuclear power plants in recent decades. Only three new reactors have been completed in the last 25 years, and since 1990 the U.S has barely added any net new nuclear electricity to its grid. Though about 20% of U.S. energy is supplied by nuclear power plants, the fleet is aging. Some experts blame the slow build-out on the challenging economics of financing a multibillion-dollar project and the uncertainty of accessing and disposing of nuclear fuels.
But an increasingly vocal group of industry voices and deregulation advocates have blamed the slow build-out on overly cautious and inefficient regulators. Among the most powerful exponents of this view are billionaires Peter Thiel and Marc Andreessen; both venture capitalists have their own investments in the nuclear energy sector and are influential Trump supporters.
Andreessen camped out at Mar-a-Lago, Trump’s private club in Florida, after Trump won the 2024 election, helping pick staff for the new administration. In late 2024, Thiel personally vetted at least one candidate for the Office of Nuclear Energy, according to people familiar with the conversations. Neither responded to requests for comment.
Four months into his second term, Trump signed a series of executive orders designed to supercharge nuclear power build-out. “It’s a hot industry, it’s a brilliant industry,” said Trump, flanked by nuclear energy CEOs in the Oval Office. He added: “And it’s become very safe.”
Under those orders, the NRC was directed to reduce its workforce, speed up the timeline for approving nuclear reactors and rewrite many of its safety rules. The DOE — which has a vast nuclear portfolio, including waste cleanup sites and government research labs — was tasked with creating a pathway for so-called advanced nuclear companies to test their designs.
The goal, Trump said, was to quadruple nuclear energy output and provide new power to the data centers behind the AI boom.
As DOGE gutted agencies, departures mounted in the nuclear sector. Career experts in nuclear regulations and safety departed or were forced out. When Trump fired Hanson, a Democratic NRC commissioner, the president’s team explained the move by saying, “All organizations are more effective when leaders are rowing in the same direction.”
In an unsigned email to ProPublica, the White House press office wrote: “All commissioners are presidential appointees and can be fired just like any other appointee.”
In August, the NRC’s top attorney resigned and was replaced by oil and gas lawyer David Taggart, who had been working on DOGE cuts at the DOE. In all, the nuclear office at the DOE had lost about a third of its staff, according to a January 2026 count by the Federation of American Scientists, a nonprofit focused on science and technology policy.
That summer, Cohen and a team of DOGE operatives touched down at the NRC offices, a series of nondescript towers across from a Dunkin’ in suburban Maryland. He was joined by Adam Blake, an investor who had recently founded an AI medical startup and has a background in real estate and solar energy, and Ankur Bansal, president of a company that created software for real estate agents. Neither would comment for this story.
Many career officials who spoke with ProPublica were blindsided: The new Trump officials at the NRC seemed to have no experience with the intricacies of nuclear energy policy or law, they said. One NRC lawyer who briefed some of the new arrivals decided to resign. “They were talking about quickly approving all these new reactors, and they didn’t seem to care that much about the rules — they wanted to carry out the wishes of the White House,” the official said.
At one point, Cohen began passing out hats from nuclear energy startup Valar Atomics, one of the companies vying to build a new reactor, according to sources familiar with the matter and records seen by ProPublica. NRC staffers balked; they were supposed to monitor companies like Valar for safety violations, not wear its swag.
NRC ethics officials warned Cohen that the hat handout was a likely violation of conflict rules. It betrayed a misunderstanding of the safety regulator’s role, said a former official familiar with the exchange. “Imagine you live near a nuclear power plant, and you find out a supposedly independent safety regulator — the watchdog — is going around wearing the power plant’s branded hats,” the official said. “Would that make you feel safe?” The NRC and Cohen did not respond to requests for comment about the hat incident.
Valar counts Trump’s Silicon Valley allies as angel investors. They include Palmer Luckey, a technology executive and founder of the defense contractor Anduril, and Shyam Sankar, chief technology officer of Palantir, the software company helping power Immigration and Customs Enforcement’s deportation raids.
It was among three nuclear reactor companies that sued the NRC last year in an attempt to strip it of its authority to regulate its reactors and replace it with a state-level regulator. Before the Trump administration came into office, lawyers watching the case were confident the courts would quickly dismiss the suit, as the NRC’s authority to regulate reactors is widely acknowledged. But new Trump appointees pushed for a compromise settlement — which is still being negotiated. The career NRC lawyer working on the case quietly left the agency.
Valar and its executives did not reply to requests for comment.
“Going So Fast”
The deregulatory push is the culmination of mounting pressure — both political and economic — to make it easier to build nuclear power in the U.S. Over the years, a bipartisan coalition supporting nuclear expansion brought together environmentalists who favor zero-carbon power and defense hawks focused on abundant domestic energy production.
Anti-nuclear activists still argue that renewable energy sources like wind and solar are safer and more economical. But streamlining the NRC has been a bipartisan priority as well. The latest major reform came in 2024, when President Joe Biden signed into law the ADVANCE Act, which went as far as changing the mission statement of the NRC to ensure it “does not unnecessarily limit” nuclear energy development.
Some nuclear power supporters say the Trump administration is merely accelerating these changes. They cite instances in which the current regulations appear out of sync with the times. The NRC’s byzantine rules are designed for so-called large light-water reactors — massive facilities that can power entire cities — and not the increasingly in vogue smaller advanced reactor designs popular among Silicon Valley-backed firms.
Rules that require fences of certain heights might make little sense for new reactors buried in the earth; and rules that require a certain number of operators per reactor could be a bad fit for a cluster of smaller reactors with modern controls. Advances in sensors, modeling and safety technologies, they say, should be taken into account across the board.
The NRC has said it expects over two dozen new license requests from small modular and advanced reactor companies in coming years. Many of those requests are likely to come from new, Silicon Valley-based nuclear firms.
“There was a missing link in the innovation cycle, and it was very difficult to build something and test it in the U.S. because of mostly licensing and site availability constraints in the past,” said Adam Stein of the pro-nuclear nonprofit Breakthrough Institute.
The regulatory changes are in flux: This spring, the NRC is starting to release thousands of pages of new rules governing everything from the safety and emergency preparedness plans reactor companies are required to submit to the procedures for objecting to a reactor license.
“It’s hard to know if they are getting rid of unnecessary processes or if it’s actually reducing public safety,” said one official working on reactor licensing, who, like others, spoke on the condition of anonymity for fear of retaliation from the Trump administration. “And that’s just the problem with going so fast — everything just kind of gets lost in a mush.”
Lawyers from the Executive Office of the President have been sent to the NRC to keep an eye on the new rules, a move that further raised alarms about the agency’s independence.
Nicholas Gallagher — a relatively recent New York University law school graduate and conservative writer whom ProPublica previously identified as a DOGE operative at the General Services Administration — has been involved in conversations about overhauling environmental rules.
He’s working alongside Sydney Volanski, a 30-year-old recent law school graduate who rose to national attention while she was in high school for her campaign against the Girl Scouts of America, which she accused of promoting “Marxists, socialists and advocates of same-sex lifestyle.”
NRC lawyers working on the rules were told last October that Gallagher and Volanski would be joining them, and they both appear on the regular NRC rulemaking calendar invite.
The White House maintains, however, that “zero lawyers from the Executive Office of the President have been dispatched to work on rulemaking.” Neither Gallagher nor Volanski replied to requests for comment.
The administration is routing the new rules through an office overseen by Trump’s cost-cutting guru Russell Vought, a move that was previously unheard of for an independent regulator like the NRC. The White House spokesperson noted that, under a recent executive order, this process is now required for all agencies.
Political operatives have been “inserted into the senior leadership team to the point where they could significantly influence decision-making,” said Scott Morris, who worked at the NRC for more than 32 years, most recently as the No. 2 career operations official. “I just think that would be a dangerous proposition.”
Morris voted for Trump twice and broadly supports the goals of deregulating and expanding nuclear energy, but he has begun speaking out against the administration’s interference at the NRC. He retired in May 2025 as part of a wave of retirements and firings.
At a recent hearing before the Atomic Safety and Licensing Board — an independent body that helps adjudicate nuclear licensing — NRC lawyers withdrew from the proceedings, citing “limited resources.” The judge remarked that it was the first time in over 20 years the NRC had done so.
Meanwhile, some staff members, other career officials say, are afraid to voice dissenting views for fear of being fired. “It feels like being a lobster in a slowly boiling pot,” one NRC official who has been working on the rule changes told ProPublica, describing the erosion of independence.
The official was one of three who compared their recent experience at NRC to being in a pot of slowly boiling water. “If somebody is raising something that they think that the industry or the White House would have a problem with, they think twice,” the official said.
Inside the NRC, the steering committee overseeing the changes includes Cohen, Taggart and Mike King, a career NRC official who is the newly installed executive director for operations. The former director, Mirela Gavrilas, a 21-year veteran of the agency, retired after getting boxed out of decision-making, according to a person familiar with her departure. Gavrilas did not respond to a request for comment.
Any final changes will be approved by the NRC’s five commissioners, three of whom are Republicans. In September, the two Democratic commissioners told a Senate committee they might be fired at any time if they get crosswise with Trump — including over revisions to safety rules.
Draft rules being circulated inside the NRC propose drastic rollbacks of security and safety inspections at nuclear facilities. Those include a proposed 56% cut in emergency preparedness inspection time, CNN reported in March.
Even some pro-nuclear groups are troubled by the emerging order. Some have tried to backchannel to their contacts in the Trump administration to explain the importance of an independent regulator to help maintain public support for nuclear power. Without it, they risk losing credibility.
“You have to make sure you don’t throw out the baby with the bathwater,” said Judi Greenwald, president and CEO of the Nuclear Innovation Alliance, a nonprofit that promotes nuclear energy and supports many of the regulatory changes being proposed by the Trump administration.
Greenwald’s group favors faster timelines for approving nuclear reactors, but she worries that the agency’s fundamental independence has been undermined. “We would prefer that they yield back more of NRC independence,” she said.
“Nuke Bros” in Silicon Valley
One Trump administration priority has been making it easier for so-called advanced reactor companies to navigate the regulatory process. These firms, mostly backed by Silicon Valley tech and venture money, are often working on designs for much smaller reactors that they hope to mass produce in factories.
“There are two nuclear industries,” said Macfarlane, the former NRC chair. “There are the actual people who use nuclear reactors to produce power and put it on the grid … and then there are the ‘nuke bros’” in Silicon Valley.
Trump’s Silicon Valley allies have loomed large over his nuclear policy. One prospective political appointee for a top DOE nuclear job got a Christmas Eve call from Thiel, the rare Silicon Valley leader to back Trump in 2016. Thiel, whose Founders Fund invested in a nuclear fuel startup and an advanced reactor company, quizzed the would-be official about deregulation and how to rapidly build more nuclear energy capacity, said sources familiar with the conversation.
Nuclear energy startups jockeyed to spend time at Mar-a-Lago in the months before the start of Trump’s second term. Balerion Space Ventures, a venture capital firm that has invested in multiple companies, convened an investor summit there in January 2025, according to an invitation viewed by ProPublica. Balerion did not reply to a request for comment.
A few months later, when Trump was drawing up the executive orders, leaders at many of those nuclear companies were given advanced access to drafts of the text — and the opportunity to provide suggested edits, documents viewed by ProPublica show.
Those orders created a new program to test out experimental reactor designs, addressing a common complaint that companies are not given opportunities to experiment. There are currently about a dozen advanced reactor companies planning to participate. Each has a concierge team within the DOE to help navigate bureaucracy. As NPR reported in January, the DOE quietly overhauled a series of safety rules that would apply to these new reactors and shared the new regulations with these companies before making them public.
Secretary of Energy Chris Wright — who served on the board of one of those companies, Oklo — has said fast nuclear build-out is a priority: “We are moving as quickly as we can to permit, build and enable the rapid construction of as much nuke capacity as possible,” he told CNBC last fall. Oklo noted that Wright stepped down from the board when he was confirmed.
The Trump administration hopes some of the companies would have their reactors “go critical” — a key first step on the way to building a functioning power plant — by July 2026. Then the NRC, which signs off on the safety designs of commercial nuclear power plants, could be expected to quickly OK these new reactors to get to market.
According to people familiar with the conversations, at least one nuclear energy startup CEO personally recruited potential members of the DOGE nuclear team, though it’s not clear if Cohen was brought aboard this way. Cohen has told colleagues and industry contacts that he reports to Emily Underwood, one of Trump adviser Stephen Miller’s top aides for economic policy. He is perceived inside government as a key avatar of the White House’s nuclear agenda.
In its email to ProPublica, the White House said, “Seth Cohen is a Department of Energy employee and does not report to Emily Underwood or Stephen Miller in any capacity.”
The DOE spokesperson added, “Seth’s role at the Department of Energy is to support the Trump administration’s mission to unleash American Energy Dominance.”
Cohen has been pushing to raise the legal limit of radiation that nuclear energy companies are allowed to emit from their facilities. One nuclear industry insider, who spoke on the condition of anonymity, said many firms are fixating on changing these radiation rules: Their business model requires moving nuclear reactors around the country, often near workers or the general public.
Building thick shielding walls can be prohibitively expensive, they said.
Valar CEO Isaiah Taylor has called limits on exposure to radiation a top barrier to industry growth. A recent DOE memo seen by ProPublica cites cost savings on shielding for Valar’s reactor to justify changing those limits. “Shielding-related cost reductions,” the memo said, “could range from $1-2 million per reactor.” The debate over the precise rule change is ongoing.
The DOE has been considering a fivefold increase to the limit for public exposure to radiation, a change that would allow some nuclear reactor companies to cut costs on these expensive safety shields, internal DOE documents seen by ProPublica show.
A presentation prepared by DOE staffers in their Idaho offices that has circulated inside the department makes the “business case” for changing the radiation dose rules: It could cut the cost of some new reactors by as much as 5%. These more relaxed standards are likely to be adopted by the NRC and apply to reactors nationwide, documents show.
In February, Wright accompanied Valar’s executive team on a first-of-its-kind flight, as a U.S. military plane was conscripted to fly the company’s reactor from Los Angeles to Utah. Valar does not yet have a working nuclear reactor, and a number of industry sources told ProPublica they viewed the airlift as a PR exercise. Internal government memos justified the airlift by designating it as “critical” to the U.S. “national security interests.”
Cohen posted smiling pictures of himself from the cargo bay of the military plane.
Cohen told an audience at the American Nuclear Society that the rapid build-out was essential to powering Silicon Valley’s AI data centers. He framed the policy in existential terms: “I can’t emphasize this strongly enough that losing the AI war is an outcome akin to the Nazis developing the bomb before the United States.”
While deliberating these rule changes, the DOE has cut out its internal team of health experts who work on radiation safety at the Office of Environment, Health, Safety and Security, said sources familiar with the decision. The advice of outside experts on radiation protection has been largely cast aside.
The DOE spokesperson said its radiation standards “are aligned with Gold Standard Science … with a focus on protecting people and the environment while avoiding unnecessary bureaucracy.”
The department has already decided to abandon the long-standing radiation protection principle known as “ALARA” — the “As Low As Reasonably Achievable” standard — which directs anyone dealing with radioactive materials to minimize exposure.
It often pushes exposure well below legal thresholds. Many experts agreed that the ALARA principle was sometimes applied too strictly, but the move to entirely throw it out was opposed by many prominent radiation health experts.
Whether the agencies will actually change the legal thresholds for radiation exposure is an open question, said sources familiar with the deliberations.
Internal DOE documents arguing for changing dose rules cite a report produced at the Idaho National Laboratory, which was compiled with the help of the AI assistant Claude. “It’s really strange,” said Kathryn Higley, president of the National Council on Radiation Protection and Measurements, a congressionally chartered group studying radiation safety. “They fundamentally mistake the science.”
John Wagner, the head of the Idaho National Laboratory and the report’s lead author, acknowledged to ProPublica that the science over changing radiation exposure rules is hotly contested. “We recognize that respected experts interpret aspects of this literature differently,” he wrote. His analysis was not meant to be the final word, he said, but was “intended to inform debate.”
The impact of radiation levels at very low doses is hard to measure, so the U.S. has historically struck a cautious note. Raising dose limits could put the U.S. out of step with international standards.
For his part, Cohen has told the nuclear industry that he sees his job as making sure the government “is no longer a barrier” to them.
In June, he shot down the notion of companies putting money into a fund for workplace accidents. “Put yourself in the shoes of one of these startups,” he said. “They’re raising hundreds of millions of dollars to do this. And then they would have to go to their VCs and their board and say, listen, guys, we actually need a few hundred million dollars more to put into a trust fund?”
He also suggested that regulators should not fret about preparing for so-called 100-year events — disasters with roughly a 1% chance of occurring in any given year but potentially catastrophic consequences for nuclear facilities.
“When SpaceX started building rockets, they sort of expected the first ones to blow up,” he said.
There is a familiar media failure in which opposing viewpoints are presented as equally valid, even when the evidence overwhelmingly supports one side. It’s called Bothsidesism. This false balance phenomenon legitimizes misinformation and undermines public understanding by giving disproportionate weight to baseless claims.
Why bring this up? Because the new AI Doc film is based on it.
Once you understand that false equivalence is baked into the film’s storytelling, you understand how misleading and manipulative the documentary is. And it is compounded by a series of falsehoods that go unchallenged and uncorrected.
This review addresses both failures.
The “AI Doc” Movie
“The AI Doc: Or How I Became an Apocaloptimist,” co-directed by Daniel Roher and Charlie Tyrell, sets out to explore AI, especially its potential for good and bad, with a strong emphasis on the filmmakers’ anxieties and fears. Its basic premise is: “A father-to-be tries to figure out what is happening with all this AI insanity.” As summarized by Andrew Maynard from Future of Being Human:
“The documentary progresses through the eyes of director Daniel Roher as he faces a tsunami of existential AI angst while grappling with the responsibility of becoming a father. Motivated by a fear that artificial intelligence could spell the end of everything that matters, he sets out to interview some of the largest (and loudest) voices in AI to fathom out whether this is the best of times or worst of times for him and his wife (filmmaker Caroline Lindy) to bring a kid into the world.”
The “loudest voices” include many AI doomer figures, such as Eliezer Yudkowsky, Dan Hendrycks, Daniel Kokotajlo, Connor Leahy, Jeffrey Ladish, and two of the most populist voices on emerging tech (first social media and now AI): Tristan Harris and Yuval Noah Harari. The film also features voices on AI ethics, including David Evan Harris, Emily M. Bender, Timnit Gebru, Deborah Raji, and Karen Hao. On the more boosterish side, there are Peter Diamandis and Guillaume Verdon (AKA Beff Jezos). Three leading AI CEOs were also interviewed: OpenAI’s Sam Altman, DeepMind’s Demis Hassabis, and Anthropic’s Amodei siblings, Dario and Daniela. (Meta’s Mark Zuckerberg declined, and xAI’s Elon Musk agreed but never showed up).
The movie started playing in theaters on March 27, but there are already plenty of reviews (dating back to the Sundance Film Festival). The praise is fairly consistent: It is timely, wide-ranging, visually energetic, and unusually well-connected, with access to major AI figures.
The most common criticism is that it is too deferential to interviewees and too thin on hard interrogation or concrete answers. As several reviewers put it:
“Roher’s willingness to blindly accept any and all of his speakers’ pronouncements leaves The AI Doc feeling toothless.”
“By giving its doomer and accelerationist voices so much time to present AI’s most hyperbolic potential outcomes with little pushback, the documentary’s first half plays more like an overlong advertisement for the technology as opposed to a piece of measured analysis.”
Tristan Harris, co-founder of the Center for Humane Technology, told the AP: “My hope is that this film is kind of like ‘An Inconvenient Truth’ or ‘The Social Dilemma’ for AI.”
That is not reassuring. It is more like a glaring warning sign. Harris’s “Social Dilemma” and “AI Dilemma” movies were full of misinformation and nonsensical hyperbole, and both were designed to be manipulative and dishonest. If anything, his endorsement tells you exactly what kind of movie this is.
After watching the AI Doc, I realized what the doomers had managed to accomplish here: The film absorbs the panic rather than investigates it.
The False Balance of The AI Doc
The AI Doc starts with what one reviewer called a “Doom Parade.” It aims to set the tone.
“The worst AI predictions are presented first,” another reviewer noted. “Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, calmly talks of the ‘abrupt extermination’ of humanity.”
And it is worth remembering who Yudkowsky is and what he has actually advocated. In his notorious TIME op-ed, “Shut it All Down,” he argued that governments should “be willing to destroy a rogue datacenter by airstrike.” In his book “If Anyone Builds It, Everyone Dies,” which many reviewers found unconvincing and “unnecessarily dramatic sci-fi,” he (and his co-author Nate Soares) proposed that governments must bomb labs suspected of developing AI. Based on what exactly? On the authors’ overconfident, binary worldview and speculative scenarios, which they mistake for inevitability.
One review of that book observed, “The plan with If Anyone Builds It seems to be to sane-wash him [Yudkowsky] for the airport books crowd, sanding off his wild opinions.”
That is more or less what the new documentary does, too. The AI Doc sane-washes the loudest doomers for mainstream viewers, sanding off their wild opinions.
In his newsletter, David William Silva addresses the documentary’s “series of doomers,” who “describe AI-driven extinction with the calm confidence of people who have said these things so many times they have stopped noticing they have no evidence for them.”
“Roher’s reaction is full terror,” Silva adds. “I hope it is unequivocally evident that this is not journalism.”
That gets to the heart of it. The film pretends to weigh competing perspectives, but in practice, it grants disproportionate authority to people most invested in flooding the zone with AI panic. And there is a well-oiled machine behind this kind of AI panic. As Silva writes:
“The people behind the AI anxiety machine. […] They know that predicting human extinction by software is an extraordinary claim requiring extraordinary evidence. They know they don’t have it. They know ‘my kids won’t live to see middle age’ is nothing but performance. […] And they do it anyway. Why do you think that is? The calculation is simple. Some people will see through it, and they will be annoyed, write rebuttals, call it what it is. Ok, fine. Just an acceptable loss. The believers, on the other hand, are a market. As long as the ratio stays favorable, the machine is profitable.”
One of the biggest beneficiaries of this film is Harris.[1] He is framed as if he is in the middle between the two main camps (doomers and accelerationists), and his narrative gradually becomes the film’s narrative (similar to the Social Dilemma). His call to action even serves as the ending (with a QR code directing viewers to a designated website).
The problem is that this framing has very little to do with reality. Harris’s Center for Humane Technology got $500,000 from the Future of Life Institute for “AI-related policy work and messaging cohesion within the AI X-risk [existential risk] community.” That is not a neutral player.
There’s a touching scene in the film where Roher mentions his father’s cancer treatment and expresses hope that AI might help. Harris appears visibly emotional. But in other contexts, Harris has argued against looking at AI for help with cancer treatment… in the belief that it would lead to extinction. Here he is on Glenn Beck’s show in 2023:
“My mother died from cancer several years ago. And if you told me that we could have AI that was going to cure her of cancer, but on the other side of that coin was that all the world would go extinct a year later, because of the, the only way to develop that was to bring something, some Demon into the world that would we would not be able to control, as much as I love my mother, and I would want her to be here with me right now, I wouldn’t take that trade.”
That sort of hyperbole seems relevant to Harris’ stance on such things, but was not mentioned in the film at all.
Connor Leahy of Conjecture and ControlAI gets a similar makeover. In the documentary, he appears as another pessimistic expert. Elsewhere, he said he does not expect humanity “to make it out of this century alive; I’m not even sure we’ll get out of this decade!” His “Narrow Path” proposal for policymakers begins with the claim that “AI poses extinction risks to human existence.” Instead of calling for a six-month AI pause, he argued for a 20-year pause, because “two decades provide the minimum time frame to construct our defenses.”
This is exactly why background checks matter. Viewers of the AI Doc deserve to know the full scope of the more extreme positions these interviewees have publicly taken elsewhere. If someone has publicly argued for destroying data centers by airstrikes or stopping AI for 20 years, the audience should know that.
Debunking the Falsehoods
The film goes well beyond pushing panic. It also recycles several misleading or plainly false claims, letting them pass as established facts. Three stood out in particular.
Anthropic’s Blackmail study
One of the most repeated “facts” in reviews of the movie is that Anthropic’s AI model, Claude, decided, unprompted, to blackmail a fictional employee. In the film, Daniel Roher asks, “And nobody taught it to do that?” Jeffrey Ladish, of Palisade Research and Tristan’s Center for Humane Technology, replies: “No, it learned to do that on its own.”
That is a misleading characterization of the actual experiment; it has already been debunked in “AI Blackmail: Fact-Checking a Misleading Narrative.” Anthropic researchers admitted that they strongly pressured the model and iterated through hundreds of prompts before producing that outcome. It wasn’t a spontaneous emergence of “evil” behavior; the researchers deliberately engineered the scenario until blackmail became the default outcome. Telling viewers that the model has gone full “HAL 9000” omits the facts about the heavily engineered experimental setup.
Although this is a classic case of big claims and thin evidence, the film offers so little pushback that viewers are left to take Ladish’s statements at face value.
It is also worth remembering that Ladish has fought against open-source AI, pushed for a crackdown on open-source models, and once said, “We can prevent the release of a LLaMA 2! We need government action on this asap.” He later updated his position (and it’s good to revise such views). But does the film mention his earlier public hysteria? No.
Is AI less regulated than sandwich shops? No.
Connor Leahy tells Daniel Roher, “There is currently more regulation on selling a sandwich to the public” than there is on AI development. This talking point has become a favorite slogan in AI doomer circles. It was repeatedly stated by The Future of Life Institute’s Max Tegmark and, more recently, by Senator Bernie Sanders. It’s catchy. It’s also false.
State attorneys general from both parties have explicitly argued that existing laws already apply to AI. Lina Khan, writing on behalf of the Federal Trade Commission, stated that “AI is covered by existing laws. Each agency here today has legal authorities to readily combat AI-driven harm.” The existing AI regulatory stack already includes antitrust & competition regulation, civil rights & anti-discrimination law, consumer protection, data privacy & security, employment & labor law, financial regulation, insurance & accident compensation, property & contract law, among others.
So no, AI is not less regulated than sandwich shops. It’s a misleading soundbite, not a serious description of legal reality.
Data center water usage
In the film, Karen Hao criticizes data centers, warning that “People are literally at risk, potentially of running out of drinking water.” That sounds alarming, which is presumably the point. But it is highly misleading.
In fact, Karen Hao had to issue corrections to her “Empire of AI” book because a key water-use figure was off by a factor of 4,500. The discrepancy was not 45x or 450x, but rather 4,500x. That is not a rounding error. For detailed rebuttals, see Andy Masley’s “The AI water issue is fake” and “Empire of AI is widely misleading about AI water use.”
There is also a basic proportionality issue here. As The Washington Post reported, “The water used by data centers caused a stir in Arizona’s drought-prone Maricopa County. But while they used about 905 million gallons there last year, that’s a small fraction of the 29 billion gallons devoted to the county’s golf courses.” To put that plainly: data centers accounted for just 0.1% of the county’s water use.
It is also worth noting that “most of the water used by data centers returns to its source unchanged.” In closed-loop cooling systems, for example, water is recirculated multiple times, which significantly reduces net consumption.
None of this is hidden information. A basic fact-check by the filmmakers could have brought it to light. But that was not the film’s goal. They chose fear-based framing over actual reporting. They could have pressed interviewees on their track records, failed predictions, and political agendas. Instead, they let them narrate the stakes, unchallenged.
So, I think we can conclude that the AI Doc may want to appear balanced and thoughtful, but, unfortunately, too often it is not.
Final Remark
While Western filmmakers are busy platforming advocates for “bombing data centers” and “Stop AI for 20 years,” the Chinese Communist Party is building the actual infrastructure. The CCP is not making doom-and-gloom documentaries; it is racing ahead. This is a real strategic threat, and it is far more concerning than anything featured in this film.
—————————
Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and the author of the book “The TECHLASH and Tech Crisis Communication” and the “AI Panic” newsletter.
The producers of The AI Doc said in a conversation with Tristan Harris (on the Your Undivided Attention podcast) that after the ChatGPT moment, Harris approached them to discuss generative AI. They watched the AI Dilemma and, based on it, decided their next project would focus on AI.↩︎
Back in October, Meta announced that its new Instagram Teen Accounts would feature content moderation “guided by the PG-13 rating.” On its face, this made a certain kind of sense as a communication strategy: parents know what PG-13 means (or at least think they do), and Meta was clearly trying to borrow that cultural familiarity to signal that it was taking teen safety seriously.
The Motion Picture Association, however, was not amused. Within hours of the announcement, MPA Chairman Charles Rivkin fired off a statement. Then came a cease-and-desist letter. Then a Washington Post op-ed whining about the threat to its precious brand. The MPA was very protective of its trademark, and very unhappy that Meta was freeloading off the supposed credibility of its widely mocked rating system.
And now, this week, the two sides have announced a formal resolution in which Meta has agreed to “substantially reduce” its references to PG-13 and include a rather remarkable disclaimer:
“There are lots of differences between social media and movies. We didn’t work with the MPA when updating our content settings, and they’re not rating any content on Instagram, and they’re not endorsing or approving our content settings in any way. Rather, we drew inspiration from the MPA’s public guidelines, which are already familiar to parents. Our content moderation systems are not the same as a movie ratings board, so the experience may not be exactly the same.”
In Meta’s official response, you can practically hear the PR team gritting their teeth:
“We’re pleased to have reached an agreement with the MPA. By taking inspiration from a framework families know, our goal was to help parents better understand our teen content policies. We rigorously reviewed those policies against 13+ movie ratings criteria and parent feedback, updated them, and applied them to Teen Accounts by default. While that’s not changing, we’ve taken the MPA’s feedback on how we talk about that work. We’ll keep working to support parents and provide age-appropriate experiences for teens,” said a Meta spokesperson.
Translation: we’re still doing the same thing, we’re just no longer allowed to call it what we were calling it.
There are several layers of nonsense worth unpacking here. First, there’s the MPA getting all high and mighty about its rating system. Let’s remember how the MPA’s film rating system came into existence in the first place: it was a voluntary self-regulation scheme created in the late 1960s specifically to head off government regulation after the government started making noises about the harm Hollywood was doing to children with the content it platformed. Sound familiar? The studios decided that if they rated their own content, maybe Congress would leave them alone. As the MPA explains in their own boilerplate:
For nearly 60 years, the MPA’s Classification and Rating Administration’s (CARA) voluntary film rating system has helped American parents make informed decisions about what movies their children can watch… CARA does not rate user-generated content. CARA-rated films are professionally produced and reviewed under a human-centered system, while user-generated posts on platforms like Instagram are not subject to the same rating process.
Sure, there’s a trademark issue here, but let’s be real: no one thought Instagram was letting a panel of Hollywood parents rate the latest influencer videos.
Next, the PG-13 analogy never actually made much sense for social media. As we discussed on Ctrl-Alt-Speech back when this whole thing started, the context and scale are just completely different. At the time, I pointed out that a system designed to rate a 90-minute professionally produced film — reviewed in its entirety by a panel of parents — is a wholly different beast than moderating hundreds of millions of short-form posts generated by individuals (and AI) every single day.
So, yes, calling the system “PG-13” was a marketing gimmick, meant to trade on a familiar brand while obscuring how differently social media actually works — but the idea that this somehow dilutes the MPA’s marks is still pretty silly.
Then there’s the rating system’s well-documented arbitrariness. The MPA’s ratings have been criticized for decades for their seemingly incoherent standards. On that same podcast, I noted that the rating system is famous for its selective prudishness — nudity gets you an R rating, but two hours of violence can skate by with a PG-13.
There was a whole documentary about this — This Film Is Not Yet Rated — that exposed just how subjective and inconsistent the whole process was. Meta was effectively borrowing credibility from a system that was itself created as a regulatory dodge, is famously inconsistent, and was designed for an entirely different medium. And the MPA’s response was essentially: “Hey, that’s our famously inconsistent regulatory dodge, and you can’t have it.”
The whole thing was silly. And now it’s been formally resolved with Meta agreeing to stop doing the thing it had already mostly stopped doing back in December. So even the resolution is anticlimactic.
But there’s a more substantive point buried under all this trademark squabbling: the whole approach reflects a flawed assumption that one company can set a universal standard for every teen on the planet.
As I argued on the podcast, the deeper issue is that the whole framework is wrong for the medium. The MPA’s rating system was built to evaluate a single 90-minute film, reviewed in its entirety by a panel of parents. Applying that logic to hundreds of millions of short-form posts generated by people across wildly different cultural contexts — a kid in rural Kansas, a teenager in Berlin, a twelve-year-old in Lagos — was never going to produce anything coherent. Different kids, different families, different communities have different standards, and no single company should be setting a universal threshold for all of them. The smarter approach is giving parents and users real controls with customizable defaults, rather than having Zuckerberg (or a Hollywood trade association) decide what counts as age-appropriate for every teenager on the planet.
This whole dispute was silly from start to finish.
Opusonix is the workflow-first platform built for music producers and engineers who are tired of endless email chains and scattered files. By centralizing feedback, versions, and tasks in one structured workspace, it helps you cut email traffic by up to 90% so you can focus more on creating and less on chasing approvals. From time-coded comments and version testing to album planning and client-friendly demo pages, Opusonix gives you the tools to manage every mix, project, and album with clarity and speed. It’s on sale for $50.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Trump’s do-everything-all-at-once approach to immigration enforcement is starting to go off the rails. Trump’s plainly stated hatred of “shithole countries” and their inhabitants manifested in early wins for his bigoted “remove the brown people” programs. Then Stephen Miller (the man who answers the “what if a lightbulb had eyebrows and was also a white nationalist” question no one asked) showed up and amped things up. 3,000 arrests per day! he screamed into the void. (The void did not respond to our request for comment before press time.)
A lot of wrenches approached the anti-migrant works and immediately threw themselves into it. First, ICE didn’t have enough officers to staff a surge. No problem, said the administration. Here’s $50,000 and almost no training to get you started! Here’s several (more!) billion dollars to keep it going! Here’s everyone we actually can’t spare from multiple federal agencies!
Bang! Into the blue cities they went, kidnapping and murdering their way towards Miller’s arrest quota. All well and good, but at the end of the day, you’ve still got to have some lawyers left to fight the lawsuits these surges generated, as well as to handle challenges against detentions, removals, and direct flights to foreign torture prisons.
Well, the Trump administration no longer has enough lawyers left to do its dirty work. Whoever hasn’t been purged for not being loyal enough or exited ahead of the purges has been asked to clean up a mess with extremely limited amounts of resources and manpower. To make things worse, Trump’s handpicked prosecutors keep being kicked out of court because Trump bypassed the appointment process essential to them remaining employed.
Then there’s the self-inflicted reputational damage Trump’s DOJ has done. The government, for the most part, is no longer granted the presumption of good faith. Courts across the land are not only aware this government isn’t acting in good faith, but they’re refusing to pretend it is, no matter how much copy-pasted boilerplate appears in DOJ filings.
Hundreds of adverse rulings have already been handed down. Hundreds more are on the horizon, especially now that the DOJ has admitted pretty much every arrest that took place in an immigration court was illegal.
It all adds up to the long tail of “flooding the zone.” If you can’t bail water fast enough, you’re going to drown. Here’s how this is working out for the DOJ now, as reported by Kyle Cheney for Politico:
In dozens of cases over the past several weeks, Justice Department lawyers have declined to push back on detainees’ claims that they’re owed a chance to make a case for their release. In those cases, the administration has simply agreed to provide a bond hearing, or even outright release, telling judges that officials “do not have an opposition argument to present” or saying they couldn’t cobble together enough information to mount a defense.
[…]
The new phenomenon is the latest manifestation of the extraordinary strain that the administration’s mass deportation effort — compounded by the mass detention of people who have lived for years without incident in the U.S. interior — has exacted on the justice system.
While ICE bathes in newly awarded billions, the problems its efforts have created are being attended to by a skeleton crew that can’t keep up with Trump’s rights-violating fire hose. That’s created some pretty gaudy numbers, which certainly isn’t a compliment.
Federal judges have ruled more than 7,000 times in recent months that ICE has illegally locked people up without — at the very least — a chance to prove they can live safely in the community.
That’s a lot. This administration is setting judicial records that hopefully will never be broken. It’s not just the government losing cases on the merits. Many of these losses are the result of the DOJ simply being unable to respond at all to legal challenges by people ICE has arrested, detained, or deported.
If there’s a silver lining in this bigoted war on non-white people, it’s everything listed above. Trump’s administration may be evil and stupid in equal measures, but those aspects are being held in check by its inability or unwillingness to anticipate the natural side effects of sending wave after wave of masked goons into cities to kidnap anyone who looks a little bit foreign. The administration is a defective centrifuge that edges closer to disintegration with every rotation. What remains to be seen is who’s going to get hit with the majority of the shrapnel when it finally falls apart. We can only hope it’s the people that started it spinning in the first place.
Last election season, the Trump campaign lied to everyone repeatedly about how his second administration would “rein in big tech” and be a natural extension of the Lina Khan antitrust movement. As we noted at the time, that was always an obvious fake populist lie, but it was propped up anyway by a lazy U.S. press and a long line of useful idiots (including some purported “antitrust experts”).
The Wall Street Journal last week published an interesting new story about that last bit. Specifically, it’s about how Mike Davis, a radical Trump loyalist and corporate lobbyist, found it relatively trivial to oust the small handful of actual antitrust reformers embedded within the MAGA coalition who occasionally cared about the public interest (Gail Slater and Mark Hamer):
“A Journal investigation found that Davis pushed antitrust officials at the Justice Department to approve his deals—and he went over their heads when they wouldn’t comply, according to interviews with more than three dozen DOJ employees, lobbyists, lawyers and others familiar with the antitrust division.”
Davis, who opportunistically pivoted to pseudo-big-tech criticism after being refused a job in the industry, is a transactional bully who was very excited about Trump’s plan to put minority children in cages last election season. He’s also, according to the Journal, been pivotal in elbowing out any remaining real antitrust enforcers to help Trump operate an even more “pay to play” government:
“Davis, despite having little experience practicing antitrust law, is one of the most visible practitioners of a change playing out across the division. Current and former antitrust officials said some mergers now get approval or draw mild settlements based on political ties rather than public interest. The new dynamic casts a shadow over the Justice Department’s integrity, they said, and has alarmed even some Trump loyalists in the department.”
And this is the Rupert Murdoch-owned Wall Street Journal, not exactly a bastion of progressive left-wing thought. In Davis’ head, he’s not easily exploiting the comical levels of corruption in the Trump White House; he’s just exceptional, according to comments he made to the Journal:
“I’m the best fixer in Washington, period. Full stop,” said the 48-year-old Iowan. “I know the people. I know the process. I know their pressure points. I know how to win.”
That Trump 2.0 was going to be a corrupt shitshow, and that the movement’s fake dedication to “reining in big tech” and “antitrust reform” would be completely hollow, was one of the easier election season predictions I’d ever had to make. It should have been abundantly obvious to the ostensible fans of antitrust still peppered within the administration.
Even these “antitrust enforcers” within MAGA weren’t what you’d call remotely consistent when it came to reining in corporate power. And while the Journal sort of romanticizes the first Trump term for “having guardrails,” it too was full of all manner of mindless rubber-stamping of harmful deals that eroded competition and drove up costs (like the Sprint/T-Mobile merger).
Yet, again, there was no shortage of press outlets (and supposed progressive antitrust experts like Matt Stoller) that spent much of last election season insisting that while Trump 2.0 might be problematic, it would feature ample populist checks on corporate power. You were supposed to believe a sizeable chunk of the GOP had suddenly and uncharacteristically seen the light on antitrust reform.
Building meaningful and productive alliances with authoritarians is like trying to cultivate an intimate relationship with a running chainsaw. And the act of treating them as serious actors on antitrust reform (something Stoller and the press broadly did, repeatedly, with everyone from JD Vance to Josh Hawley) gave them press and policy credibility they never had to earn.
MAGA leadership is largely composed of transactional bullies whose primary interest is wealth accumulation and power. Everything else, whether it’s MAHA, the administration’s purported antiwar stance, or its love of “antitrust reform,” was an obvious populist lie, designed to convince a broadly befuddled electorate that a dim, violent, and corrupt autocracy would be good for them.
In late 2024, the federal government’s cybersecurity evaluators rendered a troubling verdict on one of Microsoft’s biggest cloud computing offerings.
The tech giant’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture,” according to an internal government report reviewed by ProPublica.
Or, as one member of the team put it: “The package is a pile of shit.”
For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn’t vouch for the technology’s security.
Such judgments would be damning for any company seeking to sell its wares to the U.S. government, but it should have been particularly devastating for Microsoft. The tech giant’s products had been at the heart of two major cybersecurity attacks against the U.S. in three years. In one, Russian hackers exploited a weakness to steal sensitive data from a number of federal agencies, including the National Nuclear Security Administration. In the other, Chinese hackers infiltrated the email accounts of a Cabinet member and other senior government officials.
The federal government could be further exposed if it couldn’t verify the cybersecurity of Microsoft’s Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation’s most sensitive information.
Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government’s cybersecurity seal of approval. FedRAMP’s ruling — which included a kind of “buyer beware” notice to any federal agency considering GCC High — helped Microsoft expand a government business empire worth billions of dollars.
“BOOM SHAKA LAKA,” Richard Wakeman, one of the company’s chief security architects, boasted in an online forum, celebrating the milestone with a meme of Leonardo DiCaprio in “The Wolf of Wall Street.” Wakeman did not respond to requests for comment.
It was not the type of outcome that federal policymakers envisioned a decade and a half ago when they embraced the cloud revolution and created FedRAMP to help safeguard the government’s cybersecurity. The program’s layers of review, which included an assessment by outside experts, were supposed to ensure that service providers like Microsoft could be entrusted with the government’s secrets. But ProPublica’s investigation — drawn from internal FedRAMP memos, logs, emails, meeting minutes, and interviews with seven former and current government employees and contractors — found breakdowns at every juncture of that process. It also found a remarkable deference to Microsoft, even as the company’s products and practices were central to two of the most damaging cyberattacks ever carried out against the government.
FedRAMP first raised questions about GCC High’s security in 2020 and asked Microsoft to provide detailed diagrams explaining its encryption practices. But when the company produced what FedRAMP considered to be only partial information in fits and starts, program officials did not reject Microsoft’s application. Instead, they repeatedly pulled punches and allowed the review to drag out for the better part of five years. And because federal agencies were allowed to deploy the product during the review, GCC High spread across the government as well as the defense industry. By late 2024, FedRAMP reviewers concluded that they had little choice but to authorize the technology — not because their questions had been answered or their review was complete, but largely on the grounds that Microsoft’s product was already being used across Washington.
Today, key parts of the federal government, including the Justice and Energy departments, and the defense sector rely on this technology to protect highly sensitive information that, if leaked, “could be expected to have a severe or catastrophic adverse effect” on operations, assets and individuals, the government has said.
“This is not a happy story in terms of the security of the U.S.,” said Tony Sager, who spent more than three decades as a computer scientist at the National Security Agency and now is an executive at the nonprofit Center for Internet Security.
For years, the FedRAMP process has been equated with actual security, Sager said. ProPublica’s findings, he said, shatter that facade.
“This is not security,” he said. “This is security theater.”
ProPublica is exposing the government’s reservations about this popular product for the first time. We are also revealing Microsoft’s yearslong inability to provide the encryption documentation and evidence the federal reviewers sought.
The revelations come as the Justice Department ramps up scrutiny of the government’s technology contractors. In December, the department announced the indictment of a former employee of Accenture who allegedly misled federal agencies about the security of the company’s cloud platform and its compliance with FedRAMP’s standards. She has pleaded not guilty. Accenture, which was not charged with wrongdoing, has said that it “proactively brought this matter to the government’s attention” and that it is “dedicated to operating with the highest ethical standards.”
Microsoft has also faced questions about its disclosures to the government. As ProPublica reported last year, the company failed to inform the Defense Department about its use of China-based engineers to maintain the government’s cloud systems, despite Pentagon rules stipulating that “No Foreign persons may have” access to its most sensitive data. The department is investigating the practice, which officials say could have compromised national security.
Microsoft has defended its program as “tightly monitored and supplemented by layers of security mitigations,” but after ProPublica’s story published last July, the company announced that it would stop using China-based engineers for Defense Department work.
In response to written questions for this story and in an interview, Microsoft acknowledged the yearslong confrontation with FedRAMP but also said it provided “comprehensive documentation” throughout the review process and “remediated findings where possible.”
“We stand by our products and the comprehensive steps we’ve taken to ensure all FedRAMP-authorized products meet the security and compliance requirements necessary,” a spokesperson said in a statement, adding that the company would “continue to work with FedRAMP to continuously review and evaluate our services for continued compliance.”
The program was an early target of the Trump administration’s Department of Government Efficiency, which slashed its staff and budget. Even FedRAMP acknowledges it is operating “with an absolute minimum of support staff” and “limited customer service.” The roughly two dozen employees who remain are “entirely focused on” delivering authorizations at a record pace, FedRAMP’s director has said. Today, its annual budget is just $10 million, its lowest in a decade, even as it has boasted record numbers of new authorizations for cloud products.
The consequence of all this, people who have worked for FedRAMP told ProPublica, is that the program now is little more than a rubber stamp for industry. The implications of such a downsizing for federal cybersecurity are far-reaching, especially as the administration encourages agencies to adopt cloud-based artificial intelligence tools, which draw upon reams of sensitive information.
The General Services Administration, which houses FedRAMP, defended the program, saying it has undergone “significant reforms to strengthen governance” since GCC High arrived in 2020. “FedRAMP’s role is to assess if cloud services have provided sufficient information and materials to be adequate for agency use, and the program today operates with strengthened oversight and accountability mechanisms to do exactly that,” a GSA spokesperson said in an emailed statement.
The agency did not respond to written questions regarding GCC High.
A “Cloud First” World
About two decades ago, federal officials predicted that the cloud revolution, providing on-demand access to shared computing via the internet, would usher in an era of cheaper, more secure and more efficient information technology.
Moving to the cloud meant shifting away from on-premises servers owned and operated by the government to those in massive data centers maintained by tech companies. Some agency leaders were reluctant to relinquish control, while others couldn’t wait to.
In an effort to accelerate the transition, the Obama administration issued its “Cloud First” policy in 2011, requiring all agencies to implement cloud-based tools “whenever a secure, reliable, cost-effective” option existed. To facilitate adoption, the administration created FedRAMP, whose job was to ensure the security of those tools.
FedRAMP’s “do once, use many times” system was intended to streamline and strengthen the government procurement process. Previously, each agency using a cloud service vetted it separately, sometimes applying different interpretations of federal security requirements. Under the new program, agencies would be able to skip redundant security reviews because FedRAMP authorization indicated that the product had already met standardized requirements. Authorized products would be listed on a government website known as the FedRAMP Marketplace.
On paper, the program was an exercise in efficiency. But in practice, the small FedRAMP team could not keep up with the flood of demand from tech companies that wanted their products authorized.
The slow approval process frustrated both the tech industry, eager for a share in the billions of federal dollars up for grabs, and government agencies that were under pressure to migrate to the cloud. These dynamics sometimes pitted the cloud industry and agency officials together against FedRAMP. The backlog also prompted many agencies to take an alternative path: performing their own reviews of the products they wanted to adopt, using FedRAMP’s standards.
It was through this “agency path” that GCC High entered the federal bloodstream, with the Justice Department paving the way. Initially, some Justice officials were nervous about the cloud and who might have access to its information, which includes highly sensitive court and law enforcement records, a Justice Department official involved in the decision told ProPublica. The department’s cybersecurity program required it to ensure that only U.S. citizens “access or assist in the development, operation, management, or maintenance” of its IT systems, unless a waiver was granted. Justice’s IT specialists recommended pursuing GCC High, believing it could meet the elevated security needs, according to the official, who spoke on condition of anonymity because they were not authorized to discuss internal matters.
Pursuant to FedRAMP’s rules, Microsoft had GCC High evaluated by a so-called third-party assessment organization, which is supposed to provide an independent review of whether the product has met federal standards. The Justice Department then performed its own evaluation of GCC High using those standards and ruled the offering acceptable.
By early 2020, Melinda Rogers, Justice’s deputy chief information officer, made the decision official and soon deployed GCC High across the department.
It was a milestone for all involved. Rogers had ushered the Justice Department into the cloud, and Microsoft had gained a significant foothold in the cutthroat market for the federal government’s cloud computing business.
Moreover, Rogers’ decision placed GCC High on the FedRAMP Marketplace, the government’s influential online clearinghouse of all the cloud providers that are under review or already authorized. Its mere mention as “in process” was a boon for Microsoft, amounting to free advertising on a website used by organizations seeking to purchase cloud services bearing what is widely seen as the government’s cybersecurity seal of approval.
That April, GCC High landed at FedRAMP’s office for review, the final stop on its bureaucratic journey to full authorization.
Microsoft’s Missing Information
In theory, there shouldn’t have been much for FedRAMP’s team to do after the third-party assessor and Justice reviewed GCC High, because all parties were supposed to be following the same requirements.
But it was around this time that the Government Accountability Office, which investigates federal programs, discovered breakdowns in the process, finding that agency reviews sometimes were lacking in quality. Despite missing details, FedRAMP went on to authorize many of these packages. Acknowledging these shortcomings, FedRAMP began to take a harder look at new packages, a former reviewer said.
This was the environment in which Microsoft’s GCC High application entered the pipeline. The name GCC High was an umbrella covering many services and features within Office 365 that all needed to be reviewed. FedRAMP reviewers quickly noticed key material was missing.
The team homed in on what it viewed as a fundamental document called a “data flow diagram,” former members told ProPublica. The illustration is supposed to show how data travels from Point A to Point B — and, more importantly, how it’s protected as it hops from server to server. FedRAMP requires data to be encrypted while in transit to ensure that sensitive materials are protected even if they’re intercepted by hackers.
But when the FedRAMP team asked Microsoft to produce the diagrams showing how such encryption would happen for each service in GCC High, the company balked, saying the request was too challenging. So the reviewers suggested starting with just Exchange Online, the popular email platform.
“This was our litmus test to say, ‘This isn’t the only thing that’s required, but if you’re not doing this, we are not even close yet,’” said one reviewer who spoke on condition of anonymity because they were not authorized to discuss internal matters. Once they reached the appropriate level of detail, they would move from Exchange to other services within GCC High.
It was the kind of detail that other major cloud providers such as Amazon and Google routinely provided, members of the FedRAMP team told ProPublica. Yet Microsoft took months to respond. When it did, the former reviewer said, it submitted a white paper that discussed GCC High’s encryption strategy but left out the details of where on the journey data actually becomes encrypted and decrypted — so FedRAMP couldn’t assess that it was being done properly.
A Microsoft spokesperson acknowledged that the company had “articulated a challenge related to illustrating the volume of information being requested in diagram form” but “found alternate ways to share that information.”
Rogers, who was hired by Microsoft in 2025, declined to be interviewed. In response to emailed questions, the company provided a statement saying that she “stands by the rigorous evaluation that contributed to” her authorization of GCC High. A spokesperson said there was “absolutely no connection” between her hiring and the decisions in the GCC High process, and that she and the company complied with “all rules, regulations, and ethical standards.”
The Justice Department declined to respond to written questions from ProPublica.
A Fight Over “Spaghetti Pies”
As 2020 came to a close, a national security crisis hit Washington that underscored the consequences of cyber weakness. Russian state-sponsored hackers had been quietly working their way through federal computer systems for much of the year and vacuuming up sensitive data and emails from U.S. agencies — including the Justice Department.
At the time, most of the blame fell on a Texas-based company called SolarWinds, whose software provided hackers their initial opening and whose name became synonymous with the attack. But, as ProPublica has reported, the Russians leveraged that opening to exploit a long-standing weakness in a Microsoft product — one that the company had refused to fix for years, despite repeated warnings from one of its engineers. Microsoft has defended its decision not to address the flaw, saying that it received “multiple reviews” and that the company weighs a variety of factors when making security decisions.
In the aftermath, the Biden administration took steps to bolster the nation’s cybersecurity. Among them, the Justice Department announced a cyber-fraud initiative in 2021 to crack down on companies and individuals that “put U.S. information or systems at risk by knowingly providing deficient cybersecurity products or services, knowingly misrepresenting their cybersecurity practices or protocols, or knowingly violating obligations to monitor and report cybersecurity incidents and breaches.”
Deputy Attorney General Lisa Monaco said the department would use the False Claims Act to pursue government contractors “when they fail to follow required cybersecurity standards — because we know that puts all of us at risk.”
But if Microsoft felt any pressure from the SolarWinds attack or from the Justice Department’s announcement, it didn’t manifest in the FedRAMP talks, according to former members of the FedRAMP team.
The discourse between FedRAMP and Microsoft fell into a pattern. The parties would meet. Months would go by. Microsoft would return with a response that FedRAMP deemed incomplete or irrelevant. To bolster the chances of getting the information it wanted, the FedRAMP team provided Microsoft with a template, describing the level of detail it expected. But the diagrams Microsoft returned never met those expectations.
“We never got past Exchange,” one former reviewer said. “We never got that level of detail. We had no visibility inside.”
In an interview with ProPublica, John Bergin, the Microsoft official who became the government’s main contact, acknowledged the prolonged back-and-forth but blamed FedRAMP, equating its requests for diagrams to a “rock fetching exercise.”
“We were maybe incompetent in how we drew drawings because there was no standard to draw them to,” he said. “Did we not do it exactly how they wanted? Absolutely. There was always something missing because there was no standard.”
A Microsoft spokesperson said without such a standard, “cloud providers were left to interpret the level of abstraction and representation on their own,” creating “inconsistency and confusion, not an unwillingness to be transparent.”
But even Microsoft’s own engineers had struggled over the years to map the architecture of its products, according to two people involved in building cloud services used by federal customers. At issue, according to people familiar with Microsoft’s technology, was the decades-old code of its legacy software, which the company used in building its cloud services.
One FedRAMP reviewer compared it to a “pile of spaghetti pies.” The data’s path from Point A to Point B, the person said, was like traveling from Washington to New York with detours by bus, ferry and airplane rather than just taking a quick ride on Amtrak. And each one of those detours represents an opportunity for a hijacking if the data isn’t properly encrypted.
Other major cloud providers such as Amazon and Google built their systems from the ground up, said Sager, the former NSA computer scientist, who worked with all three companies during his time in government.
Microsoft’s system is “not designed for this kind of isolation of ‘secure’ from ‘not secure,’” Sager said.
A Microsoft spokesperson acknowledged the company faces a unique challenge but maintained that its cloud products meet federal security requirements.
“Unlike providers that started later with a narrower product scope, Microsoft operates one of the broadest enterprise and government platforms in the world, supporting continuity for millions of customers while simultaneously modernizing at scale,” the spokesperson said in emailed responses. “That complexity is not ‘spaghetti,’ but it does mean the work of disentangling, isolating, and hardening systems is continuous.”
The spokesperson said that since 2023, Microsoft has made “security‑first architectural redesign, legacy risk reduction, and stronger isolation guarantees a top, company‑wide priority.”
Assessors Back-Channel Cyber Concerns
The FedRAMP team was not the only party with reservations about GCC High. Microsoft’s third-party assessment organizations also expressed concerns.
The firms are supposed to be independent but are hired and paid by the company being assessed. Acknowledging the potential for conflicts of interest, FedRAMP has encouraged the assessment firms to confidentially back-channel to its reviewers any negative feedback that they were unwilling to bring directly to their clients or reflect in official reports.
In 2020, two third-party assessors hired by Microsoft, Coalfire and Kratos, did just that. They told FedRAMP that they were unable to get the full picture of GCC High, a former FedRAMP reviewer told ProPublica.
“Coalfire and Kratos both readily admitted that it was difficult to impossible to get the information required out of Microsoft to properly do a sufficient assessment,” the reviewer told ProPublica.
The back channel helped surface cybersecurity issues that otherwise might never have been known to the government, people who have worked with and for FedRAMP told ProPublica. At the same time, they acknowledged its existence undermined the very spirit and intent of having independent assessors.
A spokesperson for Coalfire, the firm that initially handled the GCC High assessment, requested written questions from ProPublica, then declined to respond.
A spokesperson for Kratos, which replaced Coalfire as the GCC High assessor, declined an interview request. In an emailed response to written questions, the spokesperson said the company stands by its official assessment and recommendation of GCC High and “absolutely refutes” that it “ever would sign off on a product we were unable to fully vet.” The company “has open and frank conversations” with all customers, including Microsoft, which “submitted all requisite diagrams to meet FedRAMP-defined requirements,” the spokesperson said.
Kratos said it “spent extensive time working collaboratively with FedRAMP in their review” and does not consider such discussions to be “backchanneling.”
FedRAMP, however, was dissatisfied with Kratos’ ongoing work and believed the firm “should be pushing back” on Microsoft more, the former reviewer said. It placed Kratos on a “corrective action plan,” which could eventually result in loss of accreditation. The company said it did not agree with FedRAMP’s action but provided “additional trainings for some internal assessors” in response to it.
The Microsoft spokesperson told ProPublica the company has “always been responsive to requests” from Kratos and FedRAMP. “We are not aware of any backchanneling, nor do we believe that backchanneling would have been necessary given our transparency and cooperation with auditor requests,” the spokesperson said.
In response to questions from ProPublica about the process, the GSA said in an email that FedRAMP’s system “does not create an inherent conflict of interest for professional auditors who meet ethical and contractual performance expectations.”
GSA did not respond to questions about back-channeling but said the “correct process” is for a third-party assessor to “state these problems formally in a finding during the security assessment so that the cloud service provider has an opportunity to fix the issue.”
FedRAMP Ends Talks
The back-and-forth between the FedRAMP reviewers and Microsoft’s team went on for years with little progress. Then, in the summer of 2023, the program’s interim director, Brian Conrad, got a call from the White House that would alter the course of the review.
Chinese state-sponsored hackers had infiltrated GCC, the lower-cost version of Microsoft’s government cloud, and stolen data and emails from the commerce secretary, the U.S. ambassador to China and other high-ranking government officials. In the aftermath, Chris DeRusha, the White House’s chief information security officer, wanted a briefing from FedRAMP, which had authorized GCC.
The decision predated Conrad’s tenure, but he told ProPublica that he left the conversation with several takeaways. First, FedRAMP must hold all cloud providers — including Microsoft — to the same standards. Second, he had the backing of the White House in standing firm. Finally, FedRAMP would feel the political heat if any cloud service with a FedRAMP authorization were hacked.
DeRusha confirmed Conrad’s account of the phone call but declined to comment further.
Within months, Conrad informed Microsoft that FedRAMP was ending the engagement on GCC High.
“After three years of collaboration with the Microsoft team, we still lack visibility into the security gaps because there are unknowns that Microsoft has failed to address,” Conrad wrote in an October 2023 email. This, he added, was not for FedRAMP’s lack of trying. Staffers had spent 480 hours of review time, had conducted 18 “technical deep dive” sessions and had numerous email exchanges with the company over the years. Yet they still lacked the data flow diagrams, crucial information “since visibility into the encryption status of all data flows and stores is so important,” he wrote.
If Microsoft still wanted FedRAMP authorization, Conrad wrote, it would need to start over.
A FedRAMP reviewer, explaining the decision to the Justice Department, said the team was “not asking for anything above and beyond what we’ve asked from every other” cloud service provider, according to meeting minutes reviewed by ProPublica. But the request was particularly justified in Microsoft’s case, the reviewer told the Justice officials, because “each time we’ve actually been able to get visibility into a black box, we’ve uncovered an issue.”
“We can’t even quantify the unknowns, which makes us very uncomfortable,” the reviewer said, according to the minutes.
Microsoft and the Justice Department Push Back
Microsoft was furious. Failing to obtain authorization and starting the process over would signal to the market that something was wrong with GCC High. Customers were already confused and concerned about the drawn-out review, which had become a hot topic in an online forum used by government and technology insiders. There, Wakeman, the Microsoft cybersecurity architect, deflected blame, saying the government had been “dragging their feet on it for years now.”
Meanwhile, to build support for Microsoft’s case, Bergin, the company’s point person for FedRAMP and a former Army official, reached out to government leaders, including one from the Justice Department.
The Justice official, who spoke on condition of anonymity because they were not authorized to discuss the matter, said Bergin complained that the delay was hampering Microsoft’s ability “to get this out into the market full sail.” Bergin then pushed the Justice Department to “throw around our weight” to help secure FedRAMP authorization, the official said.
That December, as the parties gathered to hash things out at GSA’s Washington headquarters, Justice did just that. Rogers, who by then had been promoted to the department’s chief information officer, sat beside Bergin — on the opposite side of the table from Conrad, the FedRAMP director.
Rogers and her Justice colleagues had a stake in the outcome. Since authorizing and deploying GCC High, she had received accolades for her work modernizing the department’s IT and cybersecurity. But without FedRAMP’s stamp of approval, she would be the government official left holding the bag if GCC High were involved in a serious hack. At the same time, the Justice Department couldn’t easily back out of using GCC High because once a technology is widely deployed, pulling the plug can be costly and technically challenging. And from its perspective, the cloud was an improvement over the old government-run data centers.
Shortly after the meeting kicked off, Bergin interrupted a FedRAMP reviewer who had been presenting PowerPoint slides. He said the Justice Department and third-party assessor had already reviewed GCC High, according to meeting minutes. FedRAMP “should essentially just accept” their findings, he said.
Then, in a shock to the FedRAMP team, Rogers backed him up and went on to criticize FedRAMP’s work, according to two attendees.
In its statement, Microsoft said Rogers maintains that FedRAMP’s approach “was misguided and improperly dismissed the extensive evaluations performed by DOJ personnel.”
Bergin did not dispute the account, telling ProPublica that he had been trying to argue that it is the purview of third-party assessors such as Kratos — not FedRAMP — to evaluate the security of cloud products. And because FedRAMP must approve the third-party assessment firms, the program should have taken its issues up with Kratos.
“When you are the regulatory agency who determines who the auditors are and you refuse to accept your auditors’ answers, that’s not a ‘me’ problem,” Bergin told ProPublica.
The GSA did not respond to questions about the meeting. The Justice Department declined to comment.
Pressure Mounts on FedRAMP
If there was any doubt about the role of FedRAMP, the White House issued a memorandum in the summer of 2024 that outlined its views. FedRAMP, it said, “must be capable of conducting rigorous reviews” and requiring cloud providers to “rapidly mitigate weaknesses in their security architecture.” The office should “consistently assess and validate cloud providers’ complex architectures and encryption schemes.”
But by that point, GCC High had spread to other federal agencies, with the Justice Department’s authorization serving as a signal that the technology met federal standards.
It also spread to the defense sector, since the Pentagon required that cloud products used by its contractors meet FedRAMP standards. Though GCC High did not have FedRAMP authorization, Microsoft marketed it as meeting the requirements, selling it to companies such as Boeing that research, develop and maintain military weapons systems.
But with the FedRAMP authorization up in the air, some contractors began to worry that by using GCC High, they were out of compliance. That could threaten their contracts, which, in turn, could impact Defense Department operations. Pentagon officials called FedRAMP to inquire about the authorization stalemate.
The Defense Department acknowledged receiving written questions from ProPublica but did not respond to them.
Rogers also kept pressing FedRAMP to “get this thing over the line,” former employees of the GSA and FedRAMP said. It was the “opinion of the staff and the contractors that she simply was not willing to put heat to Microsoft on this” and that the Justice Department “was too sympathetic to Microsoft’s claims,” Eric Mill, then GSA’s executive director for cloud strategy, told ProPublica.
Authorization Despite a “Damning” Assessment
In the summer of 2024, FedRAMP hired a new permanent director, government technology insider Pete Waterman. Within about a month of taking the job, he restarted the office’s review of GCC High with a new team, which put aside the debate over data flow diagrams and instead attempted to examine evidence from Microsoft. But these reviewers soon arrived at the same conclusion, with the team’s leader complaining about “getting stiff-armed” by Microsoft.
“He came back and said, ‘Yeah, this thing sucks,’” Mill recalled.
While the team was able to work through only two of the many services included in GCC High, Exchange Online and Teams, that was enough for it to identify “issues that are fundamental” to risk management, including “timely remediation of vulnerabilities and vulnerability scanning,” according to a summary of the team’s findings reviewed by ProPublica.
Those issues, as well as a lack of “proper detailed security documentation” from Microsoft, limit “visibility and understanding of the system” and “impair the ability to make informed risk decisions.”
The team concluded, “There is a lack of confidence in assessing the system’s overall security posture.”
A Microsoft spokesperson said in a statement that the company “never received this feedback in any of its communications with FedRAMP.”
When ProPublica read the findings to Bergin, the Microsoft liaison, he said he was surprised.
“That’s pretty damning,” Bergin said, adding that it sounded like language that “would’ve generally been associated with a finding of ‘not worthy.’ If an assessor wrote that, I would be nervous.”
Despite the findings, turning Microsoft down didn't seem like an option to the FedRAMP team. "Not issuing an authorization would impact multiple agencies that are already using GCC-H," the summary document said. The team determined that it was a "better value" to issue an authorization with conditions for continued government oversight.
While authorizations with oversight conditions weren’t unusual, arriving at one under these circumstances was. GCC High reviewers saw problems everywhere, both in what they were able to evaluate and what they weren’t. To them, most of the package remained a vast wilderness of untold risk.
Nevertheless, FedRAMP and Microsoft reached an agreement, and the day after Christmas 2024, GCC High received its FedRAMP authorization. FedRAMP appended a cover report to the package laying out its deficiencies and noting it carried unknown risks, according to people familiar with the report.
It emphasized that agencies should carefully review the package and engage directly with Microsoft on any questions.
“Unknown Unknowns” Persist
Microsoft told ProPublica that it has met the conditions of the agreement and has “stayed within the performance metrics required by FedRAMP” to ensure that “risks are identified, tracked, remediated, and transparently communicated.”
But under the Trump administration, there aren’t many people left at FedRAMP to check.
While the Biden-era guidance said FedRAMP “must be an expert program that can analyze and validate the security claims” of cloud providers, the GSA told ProPublica that the program’s role is “not to determine if a cloud service is secure enough.” Rather, it is “to ensure agencies have sufficient information to make these risk decisions.”
The problem is that agencies often lack the staff and resources to do thorough reviews, which means the whole system is leaning on the claims of the cloud companies and the assessments of the third-party firms they pay to evaluate them. Under the current vision, critics say, FedRAMP has lost the plot.
“FedRAMP’s job is to watch the American people’s back when it comes to sharing their data with cloud companies,” said Mill, the former GSA official, who also co-authored the 2024 White House memo. “When there’s a security issue, the public doesn’t expect FedRAMP to say they’re just a paper-pusher.”
Meanwhile, at the Justice Department, officials are finding out what FedRAMP meant by the "unknown unknowns" in GCC High. Last year, for example, they discovered that Microsoft relied on China-based engineers to service the department's sensitive cloud systems despite its prohibition against non-U.S. citizens assisting with IT maintenance.
Officials learned about this arrangement — which was also used in GCC High — not from FedRAMP or from Microsoft but from a ProPublica investigation into the practice, according to the Justice employee who spoke with ProPublica.
A Microsoft spokesperson acknowledged that the written security plan for GCC High that the company submitted to the Justice Department did not mention foreign engineers, though he said Microsoft did communicate that information to Justice officials before 2020. Nevertheless, Microsoft has since ended its use of China-based engineers in government systems.
Former and current government officials worry about what other risks may be lurking in GCC High and beyond.
The GSA told ProPublica that, in general, “if there is credible evidence that a cloud service provider has made materially false representations, that matter is then appropriately referred to investigative authorities.”
Ironically, the ultimate arbiter of whether cloud providers or their third-party assessors are living up to their claims is the Justice Department itself. The recent indictment of the former Accenture employee suggests it is willing to use this power. In a court document, the Justice Department alleges that the ex-employee made “false and misleading representations” about the cloud platform’s security to help the company “obtain and maintain lucrative federal contracts.” She is also accused of trying to “influence and obstruct” Accenture’s third-party assessors by hiding the product’s deficiencies and telling others to conceal the “true state of the system” during demonstrations, the department said. She has pleaded not guilty.
There is no public indication that such a case has been brought against Microsoft or anyone involved in the GCC High authorization. The Justice Department declined to comment. Monaco, the deputy attorney general who launched the department’s initiative to pursue cybersecurity fraud cases, did not respond to requests for comment.
She left her government position in January 2025. Microsoft hired her to become its president of global affairs.
A company spokesperson said Monaco’s hiring complied with “all rules, regulations, and ethical standards” and that she “does not work on any federal government contracts or have oversight over or involvement with any of our dealings with the federal government.”