In early March, 438 security and privacy researchers from 32 countries signed a massive open letter warning that age verification mandates for the internet are technically impossible to get right, easy to circumvent, a serious threat to privacy and security, and likely to cause more harm than good. While many folks (including us at Techdirt) have been calling out similar problems with age verification, this was basically a ton of experts all teaming up to call out how dangerous the technology is — by any reasonable measure, a hugely significant collective statement from the scientific community on an active area of internet regulation.
It got about a day of press coverage, and then legislators everywhere went right back to doing the thing the scientists just told them was dangerous.
We’ve been writing about the serious problems with age verification mandates for years now. The arguments haven’t changed, because the underlying technical realities haven’t changed. But this letter deserves far more attention than it received because of how thoroughly it tears apart every assumption that age verification proponents rely on.
The letter starts by acknowledging what should be obvious: the signatories share the concerns about kids encountering harmful content online. This matters, because the go-to response to any criticism of age verification is to accuse critics of not caring about children. These are hundreds of scientists saying: we care, we’ve studied this, and what you’re proposing will make things worse.
We share the concerns about the negative effects that exposure to harmful content online has on children, and we applaud that regulators dedicate time and effort to protect them. However, we fear that, if implemented without careful consideration of the technological hazards and societal impact, the new regulation might cause more harm than good.
Some will argue that this is meaningless without a proposed “fix” to the problems facing children online, but that’s nonsense. As these experts argue, the focus on age verification and age gating will make things worse. It’s the classic “we must do something, this is something, therefore we must do this” fallacy dressed up as child protection.
The fact that child safety problems are specific and complex is exactly why simplistic bans and age-gating cause so much damage. And it’s a genuine indictment of our current discourse that refusing to embrace a non-solution somehow gets read as not caring about the problem itself.
From there, the letter walks through the actual problems with these commonly proposed solutions in a level of detail that should be mandatory reading for any legislator voting on these laws. (It almost certainly won’t be, but we can dream.)
First, the biggest problem: these systems are ridiculously easy to circumvent. This point gets hand-waved away constantly by politicians who seem to think that because something sounds like it should work, it must. The scientists have a different view, grounded in actual evidence from actual deployments:
There is ample evidence from existing deployments that lying about age is not hard. It can be as easy as using age-verified accounts borrowed from an elder sibling or friend. In fact, there are reported cases of parents helping their children with age circumvention. There is evidence that, shortly after age-based controls appear, markets and services that sell valid accounts or credentials quickly arise. This enables the use of online services deploying age assurance at an affordable price or even for free. This is the case even if the verification is based on government-issued certificates, as shown by the ease with which fake vaccination certificates could be acquired during the COVID pandemic.
We just recently talked about the evidence in Australia showing that a huge percentage of kids have simply learned how to get around age gates. Australia’s biggest accomplishment: teaching kids how to cheat the system.
The letter makes a point that almost never appears in legislative debates: the threat model for age verification is fundamentally broken, because the people building these systems assume the only adversary is a teenager. But every adult internet user will also be subjected to these checks, and many adults will not want to submit to that kind of surveillance. That creates huge incentives for adults to get around age checks too, meaning new industries (some likely to be pretty sketchy) will arise to help people of all ages avoid this kind of surveillance. And that, alone, will make it easier for everyone, kids and adults alike, to bypass age gates, though in a way that will likely leave many people less safe overall:
As its main goal is to restrict the activities of children, it is common to believe that the only adversary is minors trying to bypass age verification. Yet, age verification mechanisms also apply to adults that will have to prove their age in many of their routine online interactions, to access services or to keep them away from children-specific web spaces. As these checks will jeopardize their online experience, adults will have incentives to create means to bypass them both for their own use or to monetize the bypass. Thus, it is foreseeable that an increase in the deployment of age assurance will result in growing availability of circumvention mechanisms, reducing its effectiveness.
The circumvention problem alone should be enough to give legislators pause. But the letter goes further, addressing what happens to people who can’t circumvent the systems, or who try to and end up worse off.
One of the strongest sections addresses the perverse safety consequences. Deplatforming minors from mainstream services doesn’t make them stop using the internet. It pushes them toward less regulated, less secure alternatives where the risks are dramatically higher, and where these services care less about actually taking steps to protect kids:
If minors or adults are deplatformed via age-related bans, they are likely to migrate to find similar services. Since the main platforms would all be regulated, it is likely that they would migrate to fringe sites that escape regulation. This would not only negate any benefit of the age-based controls but also expose users to other dangers, such as scams or malware that are monitored in mainstream platforms but exist on smaller providers. Even if users do not move platforms, attempting circumvention to access mainstream services from a jurisdiction that does not mandate age assurance might also increase their risk. For example, free VPN providers might not follow secure practices or might monetize users’ data (especially non-EU providers that are not subject to data protection obligations), and websites accessed in other jurisdictions through VPNs would not provide the user with the data protection standards and rights which are guaranteed in the EU.
And as we keep explaining: age verification makes adults think they’ve “made the internet safe,” which creates all sorts of downstream problems — including failing to teach young people how to navigate the internet safely, while doing nothing to address the actual threats. As the letter notes, it creates a false sense of security:
The promise of children-specific services that serve as safe spaces is unrealizable with current technology. This means that children might become exposed to predators who infiltrate these spaces, either via circumvention or acquisition of false credentials that allow them to pose as minors in a verifiable way.
So the system designed to “protect the children” could end up creating verified hunting grounds for predators, while simultaneously pushing kids who get locked out of mainstream platforms toward sketchy fringe sites.
Some child safety measure.
The privacy concerns are equally serious. Age verification mandates give online services a justification — indeed, a legal requirement — to collect far more personal data than they currently do. The letter notes that age estimation and age inference technologies are “highly privacy-invasive” and “rely on the collection and processing of sensitive, private data such as biometrics, or behavioural or contextual information.”
And this data will leak. It always does. The letter points to a concrete example: 70,000 users had their government ID photos exposed after appealing age assessment errors on Discord. That’s what happens when you force the creation of massive centralized databases of sensitive identity information. You create targets.
The most alarming part of the letter is the one that gets the least discussion: centralization of power. The scientists warn, bluntly, that age verification infrastructure doubles as censorship infrastructure:
Those deciding which age-based controls need to exist, and those enforcing them gain a tremendous influence on what content is accessible to whom on the internet. Recall that age assurance checks might go well beyond what is regulated in the offline world and set up an infrastructure to enforce arbitrary attribute-based policies online. In the wrong hands, such as an authoritarian government, this influence could be used to censor information and prevent users from accessing services, for example, preventing access to LGBTQ+ content. Centralizing access to the internet easily leads to internet shutdowns, as seen recently in Iran. If enforcement happens at the browser or operating system level, the manufacturers of this software would gain even more control to make decisions on what content is accessible on the Internet. This would enable primarily big American companies to control European citizens’ access to the internet.
This should be the part that makes everyone uncomfortable, regardless of their political orientation.
This brings us to what is already happening to real people right now.
A recent article in The Verge details how age verification systems are creating serious, specific harms for trans internet users. Kansas passed a law invalidating trans people’s driver’s licenses and IDs overnight, requiring them to obtain new IDs with incorrect gender markers. Combine that with age verification laws requiring digital identity checks, and you get exactly the kind of discriminatory exclusion the scientists warned about:
“These systems are specifically designed to look for discrepancies, and they’re going to find them,” said Kayyali. “If you are a woman and anyone on the street would say ‘that’s a woman,’ but that’s not what your ID says, that’s a discrepancy.” The danger of these discrepancies extends not just to trans people, but to anyone else whose appearance doesn’t match normative gendered expectations.
“A lot of age estimation systems are built on a combination of anthropological sex markers and skin texture. This means they fall over and provide inaccurate results when faced with people whose markers and skin texture, well, don’t match,” explains Keyes. For example, one of the most prominent markers algorithms measure to determine sex is the brow ridge. “Suppose you have a trans man on HRT and a trans woman on HRT, the former with low brow ridges and rougher skin, the latter with high ridges and softer skin,” Keyes explains. “The former is likely to have their age overestimated; the latter, underestimated.”
So you have biometric systems that are specifically designed to flag discrepancies between someone’s appearance and their identity documents. And you have a government that is deliberately creating discrepancies in trans people’s identity documents. The result is predictable and ugly: trans people get locked out, flagged, forced to out themselves, or simply blocked from accessing services that everyone else uses freely.
Most of these verification systems are black boxes with no meaningful appeal process. The laws themselves are written with deliberately vague language requiring platforms to verify age through “a commercially available database” or “any other commercially reasonable method,” with nothing about transparency, accuracy, or redress for people who get wrongly flagged or excluded.
And in many of these laws, the definitions of content “harmful to children” are flexible enough to encompass LGBTQ+ communities, information about birth control, and whatever else a given administration decides it doesn’t like. As one of Techdirt’s favorite technology and speech lawyers, Kendra Albert, noted to The Verge:
“I think it’s fair to say that if you look at the history of obscenity in the US and what’s considered explicit material, stuff with queer and trans material is much more likely to be considered sexually explicit even though it’s not. You may be in a circumstance where sites with more content about queer and trans people are more likely to face repercussions for not implementing appropriate age-gating or being tagged as explicit.”
So to summarize: the age verification infrastructure being built across the world (1) doesn’t actually work to keep kids from accessing content, (2) pushes kids toward less safe alternatives, (3) creates verified “safe spaces” that predators can infiltrate, (4) forces massive collection of sensitive personal data that will inevitably leak, (5) creates infrastructure purpose-built for censorship and authoritarian control, (6) systematically discriminates against trans people, people of color, the elderly, immigrants, and anyone whose appearance doesn’t match neat bureaucratic categories, (7) concentrates enormous power over internet access in the hands of governments and a handful of tech companies, and (8) lacks any scientific evidence that it will actually improve children’s mental health or safety.
Seems like a problem.
And 438 scientists from 32 countries put their names on a letter saying so. The letter closes with this:
We believe that it is dangerous and socially unacceptable to introduce a large-scale access control mechanism without a clear understanding of the implications that different design decisions can have on security, privacy, equality, and ultimately on the freedom of decision and autonomy of individuals and nations.
“Dangerous and socially unacceptable.” That isn’t just me being dramatic. That’s the considered, collective judgment of hundreds of researchers whose professional expertise is specifically in the systems being deployed.
Meanwhile, the laws keep passing. Nobody seems to have bothered asking the scientists. Or, more accurately, the scientists volunteered their expertise in the most public way possible, and everyone in a position to act on it decided that the political appeal of “protecting the children” was more important than whether the proposed method of protection actually protects children, or whether it creates a sprawling new infrastructure for surveillance, discrimination, and censorship that will be almost impossible to dismantle once it’s built.
The scientists’ letter called for studying the benefits and harms of age verification before mandating it at internet scale. That seems like a comically low bar. “Maybe understand whether this works before requiring it everywhere” shouldn’t be a controversial position. And yet here we are, with legislators around the world charging ahead, building systems that security experts have told them are broken, in pursuit of goals that the evidence says these systems can’t achieve, at a cost to privacy, security, equality, and freedom that nobody in a position of power seems interested in calculating.
In late 2024, the federal government’s cybersecurity evaluators rendered a troubling verdict on one of Microsoft’s biggest cloud computing offerings.
The tech giant’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture,” according to an internal government report reviewed by ProPublica.
Or, as one member of the team put it: “The package is a pile of shit.”
For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn’t vouch for the technology’s security.
Such judgments would be damning for any company seeking to sell its wares to the U.S. government, but it should have been particularly devastating for Microsoft. The tech giant’s products had been at the heart of two major cybersecurity attacks against the U.S. in three years. In one, Russian hackers exploited a weakness to steal sensitive data from a number of federal agencies, including the National Nuclear Security Administration. In the other, Chinese hackers infiltrated the email accounts of a Cabinet member and other senior government officials.
The federal government could be further exposed if it couldn’t verify the cybersecurity of Microsoft’s Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation’s most sensitive information.
Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government’s cybersecurity seal of approval. FedRAMP’s ruling — which included a kind of “buyer beware” notice to any federal agency considering GCC High — helped Microsoft expand a government business empire worth billions of dollars.
“BOOM SHAKA LAKA,” Richard Wakeman, one of the company’s chief security architects, boasted in an online forum, celebrating the milestone with a meme of Leonardo DiCaprio in “The Wolf of Wall Street.” Wakeman did not respond to requests for comment.
It was not the type of outcome that federal policymakers envisioned a decade and a half ago when they embraced the cloud revolution and created FedRAMP to help safeguard the government’s cybersecurity. The program’s layers of review, which included an assessment by outside experts, were supposed to ensure that service providers like Microsoft could be entrusted with the government’s secrets. But ProPublica’s investigation — drawn from internal FedRAMP memos, logs, emails, meeting minutes, and interviews with seven former and current government employees and contractors — found breakdowns at every juncture of that process. It also found a remarkable deference to Microsoft, even as the company’s products and practices were central to two of the most damaging cyberattacks ever carried out against the government.
FedRAMP first raised questions about GCC High’s security in 2020 and asked Microsoft to provide detailed diagrams explaining its encryption practices. But when the company produced what FedRAMP considered to be only partial information in fits and starts, program officials did not reject Microsoft’s application. Instead, they repeatedly pulled punches and allowed the review to drag out for the better part of five years. And because federal agencies were allowed to deploy the product during the review, GCC High spread across the government as well as the defense industry. By late 2024, FedRAMP reviewers concluded that they had little choice but to authorize the technology — not because their questions had been answered or their review was complete, but largely on the grounds that Microsoft’s product was already being used across Washington.
Today, key parts of the federal government, including the Justice and Energy departments, and the defense sector rely on this technology to protect highly sensitive information that, if leaked, “could be expected to have a severe or catastrophic adverse effect” on operations, assets and individuals, the government has said.
“This is not a happy story in terms of the security of the U.S.,” said Tony Sager, who spent more than three decades as a computer scientist at the National Security Agency and now is an executive at the nonprofit Center for Internet Security.
For years, the FedRAMP process has been equated with actual security, Sager said. ProPublica’s findings, he said, shatter that facade.
“This is not security,” he said. “This is security theater.”
ProPublica is exposing the government’s reservations about this popular product for the first time. We are also revealing Microsoft’s yearslong inability to provide the encryption documentation and evidence the federal reviewers sought.
The revelations come as the Justice Department ramps up scrutiny of the government’s technology contractors. In December, the department announced the indictment of a former employee of Accenture who allegedly misled federal agencies about the security of the company’s cloud platform and its compliance with FedRAMP’s standards. She has pleaded not guilty. Accenture, which was not charged with wrongdoing, has said that it “proactively brought this matter to the government’s attention” and that it is “dedicated to operating with the highest ethical standards.”
Microsoft has also faced questions about its disclosures to the government. As ProPublica reported last year, the company failed to inform the Defense Department about its use of China-based engineers to maintain the government’s cloud systems, despite Pentagon rules stipulating that “No Foreign persons may have” access to its most sensitive data. The department is investigating the practice, which officials say could have compromised national security.
Microsoft has defended its program as “tightly monitored and supplemented by layers of security mitigations,” but after ProPublica’s story published last July, the company announced that it would stop using China-based engineers for Defense Department work.
In response to written questions for this story and in an interview, Microsoft acknowledged the yearslong confrontation with FedRAMP but also said it provided “comprehensive documentation” throughout the review process and “remediated findings where possible.”
“We stand by our products and the comprehensive steps we’ve taken to ensure all FedRAMP-authorized products meet the security and compliance requirements necessary,” a spokesperson said in a statement, adding that the company would “continue to work with FedRAMP to continuously review and evaluate our services for continued compliance.”
The program was an early target of the Trump administration’s Department of Government Efficiency, which slashed its staff and budget. Even FedRAMP acknowledges it is operating “with an absolute minimum of support staff” and “limited customer service.” The roughly two dozen employees who remain are “entirely focused on” delivering authorizations at a record pace, FedRAMP’s director has said. Today, its annual budget is just $10 million, its lowest in a decade, even as it has boasted record numbers of new authorizations for cloud products.
The consequence of all this, people who have worked for FedRAMP told ProPublica, is that the program now is little more than a rubber stamp for industry. The implications of such a downsizing for federal cybersecurity are far-reaching, especially as the administration encourages agencies to adopt cloud-based artificial intelligence tools, which draw upon reams of sensitive information.
The General Services Administration, which houses FedRAMP, defended the program, saying it has undergone “significant reforms to strengthen governance” since GCC High arrived in 2020. “FedRAMP’s role is to assess if cloud services have provided sufficient information and materials to be adequate for agency use, and the program today operates with strengthened oversight and accountability mechanisms to do exactly that,” a GSA spokesperson said in an emailed statement.
The agency did not respond to written questions regarding GCC High.
A “Cloud First” World
About two decades ago, federal officials predicted that the cloud revolution, providing on-demand access to shared computing via the internet, would usher in an era of cheaper, more secure and more efficient information technology.
Moving to the cloud meant shifting away from on-premises servers owned and operated by the government to those in massive data centers maintained by tech companies. Some agency leaders were reluctant to relinquish control, while others couldn’t wait to.
In an effort to accelerate the transition, the Obama administration issued its “Cloud First” policy in 2011, requiring all agencies to implement cloud-based tools “whenever a secure, reliable, cost-effective” option existed. To facilitate adoption, the administration created FedRAMP, whose job was to ensure the security of those tools.
FedRAMP’s “do once, use many times” system was intended to streamline and strengthen the government procurement process. Previously, each agency using a cloud service vetted it separately, sometimes applying different interpretations of federal security requirements. Under the new program, agencies would be able to skip redundant security reviews because FedRAMP authorization indicated that the product had already met standardized requirements. Authorized products would be listed on a government website known as the FedRAMP Marketplace.
On paper, the program was an exercise in efficiency. But in practice, the small FedRAMP team could not keep up with the flood of demand from tech companies that wanted their products authorized.
The slow approval process frustrated both the tech industry, eager for a share of the billions of federal dollars up for grabs, and government agencies that were under pressure to migrate to the cloud. These dynamics sometimes united the cloud industry and agency officials against FedRAMP. The backlog also prompted many agencies to take an alternative path: performing their own reviews of the products they wanted to adopt, using FedRAMP's standards.
It was through this “agency path” that GCC High entered the federal bloodstream, with the Justice Department paving the way. Initially, some Justice officials were nervous about the cloud and who might have access to its information, which includes highly sensitive court and law enforcement records, a Justice Department official involved in the decision told ProPublica. The department’s cybersecurity program required it to ensure that only U.S. citizens “access or assist in the development, operation, management, or maintenance” of its IT systems, unless a waiver was granted. Justice’s IT specialists recommended pursuing GCC High, believing it could meet the elevated security needs, according to the official, who spoke on condition of anonymity because they were not authorized to discuss internal matters.
Pursuant to FedRAMP’s rules, Microsoft had GCC High evaluated by a so-called third-party assessment organization, which is supposed to provide an independent review of whether the product has met federal standards. The Justice Department then performed its own evaluation of GCC High using those standards and ruled the offering acceptable.
By early 2020, Melinda Rogers, Justice’s deputy chief information officer, made the decision official and soon deployed GCC High across the department.
It was a milestone for all involved. Rogers had ushered the Justice Department into the cloud, and Microsoft had gained a significant foothold in the cutthroat market for the federal government’s cloud computing business.
Moreover, Rogers’ decision placed GCC High on the FedRAMP Marketplace, the government’s influential online clearinghouse of all the cloud providers that are under review or already authorized. Its mere mention as “in process” was a boon for Microsoft, amounting to free advertising on a website used by organizations seeking to purchase cloud services bearing what is widely seen as the government’s cybersecurity seal of approval.
That April, GCC High landed at FedRAMP’s office for review, the final stop on its bureaucratic journey to full authorization.
Microsoft’s Missing Information
In theory, there shouldn’t have been much for FedRAMP’s team to do after the third-party assessor and Justice reviewed GCC High, because all parties were supposed to be following the same requirements.
But it was around this time that the Government Accountability Office, which investigates federal programs, discovered breakdowns in the process, finding that agency reviews sometimes were lacking in quality. Despite missing details, FedRAMP went on to authorize many of these packages. Acknowledging these shortcomings, FedRAMP began to take a harder look at new packages, a former reviewer said.
This was the environment in which Microsoft’s GCC High application entered the pipeline. The name GCC High was an umbrella covering many services and features within Office 365 that all needed to be reviewed. FedRAMP reviewers quickly noticed key material was missing.
The team homed in on what it viewed as a fundamental document called a “data flow diagram,” former members told ProPublica. The illustration is supposed to show how data travels from Point A to Point B — and, more importantly, how it’s protected as it hops from server to server. FedRAMP requires data to be encrypted while in transit to ensure that sensitive materials are protected even if they’re intercepted by hackers.
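To make the stakes concrete: "encrypted in transit" means every one of those server-to-server hops is wrapped in a protocol like TLS, so anything intercepted along the way is unreadable ciphertext. Here is a minimal sketch, in TypeScript for Node.js, of what a single encrypted hop looks like; the host name is hypothetical and this is purely illustrative, not Microsoft's architecture. A proper data flow diagram is supposed to show whether each hop in a system gets this treatment or crosses the wire as plain TCP.

```typescript
// Purely illustrative: one TLS-wrapped hop between servers.
// The host name is hypothetical; this is not Microsoft's code.
import * as tls from "node:tls";

const socket = tls.connect({ host: "hop-b.internal.example", port: 8443 }, () => {
  console.log("negotiated:", socket.getProtocol()); // e.g. "TLSv1.3"
  socket.write("sensitive payload"); // leaves this machine as ciphertext
  socket.end();
});

// If the handshake fails, nothing sensitive is ever sent over the wire.
socket.on("error", (err) => console.error("TLS handshake failed:", err.message));
```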
But when the FedRAMP team asked Microsoft to produce the diagrams showing how such encryption would happen for each service in GCC High, the company balked, saying the request was too challenging. So the reviewers suggested starting with just Exchange Online, the popular email platform.
“This was our litmus test to say, ‘This isn’t the only thing that’s required, but if you’re not doing this, we are not even close yet,’” said one reviewer who spoke on condition of anonymity because they were not authorized to discuss internal matters. Once they reached the appropriate level of detail, they would move from Exchange to other services within GCC High.
It was the kind of detail that other major cloud providers such as Amazon and Google routinely provided, members of the FedRAMP team told ProPublica. Yet Microsoft took months to respond. When it did, the former reviewer said, it submitted a white paper that discussed GCC High’s encryption strategy but left out the details of where on the journey data actually becomes encrypted and decrypted — so FedRAMP couldn’t assess that it was being done properly.
A Microsoft spokesperson acknowledged that the company had “articulated a challenge related to illustrating the volume of information being requested in diagram form” but “found alternate ways to share that information.”
Rogers, who was hired by Microsoft in 2025, declined to be interviewed. In response to emailed questions, the company provided a statement saying that she “stands by the rigorous evaluation that contributed to” her authorization of GCC High. A spokesperson said there was “absolutely no connection” between her hiring and the decisions in the GCC High process, and that she and the company complied with “all rules, regulations, and ethical standards.”
The Justice Department declined to respond to written questions from ProPublica.
A Fight Over “Spaghetti Pies”
As 2020 came to a close, a national security crisis hit Washington that underscored the consequences of cyber weakness. Russian state-sponsored hackers had been quietly working their way through federal computer systems for much of the year and vacuuming up sensitive data and emails from U.S. agencies — including the Justice Department.
At the time, most of the blame fell on a Texas-based company called SolarWinds, whose software provided hackers their initial opening and whose name became synonymous with the attack. But, as ProPublica has reported, the Russians leveraged that opening to exploit a long-standing weakness in a Microsoft product — one that the company had refused to fix for years, despite repeated warnings from one of its engineers. Microsoft has defended its decision not to address the flaw, saying that it received “multiple reviews” and that the company weighs a variety of factors when making security decisions.
In the aftermath, the Biden administration took steps to bolster the nation’s cybersecurity. Among them, the Justice Department announced a cyber-fraud initiative in 2021 to crack down on companies and individuals that “put U.S. information or systems at risk by knowingly providing deficient cybersecurity products or services, knowingly misrepresenting their cybersecurity practices or protocols, or knowingly violating obligations to monitor and report cybersecurity incidents and breaches.”
Deputy Attorney General Lisa Monaco said the department would use the False Claims Act to pursue government contractors “when they fail to follow required cybersecurity standards — because we know that puts all of us at risk.”
But if Microsoft felt any pressure from the SolarWinds attack or from the Justice Department’s announcement, it didn’t manifest in the FedRAMP talks, according to former members of the FedRAMP team.
The exchanges between FedRAMP and Microsoft fell into a pattern. The parties would meet. Months would go by. Microsoft would return with a response that FedRAMP deemed incomplete or irrelevant. To bolster the chances of getting the information it wanted, the FedRAMP team provided Microsoft with a template describing the level of detail it expected. But the diagrams Microsoft returned never met those expectations.
“We never got past Exchange,” one former reviewer said. “We never got that level of detail. We had no visibility inside.”
In an interview with ProPublica, John Bergin, the Microsoft official who became the government’s main contact, acknowledged the prolonged back-and-forth but blamed FedRAMP, equating its requests for diagrams to a “rock fetching exercise.”
“We were maybe incompetent in how we drew drawings because there was no standard to draw them to,” he said. “Did we not do it exactly how they wanted? Absolutely. There was always something missing because there was no standard.”
A Microsoft spokesperson said without such a standard, “cloud providers were left to interpret the level of abstraction and representation on their own,” creating “inconsistency and confusion, not an unwillingness to be transparent.”
But even Microsoft’s own engineers had struggled over the years to map the architecture of its products, according to two people involved in building cloud services used by federal customers. At issue, according to people familiar with Microsoft’s technology, was the decades-old code of its legacy software, which the company used in building its cloud services.
One FedRAMP reviewer compared it to a “pile of spaghetti pies.” The data’s path from Point A to Point B, the person said, was like traveling from Washington to New York with detours by bus, ferry and airplane rather than just taking a quick ride on Amtrak. And each one of those detours represents an opportunity for a hijacking if the data isn’t properly encrypted.
Other major cloud providers such as Amazon and Google built their systems from the ground up, said Sager, the former NSA computer scientist, who worked with all three companies during his time in government.
Microsoft’s system is “not designed for this kind of isolation of ‘secure’ from ‘not secure,’” Sager said.
A Microsoft spokesperson acknowledged the company faces a unique challenge but maintained that its cloud products meet federal security requirements.
“Unlike providers that started later with a narrower product scope, Microsoft operates one of the broadest enterprise and government platforms in the world, supporting continuity for millions of customers while simultaneously modernizing at scale,” the spokesperson said in emailed responses. “That complexity is not ‘spaghetti,’ but it does mean the work of disentangling, isolating, and hardening systems is continuous.”
The spokesperson said that since 2023, Microsoft has made “security‑first architectural redesign, legacy risk reduction, and stronger isolation guarantees a top, company‑wide priority.”
Assessors Back-Channel Cyber Concerns
The FedRAMP team was not the only party with reservations about GCC High. Microsoft’s third-party assessment organizations also expressed concerns.
The firms are supposed to be independent but are hired and paid by the company being assessed. Acknowledging the potential for conflicts of interest, FedRAMP has encouraged the assessment firms to confidentially back-channel to its reviewers any negative feedback that they were unwilling to bring directly to their clients or reflect in official reports.
In 2020, two third-party assessors hired by Microsoft, Coalfire and Kratos, did just that. They told FedRAMP that they were unable to get the full picture of GCC High, a former FedRAMP reviewer told ProPublica.
“Coalfire and Kratos both readily admitted that it was difficult to impossible to get the information required out of Microsoft to properly do a sufficient assessment,” the reviewer told ProPublica.
The back channel helped surface cybersecurity issues that otherwise might never have been known to the government, people who have worked with and for FedRAMP told ProPublica. At the same time, they acknowledged its existence undermined the very spirit and intent of having independent assessors.
A spokesperson for Coalfire, the firm that initially handled the GCC High assessment, requested written questions from ProPublica, then declined to respond.
A spokesperson for Kratos, which replaced Coalfire as the GCC High assessor, declined an interview request. In an emailed response to written questions, the spokesperson said the company stands by its official assessment and recommendation of GCC High and “absolutely refutes” that it “ever would sign off on a product we were unable to fully vet.” The company “has open and frank conversations” with all customers, including Microsoft, which “submitted all requisite diagrams to meet FedRAMP-defined requirements,” the spokesperson said.
Kratos said it “spent extensive time working collaboratively with FedRAMP in their review” and does not consider such discussions to be “backchanneling.”
FedRAMP, however, was dissatisfied with Kratos’ ongoing work and believed the firm “should be pushing back” on Microsoft more, the former reviewer said. It placed Kratos on a “corrective action plan,” which could eventually result in loss of accreditation. The company said it did not agree with FedRAMP’s action but provided “additional trainings for some internal assessors” in response to it.
The Microsoft spokesperson told ProPublica the company has “always been responsive to requests” from Kratos and FedRAMP. “We are not aware of any backchanneling, nor do we believe that backchanneling would have been necessary given our transparency and cooperation with auditor requests,” the spokesperson said.
In response to questions from ProPublica about the process, the GSA said in an email that FedRAMP’s system “does not create an inherent conflict of interest for professional auditors who meet ethical and contractual performance expectations.”
GSA did not respond to questions about back-channeling but said the “correct process” is for a third-party assessor to “state these problems formally in a finding during the security assessment so that the cloud service provider has an opportunity to fix the issue.”
FedRAMP Ends Talks
The back-and-forth between the FedRAMP reviewers and Microsoft’s team went on for years with little progress. Then, in the summer of 2023, the program’s interim director, Brian Conrad, got a call from the White House that would alter the course of the review.
Chinese state-sponsored hackers had infiltrated GCC, the lower-cost version of Microsoft’s government cloud, and stolen data and emails from the commerce secretary, the U.S. ambassador to China and other high-ranking government officials. In the aftermath, Chris DeRusha, the White House’s chief information security officer, wanted a briefing from FedRAMP, which had authorized GCC.
The decision predated Conrad’s tenure, but he told ProPublica that he left the conversation with several takeaways. First, FedRAMP must hold all cloud providers — including Microsoft — to the same standards. Second, he had the backing of the White House in standing firm. Finally, FedRAMP would feel the political heat if any cloud service with a FedRAMP authorization were hacked.
DeRusha confirmed Conrad’s account of the phone call but declined to comment further.
Within months, Conrad informed Microsoft that FedRAMP was ending the engagement on GCC High.
“After three years of collaboration with the Microsoft team, we still lack visibility into the security gaps because there are unknowns that Microsoft has failed to address,” Conrad wrote in an October 2023 email. This, he added, was not for FedRAMP’s lack of trying. Staffers had spent 480 hours of review time, had conducted 18 “technical deep dive” sessions and had numerous email exchanges with the company over the years. Yet they still lacked the data flow diagrams, crucial information “since visibility into the encryption status of all data flows and stores is so important,” he wrote.
If Microsoft still wanted FedRAMP authorization, Conrad wrote, it would need to start over.
A FedRAMP reviewer, explaining the decision to the Justice Department, said the team was “not asking for anything above and beyond what we’ve asked from every other” cloud service provider, according to meeting minutes reviewed by ProPublica. But the request was particularly justified in Microsoft’s case, the reviewer told the Justice officials, because “each time we’ve actually been able to get visibility into a black box, we’ve uncovered an issue.”
“We can’t even quantify the unknowns, which makes us very uncomfortable,” the reviewer said, according to the minutes.
Microsoft and the Justice Department Push Back
Microsoft was furious. Failing to obtain authorization and starting the process over would signal to the market that something was wrong with GCC High. Customers were already confused and concerned about the drawn-out review, which had become a hot topic in an online forum used by government and technology insiders. There, Wakeman, the Microsoft cybersecurity architect, deflected blame, saying the government had been “dragging their feet on it for years now.”
Meanwhile, to build support for Microsoft’s case, Bergin, the company’s point person for FedRAMP and a former Army official, reached out to government leaders, including one from the Justice Department.
The Justice official, who spoke on condition of anonymity because they were not authorized to discuss the matter, said Bergin complained that the delay was hampering Microsoft’s ability “to get this out into the market full sail.” Bergin then pushed the Justice Department to “throw around our weight” to help secure FedRAMP authorization, the official said.
That December, as the parties gathered to hash things out at GSA’s Washington headquarters, Justice did just that. Rogers, who by then had been promoted to the department’s chief information officer, sat beside Bergin — on the opposite side of the table from Conrad, the FedRAMP director.
Rogers and her Justice colleagues had a stake in the outcome. Since authorizing and deploying GCC High, she had received accolades for her work modernizing the department's IT and cybersecurity. But without FedRAMP's stamp of approval, she would be the government official left holding the bag if GCC High were involved in a serious hack. At the same time, the Justice Department couldn't easily back out of using GCC High because once a technology is widely deployed, pulling the plug can be costly and technically challenging. And from its perspective, the cloud was an improvement over the old government-run data centers.
Shortly after the meeting kicked off, Bergin interrupted a FedRAMP reviewer who had been presenting PowerPoint slides. He said the Justice Department and third-party assessor had already reviewed GCC High, according to meeting minutes. FedRAMP “should essentially just accept” their findings, he said.
Then, in a shock to the FedRAMP team, Rogers backed him up and went on to criticize FedRAMP’s work, according to two attendees.
In its statement, Microsoft said Rogers maintains that FedRAMP’s approach “was misguided and improperly dismissed the extensive evaluations performed by DOJ personnel.”
Bergin did not dispute the account, telling ProPublica that he had been trying to argue that it is the purview of third-party assessors such as Kratos — not FedRAMP — to evaluate the security of cloud products. And because FedRAMP must approve the third-party assessment firms, the program should have taken its issues up with Kratos.
“When you are the regulatory agency who determines who the auditors are and you refuse to accept your auditors’ answers, that’s not a ‘me’ problem,” Bergin told ProPublica.
The GSA did not respond to questions about the meeting. The Justice Department declined to comment.
Pressure Mounts on FedRAMP
If there was any doubt about the role of FedRAMP, the White House issued a memorandum in the summer of 2024 that outlined its views. FedRAMP, it said, “must be capable of conducting rigorous reviews” and requiring cloud providers to “rapidly mitigate weaknesses in their security architecture.” The office should “consistently assess and validate cloud providers’ complex architectures and encryption schemes.”
But by that point, GCC High had spread to other federal agencies, with the Justice Department’s authorization serving as a signal that the technology met federal standards.
It also spread to the defense sector, since the Pentagon required that cloud products used by its contractors meet FedRAMP standards. While it did not have FedRAMP authorization, Microsoft marketed GCC High as meeting the requirements, selling it to companies such as Boeing that research, develop and maintain military weapons systems.
But with the FedRAMP authorization up in the air, some contractors began to worry that by using GCC High, they were out of compliance. That could threaten their contracts, which, in turn, could impact Defense Department operations. Pentagon officials called FedRAMP to inquire about the authorization stalemate.
The Defense Department acknowledged but did not respond to written questions from ProPublica.
Rogers also kept pressing FedRAMP to “get this thing over the line,” former employees of the GSA and FedRAMP said. It was the “opinion of the staff and the contractors that she simply was not willing to put heat to Microsoft on this” and that the Justice Department “was too sympathetic to Microsoft’s claims,” Eric Mill, then GSA’s executive director for cloud strategy, told ProPublica.
Authorization Despite a “Damning” Assessment
In the summer of 2024, FedRAMP hired a new permanent director, government technology insider Pete Waterman. Within about a month of taking the job, he restarted the office’s review of GCC High with a new team, which put aside the debate over data flow diagrams and instead attempted to examine evidence from Microsoft. But these reviewers soon arrived at the same conclusion, with the team’s leader complaining about “getting stiff-armed” by Microsoft.
“He came back and said, ‘Yeah, this thing sucks,’” Mill recalled.
While the team was able to work through only two of the many services included in GCC High, Exchange Online and Teams, that was enough for it to identify “issues that are fundamental” to risk management, including “timely remediation of vulnerabilities and vulnerability scanning,” according to a summary of the team’s findings reviewed by ProPublica.
Those issues, as well as a lack of “proper detailed security documentation” from Microsoft, limit “visibility and understanding of the system” and “impair the ability to make informed risk decisions.”
The team concluded, “There is a lack of confidence in assessing the system’s overall security posture.”
A Microsoft spokesperson said in a statement that the company “never received this feedback in any of its communications with FedRAMP.”
When ProPublica read the findings to Bergin, the Microsoft liaison, he said he was surprised.
“That’s pretty damning,” Bergin said, adding that it sounded like language that “would’ve generally been associated with a finding of ‘not worthy.’ If an assessor wrote that, I would be nervous.”
Despite the findings, to the FedRAMP team, turning Microsoft down didn’t seem like an option. “Not issuing an authorization would impact multiple agencies that are already using GCC-H,” the summary document said. The team determined that it was a “better value” to issue an authorization with conditions for continued government oversight.
While authorizations with oversight conditions weren’t unusual, arriving at one under these circumstances was. GCC High reviewers saw problems everywhere, both in what they were able to evaluate and what they weren’t. To them, most of the package remained a vast wilderness of untold risk.
Nevertheless, FedRAMP and Microsoft reached an agreement, and the day after Christmas 2024, GCC High received its FedRAMP authorization. FedRAMP appended a cover report to the package laying out its deficiencies and noting it carried unknown risks, according to people familiar with the report.
It emphasized that agencies should carefully review the package and engage directly with Microsoft on any questions.
“Unknown Unknowns” Persist
Microsoft told ProPublica that it has met the conditions of the agreement and has “stayed within the performance metrics required by FedRAMP” to ensure that “risks are identified, tracked, remediated, and transparently communicated.”
But under the Trump administration, there aren’t many people left at FedRAMP to check.
While the Biden-era guidance said FedRAMP “must be an expert program that can analyze and validate the security claims” of cloud providers, the GSA told ProPublica that the program’s role is “not to determine if a cloud service is secure enough.” Rather, it is “to ensure agencies have sufficient information to make these risk decisions.”
The problem is that agencies often lack the staff and resources to do thorough reviews, which means the whole system is leaning on the claims of the cloud companies and the assessments of the third-party firms they pay to evaluate them. Under the current vision, critics say, FedRAMP has lost the plot.
“FedRAMP’s job is to watch the American people’s back when it comes to sharing their data with cloud companies,” said Mill, the former GSA official, who also co-authored the 2024 White House memo. “When there’s a security issue, the public doesn’t expect FedRAMP to say they’re just a paper-pusher.”
Meanwhile, at the Justice Department, officials are finding out what FedRAMP meant by the “unknown unknowns” in GCC High. Last year, for example, they discovered that Microsoft relied on China-based engineers to service the department’s sensitive cloud systems, despite its prohibition against non-U.S. citizens assisting with IT maintenance.
Officials learned about this arrangement — which was also used in GCC High — not from FedRAMP or from Microsoft but from a ProPublica investigation into the practice, according to the Justice employee who spoke with us.
A Microsoft spokesperson acknowledged that the written security plan for GCC High that the company submitted to the Justice Department did not mention foreign engineers, though he said Microsoft did communicate that information to Justice officials before 2020. Nevertheless, Microsoft has since ended its use of China-based engineers in government systems.
Former and current government officials worry about what other risks may be lurking in GCC High and beyond.
The GSA told ProPublica that, in general, “if there is credible evidence that a cloud service provider has made materially false representations, that matter is then appropriately referred to investigative authorities.”
Ironically, the ultimate arbiter of whether cloud providers or their third-party assessors are living up to their claims is the Justice Department itself. The recent indictment of the former Accenture employee suggests it is willing to use this power. In a court document, the Justice Department alleges that the ex-employee made “false and misleading representations” about the cloud platform’s security to help the company “obtain and maintain lucrative federal contracts.” She is also accused of trying to “influence and obstruct” Accenture’s third-party assessors by hiding the product’s deficiencies and telling others to conceal the “true state of the system” during demonstrations, the department said. She has pleaded not guilty.
There is no public indication that such a case has been brought against Microsoft or anyone involved in the GCC High authorization. The Justice Department declined to comment. Monaco, the deputy attorney general who launched the department’s initiative to pursue cybersecurity fraud cases, did not respond to requests for comment.
She left her government position in January 2025. Microsoft hired her to become its president of global affairs.
A company spokesperson said Monaco’s hiring complied with “all rules, regulations, and ethical standards” and that she “does not work on any federal government contracts or have oversight over or involvement with any of our dealings with the federal government.”
Last month Walled Culture wrote about an important case at the Court of Justice of the European Union (CJEU), the EU’s top court, that could determine how VPNs can be used in that region. Clarification in this area is particularly important because VPNs are currently under attack in various ways. For example, last year the Danish government published draft legislation that many believed would make it illegal to use a VPN to access geoblocked streaming content or to bypass blocks on illegal websites. In the wake of a firestorm of criticism, Denmark’s Minister of Culture assured people that VPNs would not be banned. However, even though references to VPNs were removed from the text, the provisions are so broadly drafted that VPNs may well be affected anyway. Companies too are taking aim at VPNs. Leading the charge are those in France, which have been targeting VPN providers for over a year now. As TorrentFreak reported last February:
Canal+ and the football league LFP have requested court orders to compel NordVPN, ExpressVPN, ProtonVPN, and others to block access to pirate sites and services. The move follows similar orders obtained last year against DNS resolvers.
The VPN Trust Initiative (VTI) responded with a press release opposing what it called a “Misguided Legal Effort to Extend Website Blocking to VPNs”. It warned:
Such blocking can have sweeping consequences that might put the security and privacy of French citizens at risk.
Targeting VPNs opens the door to a dangerous censorship precedent, risking overreach into broader areas of content.
ProtonVPN raised jurisdictional questions and also requested to see evidence that Canal+ owned all the rights at play. However, these concerns didn’t convince the court.
The same applies to Proton’s net neutrality defense, which argued that Article 333-10 of the French sports code, the provision underpinning all of the blocking orders, violates the EU Open Internet Regulation. This defense was too vague, the court concluded, noting that Proton cited the regulation without specifying which provisions were actually breached.
ProtonVPN also argued that forcing a Swiss company to block sites for the French market is a restriction of cross-border trade in services, and that in any case, the blocking measures were “technically unrealizable, costly, and unnecessarily complex.” Despite this valiant defense, the court was unimpressed. At least ProtonVPN was allowed to contest the French court’s ruling. In a similar case in Spain, no such option was given. According to TorrentFreak:
The court orders were issued inaudita parte, which is Latin for “without hearing the other side.” Citing urgency, the Córdoba court did not give NordVPN and ProtonVPN the opportunity to contest the measures before they were granted.
Without a defense, the court reportedly concluded that both NordVPN and ProtonVPN actively advertise their ability to bypass geo-restrictions, citing match schedules in their marketing materials. The VPNs are therefore seen as active participants in the piracy chain rather than passive conduits, according to local media reports.
That’s pretty shocking, and shows once more how biased in favor of the copyright industry the law has become in some jurisdictions: other parties aren’t even allowed to present a defense. It’s a further reason why a definitive ruling from the CJEU on the right of people to use VPNs how they wish is so important.
Alongside these recent court cases, there is also another imminent attack on the use of VPNs, albeit in a slightly different way. The UK government has announced wide-ranging plans that aim to “keep children safe online”. One of the ideas the government is proposing is “to age restrict or limit children’s VPN use where it undermines safety protections and changing the age of digital consent.” Although this is presented as a child protection measure, the effects will be much wider: the only way to impose age restrictions on children is to require all adult users of VPNs to verify their own age too. This inevitably leads to the creation of huge new online databases of personal information that are vulnerable to attack. As a side effect, the UK government’s misguided plans will also bolster the growing attempts by the copyright industry to demonize VPNs – a core element of the Internet’s plumbing – as unnecessary tools that are only used to break the law.
Call me crazy, but I don’t think an official government app should be loading executable code from a random person’s GitHub account. Or tracking your GPS location in the background. Or silently stripping privacy consent dialogs from every website you visit through its built-in browser. And yet here we are.
The White House released a new app last week for iOS and Android, promising “unparalleled access to the Trump Administration.” A security researcher who goes by Thereallo pulled the APKs and decompiled them — extracting the actual compiled code and examining what’s really going on under the hood. The propaganda stuff, which Engadget covered — cherry-picked news, a one-tap button to report your neighbors to ICE, a text that auto-populates “Greatest President Ever!” — is embarrassing enough. The code underneath is something else entirely.
Let’s start with the most alarming behavior. Every time you open a link in the app’s built-in browser, the app silently injects JavaScript and CSS into the page. Here’s what it does:
It hides:
Cookie banners
GDPR consent dialogs
OneTrust popups
Privacy banners
Login walls
Signup walls
Upsell prompts
Paywall elements
CMP (Consent Management Platform) boxes
It forces body { overflow: auto !important } to re-enable scrolling on pages where consent dialogs lock the scroll. Then it sets up a MutationObserver to continuously nuke any consent elements that get dynamically added.
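To make the technique concrete, here’s a minimal sketch of what that kind of injected payload looks like. This is illustrative only, not the app’s actual code, and the selectors are hypothetical examples of the element categories the researcher lists:

```ts
// Illustrative sketch of the consent-stripping pattern described above.
// NOT the app's actual code; selectors are hypothetical examples.
const HIDE_SELECTORS = [
  "#onetrust-banner-sdk",  // OneTrust popup (assumed common id)
  "[id*='consent']",       // generic CMP / GDPR consent dialogs
  "[class*='cookie']",     // cookie banners
  "[class*='paywall']",    // paywall elements
];

function nukeConsentElements(root: ParentNode): void {
  for (const selector of HIDE_SELECTORS) {
    root.querySelectorAll<HTMLElement>(selector).forEach((el) => {
      el.style.display = "none"; // hide rather than remove, so pages don't break
    });
  }
}

// Re-enable scrolling on pages where a consent dialog locks the scroll,
// mirroring the body { overflow: auto !important } override found in the app.
document.body.style.setProperty("overflow", "auto", "important");

// Initial sweep, then a MutationObserver that keeps nuking any consent
// elements added dynamically after page load.
nukeConsentElements(document);
new MutationObserver(() => nukeConsentElements(document)).observe(
  document.body,
  { childList: true, subtree: true }
);
```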
An official United States government app is injecting CSS and JavaScript into third-party websites to strip away their cookie consent dialogs, GDPR banners, login gates, and paywalls.
Yiiiiiiiiiiiiikes.
And, yes, I can already hear a certain subset of readers thinking: “Sounds great, actually. Cookie banners are annoying.” And sure, there are good reasons why millions of people use browser extensions like uBlock Origin to do exactly this kind of thing. In fact, if you don’t use tools like that, you probably should. Those consent dialogs are frequently implemented as obnoxious dark patterns, and stripping them out is a perfectly reasonable personal choice.
But the key word there is choice. When you install an ad blocker or a consent-banner nuker, you’re making an informed decision about your own browsing experience. When the White House app does it silently, on every page load, without telling you — that’s the government making that decision for you in a deceptive and technically concerning way. And those consent dialogs exist in the first place because of legal requirements, in many cases requirements that governments themselves have enacted and enforce. There’s something almost comically stupid about the executive branch of the United States shipping code that silently destroys the legal compliance infrastructure of every website you visit through its app.
Then there’s the location tracking. The researcher found that OneSignal’s full GPS tracking pipeline is compiled into the app:
Latitude, longitude, accuracy, timestamp, whether the app was in the foreground or background, and whether it was fine (GPS) or coarse (network). All of it gets written into OneSignal’s PropertiesModel, which syncs to their backend.
The White House app. Tracking your location. Synced to a commercial third-party server. For press releases.
Oh and:
There’s also a background service that keeps capturing location even when the app isn’t active.
To be clear — and the researcher is careful to be precise about this — there are several gates before this tracking activates. The user has to grant location permissions, and a flag called _isShared has to be set to true in the code. Whether the JavaScript bundle currently flips that flag is something that can’t be determined from the decompiled native code alone. What can be determined is that, as the researcher puts it:
the entire pipeline including permission strings, interval constants, fused location requests, capture logic, background scheduling, and the sync to OneSignal’s API, all of them are fully compiled in and one setLocationShared(true) call away from activating. The withNoLocation Expo plugin clearly did not strip any of this.
So at best, the people who built this app tried to disable location tracking and failed. At worst, they have it set up to actually use. The plumbing is all there, fully functional, waiting to be turned on. And this is detailed, accurate GPS data, collected every four and a half minutes when you’re using the app and every nine and a half minutes when you’re not, synced to OneSignal’s commercial servers. For a government app. That’s supposed to show you press releases.
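For a sense of how close “one call away” actually is, here’s a minimal sketch assuming the v4-era react-native-onesignal API, which exposed a setLocationShared method. It is not the app’s actual code, and the app ID is a hypothetical placeholder:

```ts
// Minimal sketch of the gate described above, assuming the v4-era
// react-native-onesignal API (OneSignal.setLocationShared). Illustrative
// only; the app ID below is a placeholder, not from the actual app.
import OneSignal from "react-native-onesignal";

OneSignal.setAppId("00000000-0000-0000-0000-000000000000"); // placeholder

function enableLocationSync(userGrantedLocationPermission: boolean): void {
  // Gate 1: the user must have granted OS-level location permission.
  if (!userGrantedLocationPermission) return;

  // Gate 2: this single call flips the sharing flag. Per the research,
  // everything downstream (fused location requests, interval constants,
  // background capture, sync to OneSignal's backend) is already compiled
  // in and waiting on exactly this.
  OneSignal.setLocationShared(true);
}
```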
While it’s true that the continued lack of a federal privacy law probably means this is all technically legal, it’s still a wild thing for an app from the federal government to do.
And it gets better. Or worse, depending on your perspective. The app embeds YouTube videos by loading player HTML from… a random person’s GitHub Pages account:
The app embeds YouTube videos using the react-native-youtube-iframe library. This library loads its player HTML from a hardcoded URL on the lonelycpp GitHub Pages account, a personal site. If that GitHub account gets compromised, whoever controls it can serve arbitrary HTML and JavaScript to every user of this app, executing inside the WebView context.
This is a government app loading code from a random person’s GitHub Pages.
Cool, cool. Totally normal dependency for critical government infrastructure.
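For what it’s worth, the library itself appears to offer a way out: its documented baseUrlOverride prop lets an app serve the player HTML from infrastructure it controls. A minimal sketch, with a hypothetical self-hosted URL and component name:

```tsx
// Sketch of the safer wiring, assuming react-native-youtube-iframe's
// documented baseUrlOverride prop. The URL and component name are
// hypothetical placeholders, not taken from the actual app.
import React from "react";
import YoutubePlayer from "react-native-youtube-iframe";

export function PressBriefingVideo({ videoId }: { videoId: string }) {
  return (
    <YoutubePlayer
      height={220}
      videoId={videoId}
      // Without this override, the library fetches its player HTML from
      // the maintainer's personal GitHub Pages site, which is the
      // supply-chain risk described above. Pointing it at infrastructure
      // you control removes that external dependency.
      baseUrlOverride="https://example.gov/youtube-iframe.html"
    />
  );
}
```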
It also loads JavaScript from Elfsight, a commercial SaaS widget company, with no sandboxing. It sends email addresses to Mailchimp. It hosts images on Uploadcare. It has a hardcoded Truth Social embed pulling from static CDN URLs. None of this is government-controlled infrastructure. The list goes on and on and on.
There’s way more in the full breakdown by Thereallo — this is just the highlights. The app is a toxic waste dump of code you should not trust.
Each of these findings individually might have a charitable explanation. Libraries ship with unused code all the time. Lots of apps use third-party services. Dev artifacts occasionally slip through. But stack them all together — the silent consent stripping, the fully compiled location tracking pipeline, the random GitHub dependency, the commercial third-party data flows, the dev artifacts in production, the zero certificate pinning — and the picture that emerges is of software built by people who either don’t know or don’t care about the standards government software is supposed to meet.
Which brings us to the part that makes all of this even more inexcusable. The United States government used to have people whose entire job was to prevent exactly this kind of thing.
The U.S. Digital Service was created after the Healthcare.gov disaster during the Obama administration, specifically to bring real software engineering talent into the federal government. For over a decade, across three administrations — including Trump’s first term — USDS and its sibling organization 18F recruited experienced engineers, designers, and product managers from the private sector to build government technology that actually worked. These were people who would have caught a full GPS tracking pipeline sitting one function call from activation in what is supposed to be a press release reader, and who would never have loaded executable code from a random person’s GitHub account.
DOGE fired them. Elon Musk’s “Department of Government Efficiency” gutted USDS and 18F — the organizations that were actually doing what DOGE claimed to be doing — and replaced their expertise with… whatever this is. An app built by an outfit called “forty-five-press” according to the Expo config, running on WordPress, with “Greatest President Ever!” hardcoded in the source, loading code from some random person’s GitHub Pages, and shipping the developer’s home IP address to the public.
This is what you get when you fire the people who know what they’re doing and replace them with loyalists: a government app that strips privacy consent dialogs, has a GPS tracking pipeline ready to flip on, depends on infrastructure the government doesn’t control, and ships with the digital equivalent of leaving your house keys taped to the front door. But hey, at least it makes it easy to report your neighbors to ICE.
Taking a break from attacking the First Amendment, FCC boss Brendan Carr this week engaged in a strange bit of performance art: his FCC announced that it would effectively add all foreign-made routers to the agency’s “covered list,” in a bid to ban their sale in the United States.
That is, unless manufacturers obtain “conditional approval” (including all appropriate application fees and favors, of course) from the Trump administration via the Department of Defense or Department of Homeland Security. In other words, the Trump administration is attempting to shake down the makers of all routers manufactured outside the United States (which, again, is nearly all of them) under the pretense of cybersecurity.
You can probably see how this might result in some looming legal action. And who knows what other “favors” to the Trump administration might be required to get conditional approval, like the inclusion of backdoors accessible by our current authoritarian government.
A fact sheet insists this was all necessary because many foreign routers have been exploited by foreign actors:
“Recently, malicious state and non-state sponsored cyber attackers have increasingly leveraged the vulnerabilities in small and home office routers produced abroad to carry out direct attacks against American civilians in their homes.”
We’ve discussed at length that while Brendan Carr loves to pretend he’s doing important things on cybersecurity, most of his policies have made the U.S. less secure. Like his mindless deregulation of the privacy and security standards of domestic telecoms and hardware makers. Or his destruction of smart home testing programs just because they had some operations in China.
Most of the Trump administration’s “cybersecurity” solutions have been indistinguishable from a foreign attack. They’ve gutted numerous government cybersecurity programs, including dismantling the Cyber Safety Review Board (CSRB), the body responsible for investigating significant cybersecurity incidents, in the middle of its Salt Typhoon investigation. The administration claims to be worried about cybersecurity, but then goes out of its way to ensure domestic telecoms see no meaningful oversight whatsoever.
I’d argue that the Trump administration’s destruction of oversight of domestic telecom privacy and security standards is a much bigger threat to national security and consumer safety than 90% of foreign routers, but good luck finding any news outlet that brings that up in its coverage of the FCC’s latest move.
In reality, the biggest current threat to U.S. national security is the Trump administration’s rampant, historic corruption. Absolutely any time you see the Trump administration taking steps to “improve national security” or “address cybersecurity,” you can safely assume there’s some ulterior motive of personal benefit to the president, as we saw when the great hyperventilation over TikTok was “fixed” by offloading the app to Trump’s dodgy billionaire friends.
From the very beginning of the DOGE saga, many of us raised alarms about what would happen when a bunch of inexperienced twenty-somethings were handed unfettered access to the most sensitive databases in the federal government with essentially zero oversight and zero adherence to the security protocols that exist for very good reasons. We wrote about it when a 25-year-old was pushing untested code into the Treasury’s $6 trillion payment system. We published a piece about it, originally reported by ProPublica, when DOGE operatives stormed into Social Security headquarters and demanded access to everything while ignoring the career staff who actually understood the systems.
That ProPublica deep dive painted a picture of 21-to-24-year-olds who didn’t understand the systems they were demanding access to, had “pre-ordained answers and weren’t interested in anything other than defending decisions they’d already made,” and were operating with essentially no accountability. The former acting commissioner described the operation as “a bunch of people who didn’t know what they were doing, with ideas of how government should run—thinking it should work like a McDonald’s or a bank—screaming all the time.”
These are the people who were handed the keys to the most sensitive databases the federal government holds.
And now we have what appears to be the entirely predictable consequence of all of that: direct exfiltration of data in a manner the person involved apparently knew could break the law, with zero concern over that fact, thanks to the expectation of a Trump pardon if caught.
The Washington Post has a stunning whistleblower report alleging that a former DOGE software engineer, who had been embedded at the Social Security Administration, walked out with databases containing records on more than 500 million living and dead Americans—on a thumb drive—and then allegedly tried to get colleagues at his new private sector job to help him upload the data to company systems.
According to the disclosure, the former DOGE software engineer, who worked at the Social Security Administration last year before starting a job at a government contractor in October, allegedly told several co-workers that he possessed two tightly restricted databases of U.S. citizens’ information, and had at least one on a thumb drive. The databases, called “Numident” and the “Master Death File,” include records for more than 500 million living and dead Americans, including Social Security numbers, places and dates of birth, citizenship, race and ethnicity, and parents’ names. The complaint does not include specific dates of when he is said to have told colleagues this information, but at least one of the alleged events unfolded around early January, according to the complaint. While working at DOGE, the engineer had approved access to Social Security data.
In the past, this was the kind of thing that the US government actually did a decent job protecting and keeping private. Now they have DOGE bros walking out the door with it on thumb drives. Holy shit!
And here’s the detail that really tells you everything about the culture DOGE created inside these agencies:
He told another colleague, who refused to help him upload the data because of legal concerns, that he expected to receive a presidential pardon if his actions were deemed to be illegal, according to the complaint.
According to the complaint, this person allegedly understood that what he was doing might be illegal, did it anyway, and had already calculated that the political environment would protect him from consequences. The Elon Musk DOGE bros clearly believed they ran the show and that anyone associated with DOGE was entirely above the law on anything they did.
Perhaps just as troubling, the complaint also alleges that after leaving government employment, the DOGE bro claimed he still had his agency computer and credentials, which he described as carrying “God-level” security access to Social Security’s systems.
The complaint alleges that after leaving government employment, the former DOGE member told colleagues he had a thumb drive with Social Security data and had kept his agency computer and credentials, which he allegedly said carried largely unrestricted “God-level” security access to the agency’s systems — a level of access no other company employee had been granted in its work with SSA.
The Social Security Administration says he had turned in his laptop and lost his credential privileges when he departed. His lawyer denies all alleged wrongdoing, and both the agency and the company said they investigated the claims and didn’t find evidence to confirm them. The company said it conducted a “thorough” two-day internal investigation.
Two whole days! Investigating themselves. On an issue where ignoring it benefits them.
But the SSA’s inspector general is investigating, and has alerted Congress and the Government Accountability Office, which has its own audit of DOGE’s data access underway.
And this whistleblower complaint, filed back in January, surfaces alongside a separate complaint from the SSA’s former chief data officer, Charles Borges, which alleges that DOGE members improperly uploaded copies of Americans’ Social Security data to a digital cloud.
A separate complaint, made in August by the agency’s former chief data officer, Charles Borges, alleges members of DOGE improperly uploaded copies of Americans’ Social Security data to a digital cloud, putting individuals’ private information at risk. In January, the Trump administration acknowledged DOGE staffers were responsible for separate data breaches at the agency, including sharing data through an unapproved third-party service and that one of the DOGE staffers signed an agreement to share data with an unnamed political group aiming to overturn election results in several states.
We wrote about that other leak at the time, of a DOGE bro sharing data with an election denier group.
All of this just confirms what many people expected and none of this should surprise anyone who was paying attention: Donald Trump allowed Elon Musk and his crew of over-confident know-nothings to view federal government computer systems as their personal playthings, where they could access and exfiltrate any data they wanted for whatever ideological reason they wanted.
And we’re only hearing about this because a whistleblower came forward and because a former chief data officer had the courage to file a complaint. How many similar incidents happened at other agencies where no one spoke up? DOGE operatives were embedded across the entire federal government, accessing heavily restricted databases and, as the Washington Post puts it, “merging long-siloed repositories.” Every single one of those agencies had the same dynamic: young, inexperienced but overconfident engineers demanding unfettered access, career staff pushing back and being overruled, and essentially no security protocols being followed.
Former chief data officer Borges put it about as well as anyone could:
“This is absolutely the worst-case scenario,” Borges told The Post. “There could be one or a million copies of it, and we will never know now.”
Once it’s out, you can’t put it back. We’re going to be learning about the consequences of DOGE’s ransacking of federal systems for years, maybe decades. And we’re finding out that the waste, fraud, and abuse we were told DOGE was there to find appears to have mostly been in DOGE’s own actions.
We’ve been saying this for years now, and we’re going to keep saying it until the message finally sinks in: mandatory age verification creates massive, centralized honeypots of sensitive biometric data that will inevitably be breached. Every single time. And every single time it happens, the politicians who mandated these systems and the companies that built them act shocked—shocked!—that collecting enormous databases of government IDs, facial scans, and biometric data from millions of people turns out to be a security nightmare.
Well, here we go again.
A couple weeks ago, Discord announced it would launch “teen-by-default” settings for its global audience, meaning all users would be shunted into a restricted experience unless they verified their age through biometric scanning. The internet, predictably, was not thrilled. But while many users were busy venting their frustration, a group of security researchers decided to do something more useful: they took a look under the hood at Persona, one of the companies Discord was using for verification (specifically for users in the UK).
Together with two other researchers, they set out to look into Persona, the San Francisco-based startup that’s used by Discord for biometric identity verification – and found a Persona frontend exposed to the open internet on a US government authorized server.
In 2,456 publicly accessible files, the code revealed the extensive surveillance Persona software performs on its users, bundled in an interface that pairs facial recognition with financial reporting – and a parallel implementation that appears designed to serve federal agencies.
Let me say that again: 2,456 publicly accessible files sitting on a government-authorized server, exposed to the open internet. Files that revealed a system performing not a simple age check, but a ton of potentially intrusive checks:
Once a user verifies their identity with Persona, the software performs 269 distinct verification checks and scours the internet and government sources for potential matches, such as by matching your face to politically exposed persons (PEPs), and generating risk and similarity scores for each individual. IP addresses, browser fingerprints, device fingerprints, government ID numbers, phone numbers, names, faces, and even selfie backgrounds are analyzed and retained for up to three years.
The information the software evaluates on the images themselves includes “Selfie Suspicious Entity Detection,” a “Selfie Age Inconsistency Comparison,” similar background detection, which appears to be matched to other users in the database, and a “Selfie Pose Repeated Detection,” which seems to be used to determine whether you are using the same pose as in previous pictures.
This was the same company checking whether a teenager should be allowed to use voice chat on a gaming platform.
Beyond offering simple services to estimate your age, Persona’s exposed code compares your selfie to watchlist photos using facial recognition, screens you against 14 categories of adverse media from mentions of terrorism to espionage, and tags reports with codenames from active intelligence programs consisting of public-private partnerships to combat online child exploitative material, cannabis trafficking, fentanyl trafficking, romance fraud, money laundering, and illegal wildlife trade.
So you wanted to verify you’re old enough to use voice chat, and now there’s a permanent risk score somewhere documenting whether you might be involved in illegal wildlife trafficking.
What could go wrong?
As the researchers put it to The Rage:
“The internet was supposed to be the great equalizer. Information wants to be free, the network interprets censorship as damage and routes around it, all that beautiful optimism. And for a minute it was true.”
[….]
“The state wants to see everything. The corporations want to see everything. And they’ve learned to work together.”
Discord, to its credit, has now said it will not be proceeding with Persona for identity verification. And to be fair, Discord and similar internet companies are in an impossible position here—facing mounting regulatory pressure in multiple jurisdictions to verify ages while being handed a market of vendors who keep turning out to be security nightmares. But this is part of a pattern that should be deeply familiar by now.
Just last year, Discord’s previous third-party age verification partner suffered a breach that exposed 70,000 government ID photos, which were then held for ransom. Discord said it stopped using that vendor. Then it moved to Persona, which was already raising concerns due to connections to Peter Thiel. Now Persona’s frontend is found wide open on a government-authorized server, and Discord is dropping them too.
See the pattern? Discord keeps swapping vendors like someone frantically rotating buckets under a leaking roof, apparently hoping the next bucket won’t have a hole in it. But the problem was never the bucket. The problem is the hole in the roof — the never-ending stream of age-verification government mandates.
And this brings us to the bigger, more important point that almost nobody in the “protect the children” policy crowd seems willing to engage with honestly. Every single time you mandate age verification, you are mandating the creation of a centralized database of extraordinarily sensitive personal information. Government IDs. Biometric facial data. The kind of data that, once breached, cannot be “changed” like a password. You get one face. You get one government ID number. When those leak—and they will leak—the damage is permanent.
Even IEEE Spectrum is now publishing articles that detail how age verification undermines any effort to protect children by putting their privacy at risk.
These systems fail in predictable ways.
False positives are common. Platforms flag adults as minors: adults with youthful faces, adults sharing family devices, or adults with otherwise unusual usage patterns. They lock accounts, sometimes for days. False negatives also persist: teenagers quickly learn to evade checks by borrowing IDs, cycling accounts, or using VPNs.
The appeal process itself creates new privacy risks. Platforms must store biometric data, ID images, and verification logs long enough to defend their decisions to regulators. So if an adult who is tired of submitting selfies to verify their age finally uploads an ID, the system must now secure that stored ID. Each retained record becomes a potential breach target.
Scale that experience across millions of users, and you bake the privacy risk into how platforms work.
We have been cataloging these breaches for years. In 2024, Australia greenlit an age verification pilot, and hours later a mandated verification database for bars was breached. That same year, another ID verification service was breached, exposing private info collected on behalf of Uber, TikTok, and more. Then came the Discord vendor breach last year. And now Persona.
This keeps happening because it has to keep happening. It’s the inevitable result of a system designed to aggregate the exact kind of data that attackers most want to steal. Computer scientists and privacy experts have been sounding this alarm for years.
And what makes this even more galling is that these age verification systems don’t even accomplish what they claim to accomplish.
Take Australia’s infamous ban on social media for under-16s, the poster child for this approach. It’s been a complete failure on its own terms: plenty of kids have already figured out ways around the ban, while those who can’t—particularly kids with disabilities who relied on social platforms for community—are being actively harmed by their exclusion. As the security researcher who helped discover the Persona leak, Celeste, told The Rage:
“Normies won’t be able to bypass these,” while less benevolent people “will always find ways to exploit your system.”
So we’ve built a system that fails to keep out the people it’s supposedly targeting, while successfully creating permanent biometric dossiers on millions of law-abiding users. Not great!
Meanwhile, what’s happening at the legislative level is perhaps even more cynical. Governments around the world are pushing harder and harder for mandatory age verification online. And as these mandates create a captive market worth billions of dollars, a whole ecosystem of venture-backed “identity-as-a-service” startups has sprung up to serve it. Persona, valued at $2 billion and backed by Peter Thiel’s investment network, is just one of many. These companies make grand promises about privacy-preserving verification, get contracts with major platforms, and then — whoops — leave 2,456 files exposed on a government server.
And, of course, these very firms are now lobbying for stricter age verification mandates. They’ve positioned themselves as protectors of children while actively working to expand the legal requirements that guarantee their revenue stream.
Lawmakers mandate an impossible task, VC-backed startups pop up to sell a “solution,” those startups then lobby for even stricter mandates to protect their market, and the cycle repeats.
“Child safety” has simply become the marketing department for a rent-seeking surveillance industry.
As long as the law demands that these biometric gates exist, the “security” of the data they collect will always be a secondary concern to “compliance” with the mandate. Companies will keep rotating through vendors, each one promising that their system is the one that won’t leak, right up until it does. And the age verification industry will keep lobbying for stricter laws, because every new mandate is another guaranteed revenue stream.
The researchers who exposed Persona’s frontend hope their findings will serve as a wake-up call. Given the track record, it probably won’t be. Discord dropping Persona changes nothing—the next vendor will collect the same data, make the same promises, and eventually suffer the same breach. Because the problem was never which company holds your biometric data. The problem is that anyone is being forced to hand it over in the first place.
My biggest complaints with AI tend to be with the human beings who are rushing large language models into mass adoption without doing their basic due diligence. Like AI toy maker Bondu, the creator of “AI”-enabled stuffed animals, which recently left the stored chat logs children have with their polyester-filled automated friends openly available online to anybody with a Gmail account:
“[security researcher Joel Margolis] made a startling discovery: Bondu’s web-based portal, intended to allow parents to check on their children’s conversations and for Bondu’s staff to monitor the products’ use and performance, also let anyone with a Gmail account access transcripts of virtually every conversation Bondu’s child users have ever had with the toy.”
At this point there’s just no excuse for this sort of thing. We’ve been writing for more than a decade about how most “smart,” internet-connected toys were being rushed to market without adequate privacy and security safeguards, creating OpSec risks for kids before they’ve even been adequately potty trained.
Now, as we’ve done in sectors like health insurance and journalism, we’ve slathered half-cooked large language models all over existing dysfunction we refused to address, called it innovation, and then ignored the fact that we’ve introduced entirely new problems.
In this case, the exposed data included kids’ names, birth dates, family member names, and even the detailed summaries and transcripts of every previous chat between the child and their Bondu stuffed animals.
On the plus side, once alerted, the company fixed the issue in a matter of minutes. And when journalists asked about it, the company didn’t try to lie about the problem (a low bar, but still):
“When WIRED reached out to the company, Bondu CEO Fateen Anam Rafid wrote in a statement that security fixes for the problem “were completed within hours, followed by a broader security review and the implementation of additional preventative measures for all users.” He added that Bondu “found no evidence of access beyond the researchers involved.”
If hackers are clever they don’t leave many footprints, so that last bit might not be worth much.
One recent survey found that 84 percent of Americans want tougher privacy laws. But corruption has ensured that the country still lacks even baseline internet-era privacy protections. The powers that be have decided, repeatedly, to prioritize mass commercialized surveillance over public safety, and it’s only a matter of time before those chickens come home to roost in ways we can’t even begin to consider.
Back in 2023 we noted how a company named Telly proclaimed it had come up with a new idea for a TV: a free TV, with a second small screen below it that shows users ads pretty much all of the time. While the bottom screen could also be used for useful things (like weather or a stock tracker), the fact it was constantly bombarding you with ads was supposed to offset any need for a retail price.
But apparently there’s been trouble in innovation paradise.
Shortly after launch, Telly proclaimed that it expected to ship more than half a million of the ad-laden sets, and within a few months it announced it had received 250,000 pre-orders. But a recent report by Lowpass indicates that only 35,000 of the sets have actually made it into people’s homes.
What was the problem? Ars Technica, Lowpass, and The Verge note that the trouble began with a substandard shipping process that resulted in a lot of TVs arriving broken at the homes of folks who pre-ordered. Reddit is also full of complaints about general quality control problems: color issues, ads playing too loudly, odd connectivity hiccups, remote controls randomly unpairing, and more.
Still, there’s evidence that the idea might still have legs, as the premise itself appears profitable:
“The investor update reportedly said Telly made $22 million in annualized revenue in Q3 2025. This could equate to about $52 in advertising revenue per Telly in use per month ($22 million divided by 35,000 TVs divided by 12 months in a year is $52.38).
That’s notably more than what other TV companies report, as Lowpass pointed out. As a comparison to other budget TV brands that rely heavily on ads and user tracking, Roku reported an average revenue per user (ARPU) of $41.49 for 2024. Vizio, meanwhile, reported an ARPU of $37.17 in 2024.”
The TV industry had already realized that they can make more money tracking your viewing and shopping behavior (and selling that information to dodgy data brokers) long term than they do on the retail value of the set. This just appears to be an extension of that concept, and if companies like Telly can get out of their own way on quality control, it’s likely you’ll see more of it.
In one sense that’s great if you can’t afford the newest and greatest TV set. It’s less great given that the United States is too corrupt to pass functional consumer privacy protections or keep its regulators staffed and functional, meaning there are fewer and fewer mechanisms preventing companies like this from exploiting all the microphone, input, and other data collected from users on a day-to-day basis.
I personally want the opposite experience; I’m willing to pay extra for a dumb television that’s little more than a display panel and some HDMI inputs. A device that has no real “smart” internals or bloated, badly designed GUI made by companies more interested in selling ads than quality control. Some business-class TVs fit the bill, but by and large it’s a segment the industry clearly isn’t interested in, because there’s much, much more money to be made spying on and monetizing your every decision.
You might recall how Republicans (with help from Democrats) suffered a three-year embolism over the national security, privacy, and propaganda problems inherent with TikTok — only to turn around and let Trump sell the platform to his technofascist billionaire friends. Who are now already hard at work preparing to do all of the stuff they claimed the Chinese were doing. And probably worse.
Case in point: TikTok’s updated privacy policy opens the door to far more granular location tracking of US users:

“Before this update, the app did not collect the precise, GPS-derived location data of US users. Now, if you give TikTok permission to use your phone’s location services, then the app may collect granular information about your exact whereabouts.”
That’s not great in a country that’s too corrupt to pass even a baseline privacy law, or to regulate dodgy data brokers that hoover up this sensitive location data and then share it with pretty much any nitwit with two nickels to rub together (including domestic and foreign intelligence agencies).
The “new U.S. TikTok” is already seeing a bunch of weird technical problems. And there are already influencers saying that their criticism of ICE is more frequently running afoul of “community standards guidelines,” though I’ve yet to see a good report fleshing those claims out.
As we noted last December, this latest TikTok deal is kind of the worst of all worlds. The Chinese still have an ownership stake in the app, and the companies and individual investors who’ve taken over the app have a long, rich history of supporting authoritarianism and widespread privacy violations.
These Trump-linked billionaires clearly didn’t buy TikTok to protect national security, fix propaganda, or address consumer privacy. They clearly don’t support the kind of policies it would take to actually address those issues, like meaningful privacy laws, media consolidation limits, data broker regulation, media literacy education funding, or kicking corrupt authoritarians out of the White House.
And they didn’t just buy TikTok to make money or undermine a competitor they repeatedly failed to out-innovate in the short-form video space (though that’s certainly a lot of it). They did it to expand surveillance. And, as Musk did with Twitter, to control the modern information space in a way that will coddle their ideologies and marginalize or censor opposition voices they disagree with.
As men like Larry Ellison and Marc Andreessen have made abundantly clear to anyone paying attention, their ideologies are unchecked greed and far right wing anti-democratic extremism. Billionaires attempting to dominate media to confuse the public and protect their own, usually selfish best interests is a tale as old as time. And that is, contrary to their claims, the play here as well.
With a new board full of foundationally terrible people, it’s only a matter of time before they, like Elon Musk before them, inevitably start fiddling with the platform and its algorithms to shut down debate and ideology they don’t like. Larry Ellison in particular is clearly attempting to buy up what’s left of crumbling U.S. corporate media and turn it into a safe space for the planet’s unpopular autocrats.
It’s worth reiterating that this was all built on the back of four years of fear mongering about TikTok privacy, propaganda, and national security issues by Republicans who couldn’t actually give the slightest shit about any of those subjects. And aided by the bumbling Keystone Cops in the Democratic party, who actively helped Trump offload the platform to his billionaire buddies.
Then propped up by a lazy corporate press that’s increasingly incapable of explaining to the public what’s actually happening, especially if it involves rich right wingers trying to dominate media.
I suspect the company will try very hard for a year or so to insist that nothing whatsoever has changed, to avoid a mass exodus of TikTok users. Especially given the promise of new, performative hearings by lawmakers who helped the whole mess happen in the first place.
But the ownership won’t be able to help themselves. Steadily and progressively things will get worse, driving users to another new pesky social media upstart, at which point the billionaire quest for total information control will start all over again.