Back in October, Meta announced that its new Instagram Teen Accounts would feature content moderation “guided by the PG-13 rating.” On its face, this made a certain kind of sense as a communication strategy: parents know what PG-13 means (or at least think they do), and Meta was clearly trying to borrow that cultural familiarity to signal that it was taking teen safety seriously.
The Motion Picture Association, however, was not amused. Within hours of the announcement, MPA Chairman Charles Rivkin fired off a statement. Then came a cease-and-desist letter. Then a Washington Post op-ed whining about the threat to its precious brand. The MPA was very protective of its trademark, and very unhappy that Meta was freeloading off the supposed credibility of its widely mocked rating system.
And now, this week, the two sides have announced a formal resolution in which Meta has agreed to “substantially reduce” its references to PG-13 and include a rather remarkable disclaimer:
“There are lots of differences between social media and movies. We didn’t work with the MPA when updating our content settings, and they’re not rating any content on Instagram, and they’re not endorsing or approving our content settings in any way. Rather, we drew inspiration from the MPA’s public guidelines, which are already familiar to parents. Our content moderation systems are not the same as a movie ratings board, so the experience may not be exactly the same.”
In Meta’s official response, you can practically hear the PR team gritting their teeth:
“We’re pleased to have reached an agreement with the MPA. By taking inspiration from a framework families know, our goal was to help parents better understand our teen content policies. We rigorously reviewed those policies against 13+ movie ratings criteria and parent feedback, updated them, and applied them to Teen Accounts by default. While that’s not changing, we’ve taken the MPA’s feedback on how we talk about that work. We’ll keep working to support parents and provide age-appropriate experiences for teens,” said a Meta spokesperson.
Translation: we’re still doing the same thing, we’re just no longer allowed to call it what we were calling it.
There are several layers of nonsense worth unpacking here. First, there’s the MPA getting all high and mighty about its rating system. Let’s remember how the MPA’s film rating system came into existence in the first place: it was a voluntary self-regulation scheme created in the late 1960s specifically to head off government regulation after Congress started making noises about the harm Hollywood was doing to children with the content it platformed. Sound familiar? The studios decided that if they rated their own content, maybe Congress would leave them alone. As the MPA explains in its own boilerplate:
For nearly 60 years, the MPA’s Classification and Rating Administration’s (CARA) voluntary film rating system has helped American parents make informed decisions about what movies their children can watch… CARA does not rate user-generated content. CARA-rated films are professionally produced and reviewed under a human-centered system, while user-generated posts on platforms like Instagram are not subject to the same rating process.
Sure, there’s a trademark issue here, but let’s be real: no one thought Instagram was letting a panel of Hollywood parents rate the latest influencer videos.
Next, the PG-13 analogy never actually made much sense for social media. As we discussed on Ctrl-Alt-Speech back when this whole thing started, the context and scale are just completely different. At the time, I pointed out that a system designed to rate a 90-minute professionally produced film — reviewed in its entirety by a panel of parents — is a wholly different beast than moderating hundreds of millions of short-form posts generated by individuals (and AI) every single day.
So, yes, calling the system “PG-13” was a marketing gimmick, meant to trade on a familiar brand while obscuring how differently social media actually works — but the idea that this somehow dilutes the MPA’s marks is still pretty silly.
Then there’s the rating system’s well-documented arbitrariness. The MPA’s ratings have been criticized for decades for their seemingly incoherent standards. On that same podcast, I noted that the rating system is famous for its selective prudishness — nudity gets you an R rating, but two hours of violence can skate by with a PG-13.
There was a whole documentary about this — This Film Is Not Yet Rated — that exposed just how subjective and inconsistent the whole process was. Meta was effectively borrowing credibility from a system that was itself created as a regulatory dodge, is famously inconsistent, and was designed for an entirely different medium. And the MPA’s response was essentially: “Hey, that’s our famously inconsistent regulatory dodge, and you can’t have it.”
The whole thing was silly. And now it’s been formally resolved with Meta agreeing to stop doing the thing it had already mostly stopped doing back in December. So even the resolution is anticlimactic.
But there’s a more substantive point buried under all this trademark squabbling: the whole approach reflects a flawed assumption that one company can set a universal standard for every teen on the planet.
As I argued on the podcast, the deeper issue is that the whole framework is wrong for the medium. A rating system built to evaluate discrete, finished films was never going to produce anything coherent when applied to hundreds of millions of short-form posts generated by people across wildly different cultural contexts — a kid in rural Kansas, a teenager in Berlin, a twelve-year-old in Lagos. Different kids, different families, different communities have different standards, and no single company should be setting a universal threshold for all of them. The smarter approach is giving parents and users real controls with customizable defaults, rather than having Zuckerberg (or a Hollywood trade association) decide what counts as age-appropriate for every teenager on the planet.
This whole dispute was silly from start to finish.
Opusonix is the workflow-first platform built for music producers and engineers who are tired of endless email chains and scattered files. By centralizing feedback, versions, and tasks in one structured workspace, it helps you cut email traffic by up to 90% so you can focus more on creating and less on chasing approvals. From time-coded comments and version testing to album planning and client-friendly demo pages, Opusonix gives you the tools to manage every mix, project, and album with clarity and speed. It’s on sale for $50.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Trump’s do-everything-all-at-once approach to immigration enforcement is starting to go off the rails. Trump’s plainly stated hatred of “shithole countries” and their inhabitants manifested in early wins for his bigoted “remove the brown people” programs. Then Stephen Miller (the man who answers the “what if a lightbulb had eyebrows and was also a white nationalist” question no one asked) showed up and amped things up. 3,000 arrests per day! he screamed into the void. (The void did not respond to our request for comment before press time.)
A lot of wrenches approached the anti-migrant works and immediately threw themselves in. First, ICE didn’t have enough officers to staff a surge. No problem, said the administration. Here’s $50,000 and almost no training to get you started! Here’s several (more!) billion dollars to keep it going! Here’s everyone we actually can’t spare from multiple federal agencies!
Bang! Into the blue cities they went, kidnapping and murdering their way towards Miller’s arrest quota. All well and good, but at the end of the day you’ve still got to have some lawyers left to fight the lawsuits these surges generated, as well as to handle challenges against detentions, removals, and direct flights to foreign torture prisons.
Well, the Trump administration no longer has enough lawyers left to do its dirty work. Whoever hasn’t been purged for being insufficiently loyal, or didn’t exit ahead of the purges, is now being asked to clean up a mess with extremely limited resources and manpower. To make things worse, Trump’s handpicked prosecutors keep being kicked out of court because Trump bypassed the appointment process essential to their remaining employed.
Then there’s the self-inflicted reputational damage Trump’s DOJ has done. The government, for the most part, is no longer granted the presumption of good faith. Courts across the land are not only aware this government isn’t acting in good faith, but they’re refusing to pretend it is, no matter how much copy-pasted boilerplate appears in DOJ filings.
Hundreds of adverse rulings have already been handed down. Hundreds more are on the horizon, especially now that the DOJ has admitted pretty much every arrest that took place in an immigration court was illegal.
It all adds up to the long tail of “flooding the zone.” If you can’t bail water fast enough, you’re going to drown. Here’s how this is working out for the DOJ now, as reported by Kyle Cheney for Politico:
In dozens of cases over the past several weeks, Justice Department lawyers have declined to push back on detainees’ claims that they’re owed a chance to make a case for their release. In those cases, the administration has simply agreed to provide a bond hearing, or even outright release, telling judges that officials “do not have an opposition argument to present” or saying they couldn’t cobble together enough information to mount a defense.
[…]
The new phenomenon is the latest manifestation of the extraordinary strain that the administration’s mass deportation effort — compounded by the mass detention of people who have lived for years without incident in the U.S. interior — has exacted on the justice system.
While ICE bathes in newly awarded billions, the problems its efforts have created are being attended to by a skeleton crew that can’t keep up with Trump’s rights-violating fire hose. That’s created some pretty gaudy numbers, which certainly isn’t a compliment.
Federal judges have ruled more than 7,000 times in recent months that ICE has illegally locked people up without — at the very least — a chance to prove they can live safely in the community.
That’s a lot. This administration is setting judicial records that hopefully will never be broken. It’s not just the government losing cases on the merits. Many of these losses are the result of the DOJ simply being unable to respond at all to legal challenges by people ICE has arrested, detained, or deported.
If there’s a silver lining in this bigoted war on non-white people, it’s everything listed above. Trump’s administration may be evil and stupid in equal measures, but those aspects are being held in check by its inability or unwillingness to anticipate the natural side effects of sending wave after wave of masked goons into cities to kidnap anyone who looks a little bit foreign. The administration is a defective centrifuge that edges closer to disintegration with every rotation. What remains to be seen is who’s going to get hit with the majority of the shrapnel when it finally falls apart. We can only hope it’s the people who started it spinning in the first place.
Last election season the Trump campaign lied to everyone repeatedly about how his second administration would “rein in big tech” and be a natural extension of the Lina Khan antitrust movement. As we noted at the time, that was always an obvious fake populist lie, but it was propped up anyway by a lazy U.S. press and a long line of useful idiots (including some purported “antitrust experts”).
The Wall Street Journal last week published an interesting new story about that last bit. Specifically, it’s about how Mike Davis, a radical Trump loyalist and corporate lobbyist, found it relatively trivial to oust the small handful of actual antitrust reformers embedded within the MAGA coalition who occasionally cared about the public interest (Gail Slater and Mark Hamer):
“A Journal investigation found that Davis pushed antitrust officials at the Justice Department to approve his deals—and he went over their heads when they wouldn’t comply, according to interviews with more than three dozen DOJ employees, lobbyists, lawyers and others familiar with the antitrust division.”
Davis, who opportunistically pivoted to pseudo-big-tech criticism after being refused a job in the industry, is a transactional bully who was very excited about Trump’s plan to put minority children in cages last election season. He’s also, according to the Journal, been pivotal in elbowing out any remaining real antitrust enforcers to help Trump operate an even more “pay to play” government:
“Davis, despite having little experience practicing antitrust law, is one of the most visible practitioners of a change playing out across the division. Current and former antitrust officials said some mergers now get approval or draw mild settlements based on political ties rather than public interest. The new dynamic casts a shadow over the Justice Department’s integrity, they said, and has alarmed even some Trump loyalists in the department.”
And this is the Rupert Murdoch-owned Wall Street Journal, not exactly a bastion of progressive left-wing thought. In Davis’ head, he’s not easily exploiting the comical levels of corruption in the Trump White House; he’s just exceptional, according to comments he made to the Journal:
“I’m the best fixer in Washington, period. Full stop,” said the 48-year-old Iowan. “I know the people. I know the process. I know their pressure points. I know how to win.”
That Trump 2.0 was going to be a corrupt shitshow–and that the movement’s fake dedication to “reining in big tech” and “antitrust reform” would be completely hollow–was one of the easier election season predictions I’ve ever had to make. It should have been abundantly obvious to the ostensible fans of antitrust still peppered within the administration.
Even these “antitrust enforcers” within MAGA weren’t what you’d call remotely consistent when it came to reining in corporate power. And while the Journal sort of romanticizes the first Trump term for “having guardrails,” it too was full of all manner of mindless rubber-stamping of harmful deals that eroded competition and drove up costs (like the Sprint/T-Mobile merger).
Yet, again, there was no shortage of press outlets (and supposed progressive antitrust experts like Matt Stoller) that spent much of last election season insisting that while Trump 2.0 might be problematic, it would feature ample populist checks on corporate power. You were to believe a sizeable chunk of the GOP had suddenly and uncharacteristically seen the light on antitrust reform.
Building meaningful and productive alliances with authoritarians is like trying to cultivate an intimate relationship with a running chainsaw. And the act of treating them as serious actors on antitrust reform (something Stoller and the press broadly did, repeatedly, with everyone from JD Vance to Josh Hawley) gave them press and policy credibility they never had to earn.
MAGA leadership is largely made up of transactional bullies whose primary interest is wealth accumulation and power. Everything else, whether it’s MAHA, or the administration’s purported antiwar stance, or its love of “antitrust reform,” was an obvious populist lie, designed to convince a broadly befuddled electorate that dim, violent, and corrupt autocracy would be good for them.
In late 2024, the federal government’s cybersecurity evaluators rendered a troubling verdict on one of Microsoft’s biggest cloud computing offerings.
The tech giant’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture,” according to an internal government report reviewed by ProPublica.
Or, as one member of the team put it: “The package is a pile of shit.”
For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn’t vouch for the technology’s security.
Such judgments would be damning for any company seeking to sell its wares to the U.S. government, but it should have been particularly devastating for Microsoft. The tech giant’s products had been at the heart of two major cybersecurity attacks against the U.S. in three years. In one, Russian hackers exploited a weakness to steal sensitive data from a number of federal agencies, including the National Nuclear Security Administration. In the other, Chinese hackers infiltrated the email accounts of a Cabinet member and other senior government officials.
The federal government could be further exposed if it couldn’t verify the cybersecurity of Microsoft’s Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation’s most sensitive information.
Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government’s cybersecurity seal of approval. FedRAMP’s ruling — which included a kind of “buyer beware” notice to any federal agency considering GCC High — helped Microsoft expand a government business empire worth billions of dollars.
“BOOM SHAKA LAKA,” Richard Wakeman, one of the company’s chief security architects, boasted in an online forum, celebrating the milestone with a meme of Leonardo DiCaprio in “The Wolf of Wall Street.” Wakeman did not respond to requests for comment.
It was not the type of outcome that federal policymakers envisioned a decade and a half ago when they embraced the cloud revolution and created FedRAMP to help safeguard the government’s cybersecurity. The program’s layers of review, which included an assessment by outside experts, were supposed to ensure that service providers like Microsoft could be entrusted with the government’s secrets. But ProPublica’s investigation — drawn from internal FedRAMP memos, logs, emails, meeting minutes, and interviews with seven former and current government employees and contractors — found breakdowns at every juncture of that process. It also found a remarkable deference to Microsoft, even as the company’s products and practices were central to two of the most damaging cyberattacks ever carried out against the government.
FedRAMP first raised questions about GCC High’s security in 2020 and asked Microsoft to provide detailed diagrams explaining its encryption practices. But when the company produced what FedRAMP considered to be only partial information in fits and starts, program officials did not reject Microsoft’s application. Instead, they repeatedly pulled punches and allowed the review to drag out for the better part of five years. And because federal agencies were allowed to deploy the product during the review, GCC High spread across the government as well as the defense industry. By late 2024, FedRAMP reviewers concluded that they had little choice but to authorize the technology — not because their questions had been answered or their review was complete, but largely on the grounds that Microsoft’s product was already being used across Washington.
Today, key parts of the federal government, including the Justice and Energy departments, and the defense sector rely on this technology to protect highly sensitive information that, if leaked, “could be expected to have a severe or catastrophic adverse effect” on operations, assets and individuals, the government has said.
“This is not a happy story in terms of the security of the U.S.,” said Tony Sager, who spent more than three decades as a computer scientist at the National Security Agency and now is an executive at the nonprofit Center for Internet Security.
For years, the FedRAMP process has been equated with actual security, Sager said. ProPublica’s findings, he said, shatter that facade.
“This is not security,” he said. “This is security theater.”
ProPublica is exposing the government’s reservations about this popular product for the first time. We are also revealing Microsoft’s yearslong inability to provide the encryption documentation and evidence the federal reviewers sought.
The revelations come as the Justice Department ramps up scrutiny of the government’s technology contractors. In December, the department announced the indictment of a former employee of Accenture who allegedly misled federal agencies about the security of the company’s cloud platform and its compliance with FedRAMP’s standards. She has pleaded not guilty. Accenture, which was not charged with wrongdoing, has said that it “proactively brought this matter to the government’s attention” and that it is “dedicated to operating with the highest ethical standards.”
Microsoft has also faced questions about its disclosures to the government. As ProPublica reported last year, the company failed to inform the Defense Department about its use of China-based engineers to maintain the government’s cloud systems, despite Pentagon rules stipulating that “No Foreign persons may have” access to its most sensitive data. The department is investigating the practice, which officials say could have compromised national security.
Microsoft has defended its program as “tightly monitored and supplemented by layers of security mitigations,” but after ProPublica’s story published last July, the company announced that it would stop using China-based engineers for Defense Department work.
In response to written questions for this story and in an interview, Microsoft acknowledged the yearslong confrontation with FedRAMP but also said it provided “comprehensive documentation” throughout the review process and “remediated findings where possible.”
“We stand by our products and the comprehensive steps we’ve taken to ensure all FedRAMP-authorized products meet the security and compliance requirements necessary,” a spokesperson said in a statement, adding that the company would “continue to work with FedRAMP to continuously review and evaluate our services for continued compliance.”
The program was an early target of the Trump administration’s Department of Government Efficiency, which slashed its staff and budget. Even FedRAMP acknowledges it is operating “with an absolute minimum of support staff” and “limited customer service.” The roughly two dozen employees who remain are “entirely focused on” delivering authorizations at a record pace, FedRAMP’s director has said. Today, its annual budget is just $10 million, its lowest in a decade, even as it has boasted record numbers of new authorizations for cloud products.
The consequence of all this, people who have worked for FedRAMP told ProPublica, is that the program now is little more than a rubber stamp for industry. The implications of such a downsizing for federal cybersecurity are far-reaching, especially as the administration encourages agencies to adopt cloud-based artificial intelligence tools, which draw upon reams of sensitive information.
The General Services Administration, which houses FedRAMP, defended the program, saying it has undergone “significant reforms to strengthen governance” since GCC High arrived in 2020. “FedRAMP’s role is to assess if cloud services have provided sufficient information and materials to be adequate for agency use, and the program today operates with strengthened oversight and accountability mechanisms to do exactly that,” a GSA spokesperson said in an emailed statement.
The agency did not respond to written questions regarding GCC High.
A “Cloud First” World
About two decades ago, federal officials predicted that the cloud revolution, providing on-demand access to shared computing via the internet, would usher in an era of cheaper, more secure and more efficient information technology.
Moving to the cloud meant shifting away from on-premises servers owned and operated by the government to those in massive data centers maintained by tech companies. Some agency leaders were reluctant to relinquish control, while others couldn’t wait to.
In an effort to accelerate the transition, the Obama administration issued its “Cloud First” policy in 2011, requiring all agencies to implement cloud-based tools “whenever a secure, reliable, cost-effective” option existed. To facilitate adoption, the administration created FedRAMP, whose job was to ensure the security of those tools.
FedRAMP’s “do once, use many times” system was intended to streamline and strengthen the government procurement process. Previously, each agency using a cloud service vetted it separately, sometimes applying different interpretations of federal security requirements. Under the new program, agencies would be able to skip redundant security reviews because FedRAMP authorization indicated that the product had already met standardized requirements. Authorized products would be listed on a government website known as the FedRAMP Marketplace.
On paper, the program was an exercise in efficiency. But in practice, the small FedRAMP team could not keep up with the flood of demand from tech companies that wanted their products authorized.
The slow approval process frustrated both the tech industry, eager for a share of the billions of federal dollars up for grabs, and government agencies that were under pressure to migrate to the cloud. These dynamics sometimes aligned the cloud industry and agency officials against FedRAMP. The backlog also prompted many agencies to take an alternative path: performing their own reviews of the products they wanted to adopt, using FedRAMP’s standards.
It was through this “agency path” that GCC High entered the federal bloodstream, with the Justice Department paving the way. Initially, some Justice officials were nervous about the cloud and who might have access to its information, which includes highly sensitive court and law enforcement records, a Justice Department official involved in the decision told ProPublica. The department’s cybersecurity program required it to ensure that only U.S. citizens “access or assist in the development, operation, management, or maintenance” of its IT systems, unless a waiver was granted. Justice’s IT specialists recommended pursuing GCC High, believing it could meet the elevated security needs, according to the official, who spoke on condition of anonymity because they were not authorized to discuss internal matters.
Pursuant to FedRAMP’s rules, Microsoft had GCC High evaluated by a so-called third-party assessment organization, which is supposed to provide an independent review of whether the product has met federal standards. The Justice Department then performed its own evaluation of GCC High using those standards and ruled the offering acceptable.
By early 2020, Melinda Rogers, Justice’s deputy chief information officer, made the decision official and soon deployed GCC High across the department.
It was a milestone for all involved. Rogers had ushered the Justice Department into the cloud, and Microsoft had gained a significant foothold in the cutthroat market for the federal government’s cloud computing business.
Moreover, Rogers’ decision placed GCC High on the FedRAMP Marketplace, the government’s influential online clearinghouse of all the cloud providers that are under review or already authorized. Its mere mention as “in process” was a boon for Microsoft, amounting to free advertising on a website used by organizations seeking to purchase cloud services bearing what is widely seen as the government’s cybersecurity seal of approval.
That April, GCC High landed at FedRAMP’s office for review, the final stop on its bureaucratic journey to full authorization.
Microsoft’s Missing Information
In theory, there shouldn’t have been much for FedRAMP’s team to do after the third-party assessor and Justice reviewed GCC High, because all parties were supposed to be following the same requirements.
But it was around this time that the Government Accountability Office, which investigates federal programs, discovered breakdowns in the process, finding that agency reviews sometimes were lacking in quality. Despite the missing details, FedRAMP had gone on to authorize many of those packages. Acknowledging these shortcomings, FedRAMP began to take a harder look at new packages, a former reviewer said.
This was the environment in which Microsoft’s GCC High application entered the pipeline. The name GCC High was an umbrella covering many services and features within Office 365 that all needed to be reviewed. FedRAMP reviewers quickly noticed key material was missing.
The team homed in on what it viewed as a fundamental document called a “data flow diagram,” former members told ProPublica. The illustration is supposed to show how data travels from Point A to Point B — and, more importantly, how it’s protected as it hops from server to server. FedRAMP requires data to be encrypted while in transit to ensure that sensitive materials are protected even if they’re intercepted by hackers.
But when the FedRAMP team asked Microsoft to produce the diagrams showing how such encryption would happen for each service in GCC High, the company balked, saying the request was too challenging. So the reviewers suggested starting with just Exchange Online, the popular email platform.
“This was our litmus test to say, ‘This isn’t the only thing that’s required, but if you’re not doing this, we are not even close yet,’” said one reviewer who spoke on condition of anonymity because they were not authorized to discuss internal matters. Once they reached the appropriate level of detail, they would move from Exchange to other services within GCC High.
It was the kind of detail that other major cloud providers such as Amazon and Google routinely provided, members of the FedRAMP team told ProPublica. Yet Microsoft took months to respond. When it did, the former reviewer said, it submitted a white paper that discussed GCC High’s encryption strategy but left out the details of where on the journey data actually becomes encrypted and decrypted — so FedRAMP couldn’t assess that it was being done properly.
A Microsoft spokesperson acknowledged that the company had “articulated a challenge related to illustrating the volume of information being requested in diagram form” but “found alternate ways to share that information.”
Rogers, who was hired by Microsoft in 2025, declined to be interviewed. In response to emailed questions, the company provided a statement saying that she “stands by the rigorous evaluation that contributed to” her authorization of GCC High. A spokesperson said there was “absolutely no connection” between her hiring and the decisions in the GCC High process, and that she and the company complied with “all rules, regulations, and ethical standards.”
The Justice Department declined to respond to written questions from ProPublica.
A Fight Over “Spaghetti Pies”
As 2020 came to a close, a national security crisis hit Washington that underscored the consequences of cyber weakness. Russian state-sponsored hackers had been quietly working their way through federal computer systems for much of the year and vacuuming up sensitive data and emails from U.S. agencies — including the Justice Department.
At the time, most of the blame fell on a Texas-based company called SolarWinds, whose software provided hackers their initial opening and whose name became synonymous with the attack. But, as ProPublica has reported, the Russians leveraged that opening to exploit a long-standing weakness in a Microsoft product — one that the company had refused to fix for years, despite repeated warnings from one of its engineers. Microsoft has defended its decision not to address the flaw, saying that it received “multiple reviews” and that the company weighs a variety of factors when making security decisions.
In the aftermath, the Biden administration took steps to bolster the nation’s cybersecurity. Among them, the Justice Department announced a cyber-fraud initiative in 2021 to crack down on companies and individuals that “put U.S. information or systems at risk by knowingly providing deficient cybersecurity products or services, knowingly misrepresenting their cybersecurity practices or protocols, or knowingly violating obligations to monitor and report cybersecurity incidents and breaches.”
Deputy Attorney General Lisa Monaco said the department would use the False Claims Act to pursue government contractors “when they fail to follow required cybersecurity standards — because we know that puts all of us at risk.”
But if Microsoft felt any pressure from the SolarWinds attack or from the Justice Department’s announcement, it didn’t manifest in the FedRAMP talks, according to former members of the FedRAMP team.
The discourse between FedRAMP and Microsoft fell into a pattern. The parties would meet. Months would go by. Microsoft would return with a response that FedRAMP deemed incomplete or irrelevant. To bolster the chances of getting the information it wanted, the FedRAMP team provided Microsoft with a template, describing the level of detail it expected. But the diagrams Microsoft returned never met those expectations.
“We never got past Exchange,” one former reviewer said. “We never got that level of detail. We had no visibility inside.”
In an interview with ProPublica, John Bergin, the Microsoft official who became the government’s main contact, acknowledged the prolonged back-and-forth but blamed FedRAMP, equating its requests for diagrams to a “rock fetching exercise.”
“We were maybe incompetent in how we drew drawings because there was no standard to draw them to,” he said. “Did we not do it exactly how they wanted? Absolutely. There was always something missing because there was no standard.”
A Microsoft spokesperson said without such a standard, “cloud providers were left to interpret the level of abstraction and representation on their own,” creating “inconsistency and confusion, not an unwillingness to be transparent.”
But even Microsoft’s own engineers had struggled over the years to map the architecture of its products, according to two people involved in building cloud services used by federal customers. At issue, according to people familiar with Microsoft’s technology, was the decades-old code of its legacy software, which the company used in building its cloud services.
One FedRAMP reviewer compared it to a “pile of spaghetti pies.” The data’s path from Point A to Point B, the person said, was like traveling from Washington to New York with detours by bus, ferry and airplane rather than just taking a quick ride on Amtrak. And each one of those detours represents an opportunity for a hijacking if the data isn’t properly encrypted.
Other major cloud providers such as Amazon and Google built their systems from the ground up, said Sager, the former NSA computer scientist, who worked with all three companies during his time in government.
Microsoft’s system is “not designed for this kind of isolation of ‘secure’ from ‘not secure,’” Sager said.
A Microsoft spokesperson acknowledged the company faces a unique challenge but maintained that its cloud products meet federal security requirements.
“Unlike providers that started later with a narrower product scope, Microsoft operates one of the broadest enterprise and government platforms in the world, supporting continuity for millions of customers while simultaneously modernizing at scale,” the spokesperson said in emailed responses. “That complexity is not ‘spaghetti,’ but it does mean the work of disentangling, isolating, and hardening systems is continuous.”
The spokesperson said that since 2023, Microsoft has made “security‑first architectural redesign, legacy risk reduction, and stronger isolation guarantees a top, company‑wide priority.”
Assessors Back-Channel Cyber Concerns
The FedRAMP team was not the only party with reservations about GCC High. Microsoft’s third-party assessment organizations also expressed concerns.
The firms are supposed to be independent but are hired and paid by the company being assessed. Acknowledging the potential for conflicts of interest, FedRAMP has encouraged the assessment firms to confidentially back-channel to its reviewers any negative feedback that they were unwilling to bring directly to their clients or reflect in official reports.
In 2020, two third-party assessors hired by Microsoft, Coalfire and Kratos, did just that. They told FedRAMP that they were unable to get the full picture of GCC High, a former FedRAMP reviewer told ProPublica.
“Coalfire and Kratos both readily admitted that it was difficult to impossible to get the information required out of Microsoft to properly do a sufficient assessment,” the reviewer told ProPublica.
The back channel helped surface cybersecurity issues that otherwise might never have been known to the government, people who have worked with and for FedRAMP told ProPublica. At the same time, they acknowledged its existence undermined the very spirit and intent of having independent assessors.
A spokesperson for Coalfire, the firm that initially handled the GCC High assessment, requested written questions from ProPublica, then declined to respond.
A spokesperson for Kratos, which replaced Coalfire as the GCC High assessor, declined an interview request. In an emailed response to written questions, the spokesperson said the company stands by its official assessment and recommendation of GCC High and “absolutely refutes” that it “ever would sign off on a product we were unable to fully vet.” The company “has open and frank conversations” with all customers, including Microsoft, which “submitted all requisite diagrams to meet FedRAMP-defined requirements,” the spokesperson said.
Kratos said it “spent extensive time working collaboratively with FedRAMP in their review” and does not consider such discussions to be “backchanneling.”
FedRAMP, however, was dissatisfied with Kratos’ ongoing work and believed the firm “should be pushing back” on Microsoft more, the former reviewer said. It placed Kratos on a “corrective action plan,” which could eventually result in loss of accreditation. The company said it did not agree with FedRAMP’s action but provided “additional trainings for some internal assessors” in response to it.
The Microsoft spokesperson told ProPublica the company has “always been responsive to requests” from Kratos and FedRAMP. “We are not aware of any backchanneling, nor do we believe that backchanneling would have been necessary given our transparency and cooperation with auditor requests,” the spokesperson said.
In response to questions from ProPublica about the process, the GSA said in an email that FedRAMP’s system “does not create an inherent conflict of interest for professional auditors who meet ethical and contractual performance expectations.”
GSA did not respond to questions about back-channeling but said the “correct process” is for a third-party assessor to “state these problems formally in a finding during the security assessment so that the cloud service provider has an opportunity to fix the issue.”
FedRAMP Ends Talks
The back-and-forth between the FedRAMP reviewers and Microsoft’s team went on for years with little progress. Then, in the summer of 2023, the program’s interim director, Brian Conrad, got a call from the White House that would alter the course of the review.
Chinese state-sponsored hackers had infiltrated GCC, the lower-cost version of Microsoft’s government cloud, and stolen data and emails from the commerce secretary, the U.S. ambassador to China and other high-ranking government officials. In the aftermath, Chris DeRusha, the White House’s chief information security officer, wanted a briefing from FedRAMP, which had authorized GCC.
The decision predated Conrad’s tenure, but he told ProPublica that he left the conversation with several takeaways. First, FedRAMP must hold all cloud providers — including Microsoft — to the same standards. Second, he had the backing of the White House in standing firm. Finally, FedRAMP would feel the political heat if any cloud service with a FedRAMP authorization were hacked.
DeRusha confirmed Conrad’s account of the phone call but declined to comment further.
Within months, Conrad informed Microsoft that FedRAMP was ending the engagement on GCC High.
“After three years of collaboration with the Microsoft team, we still lack visibility into the security gaps because there are unknowns that Microsoft has failed to address,” Conrad wrote in an October 2023 email. This, he added, was not for FedRAMP’s lack of trying. Staffers had spent 480 hours of review time, had conducted 18 “technical deep dive” sessions and had numerous email exchanges with the company over the years. Yet they still lacked the data flow diagrams, crucial information “since visibility into the encryption status of all data flows and stores is so important,” he wrote.
If Microsoft still wanted FedRAMP authorization, Conrad wrote, it would need to start over.
A FedRAMP reviewer, explaining the decision to the Justice Department, said the team was “not asking for anything above and beyond what we’ve asked from every other” cloud service provider, according to meeting minutes reviewed by ProPublica. But the request was particularly justified in Microsoft’s case, the reviewer told the Justice officials, because “each time we’ve actually been able to get visibility into a black box, we’ve uncovered an issue.”
“We can’t even quantify the unknowns, which makes us very uncomfortable,” the reviewer said, according to the minutes.
Microsoft and the Justice Department Push Back
Microsoft was furious. Failing to obtain authorization and starting the process over would signal to the market that something was wrong with GCC High. Customers were already confused and concerned about the drawn-out review, which had become a hot topic in an online forum used by government and technology insiders. There, Wakeman, the Microsoft cybersecurity architect, deflected blame, saying the government had been “dragging their feet on it for years now.”
Meanwhile, to build support for Microsoft’s case, Bergin, the company’s point person for FedRAMP and a former Army official, reached out to government leaders, including one from the Justice Department.
The Justice official, who spoke on condition of anonymity because they were not authorized to discuss the matter, said Bergin complained that the delay was hampering Microsoft’s ability “to get this out into the market full sail.” Bergin then pushed the Justice Department to “throw around our weight” to help secure FedRAMP authorization, the official said.
That December, as the parties gathered to hash things out at GSA’s Washington headquarters, Justice did just that. Rogers, who by then had been promoted to the department’s chief information officer, sat beside Bergin — on the opposite side of the table from Conrad, the FedRAMP director.
Rogers and her Justice colleagues had a stake in the outcome. Since authorizing and deploying GCC High, she had received accolades for her work modernizing the department’s IT and cybersecurity. But without FedRAMP’s stamp of approval, she would be the government official left holding the bag if GCC High were involved in a serious hack. At the same time, the Justice Department couldn’t easily back out of using GCC High because once a technology is widely deployed, pulling the plug can be costly and technically challenging. And from its perspective, the cloud was an improvement over the old government-run data centers.
Shortly after the meeting kicked off, Bergin interrupted a FedRAMP reviewer who had been presenting PowerPoint slides. He said the Justice Department and third-party assessor had already reviewed GCC High, according to meeting minutes. FedRAMP “should essentially just accept” their findings, he said.
Then, in a shock to the FedRAMP team, Rogers backed him up and went on to criticize FedRAMP’s work, according to two attendees.
In its statement, Microsoft said Rogers maintains that FedRAMP’s approach “was misguided and improperly dismissed the extensive evaluations performed by DOJ personnel.”
Bergin did not dispute the account, telling ProPublica that he had been trying to argue that it is the purview of third-party assessors such as Kratos — not FedRAMP — to evaluate the security of cloud products. And because FedRAMP must approve the third-party assessment firms, the program should have taken its issues up with Kratos.
“When you are the regulatory agency who determines who the auditors are and you refuse to accept your auditors’ answers, that’s not a ‘me’ problem,” Bergin told ProPublica.
The GSA did not respond to questions about the meeting. The Justice Department declined to comment.
Pressure Mounts on FedRAMP
If there was any doubt about the role of FedRAMP, the White House issued a memorandum in the summer of 2024 that outlined its views. FedRAMP, it said, “must be capable of conducting rigorous reviews” and requiring cloud providers to “rapidly mitigate weaknesses in their security architecture.” The office should “consistently assess and validate cloud providers’ complex architectures and encryption schemes.”
But by that point, GCC High had spread to other federal agencies, with the Justice Department’s authorization serving as a signal that the technology met federal standards.
It also spread to the defense sector, since the Pentagon required that cloud products used by its contractors meet FedRAMP standards. While it did not have FedRAMP authorization, Microsoft marketed GCC High as meeting the requirements, selling it to companies such as Boeing that research, develop and maintain military weapons systems.
But with the FedRAMP authorization up in the air, some contractors began to worry that by using GCC High, they were out of compliance. That could threaten their contracts, which, in turn, could impact Defense Department operations. Pentagon officials called FedRAMP to inquire about the authorization stalemate.
The Defense Department acknowledged receiving written questions from ProPublica but did not respond to them.
Rogers also kept pressing FedRAMP to “get this thing over the line,” former employees of the GSA and FedRAMP said. It was the “opinion of the staff and the contractors that she simply was not willing to put heat to Microsoft on this” and that the Justice Department “was too sympathetic to Microsoft’s claims,” Eric Mill, then GSA’s executive director for cloud strategy, told ProPublica.
Authorization Despite a “Damning” Assessment
In the summer of 2024, FedRAMP hired a new permanent director, government technology insider Pete Waterman. Within about a month of taking the job, he restarted the office’s review of GCC High with a new team, which put aside the debate over data flow diagrams and instead attempted to examine evidence from Microsoft. But these reviewers soon arrived at the same conclusion, with the team’s leader complaining about “getting stiff-armed” by Microsoft.
“He came back and said, ‘Yeah, this thing sucks,’” Mill recalled.
While the team was able to work through only two of the many services included in GCC High, Exchange Online and Teams, that was enough for it to identify “issues that are fundamental” to risk management, including “timely remediation of vulnerabilities and vulnerability scanning,” according to a summary of the team’s findings reviewed by ProPublica.
Those issues, as well as a lack of “proper detailed security documentation” from Microsoft, limit “visibility and understanding of the system” and “impair the ability to make informed risk decisions.”
The team concluded, “There is a lack of confidence in assessing the system’s overall security posture.”
A Microsoft spokesperson said in a statement that the company “never received this feedback in any of its communications with FedRAMP.”
When ProPublica read the findings to Bergin, the Microsoft liaison, he said he was surprised.
“That’s pretty damning,” Bergin said, adding that it sounded like language that “would’ve generally been associated with a finding of ‘not worthy.’ If an assessor wrote that, I would be nervous.”
Despite the findings, to the FedRAMP team, turning Microsoft down didn’t seem like an option. “Not issuing an authorization would impact multiple agencies that are already using GCC-H,” the summary document said. The team determined that it was a “better value” to issue an authorization with conditions for continued government oversight.
While authorizations with oversight conditions weren’t unusual, arriving at one under these circumstances was. GCC High reviewers saw problems everywhere, both in what they were able to evaluate and what they weren’t. To them, most of the package remained a vast wilderness of untold risk.
Nevertheless, FedRAMP and Microsoft reached an agreement, and the day after Christmas 2024, GCC High received its FedRAMP authorization. FedRAMP appended a cover report to the package laying out its deficiencies and noting it carried unknown risks, according to people familiar with the report.
It emphasized that agencies should carefully review the package and engage directly with Microsoft on any questions.
“Unknown Unknowns” Persist
Microsoft told ProPublica that it has met the conditions of the agreement and has “stayed within the performance metrics required by FedRAMP” to ensure that “risks are identified, tracked, remediated, and transparently communicated.”
But under the Trump administration, there aren’t many people left at FedRAMP to check.
While the Biden-era guidance said FedRAMP “must be an expert program that can analyze and validate the security claims” of cloud providers, the GSA told ProPublica that the program’s role is “not to determine if a cloud service is secure enough.” Rather, it is “to ensure agencies have sufficient information to make these risk decisions.”
The problem is that agencies often lack the staff and resources to do thorough reviews, which means the whole system is leaning on the claims of the cloud companies and the assessments of the third-party firms they pay to evaluate them. Under the current vision, critics say, FedRAMP has lost the plot.
“FedRAMP’s job is to watch the American people’s back when it comes to sharing their data with cloud companies,” said Mill, the former GSA official, who also co-authored the 2024 White House memo. “When there’s a security issue, the public doesn’t expect FedRAMP to say they’re just a paper-pusher.”
Meanwhile, at the Justice Department, officials are finding out what FedRAMP meant by the “unknown unknowns” in GCC High. Last year, for example, they discovered that Microsoft relied on China-based engineers to service their sensitive cloud systems despite the department’s prohibition against non-U.S. citizens assisting with IT maintenance.
Officials learned about this arrangement — which was also used in GCC High — not from FedRAMP or from Microsoft but from a ProPublica investigation into the practice, according to the Justice employee who spoke with us.
A Microsoft spokesperson acknowledged that the written security plan for GCC High that the company submitted to the Justice Department did not mention foreign engineers, though he said Microsoft did communicate that information to Justice officials before 2020. Nevertheless, Microsoft has since ended its use of China-based engineers in government systems.
Former and current government officials worry about what other risks may be lurking in GCC High and beyond.
The GSA told ProPublica that, in general, “if there is credible evidence that a cloud service provider has made materially false representations, that matter is then appropriately referred to investigative authorities.”
Ironically, the ultimate arbiter of whether cloud providers or their third-party assessors are living up to their claims is the Justice Department itself. The recent indictment of the former Accenture employee suggests it is willing to use this power. In a court document, the Justice Department alleges that the ex-employee made “false and misleading representations” about the cloud platform’s security to help the company “obtain and maintain lucrative federal contracts.” She is also accused of trying to “influence and obstruct” Accenture’s third-party assessors by hiding the product’s deficiencies and telling others to conceal the “true state of the system” during demonstrations, the department said. She has pleaded not guilty.
There is no public indication that such a case has been brought against Microsoft or anyone involved in the GCC High authorization. The Justice Department declined to comment. Monaco, the deputy attorney general who launched the department’s initiative to pursue cybersecurity fraud cases, did not respond to requests for comment.
She left her government position in January 2025. Microsoft hired her to become its president of global affairs.
A company spokesperson said Monaco’s hiring complied with “all rules, regulations, and ethical standards” and that she “does not work on any federal government contracts or have oversight over or involvement with any of our dealings with the federal government.”
Because South Dakota governor Larry Rhoden is forever obligated to serve Kristi Noem and Kristi Noem is forever obligated to serve Donald Trump, he and his GOP buddies are making America MAGA again, starting with his home turf.
Non-citizens have never really disrupted voting. But they’re the convenient scapegoat for a party that’s justifiably worried it’s going to lose its majority during the mid-terms. Multiple efforts are being made all over the nation to disenfranchise anyone who’s not part of Trump’s most rabid voting base. Pretending people not allowed to legally vote are somehow flipping elections for the Democratic Party is more than merely obnoxious. It’s actually harming the democratic process.
Here in South Dakota, two laws have been passed in recent weeks with the express purpose of keeping non-white people from showing up to vote. The first, passed at the beginning of this month, allows any rando to claim a person they saw voting shouldn’t be allowed to vote.
Voters in South Dakota will soon be able to challenge other voters’ citizenship.
Republican Gov. Larry Rhoden signed legislation into law last week that authorizes challenges by individuals and election officials.
[…]
State law already allows challenges to a voter’s registration up to the 90th day before an election, if a person is suspected of lacking South Dakota residency, voting in another state or being registered to vote in another state. The new law adds citizenship as a justification for a challenge.
Challenges may be filed by the South Dakota Secretary of State’s Office, the auditor in the county where the voter is registered, or a voter in the same county. The challenge must be in the form of a signed, sworn statement and must include what the law describes as “documented evidence.”
Now, we can all see what the law is. But we all know how it will be applied. State employees with access to voter rolls will raise challenges against anyone with a foreign-sounding last name. While few citizens are likely to file formal challenges, many will certainly feel comfortable accosting anyone standing in line to vote whose skin is darker than their own. Given the inevitability of these responses, it’s easy to see the law accomplishing exactly what it’s supposed to: limit the number of non-white voters at the polls during the mid-terms and beyond.
But that’s not the only suppression effort signed into law this month. There’s also this one, which raises the bar for participating in the democratic process with the obvious intention of limiting participation to the sort of voters the GOP thinks will vote for it:
New voters in South Dakota will have to prove that they are United States citizens in order to cast a ballot in state and local races under a bill signed on Thursday by Gov. Larry Rhoden.
The new law, which does not apply to South Dakotans already on the voter rolls, comes amid a national push by Republicans to tighten voting rules and root out voting by noncitizens, which is already illegal and believed to be rare.
“This bill ensures only citizens vote in state elections, keeping our elections safe and secure,” said Mr. Rhoden, who is seeking election to a full term this year and is facing a crowded Republican primary field.
It’s already illegal in South Dakota to vote if you’re not a citizen. This bill addresses a completely imaginary “problem.” And it forces voters to provide a passport, birth certificate, or other documents proving citizenship before they’re allowed to vote. While it may be easy for many people to present these documents, the simple fact is that they’ve never been asked to do this before, and anyone who’s not aware this law has been passed will be denied the opportunity to vote because the GOP decided to move the goalposts during an election year.
Non-citizens voting in South Dakota has never been an issue. The fact that 273 non-citizens were recently removed from the state’s voting rolls may seem a bit sketchy, but there’s a good reason there might be a few hundred non-citizens with voter registrations:
Noncitizens can obtain a driver’s license or state ID if they are lawful permanent residents or have temporary legal status. There’s a part of the driver’s license form that allows an applicant to register to vote. That part says voters must be citizens.
The problem is that this is all on the same form. The voter registration part of the form has a signature line, which many applicants will fill out and sign even if their intention is only to get a driver’s license or ID card, especially since it appears before the final signature block for the entire application.
If applicants are not asked to affirmatively state their intention to register to vote (as Department of Public Safety employees now do, also asking applicants to write “vote” on the form to signal their affirmation), their applications might be processed along with a voter registration the applicants didn’t realize they were enabling.
The Secretary of State’s office (the office that’s supposed to be reviewing voter registrations for eligibility) threw the Department of Public Safety under the bus:
Rachel Soulek, director of the Division of Elections in the Secretary of State’s Office, placed blame on the department in her response to South Dakota Searchlight questions about the situation.
“These non U.S. citizens had marked ‘no’ to the citizenship question on their driver’s license application but were incorrectly processed as U.S. citizens due to human error by the Department of Public Safety,” Soulek wrote.
That’s not what happened. Their ID applications were processed, and Soulek’s department failed to catch the inadvertent errors. And it doesn’t really even matter who’s at fault, because despite the errors, this is still a non-issue.
Soulek said only one of the 273 noncitizens had ever cast a ballot. That was during the 2016 general election.
A handful of clerical errors that resulted in a single illegal vote in the past decade cannot be a rational basis for a new law. And there’s a good chance the sole vote was made in error, rather than maliciously. After all, if the state told this person they could vote, who were they to question that determination?
This is nothing more than state governments stepping up to do what Trump can’t. His SAVE Act is stalled and lots of last-minute gerrymandering at the behest of the president is tied up in court. His loyalists are doing what they can to make his perverted dreams a reality in states that are most likely to lean Republican in the first place, which makes all of this as pointless as it is stupid. But the underlying threat to democracy remains, ever propelled forward by the people who claim to love America the most.
Last week, the European Parliament voted to let a temporary exemption lapse that had allowed tech companies to scan their services for child sexual abuse material (CSAM) without running afoul of strict EU privacy regulations. Meanwhile, here in the US, West Virginia’s Attorney General continues to press forward with a lawsuit designed to force Apple to scan iCloud for CSAM, apparently oblivious to the fact that succeeding would hand defense attorneys the best gift they’ve ever received.
Two different jurisdictions. Two diametrically opposed approaches, both claiming to protect children, and both making it harder to actually do so.
I’ll be generous and assume people pushing both of these views genuinely think they’re doing what’s best for children. This is a genuinely complex topic with real, painful tradeoffs, and reasonable people can weigh them differently. What’s frustrating is watching policymakers on both sides of the Atlantic charge forward with approaches that seem driven more by vibes than by any serious engagement with how the current system actually works — or why it was built the way it was.
The European Parliament just voted against extending a temporary regulation that had exempted tech platforms from GDPR-style privacy rules when they voluntarily scanned for CSAM. This exemption had been in place (and repeatedly extended) for years while Parliament tried to negotiate a permanent framework. Those negotiations have been going on since November 2023 without resolution, and on Thursday MEPs decided they were done extending the stopgap.
To be clear, Parliament didn’t pass a law banning CSAM scanning. Companies can still technically scan if they want to. But without the exemption, they’re now exposed to massive privacy liability under EU law for doing so. Scanning private messages and stored content to look for CSAM is, after all, mass surveillance — and European privacy law treats mass surveillance seriously (which, in most cases, it should!). So the practical effect is a chilling one: companies that were voluntarily scanning now face significant legal risk if they continue.
The digital rights organization EDRi framed the issue in stark terms:
“This is actually just enabling big tech companies to scan all of our private messages, our most intimate details, all our private chats so it constitutes a really, really serious interference with our right to privacy. It’s not targeted against people that are suspected of child abuse — It’s just targeting everyone, potentially all of the time.”
And that argument is compelling. Hash-matching systems that compare uploaded images against databases of known CSAM are more targeted than, say, keyword scanning of every message, but they still fundamentally involve examining every unencrypted piece of content that passes through the system. When EDRi says it targets “everyone, potentially all of the time,” that’s an accurate description of how the technology works.
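The core mechanics are worth spelling out, because they explain both why advocates call this "targeted" and why critics call it mass surveillance. A rough sketch in Python (this is not any platform's actual pipeline — real deployments use perceptual hashes like Microsoft's PhotoDNA that survive resizing and re-encoding, while the cryptographic SHA-256 used here only matches exact byte-for-byte copies, and the hash database below is invented for illustration):

```python
import hashlib

# Hypothetical database of hashes of known images. In real systems this is
# a vetted database of perceptual hashes maintained by organizations like
# NCMEC; here it's just the SHA-256 digest of the bytes b"known-image".
KNOWN_HASHES = {
    hashlib.sha256(b"known-image").hexdigest(),
}

def matches_known_image(upload: bytes) -> bool:
    """Hash an uploaded file and check it against the known-hash set.

    Note what this implies: *every* upload gets hashed and checked,
    which is why even this narrow approach scans everyone's content,
    while only exact matches against the database produce a hit.
    """
    return hashlib.sha256(upload).hexdigest() in KNOWN_HASHES

print(matches_known_image(b"known-image"))   # a match against the database
print(matches_known_image(b"vacation.jpg"))  # an ordinary upload: no match
```

The design tension is visible even in this toy version: the system never "reads" your content in any semantic sense, but it must touch all of it, unencrypted, to compute the hashes — which is exactly the interference EDRi is objecting to.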
But… the technology also works to find and catch CSAM. Europol’s executive director, Catherine De Bolle, pointed to concrete numbers:
Last year alone, Europol processed around 1.1 million of so-called CyberTips, originating from the National Center for Missing & Exploited Children (NCMEC), of relevance to 24 European countries. CyberTips contain multiple entities (files, videos, photos etc.) supporting criminal investigation efforts into child sexual abuse online.
If the current legal basis for voluntary detection by online platforms were to be removed, this is expected to result in a serious reduction of CyberTip referrals. This would undermine the capability to detect relevant investigative leads on CSAM, which in turn will severely impair the EU’s security interests of identifying victims and safeguarding children.
The companies that have been doing this scanning — Google, Microsoft, Meta, Snapchat, TikTok — released a joint statement saying they are “deeply concerned” and warning that the lapse will leave “children across Europe and around the world with fewer protections than they had before.”
So the EU’s privacy advocates aren’t wrong about the surveillance problem. Europol isn’t wrong about the child safety consequences. Both things are true — which is what makes this genuinely tricky rather than a case of one side being obviously right.
Now flip to the United States, where the problem is precisely inverted.
In the US, the existing system has been carefully constructed around a single, critical principle: companies voluntarily choose to scan for CSAM, and when they find it, they’re legally required to report it to NCMEC. The word “voluntarily” is doing enormous load-bearing work in that sentence — and most of the people currently shouting about CSAM don’t seem to know it. As Stanford’s Riana Pfefferkorn explained in detail on Techdirt when a private class action lawsuit against Apple tried to compel CSAM scanning:
While the Fourth Amendment applies only to the government and not to private actors, the government can’t use a private actor to carry out a search it couldn’t constitutionally do itself. If the government compels or pressures a private actor to search, or the private actor searches primarily to serve the government’s interests rather than its own, then the private actor counts as a government agent for purposes of the search, which must then abide by the Fourth Amendment, otherwise the remedy is exclusion.
If the government – legislative, executive, or judiciary – forces a cloud storage provider to scan users’ files for CSAM, that makes the provider a government agent, meaning the scans require a warrant, which a cloud services company has no power to get, making those scans unconstitutional searches. Any CSAM they find (plus any other downstream evidence stemming from the initial unlawful scan) will probably get excluded, but it’s hard to convict people for CSAM without using the CSAM as evidence, making acquittals likelier. Which defeats the purpose of compelling the scans in the first place.
In the US, if the government forces Apple to scan, that makes Apple a government agent. Government agents need warrants. Apple can’t get warrants. So the scans are unconstitutional. So the evidence gets thrown out. So the predators walk free. All because someone thought “just make them scan!” was a simple solution to a complex problem.
Congress apparently understood this when it wrote the federal reporting statute — that’s why the law explicitly disclaims any requirement that providers proactively search for CSAM. The voluntariness of the scanning is what preserves its legal viability. Everyone involved in the actual work of combating CSAM — prosecutors, investigators, NCMEC, trust and safety teams — understands this and takes great care to preserve it.
Everyone, apparently, except the Attorney General of West Virginia. As we discussed recently, West Virginia just filed a lawsuit demanding that a court order Apple to “implement effective CSAM detection measures” on iCloud. The remedy West Virginia seeks — a court order compelling scanning — would spring the constitutional trap that everyone who actually works on this issue has been carefully avoiding for years.
As Pfefferkorn put it:
Any competent plaintiff’s counsel should have figured this out before filing a lawsuit asking a federal court to make Apple start scanning iCloud for CSAM, thereby making Apple a government agent, thereby turning the compelled iCloud scans into unconstitutional searches, thereby making it likelier for any iCloud user who gets caught to walk free, thereby shooting themselves in the foot, doing a disservice to their client, making the situation worse than the status quo, and causing a major setback in the fight for child safety online.
The reason nobody’s filed a lawsuit like this against Apple to date, despite years of complaints from left, right, and center about Apple’s ostensibly lackadaisical approach to CSAM detection in iCloud, isn’t because nobody’s thought of it before. It’s because they thought of it and they did their fucking legal research first. And then they backed away slowly from the computer, grateful to have narrowly avoided turning themselves into useful idiots for pedophiles.
The West Virginia complaint also treats Apple’s abandoned NeuralHash client-side scanning project as evidence that Apple could scan but simply chose not to. What it skips over is why the security community reacted so strongly to NeuralHash in the first place. Apple’s own director of user privacy and child safety laid out the problem:
Scanning every user’s privately stored iCloud content would in our estimation pose serious unintended consequences for our users… Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types (such as images, videos, text, or audio) and content categories. How can users be assured that a tool for one type of surveillance has not been reconfigured to surveil for other content such as political activity or religious persecution? Tools of mass surveillance have widespread negative implications for freedom of speech and, by extension, democracy as a whole.
Once you create infrastructure capable of scanning every user’s private content for one category of material, you’ve created infrastructure capable of scanning for anything. The pipe doesn’t care what flows through it. Governments around the world — some of them not exactly champions of human rights — have a well-documented habit of demanding expanded use of existing surveillance capabilities. This connects directly to the perennial fights over end-to-end encryption backdoors, where the same argument applies: you cannot build a door that only the good guys can walk through.
And then there’s the scale problem. Even the best hash-matching systems can produce false positives, and at the scale of major platforms, even tiny error rates translate into enormous numbers of wrongly flagged users.
This is one of those frustrating stories where you can… kinda see all sides, and there’s no easy or obvious answer:
Scanning works, at least somewhat. 1.1 million CyberTips from Europol in a single year. Some number of children identified and rescued because platforms voluntarily detected CSAM and reported it. The system produces real results.
Scanning is mass surveillance. Every image, every message gets examined (algorithmically), not just those belonging to suspected offenders. The privacy intrusion is real, not hypothetical, and it falls on everyone.
Compelled scanning breaks prosecutions. In the US, the Fourth Amendment means that government-ordered scanning creates a get-out-of-jail card for the very predators everyone claims to be targeting. The voluntariness of the system is what makes it legally functional.
Scanning infrastructure is repurposable. A system built to detect CSAM can be retooled to detect political speech, religious content, or anything else. This concern is not paranoid; it’s an engineering reality.
False positives at scale are inevitable. Even highly accurate systems will flag innocent content when processing billions of items, and the consequences for wrongly accused individuals are severe.
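The false-positive point is simple arithmetic. The figures below are illustrative assumptions, not published numbers for any real system, but they show how a seemingly negligible error rate becomes a large absolute number at platform scale:

```python
# Back-of-the-envelope estimate. Both inputs are assumed values chosen
# for illustration, not measured figures for any actual platform.
items_scanned_per_year = 10_000_000_000  # 10 billion uploads (assumed)
false_positive_rate = 1e-6               # one in a million (assumed)

wrongly_flagged = items_scanned_per_year * false_positive_rate
print(int(wrongly_flagged))  # thousands of innocent items flagged per year
```

Even under these charitable assumptions, that’s ten thousand wrongly flagged items a year — each one a person potentially reported to law enforcement over innocent content.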
People can and will weigh these tradeoffs differently, and that’s legitimate. The tension described in all this is real and doesn’t resolve neatly.
But what both the EU Parliament’s vote and West Virginia’s lawsuit share is an unwillingness to sit with that tension. The EU stripped legal cover from the voluntary system that was actually producing results, without having a workable replacement ready. West Virginia is trying to compel what must remain voluntary, apparently without bothering to read the constitutional case law that makes compelled scanning self-defeating. From opposite directions, both approaches attack the same fragile voluntary architecture that currently threads the needle between these competing interests.
The status quo in the United States — voluntary scanning, mandatory reporting, no government compulsion to search — is far from perfect. But the system functions: it produces leads, preserves prosecutorial viability, and does so precisely because it was designed by people who understood the tradeoffs and built accordingly.
It would be nice if more policymakers engaged with why the system works the way it does before trying to blow it up from either direction. In tech policy, the loudest voices in the room are rarely the ones who’ve done the reading.
Last month Walled Culture wrote about an important case at the Court of Justice of the European Union (CJEU), the EU’s top court, that could determine how VPNs can be used in that region. Clarification in this area is particularly important because VPNs are currently under attack in various ways. For example, last year, the Danish government published draft legislation that many believed would make it illegal to use a VPN to access geoblocked streaming content or bypass restrictions on illegal websites. In the wake of a firestorm of criticism, Denmark’s Minister of Culture assured people that VPNs would not be banned. However, even though references to VPNs were removed from the text, the provisions are so broadly drafted that VPNs may well be affected anyway. Companies too are taking aim at VPNs. Leading the charge are those in France, which have been targeting VPN providers for over a year now. As TorrentFreak reported last February:
Canal+ and the football league LFP have requested court orders to compel NordVPN, ExpressVPN, ProtonVPN, and others to block access to pirate sites and services. The move follows similar orders obtained last year against DNS resolvers.
The VPN Trust Initiative (VTI) responded with a press release opposing what it called a “Misguided Legal Effort to Extend Website Blocking to VPNs”. It warned:
Such blocking can have sweeping consequences that might put the security and privacy of French citizens at risk.
Targeting VPNs opens the door to a dangerous censorship precedent, risking overreach into broader areas of content.
The VPN provider raised jurisdictional questions and also requested to see evidence that Canal+ owned all the rights at play. However, these concerns didn’t convince the court.
The same applies to Proton’s net neutrality defense, which argued that Article 333-10 of the French sports code, which is at the basis of all blocking orders, violates EU Open Internet Regulation. This defense was too vague, the court concluded, noting that Proton cited the regulation without specifying which provisions were actually breached.
ProtonVPN also argued that forcing a Swiss company to block sites for the French market is a restriction of cross-border trade in services, and that in any case, the blocking measures were “technically unrealizable, costly, and unnecessarily complex.” Despite this valiant defense, the court was unimpressed. At least ProtonVPN was allowed to contest the French court’s ruling. In a similar case in Spain, no such option was given. According to TorrentFreak:
The court orders were issued inaudita parte, which is Latin for “without hearing the other side.” Citing urgency, the Córdoba court did not give NordVPN and ProtonVPN the opportunity to contest the measures before they were granted.
Without a defense, the court reportedly concluded that both NordVPN and ProtonVPN actively advertise their ability to bypass geo-restrictions, citing match schedules in their marketing materials. The VPNs are therefore seen as active participants in the piracy chain rather than passive conduits, according to local media reports.
That’s pretty shocking, and shows once more how biased in favor of the copyright industry the law has become in some jurisdictions: other parties aren’t even allowed to present a defense. It’s a further reason why a definitive ruling from the CJEU on the right of people to use VPNs how they wish is so important.
Alongside these recent court cases, there is also another imminent attack on the use of VPNs, albeit in a slightly different way. The UK government has announced wide-ranging plans that aim to “keep children safe online”. One of the ideas the government is proposing is “to age restrict or limit children’s VPN use where it undermines safety protections and changing the age of digital consent.” Although this is presented as a child protection measure, the effects will be much wider. The only way to impose age restrictions on children is to require all adult users of VPNs to verify their own age. This inevitably leads to the creation of huge new online databases of personal information that are vulnerable to attack. As a side effect, the UK government’s misguided plans will also bolster the growing attempts by the copyright industry to demonize VPNs – a core element of the Internet’s plumbing – as unnecessary tools that are only used to break the law.
The Modern No-Code Creator Bundle is an online curriculum designed to help people build professional websites, applications, and automated workflows without writing any code. It includes five courses covering leading no-code platforms and tools like ChatGPT, Mendix, and Tabnine. It’s well suited to beginners and non-technical professionals who want to launch digital products without developer assistance. It’s on sale for $20.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
This is big. This is going to cause a whole lot of problems for the administration in the hundreds of ICE-related lawsuits it’s defending itself against. It’s a Perry Mason moment, albeit one that implicates the entity delivering it, rather than the other way around. (h/t Chris Geidner on Bluesky)
As we are all painfully aware, ICE operations since Trump returned to office have immediately strayed from the stated “worst of the worst” purpose to going after pretty much anyone who isn’t white. That means ICE officers are staking out any place day laborers might be hanging out, raiding any business that might employ migrant labor, roaming the streets in unmarked cars and masks to snatch up foreign-looking people, and — in what has always been extremely controversial — hanging around immigration courts to arrest migrants engaging in their court-ordered check-ins.
All of it is awful, but deliberately targeting people who are following all of the rules that allow them to remain in the US is particularly despicable. That’s what ICE and other DHS components have been doing: making the easiest, laziest arrests possible to satisfy White House advisor Stephen Miller’s ever-escalating arrest quota.
The administration has spent the last year claiming immigration court arrests are not only legal, but fully supported by ICE policy. Officials (and DOJ lawyers) have said this despite it never having been the case before Trump’s return to office.
Now, we know it isn’t true. Bizarrely, this revelation isn’t the result of FOIA requests or court discovery orders. It comes from the DOJ itself, which delivered this unexpected twist in the mass deportation saga in a March 24 filing in a case being handled by the Southern District of New York.
Here’s the essence of the admission made by the DOJ in its letter to the court [PDF]:
We write respectfully and regrettably to correct a material mistaken statement of fact that the Government made to the Court and Plaintiffs. Specifically, this morning, counsel from U.S. Immigration and Customs Enforcement (“ICE”) informed the undersigned of the following: the memorandum entitled Civil Immigration Enforcement Actions in or Near Courthouses, dated May 27, 2025 – which the Government relied on in presenting its arguments in this case and referred to as the “2025 ICE Guidance” – does not and has never applied to civil immigration enforcement actions in or near Executive Office for Immigration Review (“EOIR”) immigration courts.
Holy shit. That’s huge. And the DOJ knows it. The letter goes on to inform the court that the DOJ will be reversing the stance it took in several filings in this case. It also acknowledges that the court opinion based on its previous (and perhaps unknowing) misrepresentations will need to be rescinded and re-briefed.
The ACLU’s response to the DOJ’s filing drives the point home further:
[T]he government now concedes the May 2025 ICE memorandum—which it previously asserted authorized arrests at immigration courthouses, provided guidance minimizing the harms of such arrests, and explained the agency’s reasoning for abandoning a prior policy largely prohibiting such arrests—in fact has never applied to such arrests. Accordingly, it further concedes the government’s primary defense to Plaintiffs’ claim that the Immigration Court Arrest Policy is arbitrary and capricious in violation of the Administrative Procedure Act must be “withdraw[n]…”
[…]
The implications of this development are far-reaching. In the months since the Court relied on the government’s representation to deny Plaintiffs preliminary relief, Defendants have continued arresting noncitizens at their immigration court hearings, resulting in their detention—often in facilities hundreds of miles away.
The email cited in the DOJ’s letter was issued by Liana J. Castano, the assistant director of ICE field operations, on March 19. In bold print, the memo says this:
This broadcast serves as a reminder that the May 27, 2025, Guidance does not apply to Executive Office for Immigration Review (Immigration) courts, regardless of their location. As stated in the Guidance, it also does not apply to criminal immigration enforcement actions inside courthouses.
Out of context, “does not apply” might seem like it contradicts the DOJ’s assertion. It doesn’t. Here’s the context, provided by the original memo [PDF], which has been posted to ICE’s website:
ICE officers or agents may conduct civil immigration enforcement actions in or near courthouses when they have credible information that leads them to believe the targeted alien(s) is or will be present at a specific location.
Additionally, civil immigration enforcement actions in or near courthouses should, to the extent practicable, continue to take place in non-public areas of the courthouse, be conducted in collaboration with court security staff, and utilize the court building’s non-public entrances and exits. When practicable, ICE officers and agents will conduct civil immigration enforcement actions against targeted aliens discreetly to minimize their impact on court proceedings.
You can see the problem here: the original memo (issued May 27, 2025) says ICE officers can engage in enforcement efforts “in or near courthouses.” There’s a single caveat, but not one that specifically says immigration courts are off-limits:
ICE officers and agents should generally avoid enforcement actions in or near courthouses, or areas within courthouses that are wholly dedicated to non-criminal proceedings (e.g., family court, small claims court).
That doesn’t specifically exclude immigration courts, although those courts only handle non-criminal proceedings because immigration law violations are civil violations. There’s other language in the memo that further muddies the water:
Other aliens encountered during a civil immigration enforcement action in or near a courthouse, such as family members or friends accompanying the target alien to court appearances or serving as a witness in a proceeding, may be subject to civil immigration enforcement action on a case-by-case basis considering the totality of the circumstances.
This doesn’t specify whether these court appearances are criminal or civil. It just says ICE officers can take advantage of the situation to rack up some ancillary arrests.
I’m not sure what happened recently that would have prompted this clarification. Maybe there’s been an internal change of heart by ICE leadership. Maybe ICE’s legal team was unable to find a way to make these courthouse arrests legally defensible. In any event, the clarification was issued, well after tons of damage had already been done.
While it kind of looks like ICE leadership is throwing front-line officers under the bus by issuing an after-the-fact clarification of a vaguely worded memo issued 10 months ago, I wouldn’t worry about the ICE officers. The bus is mostly imaginary: it’s almost impossible to sue federal officers, and the original memo provides enough plausible deniability that qualified immunity would foreclose any lawsuit that managed to make its way past the initial Bivens barrier.
As irritating as that is, the important thing is that the DOJ has stated, in court, that pretty much any immigration courthouse arrest performed by federal officers was illegal. And that’s going to make it way easier to sue the government itself over its mass deportation program.