However, Congress clearly did not continue this work. In fact, it now appears that Congress is poised to consider another extension of this program without even attempting to include necessary and common sense reforms. Most notably, Congress is not considering a requirement to obtain a warrant before looking at data on U.S. persons that was indiscriminately and warrantlessly collected. House Speaker Mike Johnson confirmed that “the plan is to move a clean extension of FISA … for at least 18 months.”
Even more disappointing, House Judiciary Chair Jim Jordan, who has previously been a champion of both the warrant requirement and closing the data broker loophole, told the press he would vote for a clean extension of FISA, claiming that RISAA included enough reforms for the moment.
It’s important to note that RISAA was just a reauthorization of this mass surveillance program with a long history of abuse. Prior to the 2024 reauthorization, Section 702 was already misused to run improper queries on peaceful protesters, federal and state lawmakers, Congressional staff, thousands of campaign donors, journalists, and a judge reporting civil rights violations by local police. RISAA further expanded the government’s authority by allowing it to compel a much larger group of people and providers into assisting with this surveillance. As we said when it passed, overall, RISAA is a travesty for Americans who deserve basic constitutional rights and privacy whether they are communicating with people and services inside or outside of the US.
Section 702 should not be reauthorized without any additional safeguards or oversight. Fortunately, there are currently three reform bills for Congress to consider: SAFE, PLEWSA, and GSRA. While none of these bills are perfect, they are all significantly better than the status quo, and should be considered instead of a bill that attempts no reform at all.
Mass spying—accessing a massive amount of communications by and with Americans first and sorting out targets second and secretly—has always been a problem for our rights. It was a problem at first when President George W. Bush authorized it in secret without Congressional or court oversight. And it remained a problem even after the passage of Section 702 in 2008 created the possibility of some oversight. Congress was right that this surveillance is dangerous, and that’s why it set Section 702 up for regular reconsideration. That reconsideration has not occurred, even as the circumstances of the NSA, Justice Department, and FBI leadership have radically changed. Reform is long overdue, and now it’s urgent.
We’ve all had the unsettling experience of seeing an ad online that reveals just how much advertisers know about our lives. You’re right to be disturbed. Those very same online ad systems have been used by the government to warrantlessly track people’s locations, new reporting has confirmed.
For years, the internet advertising industry has been sucking up our data, including our location data, to serve us “more relevant ads.” At the same time, we know that federal law enforcement agencies have been buying up our location data from shady data brokers that most people have never heard of.
Now, a new report gives us direct evidence that Customs and Border Protection (CBP) has used location data taken from the internet advertising ecosystem to track phones. In a document uncovered by 404 Media, CBP admits what we’ve been saying for years: The technical systems powering creepy targeted ads also allow federal agencies to track your location.
The document acknowledges that a program by the agency to use “commercially available marketing location data” for surveillance drew from the process used to select the targeted ads shown to you on nearly every website and app you visit. In this blog post, we’ll tell you what this process is, how it can be, and is being, used for state surveillance, and what can be done about it—by individuals, by lawmakers, and by the tech companies that enable these abuses.
Advertising Surveillance Enables Government Surveillance
The online advertising industry has built a massive surveillance machine, and the government can co-opt it to spy on us.
In the absence of strong privacy laws, surveillance-based advertising has become the norm online. Companies track our online and offline activity, then share it with ad tech companies and data brokers to help target ads. Law enforcement agencies take advantage of this advertising system to buy information about us that they would normally need a warrant for, like location data. They rely on the multi-billion-dollar data broker industry to buy location data harvested from people’s smartphones.
We’ve known for years that location data brokers are one part of federal law enforcement’s massive surveillance arsenal, including immigration enforcement agencies like CBP and Immigration and Customs Enforcement (ICE). ICE, CBP and the FBI have purchased location data from the data broker Venntel and used it to identify immigrants who were later arrested. Last year, ICE purchased a spy tool called Webloc that gathers the locations of millions of phones and makes it easy to search for phones within specific geographic areas over a period of time. Webloc also allows them to filter location data by the unique advertising IDs that Apple and Google assign to our phones.
But a document recently obtained by 404 Media is the first time CBP has acknowledged the location data it buys is partially sourced from the system powering nearly every ad you see online: real-time bidding (RTB). As CBP puts it, “RTB-sourced location data is recorded when an advertisement is served.”
Even though this document is about a 2019-2021 pilot use of this data, CBP and other federal agencies have continued to purchase and use commercially obtained location data. ICE has purchased location tracking tools since then and recently requested information on “Ad Tech” tools it could use for investigations.
The CBP document acknowledges two sources of location data that it relies on: software development kits (SDKs) and RTB, both methods of location-tracking that EFF has written about before. Apps for weather, navigation, dating, fitness, and “family safety” often request location permissions to enable key features. But once an app has access to your location, it could share it with data brokers directly through SDKs or indirectly (and often without the app developers’ knowledge) through RTB. Data brokers can collect location data from SDKs that they pay developers to put in their apps. When relying on RTB, data brokers don’t need any direct relationship with the apps and websites they’re collecting location data from. RTB is facilitated by ad companies that are already plugged into most websites and apps.
How Real-Time Bidding Works
RTB is the process by which most websites and apps auction off their ad space. Unfortunately, the milliseconds-long auctions that determine which ads you see also expose your information, including location data, to thousands of companies a day. At a high level, here’s how RTB works:
The moment you visit a website or app with ad space, it asks an ad tech company to determine which ads to display for you.
This ad tech company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers.
The bid request may contain information like your unique advertising ID, your GPS coordinates, IP address, device details, inferred interests, demographic information, and the app or website you’re visiting. The information in bid requests is called “bidstream data” and typically includes identifiers that can be linked to real people.
Advertisers use the personal information in each bid request, along with data profiles they’ve built about you over time, to decide whether to bid on the ad space.
The highest bidder gets to display an ad for you, but advertisers (or the adtech companies that represent them) can collect your bidstream data regardless of whether or not they bid on the ad space.
A key vulnerability of real-time bidding is that while only one advertiser wins the auction, all participants receive data about the person who would see their ad. As a result, anyone posing as an ad buyer can access a stream of sensitive data about billions of individuals a day. Data brokers have taken advantage of this vulnerability to harvest data at a staggering scale. For example, the FTC found that location data broker Mobilewalla collected data on over a billion people, with an estimated 60% sourced from RTB auctions. Leaked data from another location data broker, Gravy Analytics, referenced thousands of apps, including Microsoft apps, Candy Crush, Tinder, Grindr, MyFitnessPal, pregnancy trackers, and religion-focused apps. When confronted, several of these apps’ developers said they had never heard of Gravy Analytics.
As Venntel, one of the location data brokers that has sold to ICE, puts it, “Commercially available bidstream data from the advertising ecosystem has long been one of the most comprehensive sources of real-time location and device data available.” But the privacy harms of RTB are not just a matter of misuse by individual data brokers. RTB auctions broadcast the average person’s data to thousands of companies, hundreds of times per day, with no oversight of how this information is ultimately exploited. Once your information is broadcast through RTB, it’s almost impossible to know who receives it or control how it’s used.
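To make the privacy exposure concrete, the auction flow described above can be sketched as a minimal bid request. Field names here follow the publicly documented OpenRTB format used across the ad industry; the app bundle, IDs, and coordinates are invented for illustration, not taken from the CBP document.

```python
# Illustrative sketch of the personal data an OpenRTB-style bid request
# can carry. All values below are made up for demonstration purposes.
import json

bid_request = {
    "id": "auction-123",                       # unique ID for this ad auction
    "app": {"bundle": "com.example.weather"},  # the app showing the ad
    "device": {
        "ifa": "5ecb16aa-7a22-4dcb-b11e-2a45e3b7f0c9",  # advertising ID
        "ip": "203.0.113.7",                   # device IP address
        "geo": {"lat": 37.7793, "lon": -122.4193, "type": 1},  # GPS-derived fix
        "os": "iOS",
        "model": "iPhone",
    },
    "user": {"id": "buyer-profile-456"},       # ID linkable to a data profile
}

# This blob is broadcast to every auction participant, whether or not
# they win the ad slot; a company posing as a bidder can simply record
# the advertising ID and coordinates from each request it receives.
payload = json.dumps(bid_request)
```

Note how little is needed to track a person over time: a stable advertising ID plus repeated GPS coordinates, both sitting in plain view in the bidstream.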
What You Can Do To Protect Yourself
Revelations about the government’s exploitation of this location data show how dangerous online tracking has become, but we’re not powerless. Here are two basic steps you can take to better protect your location data:
Disable your mobile advertising ID (see instructions for iPhone/Android). Apple and Google assign unique advertising IDs to each of their phones. Location data brokers use these advertising IDs to stitch together the information they collect about you from different apps.
Review apps you’ve granted location permissions to. Apps that have access to your location could share it with other companies, so make sure you’re only granting location permission to apps that really need it in order to function. If you can’t disable location access completely for an app, limit it to times when the app is open, or grant it only your approximate location instead of your precise location.
For more tips, check out EFF’s guide to protecting yourself from mobile-device based location tracking. Keep in mind that the security plan that’s best for you will vary in different situations. For example, you may want to take stronger steps to protect your location data when traveling to a sensitive location, like a protest.
What Tech Companies and Lawmakers Must Do
Legislators and tech companies must act so that individuals don’t bear the burden of defending their data every time they use the internet.
Ad tech companies must reckon with their role in warrantless government surveillance, among other privacy harms. The systems they built for targeted advertising are actively used to track people’s location. The best way to prevent online ads from fueling surveillance is to stop targeting ads based on detailed behavioral profiles. Ads can still be targeted contextually—based on the content people are viewing—without collecting or exposing their sensitive personal information. Short of moving to contextual advertising, tech companies can limit the use of their systems for government location tracking by:
Stopping the use of precise location data for targeted advertising. Ad tech companies facilitating ad auctions can and should remove precise location data from bid requests. Ads can be targeted based on people’s coarse location, like the city they’re in, without giving data brokers people’s exact GPS coordinates. Precise location data can reveal where we work, where we live, who we meet, where we protest, where we worship, and more. Broadcasting it to thousands of companies a day through RTB is dangerous.
Removing advertising IDs from devices, or at minimum, disabling them by default. Advertising IDs have become a linchpin of the data broker economy and are actively used by law enforcement to track people’s location. Advertising IDs were added to phones in 2012 to let companies track you, and removing them is not a far-fetched idea. When Apple forced apps to request access to people’s advertising IDs starting in 2021 (if you have an iPhone you’ve probably seen the “Ask App Not to Track” pop-ups), 96% of U.S. users opted out, essentially disabling advertising IDs on most iOS devices. One study found that iPhone users were less likely to be victims of financial fraud after Apple implemented this change. Google should follow Apple’s lead and disable advertising IDs by default.
Lawmakers also need to step up to protect their constituents’ privacy. We need strong, federal privacy laws to stop companies from spying on us and selling our personal information. EFF advocates for data privacy legislation with teeth and a ban on ad targeting based on online behavioral profiles, as it creates a financial incentive for companies to track our every move.
Legislators can and must also close the “data broker loophole” that lets the government sidestep the Fourth Amendment. Instead of obtaining a warrant signed by a judge, law enforcement agencies can just buy location data from private brokers to find out where you’ve been. Last year, Montana became the first state in the U.S. to pass a law blocking the government from buying sensitive data it would otherwise need a warrant to obtain. And in 2024, Senator Ron Wyden’s EFF-endorsed Fourth Amendment is Not for Sale Act passed the House before dying in the Senate. Others should follow suit to stop this end-run around constitutional protections.
Online behavioral advertising isn’t just creepy: it’s dangerous. It’s wrong that our personal information is being silently harvested, bought by shadowy data brokers, and sold to anyone who wants to invade our privacy. This latest revelation of warrantless government surveillance should serve as a frightening wakeup call about how dangerous online behavioral advertising has become.
The Trump administration is loosening restrictions on the sharing of law enforcement information with the CIA and other intelligence agencies, officials said, overriding controls that have been in place for decades to protect the privacy of U.S. citizens.
Government officials said the changes could give the intelligence agencies access to a database containing hundreds of millions of documents — from FBI case files and banking records to criminal investigations of labor unions — that touch on the activities of law-abiding Americans.
Administration officials said they are providing the intelligence agencies with more information from investigations by the FBI, Drug Enforcement Administration and other agencies to combat drug gangs and other transnational criminal groups that the administration has classified as terrorists.
But they have taken these steps with almost no public acknowledgement or notification to Congress. Inside the government, officials said, the process has been marked by a similar lack of transparency, with scant high-level discussion and little debate among government lawyers.
“None of this has been thought through very carefully — which is shocking,” one intelligence official said of the moves to expand information sharing. “There are a lot of privacy concerns out there, and nobody really wants to deal with them.”
A spokesperson for the Office of the Director of National Intelligence, Olivia Coleman, declined to answer specific questions about the expanded information sharing or the legal basis for it.
Instead, she cited some recent public statements by senior administration officials, including one in which the national intelligence director, Tulsi Gabbard, emphasized the importance of “making sure that we have seamless two-way push communications with our law enforcement partners to facilitate that bi-directional sharing of information.”
In the aftermath of the Watergate scandal, revelations that Presidents Lyndon Johnson and Richard Nixon had used the CIA to spy on American anti-war and civil rights activists outraged Americans who feared the specter of a secret police. The congressional reforms that followed reinforced the long-standing ban on intelligence agencies gathering information about the domestic activities of U.S. citizens.
Compared with the FBI and other federal law enforcement organizations, the intelligence agencies operate with far greater secrecy and less scrutiny from Congress and the courts. They are generally allowed to collect information on Americans only as part of foreign intelligence investigations. Exemptions must be approved by the U.S. attorney general and the director of national intelligence. The National Security Agency, for example, can intercept communications between people inside the United States and terror suspects abroad without the probable cause or judicial warrants that are generally required of law enforcement agencies.
Since the terror attacks of Sept. 11, 2001, the expansion of that surveillance authority in the fight against Islamist terrorism has been the subject of often intense debates among the three branches of government.
Word of the Trump administration’s efforts to expand the sharing of law enforcement information with the intelligence agencies was met with alarm by advocates for civil liberties protections.
“The Intelligence Community operates with broad authorities, constant secrecy and little-to-no judicial oversight because it is meant to focus on foreign threats,” Sen. Ron Wyden of Oregon, a senior Democrat on the Senate Select Committee on Intelligence, said in a statement to ProPublica.
Giving the intelligence agencies wider access to information on the activities of U.S. citizens not suspected of any crime “puts Americans’ freedoms at risk,” the senator added. “The potential for abuse of that information is staggering.”
Most of the current and former officials interviewed for this story would speak only on condition of anonymity because of the secrecy of the matter and because they feared retaliation for criticizing the administration’s approach.
Virtually all those officials said they supported the goal of sharing law enforcement information more effectively, so long as sensitive investigations and citizens’ privacy were protected. But after years in which Republican and Democratic administrations weighed those considerations deliberately — and made little headway with proposed reforms — officials said the Trump administration has pushed ahead with little regard for those concerns.
“There will always be those who simply want to turn on a spigot and comingle all available information, but you can’t just flip a switch — at least not if you want the government to uphold the rule of law,” said Russell Travers, a former acting director of the National Counterterrorism Center who served in senior intelligence roles under both Republican and Democratic administrations.
The 9/11 attacks — which exposed the CIA’s failure to share intelligence with the FBI even as Al Qaida moved its operatives into the United States — led to a series of reforms intended to transform how the government managed terrorism information.
A centerpiece of that effort was the establishment of the NCTC, as the counterterrorism center is known, to collect and analyze intelligence on foreign terrorist groups. The statutes that established the NCTC explicitly prohibit it from collecting information on domestic terror threats.
National security officials have spent much less time trying to remedy what they have acknowledged are serious deficiencies in the government’s management of intelligence on organized crime groups.
In 2011, President Barack Obama noted those problems in issuing a new national strategy to “build, balance and integrate the tools of American power to combat transnational organized crime.” Although the Obama plan stressed the need for improved information-sharing, it led to only minimal changes.
President Donald Trump has seized on the issue with greater urgency. He has also declared his intention to improve information-sharing across the government, signing an executive order to eliminate “information silos” of unclassified information.
More consequentially, he went on to brand more than a dozen Latin American drug mafias and criminal gangs as terrorist organizations.
The administration has used those designations to justify more extreme measures against the criminal groups. Since last year, it has killed at least 148 suspected drug smugglers with missile strikes in the Caribbean and the eastern Pacific, steps that many legal experts have denounced as violations of international law.
Some administration officials have argued that the terror designations entitle intelligence agencies to access all law enforcement case files related to the Sinaloa Cartel, the Jalisco New Generation Cartel and other gangs designated by the State Department as foreign terrorist organizations.
The first criterion for those designations is that a group must “be a foreign organization.” Yet unlike Islamist terror groups such as al-Qaida or al-Shabab, Latin drug mafias and criminal gangs like MS-13 have a large and complex presence inside the United States. Their members are much more likely to be U.S. citizens and to live and operate here.
Those steps were seen by some intelligence experts as potentially opening the door for the CIA and other agencies to monitor Americans who support antifa in violation of their free speech rights. The approach also echoed justifications that both Johnson and Nixon used for domestic spying by the CIA: that such investigations were needed to determine whether government critics were being supported by foreign governments.
The wider sharing of law enforcement case files is also being driven by the administration’s abrupt decision to disband the Justice Department office that for decades coordinated the work of different agencies on major drug trafficking and organized crime cases. That office, the Organized Crime Drug Enforcement Task Force, was abruptly shut down on Sept. 30 as the Trump administration was setting up a new network of Homeland Security Task Forces designed by the White House homeland security adviser, Stephen Miller.
The new task forces, which were first described in detail by ProPublica last year, are designed to refocus federal law enforcement agencies on what Miller and other officials have portrayed as an alarming nexus of immigration and transnational crime. The reorganization also gives the White House and the Department of Homeland Security new authority to oversee transnational crime investigations, subordinating the DEA and federal prosecutors, who were central to the previous system.
That reorganization has set off a struggle over the control of OCDETF’s crown jewel, a database of some 770 million records that is the only central, searchable repository of drug trafficking and organized crime case files in the federal government.
Until now, the records of that database, which is called Compass, have only been accessible to investigators under elaborate rules agreed to by the more than 20 agencies that shared their information. The system was widely viewed as cumbersome, but officials said it also encouraged cooperation among the agencies while protecting sensitive case files and U.S. citizens’ privacy.
Although the Homeland Security Task Forces took possession of the Compass system when their leadership moved into OCDETF’s headquarters in suburban Virginia, the administration is still deciding how it will operate that database, officials said.
However, officials said, intelligence agencies and the Defense Department have already taken a series of technical steps to connect their networks to Compass so they can access its information if they are permitted to do so.
The White House press office did not respond to questions about how the government will manage the Compass database and whether it will remain under the control of the Homeland Security Task Forces.
The National Counterterrorism Center, under its new director, Joe Kent, has been notably forceful in seeking to manage the Compass system, several officials said. Kent, a former Army Special Forces and CIA paramilitary officer who twice ran unsuccessfully for Congress in Washington state, was previously a top aide to the national intelligence director, Tulsi Gabbard.
The FBI, DEA and other law enforcement agencies have strongly opposed the NCTC effort, the officials said. In internal discussions, they added, the law enforcement agencies have argued that it makes no sense for an intelligence agency to manage sensitive information that comes almost entirely from law enforcement.
“The NCTC has taken a very aggressive stance,” one official said. “They think the agencies should be sharing everything with them, and it should be up to them to decide what is relevant and what U.S. citizen information they shouldn’t keep.”
The FBI declined to comment in response to questions from ProPublica. A DEA spokesperson also would not discuss the agency’s actions or views on the wider sharing of its information with the intelligence community. But in a statement the spokesperson added, “DEA is committed to working with our IC and law enforcement partners to ensure reliable information-sharing and strong coordination to most effectively target the designated cartels.”
Even with the Trump administration’s expanded definition of what might constitute terrorist activity, the information on terror groups accounts for only a small fraction of the records in the Compass system, current and former officials said.
The records include State Department visa records, some files of U.S. Postal Service inspectors, years of suspicious transaction reports from the Treasury Department and call records from the Bureau of Prisons.
Investigative files of the FBI, DEA and other law enforcement agencies often include information about witnesses, associates of suspects and others who have never committed any crimes, officials said.
“You have witness information, target information, bank account information,” the former OCDETF director, Thomas Padden, said in an interview. “I can’t think of a dataset that would not be a concern if it were shared without some controls. You need checks and balances, and it’s not clear to me that those are in place.”
Officials familiar with the interagency discussions said NCTC and other intelligence officials have insisted they are interested only in terror-related information and that they have electronic systems that can appropriately filter out information on U.S. persons.
But FBI and other law enforcement agencies have challenged those arguments, officials said, contending that the NCTC proposal would almost inevitably breach privacy laws and imperil sensitive case information without necessarily strengthening the fight against transnational criminals.
Already, NCTC officials have been pressing the FBI and DEA to share all the information they have on the criminal groups that have been designated as terrorist organizations, officials said.
The DEA, which had previously earned a reputation for jealously guarding its case files, authorized the transfer of at least some of those files, officials said, adding to pressure on the FBI to do the same.
Administration lawyers have argued that such information sharing is authorized by the Intelligence Reform and Terrorism Prevention Act of 2004, the law that reorganized intelligence activities after 9/11. Officials have also cited the 2001 Patriot Act, which gives law enforcement agencies power to obtain financial, communications and other information on a subject they certify as having ties to terrorism.
The NCTC’s central mandate to collect and analyze terrorism information specifically excludes “intelligence pertaining exclusively to domestic terrorists and domestic counterterrorism.” But that has not stopped Kent or his boss, intelligence director Gabbard, from stepping over red lines that their predecessors carefully avoided.
In October, Kent drew sharp criticism from the FBI after he examined files from the bureau’s ongoing investigation of the assassination of Charlie Kirk, the right-wing activist. That episode was first reported by The New York Times.
Last month, Gabbard appeared to lead a raid at which the FBI seized truckloads of 2020 presidential voting records from an election center in Fulton County, Georgia. Officials later said she was sent by Trump but did not oversee the operation.
In years past, officials said, the possibility of crossing long-settled legal boundaries on citizens’ privacy would have precipitated a flurry of high-level meetings, legal opinions and policy memos. But almost none of that internal discussion has taken place, they said.
“We had lengthy interagency meetings that involved lawyers, civil liberties, privacy and operational security types to ensure that we were being good stewards of information and not trampling all over U.S. persons’ privacy rights,” said Travers, the former NCTC director.
When administration officials abruptly moved to close down OCDETF and supplant it with the Homeland Security Task Forces network, they seemed to have little grasp of the complexities of such a transition, several people involved in the process said.
The agencies that contributed records to OCDETF were ordered to sign over their information to the task forces, but they did so without knowing if the system’s new custodians would observe the conditions under which the files were shared.
Nor were they encouraged to ask, officials said.
While both the FBI and DEA have objected to a change in the protocols, officials said smaller agencies that contributed some of their records to the OCDETF system have been “reluctant to push back too hard,” as one of them put it.
The NCTC, which faced budget cuts during the Biden administration, has been among those most eager to serve the new Homeland Security Task Forces. To that end, it set up a new fusion center to promote “two-way intelligence sharing of actionable information between the intelligence community and law enforcement,” as Gabbard described it.
The expanded sharing of law enforcement and intelligence information on trafficking groups is also a key goal of the Pentagon’s new Tucson, Arizona-based Joint Interagency Task Force-Counter Cartel. In announcing the task force’s creation last month, the U.S. Northern Command said it would work with the Homeland Security Task Forces “to ensure we are sharing all intelligence between our Department of War, law enforcement and Intelligence Community partners.”
In the last months of the Biden administration, a somewhat similar proposal was put forward by the then-DEA administrator, Anne Milgram. That plan involved setting up a pair of centers where DEA, CIA and other agencies would pool information on major Mexican drug trafficking groups.
At the time, one particularly strong objection came from the Defense Department’s counternarcotics and stabilization office, officials said. The sharing of such law enforcement information with the intelligence community, an official there noted, could violate laws prohibiting the CIA from gathering intelligence on Americans inside the United States.
The Pentagon, he warned, would want no part of such a plan.
The SAFE Act, introduced by Senators Mike Lee and Dick Durbin, is the first of many likely proposals we will see to reauthorize Section 702 of the Foreign Intelligence Surveillance Act (FISA), added by the FISA Amendments Act of 2008—and while imperfect, it does propose a litany of real and much-needed reforms of Big Brother’s favorite surveillance authority.
The irresponsible 2024 reauthorization of the secretive mass surveillance authority Section 702 not only gave the government two more years of unconstitutional surveillance powers, it also made the policy much worse. But now, people who value privacy and the rule of law get another bite at the apple. With Section 702’s expiration looming in April 2026, we are starting to see proposals emerge for how to reauthorize the surveillance authority—including calls from inside the White House for a clean reauthorization that would keep the policy unchanged. EFF’s position has always been consistent: Section 702 should not be reauthorized absent major reforms that will keep this tactic of foreign surveillance from being used as a tool of mass domestic espionage.
What is Section 702?
Section 702 was intended to modernize foreign surveillance of the internet for national security purposes. It allows collection of foreign intelligence from non-Americans located outside the United States by requiring U.S.-based companies that handle online communications to hand over data to the government. As the law is written, the intelligence community (IC) cannot use Section 702 programs to target Americans, who are protected by the Fourth Amendment’s prohibition on unreasonable searches and seizures. But the law gives the intelligence community space to target foreign intelligence in ways that inherently and intentionally sweep in Americans’ communications.
We live in an increasingly globalized world where people are constantly in communication with people overseas. That means that, while targeting foreigners outside the U.S. for “foreign intelligence information,” the IC routinely acquires the American side of those communications without a probable cause warrant. The collection of all that data from U.S. telecommunications and internet providers results in the “incidental” capture of conversations involving a huge number of people in the United States.
But this backdoor access to U.S. persons’ data isn’t “incidental.” Section 702 has become a routine part of the FBI’s law enforcement mission. In fact, the IC’s latest Annual Statistical Transparency Report documents the many ways the Federal Bureau of Investigation (FBI) uses Section 702 to spy on Americans without a warrant. The IC lobbied for Section 702 as a tool for national security outside the borders of the U.S., but it is apparent that the FBI uses it to conduct domestic, warrantless surveillance on Americans. In 2021 alone, the FBI conducted 3.4 million warrantless searches of U.S. persons’ Section 702 data.
The Good
Let’s start with the good things that this bill does. These are reforms EFF has been seeking for a long time and their implementation would mean a big improvement in the status quo of national security law.
First, the bill would partially close the loophole that allows the FBI and domestic law enforcement to dig through the U.S. side of communications “incidentally” swept up in 702 collection. The FBI currently operates with a “finders keepers” mentality: because the data is pre-collected by another agency, the FBI believes it can use it for other purposes with almost no constraints. The SAFE Act would require a warrant before the FBI looked at the content of these collected communications. As we will get to later, this reform does not go nearly far enough, since the FBI can still query to see what data on a person exists before getting a warrant, but it is certainly an improvement on the current system.
Second, the bill addresses the age-old problem of parallel construction. If you’re unfamiliar with this term, parallel construction is a method by which intelligence agencies or domestic law enforcement find out a piece of information about a subject through secret, even illegal or unconstitutional methods. Uninterested in revealing these methods, officers hide what actually happened by publicly offering an alternative route they could have used to find that information. So, for instance, if police want to hide the fact that they knew about a specific email because it was intercepted under the authority of Section 702, they might use another method, like a warranted request to a service provider, to create a more publicly acceptable path to that information. To deal with this problem, the SAFE Act mandates that when the government seeks to use Section 702 evidence in court, it must disclose the source of this evidence “without regard to any claim that the information or evidence…would inevitably have been discovered, or was subsequently reobtained through other means.”
Next, the bill proposes a policy that EFF and other groups have been trying to get through Congress for over five years: ending the data broker loophole. As the system currently stands, data brokers who buy and sell your personal data collected from smartphone applications, among other sources, are able to sell that sensitive information, including a phone’s geolocation, to law enforcement and intelligence agencies. That means that with a bit of money, police can buy the data (or buy access to services that purchase and map the data) that they would otherwise need a warrant to get. A bill that would close this loophole, the Fourth Amendment Is Not For Sale Act, passed the House in 2024 but has yet to be voted on by the Senate. In the meantime, states have taken it upon themselves to close the loophole, with Montana becoming the first state to pass similar legislation in May 2025. The SAFE Act proposes to partially fix the loophole, at least as far as intelligence agencies are concerned. This fix could not come soon enough—especially since the Office of the Director of National Intelligence has signaled its willingness to create one big, streamlined digital marketplace where the government can buy data from data brokers.
Another positive aspect of the SAFE Act is that it creates an official statutory end to a surveillance power that the government allowed to expire in 2020. In its heyday, the intelligence community used Section 215 of the Patriot Act to justify the mass collection of communication records, like metadata from phone calls. Although this legal authority has lapsed, it has always been our fear that it will not sit dormant forever and could be reauthorized at any time. The new bill says that these dormant powers shall “cease to be in effect” within 180 days of the SAFE Act being enacted.
What Needs to Change
The SAFE Act also attempts to clarify very important language that gauges the scope of the surveillance authority: who is obligated to turn over digital information to the U.S. government. Under Section 702, “electronic communication service providers” (ECSPs) are on the hook for providing information, but the definition of that term has been in dispute and has changed over time—most recently when a FISA court opinion expanded it to include a category of “secret” ECSPs that have not been publicly disclosed. Unfortunately, the bill still leaves room for ambiguous interpretation, and it creates an audit system without a clear directive for enforcing limits on who counts as an ECSP or for guaranteeing transparency.
As mentioned earlier, the SAFE Act introduces a warrant requirement for the FBI to read the contents of Americans’ communications that have been warrantlessly collected under Section 702. However, the bill in its current form does not require the FBI to get a warrant before running searches to identify whether Americans’ communications are present in the database in the first place. Knowing this information is itself very revealing, and the government should not be able to benefit from circumventing the Fourth Amendment.
When Congress reauthorized Section 702 in 2024, it did so through a piece of legislation called the Reforming Intelligence and Securing America Act (RISAA). This bill made 702 worse in several ways, one of the most severe being that it expanded the legal uses of the surveillance authority to include vetting immigrants. In an era when the United States government is rounding up immigrants, including people awaiting asylum hearings, and U.S. officials continuously threaten to withhold admission to the United States from people whose politics do not align with the current administration’s, RISAA sets a dangerous precedent. Although RISAA’s extension officially expires in April 2026, it would be helpful for any Section 702 reauthorization bill to explicitly prohibit the use of this authority for that purpose.
Finally, in the same way that the SAFE Act statutorily ends the expired Section 215 of the Patriot Act, it should also impose an explicit end to “abouts” collection, a practice of collecting digital communications not because they are to or from surveillance targets, but because they are “about” specific topics. This practice has been discontinued, but it still sits on the books, just waiting to be revamped.
OpenAI, the maker of ChatGPT, is rightfully facing widespread criticism for its decision to fill the gap the U.S. Department of Defense (DoD) created when rival Anthropic refused to drop its restrictions against using its AI for surveillance and autonomous weapons systems. After protests from both users and employees who did not sign up to support government mass surveillance—early reports show that ChatGPT uninstalls rose nearly 300% after the company announced the deal—Sam Altman, CEO of OpenAI, conceded that the initial agreement was “opportunistic and sloppy.” He then re-published an internal memo on social media stating that additions to the agreement made clear that “Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, [and] FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”
Trouble is, the U.S. government doesn’t believe “consistent with applicable laws” means “no domestic surveillance.” Instead, for the most part, the government has embraced a lax interpretation of “applicable law” that has blessed mass surveillance and large-scale violations of our civil liberties, and then fought tooth and nail to prevent courts from weighing in.
“Intentionally” is also doing an awful lot of work in that sentence. For years the government has insisted that the mass surveillance of U.S. persons only happens incidentally (read: not intentionally) because their communications with people both inside the United States and overseas are swept up in surveillance programs supposedly designed to only collect communications outside the United States.
The company’s amendment to the contract continues in a similar vein, “For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.” Here, “deliberate” is the red flag given how often intelligence and law enforcement agencies rely on incidental or commercially purchased data to sidestep stronger privacy protections.
Here’s another one: “The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.” What, one wonders, does “unconstrained” mean, precisely—and according to whom?
Lawyers sometimes call these “weasel words” because they create ambiguity that protects one side or another from real accountability for contract violations. As with the Anthropic negotiations, where the Pentagon reportedly agreed to adhere to Anthropic’s red lines only “as appropriate,” the government is likely attempting to publicly commit to limits in principle, but retain broad flexibility in practice.
OpenAI also notes that the Pentagon promised the NSA would not be allowed to use OpenAI’s tools absent a new agreement, and that its deployment architecture will help it verify that no red lines are crossed. But secret agreements and technical assurances have never been enough to rein in surveillance agencies, and they are no substitute for strong, enforceable legal limits and transparency.
OpenAI executives may indeed be trying, as claimed, to use the company’s contractual relationship with the Pentagon to help ensure that the government should use AI tools only in a way consistent with democratic processes. But based on what we know so far, that hope seems very naïve.
Moreover, that naïvete is dangerous. In a time when governments are willing to embrace extreme and unfounded interpretations of “applicable laws,” companies need to put some actual muscle behind standing by their commitments. After all, many of the world’s most notorious human rights atrocities have historically been “legal” under existing laws at the time. OpenAI promises the public that it will “avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power,” but we know that enabling mass surveillance does both.
OpenAI isn’t the only consumer-facing company that is, on the one hand, seeking to reassure the public that they aren’t participating in actions that violate human rights while, on the other, seeking to cash in on government mass surveillance efforts. Despite this marketing double-speak, it is very clear that companies just cannot do both. It’s also clear that companies shouldn’t be given that much power over the limits of our privacy to begin with. The public should not have to rely on a small group of people—whether CEOs or Pentagon officials—to protect our civil liberties.
Senator Ron Wyden says that when a secret interpretation of Section 702 is eventually declassified, the American public “will be stunned” to learn what the NSA has been doing. If you’ve followed Wyden’s career, you know this is not a man prone to hyperbole — and you know his track record on these warnings is perfect.
Just last month, we wrote about the Wyden Siren — the pattern where Senator Ron Wyden sends a cryptic public signal that something terrible is happening behind the classification curtain, can’t say what it is, and then is eventually proven right. Every single time. The catalyst then was a two-sentence letter to CIA Director Ratcliffe expressing “deep concerns about CIA activities.”
Well, the siren is going off once again. This time, Wyden took to the Senate floor to deliver a lengthy speech, ostensibly about the since-approved nomination (with the support of many Democrats) of Joshua Rudd to lead the NSA. Wyden was protesting that nomination in the context of Rudd being unwilling to agree to basic constitutional limitations on NSA surveillance. But that’s just a jumping-off point ahead of Section 702’s upcoming reauthorization deadline. Buried in the speech is a passage that should set off every alarm bell:
There’s another example of secret law related to Section 702, one that directly affects the privacy rights of Americans. For years, I have asked various administrations to declassify this matter. Thus far they have all refused, although I am still waiting for a response from DNI Gabbard. I strongly believe that this matter can and should be declassified and that Congress needs to debate it openly before Section 702 is reauthorized. In fact, when it is eventually declassified, the American people will be stunned that it took so long and that Congress has been debating this authority with insufficient information.
Here’s a sitting member of the Senate Intelligence Committee — someone with access to the classified details — telling his colleagues and the public that there is a secret interpretation of Section 702 that “directly affects the privacy rights of Americans,” that he’s been asking multiple administrations to declassify it, that they’ve all refused, and that when it finally comes out, people will be stunned.
If you’ve followed Wyden for any amount of time, this all sounds very familiar. In 2011, Wyden warned that the government had secretly reinterpreted the PATRIOT Act to mean something entirely different from what Congress and the public understood. He couldn’t say what. Nobody believed it could be that bad. Then the Snowden revelations showed the NSA was engaged in bulk collection of essentially every American’s phone metadata. In 2017, he caught the Director of National Intelligence answering a different question than the one Wyden asked about Section 702 surveillance. The pattern repeats. The siren sounds. Years pass. And then, eventually, we find out it was worse than we imagined.
Now here he is, doing the exact same thing with Section 702 yet again, now that it’s up for renewal. Congress is weeks away from a reauthorization vote, and Wyden is explicitly telling his colleagues (not for the first time) they are preparing to vote on a law whose actual meaning is being kept secret from them as well as from the American public:
The past fifteen years have shown that, unless the Congress can have an open debate about surveillance authorities, the laws that are passed cannot be assumed to have the support of the American people. And that is fundamentally undemocratic. And, right now, the government is relying on secret law with regard to Section 702 of FISA. I’ve already mentioned the provision that was stuck into the last reauthorization bill, that could allow the government to force all sorts of people to spy on their fellow citizens. I have explained the details of how the Biden Administration chose to interpret it, and how the Trump Administration will interpret it, are a big secret. Americans have the right to be confused and angry that this is how the government and Congress choose to do business.
That’s a United States senator who has a long history of calling out secret interpretations that lead to surveillance of Americans — standing on the Senate floor and warning, once again, that there’s a secret interpretation of Section 702 authorities. One that almost certainly means mass surveillance.
And Wyden knows exactly how this plays out. He’s been through the reauthorization cycle enough times to know the playbook the intelligence community runs every time 702 is up for renewal:
I’ve been doing this a long time, so I know how this always goes. Opponents of reforming Section 702 don’t want a real debate where Members can decide for themselves which reform amendments to support. So what always happens is that a lousy reauthorization bill magically shows up a few days before the authorization expires and Members are told that there’s no time to do anything other than pass that bill and that if they vote for any amendments, the program will die and terrible things will happen and it will be all their fault.
Don’t buy into that.
He’s right. Every time reauthorization is on the table, no real debate happens. Then, just before the authorization is about to run out, some loyal soldier of the surveillance brigade in Congress screams “national security” at the top of their lungs, insists there’s no time to debate this or people will die, and promises that if we just reauthorize for a few more years, we’ll finally be able to hold a debate on the surveillance.
A debate that never arrives.
But even setting aside the secret interpretation Wyden can’t discuss, his speech highlights something almost as damning: just how spectacularly the supposed “reforms” from the last reauthorization have failed. Remember, one of the big “concessions” to get the last reauthorization across the finish line was a requirement that “sensitive searches” — targeting elected officials, political candidates, journalists, and the like — would need the approval of the FBI’s Deputy Director.
This was in response to some GOP elected officials being on the receiving end of investigations during the Biden era, freaking out that the NSA appeared to be doing the very things plenty of civil society and privacy advocates had been telling them about for over a decade while they just yelled “national security” back at us.
So how are those small “reforms” working out? Here’s Wyden:
The so-called big reform was to require the approval of the Deputy FBI Director for these sensitive searches.
Until two months ago, the Deputy FBI Director was Dan Bongino. As most of my colleagues know, Mr. Bongino is a longtime conspiracy theorist who has frequently called for specious investigations of his political opponents. This is the man whom the President and the U.S. Senate put in charge of these incredibly sensitive searches. And Bongino’s replacement as Deputy Director, Andrew Bailey, is a highly partisan election denier who recently directed a raid on a Georgia election office in an effort to justify Donald Trump’s conspiracy theories. I don’t know about my colleagues, but this so-called reform makes me feel worse, not better.
So the grand reform that was supposed to provide meaningful oversight of the FBI’s most sensitive surveillance activities ended up placing that authority in the hands of a conspiracy theorist, followed by a partisan election denier. And just to make the whole thing even more farcical, Wyden notes that the FBI has refused to even keep a basic record of these searches:
But it’s even worse than it looks. The FBI has refused to even keep track of all of the sensitive searches the Deputy Director has considered. The Inspector General urged the FBI to just put this information into a simple spreadsheet and they refused to do it. That is how much the FBI does not want oversight.
They won’t maintain a spreadsheet. The Inspector General asked them to track their use of a sensitive surveillance power using what amounts to a basic Excel file, and the FBI said no. That’s the state of “reform” for Section 702 after the last re-auth.
Wyden has also been sounding the alarm about the expansion of who can be forced to spy on behalf of the government, thanks to a provision jammed into the last reauthorization that expanded the definition of “electronic communications service provider” to cover essentially anyone with access to communications equipment. As Wyden explained:
Two years ago, during the last reauthorization debacle, something really bad happened. Over in the House, existing surveillance law was changed so that the government could force anyone with “access” to communications to secretly collect those communications for the government. As I pointed out at the time, that could mean anyone installing or repairing a cable box, or anyone responsible for a wifi router. It was a jaw-dropping expansion of authorities that could end up forcing countless ordinary Americans to secretly help the government spy on their fellow citizens.
The Biden administration apparently promised to use this authority narrowly. But, of course, the Trump administration has made no such promise. As we say with every expansion of executive authority, just imagine how the worst possible president from the opposing party would use it. And now we don’t have to wonder any more.
Wyden correctly points out that secret promises from a prior administration are worth exactly nothing:
But here’s the other thing – whatever secret promise the Biden Administration made about using these vast, unchecked authorities with restraint, the current administration clearly isn’t going to feel bound by that promise. So whatever the previous administration intended to accomplish with that provision, there is absolutely nothing preventing the current administration from conscripting those cable repair and tech support men and women to secretly spy on Americans.
So to tally this up: Congress is about to vote on reauthorizing Section 702 with a secret legal interpretation that Wyden says will stun the public when it’s eventually revealed, with “reforms” that placed surveillance approval authority in the hands of conspiracy theorists who won’t even keep a spreadsheet, with a massively expanded definition of who can be forced to help the government spy, with secret promises about restraint that the current administration has no intention of honoring, and with a nominee to lead the NSA who won’t commit to following the Constitution.
The Wyden Siren is blaring. And if history is any guide — and it has been, without exception — whatever is behind the classification curtain is worse than what we can see from the outside.
We are calling on technology companies like Meta and Google to stand up for their users by resisting the Department of Homeland Security’s (DHS) lawless administrative subpoenas for user data.
In the past year, DHS has consistently targeted people engaged in First Amendment activity. Among other things, the agency has issued subpoenas to technology companies to unmask or locate people who have documented ICE’s activities in their community, criticized the government, or attended protests.
These subpoenas are unlawful, and the government knows it. When a handful of users challenged a few of them in court with the help of ACLU affiliates in Northern California and Pennsylvania, DHS withdrew them rather than waiting for a decision.
But it is difficult for the average user to fight back on their own. Quashing a subpoena is a fast-moving process that requires lawyers and resources. Not everyone can afford a lawyer on a moment’s notice, and non-profits and pro-bono attorneys have already been stretched to near capacity during the Trump administration.
That is why we, joined by the ACLU of Northern California, have asked several large tech platforms to do more to protect their users, including:
Insist on court intervention and an order before complying with a DHS subpoena, because the agency has already proved that its legal process is often unlawful and unconstitutional;
Give users as much notice as possible when they are the target of a subpoena, so the user can seek help. While many companies have already made this promise, there are high-profile examples of it not happening—ultimately stripping users of their day in court;
Resist gag orders that would prevent companies from notifying their users that they are a target of a subpoena.
We sent the letter to Amazon, Apple, Discord, Google, Meta, Microsoft, Reddit, Snap, TikTok, and X.
Recipients are not legally compelled to comply with administrative subpoenas absent a court order
An administrative subpoena is an investigative tool available to federal agencies like DHS. These are often sent to technology companies to obtain user data. A subpoena cannot be used to obtain the content of communications, but subpoenas have been used to try to obtain basic subscriber information like name, address, IP address, length of service, and session times.
Unlike a search warrant, an administrative subpoena is not approved by a judge. If a technology company refuses to comply, an agency’s only recourse is to drop it or go to court and try to convince a judge that the request is lawful. That is what we are asking companies to do—simply require court intervention and not obey in advance.
It is unclear how many administrative subpoenas DHS has issued in the past year. Subpoenas can come from many places—including civil courts, grand juries, criminal trials, and administrative agencies like DHS. Altogether, Google received 28,622 subpoenas and Meta received 14,520 in the first half of 2025, according to their transparency reports. The numbers are not broken out by type.
DHS is abusing its authority to issue subpoenas
In the past year, DHS has used these subpoenas to target protected speech. The following are just a few of the known examples.
On April 1, 2025, DHS sent a subpoena to Google in an attempt to locate a Cornell PhD student in the United States on a student visa. The student was likely targeted because of his brief attendance at a protest the year before. Google complied with the subpoena without giving the student an opportunity to challenge it. While Google promises to give users prior notice, it sometimes breaks that promise to avoid delay. This must stop.
In September 2025, DHS sent a subpoena and summons to Meta to try to unmask anonymous users behind Instagram accounts that tracked ICE activity in communities in California and Pennsylvania. The users—with the help of the ACLU and its state affiliates—challenged the subpoenas in court, and DHS withdrew the subpoenas before a court could make a ruling. In the Pennsylvania case, DHS tried to use legal authority that its own inspector general had already criticized in a lengthy report.
In October 2025, DHS sent Google a subpoena demanding information about a retiree who criticized the agency’s policies. The retiree had sent an email asking the agency to use common sense and decency in a high-profile asylum case. In a shocking turn, federal agents later appeared on that person’s doorstep. The ACLU is currently challenging the subpoena.
EFF is against age gating and age verification mandates, and we hope we’ll win in getting existing ones overturned and new ones prevented. But mandates are already in effect, and every day many people are asked to verify their age across the web, despite prominent cases of sensitive data getting leaked in the process.
At some point, you may have been faced with the decision yourself: should I continue to use this service if I have to verify my age? And if so, how can I do that with the least risk to my personal information? This is our guide to navigating those decisions, with information on what questions to ask about the age verification options you’re presented with, and answers to those questions for some of the most popular social media sites. Even though there’s no way to implement mandated age gates in a way that fully protects speech and privacy rights, our goal here is to help you minimize the infringement of your rights as you manage this awful situation.
Follow the Data
Since we know that leaks happen despite the best efforts of software engineers, we generally recommend submitting the absolute least amount of data possible. Unfortunately, that’s not going to be possible for everyone. Even facial age estimation solutions where pictures of your face never leave your device, which offer some protection against data leakage, are not a good option for all users: facial age estimation works less well for people of color, trans and nonbinary people, and people with disabilities. There are some systems that use fancy cryptography so that a digital ID saved to your device won’t tell the website anything more than whether you meet the age requirement, but access to that digital ID isn’t available to everyone or for all platforms. You may also not want to register for a digital ID and save it to your phone, if you don’t want to take the chance of all the information on it being exposed upon the request of an over-zealous verifier, or if you simply don’t want to be part of a digital ID system at all.
If you’re given the option of selecting a verification method and are deciding which to use, we recommend considering the following questions for each process allowed by each vendor:
Data: What info does each method require?
Access: Who can see the data during the course of the verification process?
Retention: Who will hold onto that data after the verification process, and for how long?
Audits: How sure are we that the stated claims will happen in practice? For example, are there external audits confirming that data is not accidentally leaked to another site along the way? Ideally these will be in-depth, security-focused audits by specialized auditors like NCC Group or Trail of Bits, instead of audits that merely certify adherence to standards.
Visibility: Who will be aware that you’re attempting to verify your age, and will they know which platform you’re trying to verify for?
We attempt to provide answers to these questions below. To begin, there are two major factors to consider when answering these questions: the tools each platform uses, and the overall system those tools are part of.
In general, most platforms offer age estimation options like face scans as a first line of age assurance. These vary in intrusiveness, but their main problem is inaccuracy, particularly for marginalized users. Third-party age verification vendors Private ID and k-ID offer on-device facial age estimation, but another common vendor, Yoti, sends the image to their servers during age checks by some of the biggest platforms. This risks leaking the images themselves, and also the fact that you’re using that particular website, to the third party.
Then there are the document-based verification services, which require you to submit a hard identifier like a government-issued ID. This method thus requires you to prove both your age and your identity. A platform can do this in-house through a designated dataflow, or by sending that data to a third party. We’ve already seen examples of how this can fail. For example, Discord routed users’ ID data through its general customer service workflow so that a third-party vendor could perform manual review of verification appeals. No one involved ever deleted users’ data, so when the system was breached, Discord had to apologize for the catastrophic disclosure of nearly 70,000 photos of users’ ID documents. Overly long retention periods expose documents to the risk of breaches and historical data requests, and some document verifiers have retention periods that are needlessly long. This is the case with Incode, which provides ID verification for TikTok: Incode holds onto images forever by default, though TikTok should automatically start the deletion process on your behalf.
Some platforms offer alternatives, like proving that you own a credit card, or asking for your email to check if it appears in databases associated with adulthood (like home mortgage databases). These tend to involve less risk when it comes to the sensitivity of the data itself, especially since credit cards can be replaced, but in general still undermine anonymity and pseudonymity and pose a risk of tracking your online activity. We’d prefer to see more assurances across the board about how information is handled.
Each site offers users a menu of age assurance options to choose from. We’ve chosen to present these options in the rough order that we expect most people to prefer. Jump directly to a platform to learn more about its age checks:
Meta – Facebook, Instagram, WhatsApp, Messenger, Threads
Inferred Age
If Meta can guess your age, you may never even see an age verification screen. Meta, which runs Facebook, Threads, Instagram, Messenger, and WhatsApp, first tries to use information you’ve posted to guess your age, like looking at “Happy birthday!” messages. It’s a creepy reminder that they already have quite a lot of information about you.
If Meta cannot guess your age, or if Meta infers you’re too young, it will next ask you to verify your age using either facial age estimation, or by uploading your photo ID.
Face Scan
If you choose to use facial age estimation, you’ll be sent to Yoti, a third-party verification service. Your photo will be uploaded to their servers during this process. Yoti claims that “as soon as an age has been estimated, the facial image is immediately and permanently deleted.” Though it’s not as good as not having that data in the first place, Yoti’s security measures include a bug bounty program and annual penetration testing. However, researchers from Mint Secure found that Yoti’s app and website are filled with trackers, so the fact that you’re verifying your age could be shared not only with Yoti, but leaked to third-party data brokers as well.
You may not want to use this option if you’re worried about third parties potentially being able to know you’re trying to verify your age with Meta. You also might not want to use this if you’re worried about a current picture of your face accidentally leaking—for example, if elements in the background of your selfie might reveal your current location. On the other hand, if you consider a selfie to be less sensitive than a photograph of your ID, this option might be better. If you do choose (or are forced) to use the face check system, be sure to snap your selfie without anything in the background that could identify your location or embarrass you if the image leaks.
Upload ID
If Yoti’s age estimation decides your face looks too young, or if you opt out of facial age estimation, your next recourse is to send Meta a photo of your ID. Meta sends that photo to Yoti to verify the ID. Meta says it will hold onto that ID image for 30 days, then delete it. Meanwhile, Yoti claims it will delete the image immediately after verification. Of course, bugs and process oversights exist, such as accidentally replicating information in logs or support queues, but at least they have stated processes. Your ID contains sensitive information such as your full legal name and home address. Using this option not only runs the (hopefully small, but never nonexistent) risk of that data getting leaked through errors or hacking, but it also lets Meta see the information needed to tie your profile to your identity—which you may not want. If you don’t want Meta to know your name and where you live, or don’t want to rely on both Meta and Yoti keeping their deletion promises, this option may not be right for you.
Google – Gmail, YouTube
Inferred Age
If Google can guess your age, you may never even see an age verification screen. Your Google account is typically connected to your YouTube account, so if (like mine) your YouTube account is old enough to vote, you may not need to verify your Google account at all. Google first uses information it already knows to try to guess your age, like how long you’ve had the account and your YouTube viewing habits. It’s yet another creepy reminder of how much information these corporations have on you, but at least in this case they aren’t likely to ask for even more identifying data.
If Google cannot guess your age, or decides you’re too young, Google will next ask you to verify your age. You’ll be given a variety of options for how to do so, with availability that will depend on your location and your age.
Google’s methods to assure your age include ID verification, facial age estimation, verification by proxy, and digital ID. To prove you’re over 18, you may be able to use facial age estimation, give Google your credit card information, or tell a third-party provider your email address.
Face Scan
If you choose to use facial age estimation, you’ll be sent to a website run by Private ID, a third-party verification service. The website will load Private ID’s verifier within the page—this means that your selfie will be checked without any images leaving your device. If the system decides you’re over 18, it will let Google know that, and only that. Of course, no technology is perfect—should Private ID be mandated to target you specifically, there’s nothing to stop it from sending down code that does in fact upload your image, and you probably won’t notice. But unless your threat model includes being specifically targeted by a state actor or Private ID, that’s unlikely to be something you need to worry about. For most people, no one else will see your image during this process. Private ID will, however, be told that your device is trying to verify your age with Google and Google will still find out if Private ID thinks that you’re under 18.
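The privacy property described above (the check runs locally, and only a pass/fail result crosses the network) can be sketched roughly as follows. This is an illustrative model of the on-device pattern, not Private ID's actual code; `estimate_age` is a placeholder for a real in-browser machine learning model.

```python
# Toy model of on-device age estimation: the raw selfie is processed
# locally, and only a single boolean ever leaves the device.

def estimate_age(image_bytes: bytes) -> float:
    """Placeholder: a real implementation runs a local ML model on the image."""
    return 27.4  # illustrative fixed output

def age_check_on_device(image_bytes: bytes, threshold: int = 18) -> bool:
    """Everything in this function stays on the device; the image is never sent."""
    return estimate_age(image_bytes) >= threshold

# Only this boolean is transmitted to the platform:
result_sent_to_server = age_check_on_device(b"raw-selfie-bytes")
```

The key design choice is that the sensitive input (the image) never appears in any network call; the server's entire view of the transaction is one bit plus the fact that a check happened.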
If Private ID’s age estimation decides your face looks too young, you may next be able to decide if you’d rather let Google verify your age by giving it your credit card information, photo ID, or digital ID, or by letting Google send your email address to a third-party verifier.
Email Usage
If you choose to provide your email address, Google sends it on to a company called VerifyMy. VerifyMy will use your email address to see if you’ve done things like getting a mortgage or paying for utilities with that address. If you use Gmail as your email provider, this may be a privacy-protective option with respect to Google, as Google will then already know the email address associated with the account. But it does tell VerifyMy and its third-party partners that the person behind this email address is looking to verify their age, which you may not want them to know. VerifyMy uses “proprietary algorithms and external data sources” that involve sending your email address to “trusted third parties, such as data aggregators.” It claims to “ensure that such third parties are contractually bound to meet these requirements,” but you’ll have to trust it on that one—we haven’t seen any mention of who those parties are, so you’ll have no way to check up on their practices and security. On the bright side, VerifyMy and its partners do claim to delete your information as soon as the check is completed.
Credit Card Verification
If you choose to let Google use your credit card information, you’ll be asked to set up a Google Payments account. Note that debit cards won’t be accepted, since it’s much easier for many debit cards to be issued to people under 18. Google will then charge a small amount to the card, and refund it once it goes through. If you choose this method, you’ll have to tell Google your credit card info, but the fact that it’s done through Google Payments (their regular card-processing system) means that at least your credit card information won’t be sitting around in some unsecured system. Even if your credit card information happens to accidentally be leaked, this is a relatively low-risk option, since credit cards come with solid fraud protection. If your credit card info gets leaked, you should easily be able to dispute fraudulent charges and replace the card.
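The charge-and-refund flow works roughly like the sketch below. Every name in it (`Card`, `authorize`, `refund`) is a hypothetical stand-in for a real payment processor's API, not Google's actual implementation; the point is only to show the logic of the check.

```python
from dataclasses import dataclass

def authorize(card, cents: int) -> bool:
    """Stub standing in for a real payment-processor authorization call."""
    return True

def refund(card, cents: int) -> None:
    """Stub: a real implementation reverses the charge with the processor."""
    pass

@dataclass
class Card:
    number: str
    funding: str  # "credit" or "debit", as reported by the card network

def verify_adult_by_card(card: Card, charge_cents: int = 50) -> bool:
    # Debit cards are rejected outright: they're commonly issued to minors,
    # so owning one proves nothing about age.
    if card.funding != "credit":
        return False
    # Authorize a small charge, then immediately refund it.
    if authorize(card, charge_cents):
        refund(card, charge_cents)
        return True
    return False
```

Note that the card's funding type comes from the card network, so the check doesn't depend on the user truthfully reporting whether their card is credit or debit.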
Digital ID
In some regions, you’ll be given the option to use your digital ID to verify your age with Google. Some digital ID implementations reveal only your age information; if you’re given that choice, it can be a good privacy-preserving option. Depending on the implementation, though, there’s a chance that the verification step will “phone home” to the ID provider (usually a government) to let them know a service asked for your age. It’s a complicated and varied topic that you can learn more about by visiting EFF’s page on digital identity.
Upload ID
Should none of these options work for you, your final recourse is to send Google a photo of your ID. Here, you’ll be asked to take a photo of an acceptable ID and send it to Google. Though the help page only states that your ID “will be stored securely,” the verification process page says ID “will be deleted after your date of birth is successfully verified.” Acceptable IDs vary by country, but are generally government-issued photo IDs. We like that it’s deleted immediately, though we have questions about what Google means when it says your ID will be used to “improve [its] verification services for Google products and protect against fraud and abuse.” No system is perfect, and we can only hope that Google schedules outside audits regularly.
TikTok
Inferred Age
If TikTok can guess your age, you may never even see an age verification notification. TikTok first tries to use information you’ve posted to estimate your age, looking through your videos and photos to analyze your face and listen to your voice. By uploading any videos, TikTok believes you’ve given it consent to try to guess how old you look and sound.
If TikTok cannot guess your age, or decides you’re too young, it will automatically revoke your access based on age—either restricting features or deleting your account. To get your access and account back, you’ll have a limited amount of time to verify your age and appeal the decision. As soon as you see the notification that your account is restricted, you’ll want to act fast, because in some places you’ll have as little as 23 days before the deadline passes.
When you get that notification, you’re given various options to verify your age based on your location.
Face Scan
If you’re given the option to use facial age estimation, you’ll be sent to Yoti, a third-party verification service. Your photo will be uploaded to their servers during this process. Yoti claims that “as soon as an age has been estimated, the facial image is immediately and permanently deleted.” Though it’s not as good as not having that data in the first place, Yoti’s security measures include a bug bounty program and annual penetration testing. However, researchers from Mint Secure found that Yoti’s app and website are filled with trackers, so the fact that you’re verifying your age could be leaked not only to Yoti, but to third-party data brokers as well.
You may not want to use this option if you’re worried about third parties potentially being able to know you’re trying to verify your age with TikTok. You also might not want to use this if you’re worried about a current picture of your face accidentally leaking—for example, if elements in the background of your selfie might reveal your current location. On the other hand, if you consider a selfie to be less sensitive than a photograph of your ID or your credit card information, this option might be better. If you do choose (or are forced) to use the face check system, be sure to snap your selfie without anything in the background that could identify your location or embarrass you if the image leaks.
Credit Card Verification
If you have a credit card in your name, TikTok will accept that as proof that you’re over 18. Note that debit cards won’t be accepted, since it’s much easier for many debit cards to be issued to people under 18. TikTok will charge a small amount to the credit card, and refund it once it goes through. It’s unclear if this goes through their regular payment process, or if your credit card information will be sent through and stored in a separate, less secure system. Luckily, these days credit cards come with solid fraud protection, so if your credit card gets leaked, you should easily be able to dispute fraudulent charges and replace the card. That said, we’d rather TikTok provide assurances that the information will be processed securely.
Credit Card Verification of a Parent or Guardian
Sometimes, if you’re between 13 and 17, you’ll be given the option to let your parent or guardian confirm your age. You’ll tell TikTok their email address, and TikTok will send your parent or guardian an email asking them (a) to confirm your date of birth, and (b) to verify their own age by proving that they own a valid credit card. This option doesn’t always seem to be offered, and in the one case we could find, it’s possible that TikTok never followed up with the parent. So it’s unclear how or if TikTok verifies that the adult whose email you provide is your parent or guardian. If you want to use credit card verification but you’re not old enough to have a credit card, and you’re ok with letting an adult know you use TikTok, this option may be reasonable to try.
Photo with a Random Adult?
Bizarrely, if you’re between 13 and 17, TikTok claims to offer the option to take a photo with literally any random adult to confirm your age. Its help page says that any trusted adult over 25 can be chosen, as long as they’re holding a piece of paper with the code on it that TikTok provides. It also mentions that a third-party provider is used here, but doesn’t say which one. We haven’t found any evidence of this verification method being offered. Please do let us know if you’ve used this method to verify your age on TikTok!
Photo ID and Face Comparison
If you aren’t offered or have failed the other options, you’ll have to verify your age by submitting a copy of your ID and a matching photo of your face. You’ll be sent to Incode, a third-party verification service. In a disappointing failure to meet the industry standard, Incode itself doesn’t automatically delete the data you give it once the process is complete, but TikTok does claim to “start the process to delete the information you submitted,” which should include telling Incode to delete your data once the process is done. If you want to be sure, you can ask Incode to delete that data yourself. Incode tells TikTok that you met the age threshold without providing your exact date of birth—but TikTok will then ask you for your date of birth anyway, even after your age has been verified.
TikTok itself might not see your actual ID depending on its implementation choices, but Incode will. Your ID contains sensitive information such as your full legal name and home address. Using this option runs the (hopefully small, but never nonexistent) risk of that data getting accidentally leaked through errors or hacking. If you don’t want TikTok or Incode to know your name, what you look like, and where you live—or if you don’t want to rely on both TikTok and Incode to keep their deletion promises—then this option may not be right for you.
Everywhere Else
We’ve covered the major providers here, but age verification is unfortunately being required of many other services that you might use as well. While the providers and processes may vary, the same general principles will apply. If you’re trying to choose what information to provide to continue to use a service, consider the “follow the data” questions mentioned above, and try to find out how the company will store and process the data you give it. The less sensitive the information, the fewer people who have access to it, and the more quickly it’s deleted, the better. You may even come to recognize popular names in the age verification industry: Spotify and OnlyFans use Yoti (just like Meta and TikTok), Quora and Discord use k-ID, and so on.
Unfortunately, it should be clear by now that none of the age verification options are perfect in terms of protecting information, providing access to everyone, and safely handling sensitive data. That’s just one of the reasons that EFF is against age-gating mandates, and is working to stop and overturn them across the United States and around the world.
You might recall how Republicans (with help from Democrats) suffered a three-year embolism over the national security, privacy, and propaganda problems inherent with TikTok — only to turn around and let Trump sell the platform to his technofascist billionaire friends. Who are now already hard at work preparing to do all of the stuff they claimed the Chinese were doing. And probably worse.
“Before this update, the app did not collect the precise, GPS-derived location data of US users. Now, if you give TikTok permission to use your phone’s location services, then the app may collect granular information about your exact whereabouts.”
That’s not great in a country that’s too corrupt to pass even a baseline privacy law, or to regulate dodgy data brokers that hoover up this sensitive location data and then share it with pretty much any nitwit with two nickels to rub together (including domestic and foreign intelligence agencies).
The “new U.S. TikTok” is already seeing a bunch of weird technical problems. And there are already influencers saying that their criticism of ICE is more frequently running afoul of “community standards guidelines,” though I’ve yet to see a good report fleshing these claims out.
As we noted last December, this latest TikTok deal is kind of the worst of all worlds. The Chinese still have an ownership stake in the app, and the companies and individual investors who’ve taken over the app have a long, rich history of supporting authoritarianism and widespread privacy violations.
These Trump-linked billionaires clearly didn’t buy TikTok to protect national security, fix propaganda, or address consumer privacy. They clearly don’t support the kind of policies it would take to actually address those issues, like meaningful privacy laws, media consolidation limits, data broker regulation, media literacy education funding, or kicking corrupt authoritarians out of the White House.
And they didn’t just buy TikTok to make money or undermine a competitor they repeatedly failed to out-innovate in the short-form video space (though that’s certainly a lot of it). They did it to expand surveillance. And, as Musk did with Twitter, to control the modern information space in a way that will coddle their ideologies and marginalize or censor opposition voices they disagree with.
As men like Larry Ellison and Marc Andreessen have made abundantly clear to anyone paying attention, their ideologies are unchecked greed and far-right, anti-democratic extremism. Billionaires attempting to dominate media to confuse the public and protect their own, usually selfish best interests is a tale as old as time. And that is, contrary to their claims, the play here as well.
With a new board full of foundationally terrible people, it’s only a matter of time before they, like Elon Musk before them, inevitably start fiddling with the platform and its algorithms to shut down debate and ideology they don’t like. Larry Ellison in particular is clearly attempting to buy up what’s left of crumbling U.S. corporate media and turn it into a safe space for the planet’s unpopular autocrats.
It’s worth reiterating that this was all built on the back of four years of fear mongering about TikTok privacy, propaganda, and national security issues by Republicans who couldn’t actually give the slightest shit about any of those subjects. And aided by the bumbling Keystone Cops in the Democratic party, who actively helped Trump offload the platform to his billionaire buddies.
Then propped up by a lazy corporate press that’s increasingly incapable of explaining to the public what’s actually happening, especially if it involves rich right wingers trying to dominate media.
I suspect the company will try very hard for a year or so to insist that nothing whatsoever has changed to avoid a mass exodus of TikTok users. Especially in the wake of the promise of new, performative hearings by lawmakers who helped the whole mess happen in the first place.
But the ownership won’t be able to help themselves. Steadily and progressively things will get worse, driving users to another new pesky social media upstart, at which point the billionaire quest for total information control will start all over again.
We spent a lot of time last year calling out how dangerous it was that Elon Musk and his inexperienced 4chan-loving DOGE boys were gaining access to some of the most secure government systems. We also highlighted how it seemed likely that they were violating many laws in the process. One specific point of concern was DOGE’s desire to take control over Social Security data, something that many people warned would be abused for political reasons, in particular to make misleading or false claims about voting records.
For all the people who insisted that this was hyperbolic nonsense, and DOGE was just there to root out “waste, fraud, and abuse,” well… the DOJ last week quietly admitted that the DOGE boys almost certainly violated the Hatch Act and had given social security data to conspiracy theorists claiming Trump won the 2020 election (he did not).
Oh, and this only came out because the DOJ realized it had lied to a court (they claim it was because the Social Security Administration officials had given them bad info, but the net effect is the same) and had to correct the record.
Shapiro’s previously unreported disclosure, dated Friday, came as part of a list of “corrections” to testimony by top SSA officials during last year’s legal battles over DOGE’s access to Social Security data. They revealed that DOGE team members shared data on unapproved “third-party” servers and may have accessed private information that had been ruled off-limits by a court at the time.
Shapiro said the case of the two DOGE team members appeared to undermine a previous assertion by SSA that DOGE’s work was intended to “detect fraud, waste and abuse” in Social Security and modernize the agency’s technology.
Also in his March 12 declaration, Mr. Russo attested that, “[t]he overall goal of the work performed by SSA’s DOGE Team is to detect fraud, waste and abuse in SSA programs and to provide recommendations for action to the Acting Commissioner of SSA, the SSA Office of the Inspector General, and the Executive Office of the President.”….
However, SSA determined in its recent review that in March 2025, a political advocacy group contacted two members of SSA’s DOGE Team with a request to analyze state voter rolls that the advocacy group had acquired. The advocacy group’s stated aim was to find evidence of voter fraud and to overturn election results in certain States. In connection with these communications, one of the DOGE team members signed a “Voter Data Agreement,” in his capacity as an SSA employee, with the advocacy group. He sent the executed agreement to the advocacy group on March 24, 2025.
The filing goes on to admit that the declaration from a Social Security administration employee that there were safeguards in place against sharing data, and that everyone had received training in not sharing data, was apparently wrong.
However, SSA has learned that, beginning March 7, 2025, and continuing until March 17 (approximately one week before the TRO was entered), members of SSA’s DOGE Team were using links to share data through the third-party server “Cloudflare.” Cloudflare is not approved for storing SSA data and when used in this manner is outside SSA’s security protocols. SSA did not know, until its recent review, that DOGE Team members were using Cloudflare during this period. Because Cloudflare is a third-party entity, SSA has not been able to determine exactly what data were shared to Cloudflare or whether the data still exist on the server.
Cool cool. No big deal. DOGE boys just put incredibly private data on a third party server and no one knows what data was there or even if it’s still there.
Have I got some waste, fraud, and abuse for you to check out!
Separately, the filing reveals that Elon Musk’s right hand man, Steve Davis—the “fixer” Musk deploys across all his organizations—was copied on an email containing an encrypted file of SSA data. The filing is careful to note that DOGE itself “never had access to SSA systems of record,” but that’s a distinction without much difference when your guy is getting emailed password-protected files derived from those systems. Oh and: SSA still can’t open the file to figure out exactly what was in it.
However, SSA has determined that on March 3, 2025—three weeks prior to entry of the TRO—an SSA DOGE Team member copied Mr. Steve Davis, who was then a senior advisor to Defendant U.S. DOGE Temporary Organization, as well as a DOGE-affiliated employee at the Department of Labor (“DOL”), on an email to Department of Homeland Security (“DHS”). The email attached an encrypted and password-protected file that SSA believes contained SSA data. Despite ongoing efforts by SSA’s Chief Information Office, SSA has been unable to access the file to determine exactly what it contained. From the explanation of the attached file in the email body and based on what SSA had approved to be released to DHS, SSA believes that the encrypted attachment contained PII derived from SSA systems of record, including names and addresses of approximately 1,000 people.
Looks like some more waste, fraud, and abuse right there.
So to recap: the team that stormed in to root out “waste, fraud, and abuse” committed what looks an awful lot like actual fraud and abuse—sharing data on unauthorized servers, misleading courts, cutting deals with election conspiracy groups, and emailing around encrypted files of PII that the agency itself can’t even open anymore. All of it now documented in federal court filings—not that anyone will do anything about it. Accountability is for people who don’t have Elon Musk on speed dial.