Mario Trujillo's Techdirt Profile


Posted on Techdirt - 20 February 2026 @ 03:47pm

Open Letter To Tech Companies: Protect Your Users From Lawless DHS Subpoenas

We are calling on technology companies like Meta and Google to stand up for their users by resisting the Department of Homeland Security’s (DHS) lawless administrative subpoenas for user data. 

In the past year, DHS has consistently targeted people engaged in First Amendment activity. Among other things, the agency has issued subpoenas to technology companies to unmask or locate people who have documented ICE’s activities in their community, criticized the government, or attended protests.   

These subpoenas are unlawful, and the government knows it. When a handful of users challenged a few of them in court with the help of ACLU affiliates in Northern California and Pennsylvania, DHS withdrew them rather than waiting for a decision. 

But it is difficult for the average user to fight back on their own. Quashing a subpoena is a fast-moving process that requires lawyers and resources. Not everyone can afford a lawyer on a moment’s notice, and non-profits and pro-bono attorneys have already been stretched to near capacity during the Trump administration.  

 That is why we, joined by the ACLU of Northern California, have asked several large tech platforms to do more to protect their users, including: 

  1.  Insist on court intervention and an order before complying with a DHS subpoena, because the agency has already proved that its legal process is often unlawful and unconstitutional;  
  2. Give users as much notice as possible when they are the target of a subpoena, so the user can seek help. While many companies have already made this promise, there are high-profile examples of it not happening—ultimately stripping users of their day in court;  
  3. Resist gag orders that would prevent companies from notifying their users that they are a target of a subpoena. 

We sent the letter to Amazon, Apple, Discord, Google, Meta, Microsoft, Reddit, Snap, TikTok, and X.

Recipients are not legally compelled to comply with administrative subpoenas absent a court order 

An administrative subpoena is an investigative tool available to federal agencies like DHS. These subpoenas are often sent to technology companies to obtain user data. A subpoena cannot be used to obtain the content of communications, but subpoenas have been used to try to obtain basic subscriber information such as name, address, IP address, length of service, and session times.

Unlike a search warrant, an administrative subpoena is not approved by a judge. If a technology company refuses to comply, an agency’s only recourse is to drop it or go to court and try to convince a judge that the request is lawful. That is what we are asking companies to do—simply require court intervention and not obey in advance. 

It is unclear how many administrative subpoenas DHS has issued in the past year. Subpoenas can come from many places—including civil courts, grand juries, criminal trials, and administrative agencies like DHS. Altogether, Google received 28,622 subpoenas and Meta received 14,520 in the first half of 2025, according to their transparency reports. The numbers are not broken out by type.

DHS is abusing its authority to issue subpoenas 

In the past year, DHS has used these subpoenas to target protected speech. The following are just a few of the known examples. 

On April 1, 2025, DHS sent a subpoena to Google in an attempt to locate a Cornell PhD student in the United States on a student visa. The student was likely targeted because of his brief attendance at a protest the year before. Google complied with the subpoena without giving the student an opportunity to challenge it. While Google promises to give users prior notice, it sometimes breaks that promise to avoid delay. This must stop.   

In September 2025, DHS sent a subpoena and summons to Meta to try to unmask anonymous users behind Instagram accounts that tracked ICE activity in communities in California and Pennsylvania. The users—with the help of the ACLU and its state affiliates—challenged the subpoenas in court, and DHS withdrew the subpoenas before a court could rule. In the Pennsylvania case, DHS tried to use legal authority that its own inspector general had already criticized in a lengthy report.

In October 2025, DHS sent Google a subpoena demanding information about a retiree who criticized the agency’s policies. The retiree had sent an email asking the agency to use common sense and decency in a high-profile asylum case. In a shocking turn, federal agents later appeared on that person’s doorstep. The ACLU is currently challenging the subpoena.  

Read the full letter here

Originally posted to the EFF’s Deeplinks blog.

Posted on Techdirt - 1 December 2025 @ 08:03pm

Victory! Court Ends Dragnet Electricity Surveillance Program In Sacramento

A California judge ordered the end of a dragnet law enforcement program that surveilled the electrical smart meter data of thousands of Sacramento residents.

The Sacramento County Superior Court ruled that the surveillance program run by the Sacramento Municipal Utility District (SMUD) and police violated a state privacy statute, which bars the disclosure of residents’ electrical usage data with narrow exceptions. For more than a decade, SMUD coordinated with the Sacramento Police Department and other law enforcement agencies to sift through the granular smart meter data of residents without suspicion to find evidence of cannabis growing.

EFF and its co-counsel represent three petitioners in the case: the Asian American Liberation Network, Khurshid Khoja, and Alfonso Nguyen. They argued that the program created a host of privacy harms—including criminalizing innocent people, creating menacing encounters with law enforcement, and disproportionately harming the Asian community.

The court ruled that the challenged surveillance program was not part of any traditional law enforcement investigation. Investigations happen when police try to solve particular crimes and identify particular suspects. The dragnet that turned all 650,000 SMUD customers into suspects was not an investigation.

“[T]he process of making regular requests for all customer information in numerous city zip codes, in the hopes of identifying evidence that could possibly be evidence of illegal activity, without any report or other evidence to suggest that such a crime may have occurred, is not an ongoing investigation,” the court ruled, finding that SMUD violated its “obligations of confidentiality” under a data privacy statute.

Granular electrical usage data can reveal intimate details inside the home—including when you go to sleep, when you take a shower, when you are away, and other personal habits and demographics.

In creating and running the dragnet surveillance program, according to the court, SMUD and police “developed a relationship beyond that of utility provider and law enforcement.” Multiple times a year, the police asked SMUD to search its entire database of 650,000 customers to identify people who used a large amount of monthly electricity and to analyze granular 1-hour electrical usage data to identify residents with certain electricity “consumption patterns.” SMUD passed on more than 33,000 tips about supposedly “high” usage households to police.
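The screening the court described amounts to simple threshold tests run over everyone's usage data. A minimal sketch of that kind of dragnet logic, with invented meter IDs, readings, and cutoffs (these are illustrative assumptions, not SMUD's actual criteria):

```python
# Hypothetical illustration of the dragnet described above: flag any
# customer whose estimated monthly total exceeds a "high usage" cutoff
# and whose hourly profile is suspiciously flat. Meter IDs, readings,
# and thresholds are invented; they are not SMUD's actual criteria.

MONTHLY_KWH_THRESHOLD = 2_800

customers = {
    "meter_001": [1.2] * 24,   # low around-the-clock draw (hourly kWh)
    "meter_002": [4.9] * 24,   # flat, high, around-the-clock draw
}

def flagged(hourly_kwh):
    monthly_total = sum(hourly_kwh) * 30               # rough monthly estimate
    flat_profile = max(hourly_kwh) - min(hourly_kwh) < 0.5
    return monthly_total > MONTHLY_KWH_THRESHOLD and flat_profile

# Every match becomes a "tip" passed to police, with no individual suspicion.
suspects = [meter for meter, usage in customers.items() if flagged(usage)]
print(suspects)
```

The point of the sketch is that no step requires a report, a suspect, or any particular crime: every one of the 650,000 customers is run through the same filter.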

While this is a victory, the court unfortunately dismissed an alternate claim that the program violated the California Constitution’s search and seizure clause. We disagree with the court’s reasoning, which misapprehends the crux of the problem: At the behest of law enforcement, SMUD searches granular smart meter data and provides insights to law enforcement based on that granular data.

Going forward, public utilities throughout California should understand that they cannot disclose customers’ electricity data to law enforcement without any “evidence to support a suspicion” that a particular crime occurred.

EFF, along with Monty Agarwal of the law firm Vallejo, Antolin, Agarwal, Kanter LLP, brought and argued the case on behalf of Petitioners.

Reposted from the EFF’s Deeplinks blog.

Posted on Techdirt - 18 November 2025 @ 03:30pm

The Legal Case Against Ring’s Face Recognition Feature

Amazon Ring’s upcoming face recognition tool has the potential to violate the privacy rights of millions of people and could result in Amazon breaking state biometric privacy laws.

Ring plans to introduce a feature to its home surveillance cameras called “Familiar Faces,” to identify specific people who come into view of the camera. When turned on, the feature will scan the faces of all people who approach the camera to try to find a match with a list of pre-saved faces. This will include many people who have not consented to a face scan, including friends and family, political canvassers, postal workers, delivery drivers, children selling cookies, or maybe even some people passing on the sidewalk.

Many biometric privacy laws across the country are clear: Companies need your affirmative consent before running face recognition on you. In at least one state, ordinary people, with the help of attorneys, can challenge Amazon’s data collection. Where that is not possible, state privacy regulators should step in.

Sen. Ed Markey (D-Mass.) has already called on Amazon to abandon its plans and sent the company a list of questions. Ring spokesperson Emma Daniels answered written questions posed by EFF, which can be viewed here.

What is Ring’s “Familiar Faces”?

Amazon describes “Familiar Faces” as a tool that “intelligently recognizes familiar people.” It says this tool will provide camera owners with “personalized context of who is detected, eliminating guesswork and making it effortless to find and review important moments involving specific familiar people.” Amazon plans to release the feature in December.

The feature will allow camera owners to tag particular people so Ring cameras can automatically recognize them in the future. In order for Amazon to recognize particular people, it will need to perform face recognition on every person who steps in front of the camera. Even if a camera owner does not tag a particular face, Amazon says it may retain that biometric information for up to six months. Amazon said it does not currently use the biometric data for “model training or algorithmic purposes.”

In order to biometrically identify you, a company typically will take your image and extract a faceprint by taking tiny measurements of your face and converting that into a series of numbers that is saved for later. When you step in front of a camera again, the company takes a new faceprint and compares it to a list of previous prints to find a match. Other forms of biometric tracking can be done with a scan of your fingertip, eyeball, or even your particular gait.
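As a rough sketch of that matching step, using a toy three-number "faceprint" and Euclidean distance (real systems derive high-dimensional learned embeddings from images; the names, numbers, and threshold below are invented, and this is not Ring's actual pipeline):

```python
import math

# Toy faceprint matcher illustrating the process described above. The
# three-number "faceprints" and the match threshold are invented; real
# systems extract high-dimensional embeddings from face images.

MATCH_THRESHOLD = 0.6

saved_prints = {"alice": [0.10, 0.90, 0.30]}   # tagged "familiar faces"

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(new_print):
    # Compare the fresh faceprint against every saved one and keep the
    # closest; report a match only if it falls within the threshold.
    best = min(saved_prints, key=lambda name: distance(new_print, saved_prints[name]))
    return best if distance(new_print, saved_prints[best]) < MATCH_THRESHOLD else None

print(identify([0.12, 0.88, 0.31]))   # close to alice's saved print
print(identify([0.90, 0.10, 0.70]))   # a stranger: no match
```

Note that the comparison step still requires computing a faceprint for the stranger: everyone who walks past is measured, whether or not they ever match.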

Amazon has told reporters that the feature will be off by default and that it would be unavailable in certain jurisdictions with the most active biometric privacy enforcement—including the states of Illinois and Texas, and the city of Portland, Oregon. The company would not promise that this feature will remain off by default in the future.

Why is This a Privacy Problem?

Your biometric data, such as your faceprint, are some of the most sensitive pieces of data that a company can collect. Associated risks include mass surveillance, data breach, and discrimination.

Today’s feature to recognize your friend at your front door can easily be repurposed tomorrow for mass surveillance. Ring’s close partnership with police amplifies that threat. For example, in a city dense with face recognition cameras, the entirety of a person’s movements could be tracked with the click of a button, or all people at a particular location could be identified. A recent and unrelated public-private partnership in New Orleans unfortunately shows that mass surveillance through face recognition is not some far-flung concern.

Amazon has already announced a related tool called “search party” that can identify and track lost dogs using neighbors’ cameras. A tool like this could be repurposed for law enforcement to track people. At least for now, Amazon says it does not have the technical capability to comply with law enforcement demands for a list of all cameras in which a person has been identified. It does, however, comply with other law enforcement demands.

In addition, data breaches are a perpetual concern with any data collection. Biometrics magnify that risk because your face cannot be reset, unlike a password or credit card number. Amazon says it processes and stores biometrics collected by Ring cameras on its own servers, and that it uses comprehensive security measures to protect the data.

Face recognition has also been shown to have higher error rates with certain groups—most prominently with dark-skinned women. Similar technology has also been used to make questionable guesses about a person’s emotions, age, and gender.

Will Ring’s “Familiar Faces” Violate State Biometric Laws?

Any Ring collection of biometric information in states that require opt-in consent poses huge legal risk for the company. Amazon already told reporters that the feature will not be available in Illinois and Texas—strongly suggesting its feature could not survive legal scrutiny there. The company said it is also avoiding Portland, Oregon, which has a biometric privacy law that similar companies have avoided.

Its “Familiar Faces” feature will necessarily require its cameras to collect a faceprint from every person who comes into view of an enabled camera, to try to find a match. It is impossible for Amazon to obtain consent from everyone—especially people who do not own Ring cameras. It appears that Amazon will try to unload some consent requirements onto individual camera owners themselves. Amazon says it will provide in-app messages to customers, reminding them to comply with applicable laws. But Amazon—as a company itself collecting, processing, and storing this biometric data—could have its own consent obligations under numerous laws.

Lawsuits against similar features highlight Amazon’s legal risks. In Texas, Google paid $1.375 billion to settle a lawsuit that alleged, among other things, that Google’s Nest cameras “indiscriminately capture the face geometry of any Texan who happens to come into view, including non-users.” In Illinois, Facebook paid $650 million and shut down its face recognition tools that automatically scanned Facebook photos—even the faces of non-Facebook users—in order to identify people to recommend tagging. Later, Meta paid another $1.4 billion to settle a similar suit in Texas.

Many states aside from Illinois and Texas now protect biometric data. Washington passed a biometric privacy law in 2017, though the state has never enforced it. In 2023, the state passed an even stronger law protecting biometric privacy, which allows individuals to sue on their own behalf. And at least 16 states have recently passed comprehensive privacy laws that often require companies to obtain opt-in consent for the collection of sensitive data, which typically includes biometric data. For example, in Colorado, a company that jointly with others determines the purpose and means of processing biometric data must obtain consent. Maryland goes further: such companies are essentially prohibited from collecting or processing biometric data from bystanders.

Many of these comprehensive laws have numerous loopholes and can only be enforced by state regulators—a glaring weakness facilitated in part by Amazon lobbyists.

Nonetheless, Ring’s new feature provides regulators a clear opportunity to step up to investigate, protect people’s privacy, and test the strength of their laws.

Originally posted to the EFF’s Deeplinks blog.